643 SageMaker Jobs - Page 5

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Purpose: The job holder will be responsible for coding, designing, deploying, and debugging development projects; taking part in analysis, requirement gathering, and design; and owning and delivering the automation of data engineering pipelines.

Roles and Responsibilities:
- Solid understanding of backend performance optimization and debugging.
- Formal training or certification in software engineering concepts and proficient applied experience.
- Strong hands-on experience with Python, including developing microservices with FastAPI.
- Commercial experience in both backend and frontend engineering.
- Hands-on experience developing AWS cloud-based applications, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS (Aurora MySQL and Postgres), DynamoDB, EMR, and Kinesis.
- Strong engineering background in machine learning, deep learning, and neural networks.
- Experience with a containerized stack, using Kubernetes or ECS for development, deployment, and configuration.
- Experience with Single Sign-On/OIDC integration and a deep understanding of OAuth and JWT/JWE/JWS.
- Knowledge of AWS SageMaker and data analytics tools.
- Proficiency in frameworks such as TensorFlow, PyTorch, or similar.
- Familiarity with LangChain, LangGraph, or other agentic frameworks is a strong plus.

Education Qualification: Graduation: Bachelor of Science (B.Sc), Bachelor of Technology (B.Tech), or Bachelor of Computer Applications (BCA). Post-Graduation: Master of Science (M.Sc), Master of Technology (M.Tech), or Master of Computer Applications (MCA). Experience: 3-8 years.
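To make the FastAPI requirement above concrete, here is a minimal, hedged sketch of a Python microservice with a health check and a prediction route. The route names, request schema, and scoring logic are illustrative placeholders, not part of this employer's actual stack.

```python
# Minimal FastAPI microservice sketch; endpoints and schema are hypothetical.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-service")

class PredictRequest(BaseModel):
    features: List[float]

class PredictResponse(BaseModel):
    score: float

@app.get("/health")
def health() -> dict:
    # Lightweight liveness probe, e.g. for an ECS/EKS health check.
    return {"status": "ok"}

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Placeholder scoring logic; a real service would load a trained model.
    score = sum(req.features) / max(len(req.features), 1)
    return PredictResponse(score=score)
```

Run locally with uvicorn main:app --reload, assuming the file is saved as main.py.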

Posted 5 days ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site

Source: LinkedIn

We are seeking a dynamic person to join our AI and Data Science team. This position will work on delivering innovative AI and data-driven solutions. The candidate must have strong ML fundamentals and hands-on experience with GenAI and RAG. We are also looking for good engineering skills (Python, Docker, etc.); exposure to cloud technologies is a plus.

Required skills

Generative AI:
- Experience with RAG, particularly retrieval and reranking.
- Working experience with different indexing algorithms (Flat / HNSW).
- Experience working with different LLM-based embedding models (ada, bge, etc.).
- LLM parameter tuning experience.
- Experience with different prompt engineering techniques.

Python:
- Experience with object-oriented Python.
- Experience with type hinting.
- Experience with API frameworks such as Flask / FastAPI is a must.
- Experience with Docker is important.

Artificial Intelligence:
- Experience with different NLP use cases (multi-class / multi-label classification) is important.
- Experience with the Transformer architecture is important.
- Working understanding of attention and of implementing transformers.
- Working understanding of embeddings (Word2Vec, encoder-based embeddings) is a must.
- Experience with different cost functions and optimization algorithms in deep learning.

Cloud Providers (AWS): SageMaker / ECS / S3 / Lambda.

Machine Learning Generics:
- The candidate should have used or worked on Transformers and RNNs (LSTM / Bi-LSTM).
- Knowledge of machine learning basics such as linear and logistic regression and random forests.
- Understanding of ML/NLP metrics (precision, recall, F1 score).
- Hyperparameter tuning, model training, and model selection.
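As a rough illustration of the retrieval and reranking workflow mentioned in this listing, the sketch below builds both a flat and an HNSW FAISS index over placeholder vectors and applies a stub reranking step. A real system would use an LLM-based embedding model (ada, bge, etc.) and a cross-encoder reranker; everything here is synthetic and assumed.

```python
# Hedged RAG-retrieval sketch: flat vs HNSW indexing with a stub reranker.
import numpy as np
import faiss

dim, n_docs = 384, 1000
doc_vecs = np.random.rand(n_docs, dim).astype("float32")
faiss.normalize_L2(doc_vecs)                 # cosine similarity via inner product

flat = faiss.IndexFlatIP(dim)                # exact search
hnsw = faiss.IndexHNSWFlat(dim, 32)          # approximate graph-based search
flat.add(doc_vecs)
hnsw.add(doc_vecs)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = flat.search(query, 10)         # first-stage retrieval

def rerank(candidate_ids, candidate_scores, top_n=3):
    # Stub reranker: a production system would re-score query/document pairs
    # with a cross-encoder before passing context to the LLM prompt.
    order = np.argsort(-candidate_scores[0])
    return candidate_ids[0][order][:top_n]

print(rerank(ids, scores))
```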

Posted 5 days ago

Apply

0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

We are seeking a dynamic person to join our AI and Data Science team. This position will work on delivering innovative AI and data-driven solutions. The candidate must have strong ML fundamentals and hands-on experience with GenAI and RAG. We are also looking for good engineering skills (Python, Docker, etc.); exposure to cloud technologies is a plus.

Required skills

Generative AI:
- Experience with RAG, particularly retrieval and reranking.
- Working experience with different indexing algorithms (Flat / HNSW).
- Experience working with different LLM-based embedding models (ada, bge, etc.).
- LLM parameter tuning experience.
- Experience with different prompt engineering techniques.

Python:
- Experience with object-oriented Python.
- Experience with type hinting.
- Experience with API frameworks such as Flask / FastAPI is a must.
- Experience with Docker is important.

Artificial Intelligence:
- Experience with different NLP use cases (multi-class / multi-label classification) is important.
- Experience with the Transformer architecture is important.
- Working understanding of attention and of implementing transformers.
- Working understanding of embeddings (Word2Vec, encoder-based embeddings) is a must.
- Experience with different cost functions and optimization algorithms in deep learning.

Cloud Providers (AWS): SageMaker / ECS / S3 / Lambda.

Machine Learning Generics:
- The candidate should have used or worked on Transformers and RNNs (LSTM / Bi-LSTM).
- Knowledge of machine learning basics such as linear and logistic regression and random forests.
- Understanding of ML/NLP metrics (precision, recall, F1 score).
- Hyperparameter tuning, model training, and model selection.

Posted 5 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Description And Requirements

CareerArc Code: CA-PS | Hybrid

"At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation!

The DSOM product line includes BMC’s industry-leading Digital Services and Operation Management products. We have many interesting SaaS products in the fields of predictive IT service management, automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees’ innovative ideas. So, if innovation is close to your heart – this is the place for you!

BMC is looking for an experienced Data Science Engineer with hands-on experience in classical ML, deep learning networks, and large language models to join us and design, develop, and implement microservice-based edge applications using the latest technologies. In this role, you will be responsible for the end-to-end design and execution of BMC Data Science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advanced analysis, strategies, and policies.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Ideate, design, implement, and maintain an enterprise business software platform for edge and cloud, with a focus on machine learning and generative AI capabilities, using mainly Python.
- Work with a globally distributed development team to perform requirements analysis, write design documents, and design, develop, and test software development projects.
- Understand real-world deployment and usage scenarios from customers and product managers and translate them into AI/ML features that drive the value of the product.
- Work closely with product managers and architects to understand requirements, present options, and design solutions.
- Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption.
- Analyze and clearly communicate, both verbally and in written form, the status of projects or issues, along with risks and options, to stakeholders.

To ensure you’re set up for success, you will bring the following skillset and experience:
- You have 8+ years of hands-on experience in data science or machine learning roles.
- You have experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains.
- You have a strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement.
- You have proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments.
- You have strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost).

Whilst these are nice to have, our team can help you develop the following skills:
- Experience with digital twins, real-time analytics, or streaming data systems.
- Contributions to open-source ML/AI/IoT projects or relevant publications.
- Experience with Agile development methodology and best practices in unit testing.
- Experience with Kubernetes (kubectl, helm) will be an advantage.
- Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.).

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process.

At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 8,047,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices. Salary band (INR): Min 6,035,850; Midpoint 8,047,800; Max 10,059,750.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and apply.

Our commitment to you! BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If, after reading the above, you’re unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experience to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.
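As a small illustration of the classical-ML, anomaly-detection work this role describes, here is a hedged sketch that flags outliers in synthetic sensor-style readings with scikit-learn's IsolationForest. It is not BMC's actual approach, and all values are made up.

```python
# Anomaly-detection sketch on synthetic sensor readings (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=20.0, scale=1.0, size=(500, 1))  # healthy readings
spikes = rng.normal(loc=35.0, scale=2.0, size=(10, 1))   # injected anomalies
readings = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(readings)  # -1 marks anomalies, 1 marks normal points

print("anomalies flagged:", int((labels == -1).sum()))
```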

Posted 5 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Source: LinkedIn

Elatre is a growth-focused digital company powering global brands with end-to-end marketing, web, and technology solutions. We’re currently scaling a powerful AI-first healthcare platform and are looking for a skilled DevOps Engineer to help us build a secure, scalable, and robust backend infrastructure using the AWS cloud.

What You’ll Do
- Design and manage AWS infrastructure for scalable web and mobile applications
- Set up and maintain CI/CD pipelines and automation (GitHub Actions or AWS CodePipeline)
- Deploy and manage services using AWS Elastic Beanstalk, ECS (Fargate), or EC2
- Set up and optimize Amazon RDS (PostgreSQL) with Multi-AZ, backups, and monitoring
- Manage S3, CloudFront, IAM, and security policies across environments
- Monitor performance, health, and logs using CloudWatch and X-Ray
- Handle deployment automation, cost optimization, and resource scaling
- Collaborate with backend and AI teams to manage real-time API endpoints, Lambda functions, and storage pipelines
- Set up caching layers with ElastiCache (Redis) for high-performance responses
- Implement infrastructure as code using Terraform or CloudFormation

Must-Have Skills
- 2+ years of hands-on experience with AWS
- Strong knowledge of EC2, RDS (PostgreSQL), S3, CloudFront, IAM, CloudWatch, Lambda
- Experience with Docker and container orchestration (ECS or Kubernetes)
- CI/CD pipeline setup and version control workflows (Git, GitHub Actions, AWS CodeBuild)
- Familiarity with Terraform or AWS CloudFormation
- Good understanding of system security and DevOps best practices
- Comfortable managing scalable infrastructure (targeting 20K+ active users)
- Problem-solving mindset with proactive monitoring and incident response skills

Bonus (Not Mandatory)
- Experience with AppSync (GraphQL), WebSockets, or API Gateway
- Exposure to AI/ML pipelines in AWS (SageMaker, Polly, etc.)
- Experience with high-availability mobile backend systems
- Familiarity with HIPAA- or GDPR-compliant infrastructure

What We Offer
- Competitive pay based on experience
- Fully remote, flexible working hours
- Opportunity to work on global products with cutting-edge technology
- Transparent and collaborative work culture
- Direct impact on infrastructure design decisions
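To illustrate the infrastructure-as-code side of this role, the sketch below deploys a CloudFormation stack with boto3. The template file, stack name, parameters, and region are hypothetical placeholders; a real setup might equally use Terraform, as the listing notes.

```python
# Hedged IaC sketch: creating a CloudFormation stack via boto3.
import boto3

cfn = boto3.client("cloudformation", region_name="ap-south-1")

with open("network.yaml") as f:  # hypothetical template file
    template_body = f.read()

cfn.create_stack(
    StackName="example-vpc-stack",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "EnvName", "ParameterValue": "staging"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack is fully created before deploying dependent services.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="example-vpc-stack")
```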

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We are seeking a dynamic person to join our AI and Data Science team. This position will work on delivering innovative AI and data-driven solutions. The candidate must have strong ML fundamentals and hands-on experience with GenAI and RAG. We are also looking for good engineering skills (Python, Docker, etc.); exposure to cloud technologies is a plus.

Required skills

Generative AI:
- Experience with RAG, particularly retrieval and reranking.
- Working experience with different indexing algorithms (Flat / HNSW).
- Experience working with different LLM-based embedding models (ada, bge, etc.).
- LLM parameter tuning experience.
- Experience with different prompt engineering techniques.

Python:
- Experience with object-oriented Python.
- Experience with type hinting.
- Experience with API frameworks such as Flask / FastAPI is a must.
- Experience with Docker is important.

Artificial Intelligence:
- Experience with different NLP use cases (multi-class / multi-label classification) is important.
- Experience with the Transformer architecture is important.
- Working understanding of attention and of implementing transformers.
- Working understanding of embeddings (Word2Vec, encoder-based embeddings) is a must.
- Experience with different cost functions and optimization algorithms in deep learning.

Cloud Providers (AWS): SageMaker / ECS / S3 / Lambda.

Machine Learning Generics:
- The candidate should have used or worked on Transformers and RNNs (LSTM / Bi-LSTM).
- Knowledge of machine learning basics such as linear and logistic regression and random forests.
- Understanding of ML/NLP metrics (precision, recall, F1 score).
- Hyperparameter tuning, model training, and model selection.

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Compelling Opportunity for an ML Engineer with an Innovative Entity in the Insurance Industry

Employment: Immediate
Location: Hyderabad, India
Reporting Manager: Head of Analytics
Work Pattern: Full time, 5 days in the office
Minimum Experience as an ML Engineer: 3 to 5 years

Position Overview: The innovative entity in the insurance industry is seeking an experienced Machine Learning Engineer with 3 to 5 years of hands-on experience in designing, developing, and deploying machine learning models and systems. The ideal candidate will work closely with data scientists, software engineers, and product teams to create solutions that drive business value. You will be responsible for building scalable and efficient machine learning pipelines, optimizing model performance, and integrating models into production environments.

Key Responsibilities:
· Model Development & Training: Develop and train machine learning models, including supervised, unsupervised, and deep learning algorithms, to solve business problems.
· Data Preparation: Collaborate with data engineers to clean, preprocess, and transform raw data into usable formats for model training and evaluation.
· Model Deployment & Monitoring: Deploy machine learning models into production environments, ensuring seamless integration with existing systems and monitoring model performance.
· Feature Engineering: Create and test new features to improve model performance, and optimize feature selection to reduce model complexity.
· Algorithm Optimization: Research and implement state-of-the-art algorithms to improve model accuracy, efficiency, and scalability.
· Collaborative Development: Work closely with data scientists, engineers, and other stakeholders to understand business requirements, develop ML models, and integrate them into products and services.
· Model Evaluation: Conduct model evaluation using statistical tests, cross-validation, and A/B testing to ensure reliability and generalizability.
· Documentation & Reporting: Maintain thorough documentation of processes, models, and systems. Provide insights and recommendations based on model results to stakeholders.
· Code Reviews & Best Practices: Participate in peer code reviews and ensure adherence to coding best practices, including version control (Git), testing, and continuous integration.
· Stay Updated on Industry Trends: Keep abreast of new techniques and advancements in the field of machine learning, and suggest improvements for internal processes and models.

Required Skills & Qualifications:
· Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field.
· Experience: 3 to 5 years of hands-on experience working as a machine learning engineer or in a related role.
· Programming Languages: Proficiency in Python (preferred), R, or Java. Experience with ML libraries such as TensorFlow, PyTorch, scikit-learn, and Keras.
· Data Manipulation: Strong knowledge of SQL and experience working with large datasets (e.g., using tools like Pandas, NumPy, Spark).
· Cloud Services: Experience with cloud platforms like AWS, Google Cloud, or Azure, particularly with ML services such as SageMaker or AI Platform.
· Model Deployment: Hands-on experience with deploying ML models using Docker, Kubernetes, and CI/CD pipelines.
· Problem-Solving Skills: Strong analytical and problem-solving skills with the ability to understand complex data problems and implement effective solutions.
· Mathematics and Statistics: A solid foundation in mathematical concepts related to ML, such as linear algebra, probability, statistics, and optimization techniques.
· Communication Skills: Strong verbal and written communication skills to collaborate effectively with cross-functional teams and stakeholders.

Preferred Qualifications:
· Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
· Exposure to natural language processing (NLP), computer vision, or recommendation systems.
· Familiarity with version control systems (e.g., Git) and collaborative workflows.
· Experience with model interpretability and fairness techniques.
· Familiarity with big data tools (e.g., Hadoop, Spark, Kafka).

Screening Criteria:
· Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field.
· 3 to 5 years of hands-on experience working as a machine learning engineer or in a related role.
· Proficiency in Python.
· Experience with ML libraries such as TensorFlow, PyTorch, scikit-learn, and Keras.
· Strong knowledge of SQL.
· Experience working with large datasets (e.g., using tools like Pandas, NumPy, Spark).
· Experience with cloud platforms like AWS, Google Cloud, or Azure, particularly with ML services such as SageMaker or AI Platform.
· Hands-on experience with deploying ML models using Docker, Kubernetes, and CI/CD pipelines.
· A solid foundation in mathematical concepts related to ML, such as linear algebra, probability, statistics, and optimization techniques.
· Available to work from the office in Hyderabad.
· Available to join within 30 days.

Considerations:
· Location – Hyderabad
· Working from office
· 5-day working week

Evaluation Process:
Round 1 – HR Round
Rounds 2 & 3 – Technical Rounds
Round 4 – Discussion with CEO

Interested profiles, kindly apply.
Note: Additional inputs will be gathered from the candidate to put together the application.
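As a minimal sketch of the model development and evaluation workflow described above, the example below runs a cross-validated hyperparameter search over a scikit-learn pipeline on synthetic data; the dataset, parameter grid, and metric are illustrative assumptions.

```python
# Cross-validated training sketch with a scikit-learn pipeline (synthetic data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

search = GridSearchCV(pipe, param_grid={"clf__C": [0.1, 1.0, 10.0]},
                      scoring="f1", cv=5)
search.fit(X, y)

print("best F1:", round(search.best_score_, 3), "params:", search.best_params_)
```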

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description

PayPay is looking for an experienced cloud-based AI and ML Engineer. This role involves leveraging cloud-based AI/ML services to build infrastructure as well as developing, deploying, and maintaining ML models, collaborating with cross-functional teams, and ensuring scalable and efficient AI solutions, particularly on Amazon Web Services (AWS).

Main Responsibilities

1. Cloud Infrastructure Management:
- Architect and maintain cloud infrastructure for AI/ML projects using AWS tools.
- Implement best practices for security, cost management, and high availability.
- Monitor and manage cloud resources to ensure seamless operation of ML services.

2. Model Development and Deployment:
- Design, develop, and deploy machine learning models using AWS services such as SageMaker.
- Collaborate with data scientists and data engineers to create scalable ML workflows.
- Optimize models for performance and scalability on AWS infrastructure.
- Implement CI/CD pipelines to streamline and accelerate the model development and deployment process.
- Set up a cloud-based development environment for data engineers and data scientists to facilitate model development and exploratory data analysis.
- Implement monitoring, logging, and observability to streamline operations and ensure efficient management of models deployed in production.

3. Data Management:
- Work with structured and unstructured data to train robust ML models.
- Use AWS data storage and processing services like S3, RDS, Redshift, or DynamoDB.
- Ensure data integrity and compliance with established security regulations and standards.

4. Collaboration and Communication:
- Collaborate with cross-functional teams including DevOps, Data Engineering, and Product Management.
- Communicate technical concepts effectively to non-technical stakeholders.

5. Continuous Improvement and Innovation:
- Stay updated with the latest advancements in AI/ML technologies and AWS services.
- Provide automation that lets developers easily develop and deploy their AI/ML models on AWS.

Tech Stack
- AWS: VPC, EC2, ECS, EKS, Lambda, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK, Amazon Kinesis, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, AWS Lake Formation, AWS Glue, SageMaker, and other AI services.
- Terraform, GitHub Actions, Prometheus, Grafana, Atlantis
- OSS (administration experience on these tools): Jupyter, MLflow, Argo Workflows, Airflow

Required Skills and Experience
- 5+ years of technical experience in cloud-based infrastructure with a focus on AI and ML platforms.
- Extensive technical hands-on experience with computing, storage, and analytical services on AWS.
- Demonstrated skill in programming and scripting languages, including Python, shell scripting, Go, and Rust.
- Experience with infrastructure-as-code (IaC) tools in AWS, such as Terraform, CloudFormation, and CDK.
- Proficient in Linux internals and system administration.
- Experience in production-level infrastructure change management and releases for business-critical systems.
- Experience in cloud infrastructure and platform systems availability, performance, and cost management.
- Strong understanding of cloud security best practices and payment-industry compliance standards.
- Experience with cloud services monitoring, detection, and response, as well as performance tuning and cost control.
- Familiarity with cloud infrastructure service patching and upgrades.
- Excellent oral, written, and interpersonal communication skills.

Preferred Qualifications
- Bachelor’s degree or above in a technology-related field
- Experience with other cloud service providers (e.g., GCP, Azure)
- Experience with Kubernetes
- Experience with event-driven architecture (Kafka preferred)
- Experience using and contributing to open-source tools
- Experience in managing IT compliance and security risk
- Published papers / blogs / articles
- Relevant and verifiable certifications

Remarks
*Please note that you cannot apply for PayPay (Japan-based jobs) or other positions in parallel or in duplicate.

PayPay 5 senses: Please refer to the PayPay 5 senses to learn what we value at work.

Working Conditions
Employment Status: Full Time
Office Location: Gurugram (WeWork)
※The development center requires you to work in the Gurugram office to establish a strong core team.
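As a hedged sketch of deploying a trained model to a SageMaker endpoint with boto3, the snippet below creates a model, an endpoint configuration, and an endpoint. The container image URI, S3 artifact path, IAM role, and resource names are placeholders, not PayPay's actual configuration.

```python
# SageMaker endpoint deployment sketch via boto3 (all identifiers are placeholders).
import boto3

sm = boto3.client("sagemaker", region_name="ap-northeast-1")

sm.create_model(
    ModelName="demo-model",
    PrimaryContainer={
        "Image": "<inference-image-uri>",               # placeholder container image
        "ModelDataUrl": "s3://my-bucket/model.tar.gz",  # placeholder model artifact
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

sm.create_endpoint_config(
    EndpointConfigName="demo-endpoint-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "demo-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

sm.create_endpoint(EndpointName="demo-endpoint",
                   EndpointConfigName="demo-endpoint-config")
```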

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

About Us: Traya is an Indian direct-to-consumer hair care brand whose platform provides holistic treatment for consumers dealing with hair loss. The company provides personalized consultations that help determine the root cause of hair fall among individuals, along with a range of hair care products that are curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis. Our unique platform diagnoses the patient’s hair and health history to identify the root cause behind hair fall and delivers customized hair kits to them right at their doorstep. We have a strong adherence system in place via medically trained hair coaches and proprietary tech, where we guide the customer across their hair growth journey and help them stay on track. Traya was founded by Saloni Anand, a techie-turned-marketeer, and Altaf Saiyed, a Stanford Business School alumnus.

Our Vision: Traya was created with a global vision to create awareness around hair loss and de-stigmatise it while empathizing with customers about its emotional and psychological impact. Most importantly, to combine three different sciences (Ayurveda, Allopathy, and Nutrition) to create the perfect holistic solution for hair loss patients.

Responsibilities:
- Data Analysis and Exploration: Conduct in-depth analysis of large and complex datasets to identify trends, patterns, and anomalies. Perform exploratory data analysis (EDA) to understand data distributions, relationships, and quality.
- Machine Learning and Statistical Modeling: Develop and implement machine learning models (e.g., regression, classification, clustering, time series analysis) to solve business problems. Evaluate and optimize model performance using appropriate metrics and techniques. Apply statistical methods to design and analyze experiments and A/B tests. Implement and maintain models in production environments.
- Data Engineering and Infrastructure: Collaborate with data engineers to ensure data quality and accessibility. Contribute to the development and maintenance of data pipelines and infrastructure. Work with cloud platforms (e.g., AWS, GCP, Azure) and big data technologies (e.g., Spark, Hadoop).
- Communication and Collaboration: Effectively communicate technical findings and recommendations to both technical and non-technical audiences. Collaborate with product managers, engineers, and other stakeholders to define and prioritize projects. Document code, models, and processes for reproducibility and knowledge sharing. Present findings to leadership.
- Research and Development: Stay up-to-date with the latest advancements in data science and machine learning. Explore and evaluate new tools and techniques to improve data science capabilities. Contribute to internal research projects.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- 3-5 years of experience as a Data Scientist or in a similar role.
- Ability to leverage SageMaker's features, including SageMaker Studio, Autopilot, Experiments, Pipelines, and Inference, to optimize model development and deployment workflows.
- Proficiency in Python and relevant libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
- Solid understanding of statistical concepts and machine learning algorithms.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Experience deploying models to production.
- Experience with version control (Git).

Preferred Qualifications:
- Experience with specific industry domains (e.g., e-commerce, finance, healthcare).
- Experience with natural language processing (NLP) or computer vision.
- Experience building recommendation engines.
- Experience with time series forecasting.
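The A/B-testing responsibility above can be illustrated with a two-proportion z-test on synthetic conversion counts; the numbers and the use of statsmodels are assumptions made for the sketch only.

```python
# A/B test analysis sketch: two-proportion z-test on synthetic conversion counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [430, 465]   # variant A, variant B (synthetic)
visitors = [5000, 5000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the observed difference in conversion rates is
# unlikely to be due to chance alone.
```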

Posted 5 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

About the Role

We’re looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, machine learning, MLOps, and application development, we want to hear from you. You’ll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
- Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration.
- Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
- Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
- Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
- Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
- Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
- Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
- Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
- Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
- Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
- Proficient in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
- Proven experience with batch data pipelines and training/inference orchestration.
- Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
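As a small sketch of the experiment-tracking side of the MLflow tooling named above, the snippet below logs parameters and metrics for a hypothetical RAG evaluation run; the experiment name, parameters, and metric values are all assumed.

```python
# MLflow experiment-tracking sketch (names and values are illustrative).
import mlflow

mlflow.set_experiment("genai-eval-demo")          # hypothetical experiment

with mlflow.start_run(run_name="rag-baseline"):
    mlflow.log_param("retriever_top_k", 10)
    mlflow.log_param("embedding_model", "bge-small")
    mlflow.log_metric("answer_relevance", 0.81)   # placeholder eval scores
    mlflow.log_metric("faithfulness", 0.77)
```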

Posted 6 days ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Lead Agentic AI Developer
Gurgaon, India; Hyderabad, India; Bangalore, India | Information Technology | Job ID 316524

About the Role: Grade Level (for internal use): 12
Location: Gurgaon, Hyderabad, and Bangalore

Job Description: A Lead Agentic AI Developer will drive the design, development, and deployment of autonomous AI systems that enable intelligent, self-directed decision-making. Their day-to-day operations focus on advancing AI capabilities, leading teams, and ensuring ethical, scalable implementations.

Responsibilities
- AI System Design and Development: Architect and build autonomous AI systems that integrate with enterprise workflows, cloud platforms, and LLM frameworks. Develop APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making.
- Team Leadership and Mentorship: Lead cross-functional teams of AI engineers, data scientists, and developers. Mentor junior staff in agentic AI principles, reinforcement learning, and ethical AI governance.
- Customization and Advancement: Optimize autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Fine-tune LLMs, multi-agent frameworks, and feedback loops to align with business goals.
- Ethical AI Governance: Monitor AI behavior, audit decision-making processes, and implement safeguards to ensure transparency, fairness, and compliance with regulatory standards.
- Innovation and Research: Spearhead R&D initiatives to advance agentic AI capabilities. Experiment with emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems.
- Documentation and Thought Leadership: Publish technical white papers, case studies, and best practices for autonomous AI. Share insights at conferences and contribute to open-source AI communities.
- System Validation: Oversee rigorous testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation. Validate alignment with ethical and performance benchmarks.
- Stakeholder Leadership: Collaborate with executives, product teams, and compliance officers to align AI initiatives with strategic objectives. Advocate for AI-driven innovation across the organization.

What We’re Looking For: Required Skills/Qualifications
- Technical Expertise: 8+ years as a Senior AI Engineer, ML Architect, or AI Solutions Lead, with 5+ years focused on autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines).
- Expertise in Python (mandatory) and familiarity with Node.js.
- Hands-on experience with autonomous AI tools: LangChain, Autogen, CrewAI, or custom agentic frameworks.
- Proficiency in cloud platforms: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI.
- Experience with MLOps pipelines (e.g., Kubeflow, MLflow) and scalable deployment of AI agents.
- Leadership: Proven track record of leading AI/ML teams, managing complex projects, and mentoring technical staff.
- Ethical AI: Familiarity with AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and bias mitigation techniques.
- Communication: Exceptional ability to translate technical AI concepts for non-technical stakeholders.

Nice to Have
- Contributions to AI research (published papers, patents) or open-source AI projects (e.g., TensorFlow Agents, AutoGen).
- Experience with DevOps/MLOps tools: Kubeflow, MLflow, Docker, or Terraform.
- Expertise in NLP, computer vision, or graph-based AI systems.
- Familiarity with quantum computing or neuromorphic architectures for AI.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

10 - Officials or Managers (EEO-2 Job Categories - United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)

Job ID: 316524
Posted On: 2025-06-11
Location: Gurgaon, Haryana, India
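To make the agentic pattern in this posting concrete without tying it to a specific library, here is a framework-agnostic sketch of the plan-act-observe loop that tools like LangChain or Autogen automate. The planner is a stub standing in for an LLM, and the single tool is hypothetical.

```python
# Framework-agnostic agent-loop sketch: plan, call a tool, observe, repeat.
from typing import Callable, Dict, List

def search_docs(query: str) -> str:
    # Hypothetical tool; a real agent might call a retriever or an API here.
    return f"top result for '{query}'"

TOOLS: Dict[str, Callable[[str], str]] = {"search_docs": search_docs}

def plan(task: str, history: List[str]) -> dict:
    # Stub planner: a real agent would ask an LLM to choose the next action.
    if not history:
        return {"action": "search_docs", "input": task}
    return {"action": "final_answer", "input": history[-1]}

def run_agent(task: str, max_steps: int = 5) -> str:
    history: List[str] = []
    for _ in range(max_steps):
        step = plan(task, history)
        if step["action"] == "final_answer":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history.append(observation)
    return "no answer within the step budget"

print(run_agent("summarize the latest filing"))
```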

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Agentic AI Architect
Gurgaon, India; Hyderabad, India; Bangalore, India | Information Technology | Job ID 316525

About the Role: Grade Level (for internal use): 13
Location: Gurgaon, Hyderabad, and Bangalore

Job Description: We are seeking a highly skilled and visionary Agentic AI Architect to lead the strategic design, development, and scalable implementation of autonomous AI systems within our organization. This role demands an individual with deep expertise in cutting-edge AI architectures, a strong commitment to ethical AI practices, and a proven ability to drive innovation. The ideal candidate will architect intelligent, self-directed decision-making systems that integrate seamlessly with enterprise workflows and propel our operational efficiency forward.

Key Responsibilities: As an Agentic AI Architect, you will:
- AI Architecture and System Design: Architect and design robust, scalable, and autonomous AI systems that seamlessly integrate with enterprise workflows, cloud platforms, and advanced LLM frameworks. Define blueprints for APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making.
- Strategic AI Leadership: Provide technical leadership and strategic direction for AI initiatives focused on agentic systems. Guide cross-functional teams of AI engineers, data scientists, and developers in the adoption and implementation of advanced AI architectures.
- Framework and Platform Expertise: Evaluate, recommend, and implement leading AI tools and frameworks, with a strong focus on autonomous AI solutions (e.g., multi-agent frameworks, self-optimizing systems, LLM-driven decision engines). Drive the selection and utilization of cloud platforms (AWS SageMaker preferred, Azure ML, Google Cloud Vertex AI) for scalable AI deployments.
- Customization and Optimization: Design strategies for optimizing autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Define methodologies for fine-tuning LLMs, multi-agent frameworks, and feedback loops to align with overarching business goals and architectural principles.
- Innovation and Research Integration: Spearhead the integration of R&D initiatives into production architectures, advancing agentic AI capabilities. Evaluate and prototype emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems for architectural viability.
- Documentation and Architectural Blueprinting: Develop comprehensive technical white papers, architectural diagrams, and best practices for autonomous AI system design and deployment. Serve as a thought leader, sharing architectural insights at conferences and contributing to open-source AI communities.
- System Validation and Resilience: Design and oversee rigorous architectural testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation strategies, ensuring alignment with compliance, ethical, and performance benchmarks for robust production systems.
- Stakeholder Collaboration & Advocacy: Collaborate with executives, product teams, and compliance officers to align AI architectural initiatives with strategic objectives. Advocate for AI-driven innovation and architectural best practices across the organization.

Qualifications

Technical Expertise:
- 12+ years of progressive experience in AI/ML, with a strong track record as an AI Architect, ML Architect, or AI Solutions Lead.
- 7+ years specifically focused on designing and architecting autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines).
- Expertise in Python (mandatory) and familiarity with Node.js for architectural integrations.
- Extensive hands-on experience with autonomous AI tools and frameworks: LangChain, Autogen, CrewAI, or architecting custom agentic frameworks.
- Proficiency in cloud platforms for AI architecture: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI, with a deep understanding of their AI service offerings.
- Demonstrable experience with MLOps pipelines (e.g., Kubeflow, MLflow) and designing scalable deployment strategies for AI agents in production environments.

Leadership & Strategic Acumen:
- Proven track record of leading the architectural direction of AI/ML teams, managing complex AI projects, and mentoring senior technical staff.
- Strong understanding and practical application of AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and advanced bias mitigation techniques within AI architectures.
- Exceptional ability to translate complex technical AI concepts into clear, concise architectural plans and strategies for non-technical stakeholders and executive leadership.
- Ability to envision and articulate a long-term strategy for AI within the business, aligning AI initiatives with business objectives and market trends.
- Ability to foster collaboration across various practices, including product management, engineering, and marketing, to ensure cohesive implementation of AI strategies that meet business goals.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

10 - Officials or Managers (EEO-2 Job Categories - United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)

Job ID: 316525
Posted On: 2025-06-11
Location: Gurgaon, Haryana, India

Posted 6 days ago

Apply

It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316524 Posted On: 2025-06-11 Location: Gurgaon, Haryana, India
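Illustrative note: the agentic responsibilities in the Lead Agentic AI Developer listing above (LLM-driven decision engines that call tools in a loop) can be sketched without committing to LangChain, Autogen, or CrewAI. The minimal Python sketch below is framework-agnostic; the `call_llm` stub, the `search_docs` tool, and the JSON action format are assumptions for illustration only, not any library's API.

```python
# Minimal, framework-agnostic sketch of an agentic decision loop.
# `call_llm` is a placeholder for any chat-completion client; the tool
# registry and prompt format are illustrative assumptions, not a real API.
import json
from typing import Callable, Dict

def call_llm(prompt: str) -> str:
    """Stub for an LLM call (wire this to your model provider)."""
    raise NotImplementedError

def search_docs(query: str) -> str:
    return f"top documents for: {query}"  # placeholder retrieval tool

TOOLS: Dict[str, Callable[[str], str]] = {"search_docs": search_docs}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Let the LLM either call a tool or finish, feeding observations back."""
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = call_llm(
            history
            + 'Respond as JSON: {"action": "search_docs" or "finish", "input": "..."}'
        )
        step = json.loads(decision)
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history += f"Observation: {observation}\n"
    return "stopped: step budget exhausted"
```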

Posted 6 days ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Bangalore,Karnataka,India Job ID 766481 Join our Team About this Opportunity The complexity of running and optimizing the next generation of wireless networks, such as 5G with distributed edge compute, will require Machine Learning (ML) and Artificial Intelligence (AI) technologies. Ericsson is setting up an AI Accelerator Hub in India to fast-track our strategy execution, using Machine Intelligence (MI) to drive thought leadership, automate, and transform Ericsson’s offerings and operations. We collaborate with academia and industry to develop state-of-the-art solutions that simplify and automate processes, creating new value through data insights. What you will do As a Senior Data Scientist, you will apply your knowledge of data science and ML tools backed with strong programming skills to solve real-world problems. Responsibilities: 1. Lead AI/ML features/capabilities in product/business areas 2. Define business metrics of success for AI/ML projects and translate them into model metrics 3. Lead end-to-end development and deployment of Generative AI solutions for enterprise use cases 4. Design and implement architectures for vector search, embedding models, and RAG systems 5. Fine-tune and evaluate large language models (LLMs) for domain-specific tasks 6. Collaborate with stakeholders to translate vague problems into concrete Generative AI use cases 7. Develop and deploy generative AI solutions using AWS services such as SageMaker, Bedrock, and other AWS AI tools. Provide technical expertise and guidance on implementing GenAI models and best practices within the AWS ecosystem. 8. Develop secure, scalable, and production-grade AI pipelines 9. Ensure ethical and responsible AI practices 10. Mentor junior team members in GenAI frameworks and best practices 11. Stay current with research and industry trends in Generative AI and apply cutting-edge techniques 12. Contribute to internal AI governance, tooling frameworks, and reusable components 13. Work with large datasets including petabytes of 4G/5G networks and IoT data 14. Propose/select/test predictive models and other ML systems 15. Define visualization and dashboarding requirements with business stakeholders 16. Build proof-of-concepts for business opportunities using AI/ML 17. Lead functional and technical analysis to define AI/ML-driven business opportunities 18. Work with multiple data sources and apply the right feature engineering to AI models 19. Lead studies and creative usage of new/existing data sources What you will bring Required Experience - min 7 years 1. Bachelors/Masters/Ph.D. in Computer Science, Data Science, AI, ML, Electrical Engineering, or related disciplines from reputed institutes 2. 3+ years of applied ML/AI production-level experience 3. Strong programming skills (R/Python) 4. Proven ability to lead AI/ML projects end-to-end 5. Strong grounding in mathematics, probability, and statistics 6. Hands-on experience with data analysis, visualization techniques, and ML frameworks (Python, R, H2O, Keras, TensorFlow, Spark ML) 7. Experience with semi-structured/unstructured data for AI/ML models 8. Strong understanding of building AI models using Deep Neural Networks 9. Experience with Big Data technologies (Hadoop, Cassandra) 10. Ability to source and combine data from multiple sources for ML models Preferred Qualifications: 1. Good communication skills in English 2. Certifying MI MOOCs, a plus 3. Domain knowledge in Telecommunication/IoT, a plus 4. Experience with data visualization and dashboard creation, a plus 5. 
Knowledge of Cognitive models, a plus 6. Experience in partnering and collaborative co-creation in a global matrix organization. Why join Ericsson? At Ericsson, you´ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what´s possible. To build solutions never seen before to some of the world’s toughest problems. You´ll be challenged, but you won’t be alone. You´ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
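To make the RAG and vector-search work described in this role concrete, here is a minimal retrieval sketch using plain NumPy cosine similarity. The `embed` stub and the 384-dimension vectors are illustrative assumptions standing in for a real embedding model (for example one hosted behind SageMaker or Bedrock); no specific index library is implied.

```python
# Minimal sketch of the retrieval step in a RAG system: embed a query,
# score it against document embeddings with cosine similarity, and
# return the top-k passages. `embed` is a toy stand-in for a real model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stub: replace with a real embedding model call."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)  # toy 384-dim vector for illustration

def top_k_passages(query: str, passages: list[str], k: int = 3) -> list[str]:
    doc_vecs = np.stack([embed(p) for p in passages])
    q = embed(query)
    # Cosine similarity = dot product of L2-normalised vectors.
    doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    q /= np.linalg.norm(q)
    scores = doc_vecs @ q
    best = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in best]

print(top_k_passages("energy saving in 5G RAN",
                     ["5G energy features", "billing FAQ", "RAN power tuning guide"], k=2))
```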

Posted 6 days ago

Apply

0 years

0 Lacs

Chandigarh, India

On-site

Linkedin logo

Job Description: AI/ML Specialist

Overview: We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities: Develop and maintain web applications using Django and Flask frameworks. Design and implement RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation. Build and integrate APIs for AI/ML models into existing systems. Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn. Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases. Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization. Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. Ensure the scalability, performance, and reliability of applications and deployed models. Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Proficient in Python with a strong understanding of its ecosystem. Extensive experience with Django and Flask frameworks. Hands-on experience with AWS services for application deployment and management. Strong knowledge of Django Rest Framework (DRF) for building APIs. Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Experience with transformer architectures for NLP and advanced AI solutions. Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with MLOps practices for managing the machine learning lifecycle. Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders. (ref:hirist.tech)
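As a small illustration of the Flask-plus-PyTorch serving stack this role describes, the sketch below exposes a toy classifier behind a /predict endpoint. The model, its input size, and the commented weights path are assumptions for illustration, not a specific production setup.

```python
# Minimal sketch of serving a PyTorch model behind a Flask endpoint.
# The model here is a toy two-class classifier; in practice you would
# load trained weights (the path below is hypothetical).
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

model = torch.nn.Sequential(torch.nn.Linear(4, 2))  # placeholder model
# model.load_state_dict(torch.load("model.pt"))     # hypothetical weights file
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # expects a list of 4 floats
    with torch.no_grad():
        logits = model(torch.tensor([features], dtype=torch.float32))
        probs = torch.softmax(logits, dim=1).squeeze().tolist()
    return jsonify({"probabilities": probs})

if __name__ == "__main__":
    app.run(port=8000)
```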

Posted 6 days ago

Apply

4.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Linkedin logo

Salary - 10 to 25 LPA Title : Sr. Data Scientist/ML Engineer (4+ years & above) Required Technical Skillset Language : Python, PySpark Framework : Scikit-learn, TensorFlow, Keras, PyTorch Libraries : NumPy, Pandas, Matplotlib, SciPy, Scikit-learn - DataFrame, Numpy, boto3 Database : Relational Database(Postgres), NoSQL Database (MongoDB) Cloud : AWS cloud platforms Other Tools : Jenkins, Bitbucket, JIRA, Confluence A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems. The key responsibilities of a machine learning engineer includes : Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models. Model building which includes Designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning. Evaluating the model performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score. Deploying machine learning models in production environments and integrating them into existing systems using CI/CD Pipelines, AWS Sagemaker Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency. Working closely with software engineers, product managers and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization. Requirements And Skills Mathematics and Statistics : A strong foundation in mathematics and statistics is essential. They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms. Programming Skills : Should be proficient in programming languages such as Python. The candidate should be able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms. Machine Learning Techniques : Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning. Data Analysis and Visualization : Should be able to analyze and manipulate large data sets. The candidate should be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data. Deep Learning Frameworks : Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras and should be able to build and train deep neural networks for various applications. Big Data Technologies : A machine learning engineer should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases. They should be familiar with distributed computing and parallel processing to handle large data sets. Software Engineering : A machine learning engineer should have a good understanding of software engineering principles such as version control, testing, and debugging. 
They should be able to work with software development tools such as Git, Jenkins, and Docker. Communication and Collaboration : A machine learning engineer should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders. (ref:hirist.tech) Show more Show less
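The evaluation responsibilities named above (accuracy, precision, recall, F1) can be illustrated with a few lines of scikit-learn; the label arrays below are toy data standing in for real held-out predictions.

```python
# Minimal sketch of the model-evaluation step: report accuracy,
# precision, recall, and F1 on held-out labels with scikit-learn.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```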

Posted 6 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Role Overview We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving. Key Responsibilities Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring. Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems. Develop and manage scalable infrastructure using AWS, particularly AWS Sagemaker. Automate ML workflows using CI/CD best practices and tools. Ensure model reproducibility, governance, and performance tracking. Monitor deployed models for data drift, model decay, and performance metrics. Implement robust versioning and model registry systems. Apply security, performance, and compliance best practices across ML systems. Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities. Required Skills & Qualifications 4+ years of experience in Software Engineering or MLOps, preferably in a production environment. Proven experience with AWS services, especially AWS Sagemaker for model development and deployment. Working knowledge of AWS DataZone (preferred). Strong programming skills in Python, with exposure to R, Scala, or Apache Spark. Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes). Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools. Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline. Solid understanding of DevOps and cloud-native infrastructure practices. Excellent problem-solving skills and the ability to work collaboratively across teams. (ref:hirist.tech) Show more Show less
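As an illustration of the experiment tracking and model-registry practices this role covers, here is a minimal MLflow sketch. The experiment name, hyperparameters, and metric value are assumptions, and the training call is omitted; it is a sketch of the logging pattern, not a full pipeline.

```python
# Minimal sketch of experiment tracking and model logging with MLflow.
# The experiment name, parameters, and metric value are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, max_depth=5)
    # model.fit(X_train, y_train)     # training data omitted in this sketch
    mlflow.log_param("n_estimators", 100)
    mlflow.log_param("max_depth", 5)
    mlflow.log_metric("val_f1", 0.87)  # placeholder metric value
    mlflow.sklearn.log_model(model, "model")  # logs the model as a run artifact
```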

Posted 6 days ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title : Technical Delivery Manager Experience : 12+ years Location : Pune, Kharadi Employment Type : Full-time Job Summary We are seeking a seasoned Technical Delivery Manager with 12+ years of experience to lead and manage large-scale, complex programs. The ideal candidate will have a strong background in project and delivery management, with expertise in Agile methodologies, risk management, stakeholder communication, and cross-functional team leadership. Key Responsibilities Delivery & Execution: Oversee end-to-end project execution, ensuring alignment with business objectives, timelines, and quality standards. Agile & SCRUM Management: Drive Agile project delivery, coordinating across multiple teams and ensuring adherence to best practices. Risk & Dependency Management: Participate in design discussions to identify risks, dependencies, and mitigation strategies. Stakeholder Communication: Report and present program progress to senior management and executive leadership. Client Engagement: Lead customer presentations, articulations, and discussions, ensuring effective communication and alignment. Cross-Team Coordination: Collaborate with globally distributed teams to ensure seamless integration and delivery. Leadership & People Management: Guide, mentor, and motivate diverse teams, fostering a culture of innovation and excellence. Tool & Process Management: Utilize tools like JIRA, Confluence, MPP, and Smartsheet to drive project visibility and efficiency. Engineering & ALM Best Practices: Ensure adherence to engineering and Application Lifecycle Management (ALM) best practices for continuous improvement. Required Skills & Qualifications 12+ years of experience in IT project and delivery management. 5+ years of project management experience, preferably in Managed Services and Fixed-Price engagements. Proven experience in large-scale program implementation. Strong expertise in Agile/SCRUM methodologies and project execution. Excellent problem-solving, analytical, and risk management skills. Outstanding communication, articulation, and presentation skills. Experience in multi-team and cross-functional coordination, especially across different time zones. Good-to-Have Skills Exposure to AWS services (S3, Glue, Lambda, SNS, RDS MySQL, Redshift, Snowflake). Knowledge of Python, Jinja, Angular, APIs, Power BI, SageMaker, Flutter Dart. (ref:hirist.tech) Show more Show less

Posted 6 days ago

Apply

2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

We are looking for a highly skilled Generative AI Developer with expertise in Large Language Models (LLMs) to join our AI/ML innovation team. The ideal candidate will be responsible for building, fine-tuning, deploying, and optimizing generative AI models to solve complex real-world problems. You will collaborate with data scientists, machine learning engineers, product managers, and software developers to drive forward next-generation AI-powered solutions.

Responsibilities: Design and develop AI-powered applications using large language models (LLMs) such as GPT, LLaMA, Mistral, Claude, or similar. Fine-tune pre-trained LLMs for specific tasks (e.g., text summarization, Q&A systems, chatbots, semantic search). Build and integrate LLM-based APIs into products and systems. Optimize inference performance, latency, and throughput of LLMs for deployment at scale. Conduct prompt engineering and design strategies for prompt optimization and output consistency. Develop evaluation frameworks to benchmark model quality, response accuracy, safety, and bias. Manage training data pipelines and ensure data privacy, compliance, and quality standards. Experiment with open-source LLM frameworks and contribute to internal libraries and tools. Collaborate with MLOps teams to automate deployment, CI/CD pipelines, and monitoring of LLM solutions. Stay up to date with state-of-the-art advancements in generative AI, NLP, and foundation models.

Skills Required: LLMs & Transformers: Deep understanding of transformer-based architectures (e.g., GPT, BERT, T5, LLaMA, Falcon). Model Training/Fine-Tuning: Hands-on experience with training/fine-tuning large models using libraries such as Hugging Face Transformers, DeepSpeed, LoRA, PEFT. Prompt Engineering: Expertise in designing, testing, and refining prompts for specific tasks and outcomes. Python: Strong proficiency in Python with experience in ML and NLP libraries. Frameworks: Experience with PyTorch, TensorFlow, Hugging Face, LangChain, or similar frameworks. MLOps: Familiarity with tools like MLflow, Kubeflow, Airflow, or SageMaker for model lifecycle management. Data Handling: Experience with data pipelines, preprocessing, and working with structured and unstructured data.

Desirable Skills: Deployment: Knowledge of deploying LLMs on cloud platforms like AWS, GCP, Azure, or edge devices. Vector Databases: Experience with FAISS, Pinecone, Weaviate, or ChromaDB for semantic search applications. LLM APIs: Experience integrating with APIs like OpenAI, Cohere, Anthropic, Mistral, etc. Containerization: Docker, Kubernetes, and cloud-native services for scalable model deployment. Security & Ethics: Understanding of LLM security, hallucination handling, and responsible AI.

Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 2-4 years of experience in ML/NLP roles, with at least 1-2 years specifically focused on generative AI and LLMs. Prior experience working in a research or product-driven AI team is a plus. Strong communication skills to explain technical concepts and findings.

Soft Skills: Analytical thinker with a passion for solving complex problems. Team player who thrives in cross-functional settings. Self-driven, curious, and always eager to learn the latest advancements in AI. Ability to work independently and deliver high-quality solutions under tight deadlines. (ref:hirist.tech)
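To ground the prompt-engineering and LLM-integration duties above, here is a minimal Hugging Face transformers sketch. The gpt2 checkpoint is used only because it is small and public; the prompt template and example ticket are illustrative assumptions, and a real deployment would target a larger instruction-tuned model.

```python
# Minimal sketch of prompt templating plus generation with the
# Hugging Face `transformers` text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

PROMPT_TEMPLATE = (
    "Summarize the following support ticket in one sentence.\n"
    "Ticket: {ticket}\nSummary:"
)

def summarize(ticket: str) -> str:
    prompt = PROMPT_TEMPLATE.format(ticket=ticket)
    out = generator(prompt, max_new_tokens=40, do_sample=False)
    # The pipeline returns the prompt plus completion; strip the prompt.
    return out[0]["generated_text"][len(prompt):].strip()

print(summarize("The export button crashes the app on large files."))
```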

Posted 6 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Technical Project Manager IT management professional with 10+ years of Exp Responsibilities 5+ years of project management experience Delivery executive experience (Managed Services, Fixed prices) Delivery management exp in working on executing projects extensively using SCRUM methodology. Planning, monitoring and risk management for multiple data and data science programs. Participating in design discussions to capture risks & dependencies. Reporting & presenting program progress to senior management Customer presentation, articulations, and communication skills Co-ordinate and integrate with multiple teams in different time zones. Leading, guiding, managing, and motivating diverse team Should have handled large & complex program implementation. Knowledge of working on tools like JIRA, Confluence, MPP, Smartsheet etc. Knowledge of Engineering and ALM best practices Good To Have Knowledge of AWS S3, Glue, Lambda, SNS etc., Python, Jinja, Angular, APIs, PowerBI, Sagemaker, Flutter Dart, RDS MySQL, DB Redshift, Snowflake. (ref:hirist.tech) Show more Show less

Posted 6 days ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote

Linkedin logo

Overview Of Job Role: We are looking for a skilled and motivated DevOps Engineer to join our growing team. The ideal candidate will have expertise in AWS, CI/CD pipelines, and Terraform, with a passion for building and optimizing scalable, reliable, and secure infrastructure. This role involves close collaboration with development, QA, and operations teams to streamline deployment processes and enhance system reliability.

Roles & Responsibilities

Leadership & Strategy: Lead and mentor a team of DevOps engineers, fostering a culture of automation, innovation, and continuous improvement. Define and implement DevOps strategies aligned with business objectives to enhance scalability, security, and reliability. Collaborate with cross-functional teams, including software engineering, security, MLOps, and infrastructure teams, to drive DevOps best practices. Establish KPIs and performance metrics for DevOps operations, ensuring optimal system performance, cost efficiency, and high availability. Advocate for CPU throttling, auto-scaling, and workload optimization strategies to improve system efficiency and reduce costs. Drive MLOps adoption, integrating machine learning workflows into CI/CD pipelines and cloud infrastructure. Ensure compliance with ISO 27001 standards, implementing security controls and risk management measures.

Infrastructure & Automation: Oversee the design, implementation, and management of scalable, secure, and resilient infrastructure on AWS. Lead the adoption of Infrastructure as Code (IaC) using Terraform, CloudFormation, and configuration management tools like Ansible or Chef. Spearhead automation efforts for infrastructure provisioning, deployment, and monitoring to reduce manual overhead and improve efficiency. Ensure high availability and disaster recovery strategies, leveraging multi-region architectures and failover mechanisms. Manage Kubernetes (or AWS ECS/EKS) clusters, optimizing container orchestration for large-scale applications. Drive cost optimization initiatives, implementing intelligent cloud resource allocation strategies.

CI/CD & Observability: Architect and oversee CI/CD pipelines, ensuring seamless automation of application builds, testing, and deployments. Enhance observability and monitoring by implementing tools like CloudWatch, Prometheus, Grafana, ELK Stack, or Datadog. Develop robust logging, alerting, and anomaly detection mechanisms to ensure proactive issue resolution.

Security & Compliance (ISO 27001 Implementation): Lead the implementation and enforcement of ISO 27001 security standards, ensuring compliance with information security policies and regulatory requirements. Develop and maintain an Information Security Management System (ISMS) to align with ISO 27001 guidelines. Implement secure access controls, encryption, IAM policies, and network security measures to safeguard infrastructure. Conduct risk assessments, vulnerability management, and security audits to identify and mitigate threats. Ensure security best practices are embedded into all DevOps workflows, following DevSecOps principles. Work closely with auditors and compliance teams to maintain SOC2, GDPR, and other regulatory frameworks.

Skills and Qualifications: 5+ years of experience in DevOps, cloud infrastructure, and automation, with at least 3+ years in a managerial or leadership role. Proven experience managing AWS cloud infrastructure at scale, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation. Expertise in Terraform and Infrastructure as Code (IaC) principles. Strong background in CI/CD pipeline automation with tools like Jenkins, GitHub Actions, GitLab CI, or CircleCI. Hands-on experience with Docker and Kubernetes (or AWS ECS/EKS) for container orchestration. Experience in CPU throttling, auto-scaling, and performance optimization for cloud-based applications. Strong knowledge of Linux/Unix systems, shell scripting, and network configurations. Proven experience with ISO 27001 implementation, ISMS development, and security risk management. Familiarity with MLOps frameworks like Kubeflow, MLflow, or SageMaker, and integrating ML pipelines into DevOps workflows. Deep understanding of observability tools such as ELK Stack, Grafana, Prometheus, or Datadog. Strong stakeholder management, communication, and ability to collaborate across teams. Experience in regulatory compliance, including SOC2, ISO 27001, and GDPR.

Attributes: Strong interpersonal and communication skills, being an effective team player, able to work with individuals at all levels within the organization and to build remote relationships. Excellent prioritization skills, the ability to work well under pressure, and the ability to multi-task.

Qualification: Any technical degree; M.Tech (CS) or B.Tech (CS) will be preferred. (ref:hirist.tech)
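As a small illustration of the monitoring that feeds the auto-scaling and cost-optimization work described above, the sketch below pulls average CPU utilization for an Auto Scaling group from CloudWatch via boto3. The group name and the 70% threshold are assumptions, and AWS credentials are expected to come from the environment.

```python
# Minimal sketch: query CloudWatch for average EC2 CPU utilisation of an
# Auto Scaling group, as one input to a scaling or cost decision.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def average_cpu(asg_name: str, minutes: int = 30) -> float:
    now = datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": asg_name}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

if average_cpu("web-asg") > 70:  # hypothetical group name and threshold
    print("Sustained high CPU: consider scaling out or tuning workloads.")
```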

Posted 6 days ago

Apply

6.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Key Responsibilities: Design, develop, and optimize large-scale data pipelines using PySpark and Apache Spark. Build scalable and robust ETL workflows leveraging AWS services such as EMR, S3, Lambda, and Glue. Collaborate with data scientists, analysts, and other engineers to gather requirements and deliver clean, well-structured data solutions. Integrate data from various sources, ensuring high data quality, consistency, and reliability. Manage and schedule workflows using Apache Airflow. Work on ML model deployment pipelines using tools like SageMaker and Anaconda. Write efficient and optimized SQL queries for data processing and validation. Develop and maintain technical documentation for data pipelines and architecture. Participate in Agile ceremonies, sprint planning, and code reviews. Troubleshoot and resolve issues in production environments with minimal supervision.

Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 6-8 years of experience in data engineering with a strong focus on Python, PySpark, SQL, and AWS (EMR, EC2, S3, Lambda, Glue). Experience in developing and orchestrating pipelines using Apache Airflow. Familiarity with SageMaker for ML deployment and Anaconda for environment management. Proficiency in working with large datasets and optimizing Spark jobs. Experience in building data lakes and data warehouses on AWS. Strong understanding of data governance, data quality, and data lineage. Excellent documentation and communication skills. Comfortable working in a fast-paced Agile environment. Experience with Kafka or other real-time streaming platforms. Familiarity with DevOps practices and tools (e.g., Terraform, CloudFormation). Exposure to NoSQL databases such as DynamoDB or MongoDB. Knowledge of data security and compliance standards (GDPR, HIPAA).

What we offer: Work with cutting-edge technologies in a collaborative and innovative environment. Opportunity to influence large-scale data infrastructure. Competitive salary, benefits, and professional development support. Be part of a growing team solving real-world data challenges. (ref:hirist.tech)
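Below is a minimal PySpark sketch of the kind of ETL step this role describes (read raw events from S3, aggregate, and write partitioned Parquet back). The bucket paths and column names are illustrative assumptions, not a real schema.

```python
# Minimal PySpark ETL sketch: read raw events, aggregate daily counts,
# and write partitioned Parquet. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

events = spark.read.parquet("s3://raw-bucket/events/")  # hypothetical path

daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://curated-bucket/daily_event_counts/"))  # hypothetical path

spark.stop()
```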

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

CACI India, RMZ Nexity, Tower 30 4th Floor Survey No.83/1, Knowledge City Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India Req #1097 02 May 2025 CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intelligent professionals and are now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, the CACI India has grown considerably to become one of the UKs most well-respected Technology centres. About Data Platform The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focusses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate wide federated governance on data, and project environments across business domains. The goal is to empower multiple teams to create and manage high integrity data and data products that are analytics and AI ready, and consumed internally and externally. What does a Data Infrastructure Engineer do? A Data Infrastructure Engineer will be responsible to develop, maintain and monitor the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero trust, least privilege architecture, with a strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-useable solutions to reflect the business needs. Responsibilities Will Include Collaborating across CACI departments to develop and maintain the data platform Building infrastructure and data architectures in Cloud Formation, and SAM. Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, Sagemaker, Redshift, Aurora and Snowflake Building data processing and analytics pipelines as code, using python, SQL, PySpark, spark, CloudFormation, lambda, step functions, Apache Airflow Monitoring and reporting on the data platform performance, usage and security Designing and applying security and access control architectures to secure sensitive data You Will Have 3+ years of experience in a Data Engineering role. Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, Sagemaker, Aurora and Redshift. 
Experience administrating databases and data platforms Good coding discipline in terms of style, structure, versioning, documentation and unit tests Strong proficiency in Cloud Formation, Python and SQL Knowledge and experience of relational databases such as Postgres, Redshift Experience using Git for code versioning, and lifecycle management Experience operating to Agile principles and ceremonies Hands-on experience with CI/CD tools such as GitLab Strong problem-solving skills and ability to work independently or in a team environment. Excellent communication and collaboration skills. A keen eye for detail, and a passion for accuracy and correctness in numbers Whilst not essential, the following skills would also be useful: Experience using Jira, or other agile project management and issue tracking software Experience with Snowflake Experience with Spatial Data Processing More About The Opportunity The Data Engineer is an excellent opportunity, and CACI Services India reward their staff well with a competitive salary and impressive benefits package which includes: Learning: Budget for conferences, training courses and other materials Health Benefits: Family plan with 4 children and parents covered Future You: Matched pension and health care package We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events. CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing and we are supportive of Veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society. Other details Pay Type Salary Apply Now Show more Show less
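To illustrate the "pipelines as code" approach this role describes, here is a minimal Apache Airflow DAG (Airflow 2.x style). The DAG id, schedule, and task bodies are assumptions; in the platform described, tasks would typically trigger Glue, EMR, or Lambda work rather than print statements.

```python
# Minimal sketch of an Airflow DAG with two dependent Python tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and reshape the data")

with DAG(
    dag_id="example_data_pipeline",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+ style scheduling
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```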

Posted 6 days ago

Apply

8.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

Remote

Linkedin logo

Company Description Assent is the leading solution for supply chain sustainability tailored for the world’s top-tier, sustainability-driven manufacturers. Hidden risks riddle supply chains, many of which weren't built with sustainability in mind. That's where we step in. With insights from experts, Assent is the tool manufacturers trust for comprehensive sustainability. We are proud to announce that Assent has crossed the US$100M ARR milestone, granting us Centaur Status. This accomplishment, reached just 8 years following our Series A, makes us the first and only Certified B Corporation in North America's SaaS sustainability industry to celebrate this milestone. Our journey from $5 million to US$100M ARR in just eight years has been marked by significant growth and achievements. With our $350 million US funding led by Vista Equity Partners, we're poised for even greater expansion and are on the lookout for outstanding team members to join our mission. Hybrid Work Model At Assent, we proudly embrace a remote-first work model, valuing the flexibility and autonomy it provides our team. We also acknowledge the intangible benefits of occasional in-person workdays. For team members situated within 50 kms/31 miles of our five global offices in Ottawa, Eldoret, Penang, Columbus, Pune and Amsterdam, you can expect to come into the office one day a week. Similarly, those near our co-working spaces in Nairobi and Toronto are encouraged to work onsite once a month. Job Description We are seeking a Senior Data Scientist with deep expertise in Natural Language Processing (NLP) and Large Language Model (LLM) fine-tuning to join our AI and Machine Learning team. This role is ideal for a highly skilled individual with a PhD or Masters in Machine Learning, AI, or a related field, coupled with industry experience in developing and deploying NLP-driven AI solutions. As a Senior Data Scientist, you will lead the development, tuning, and maintenance of cutting-edge AI models, mentor junior data scientists, and drive innovation in AI-powered solutions. You will collaborate closely with cross-functional teams, transforming complex business challenges into intelligent, data-driven products and solutions. Additionally, you will play a key role in analyzing large-scale datasets, uncovering insights, and ensuring data-driven decision-making in our AI initiatives. The Senior Data Scientist is a data-oriented, out-of-the-box thinker who is passionate about data, machine learning, understanding the business, and driving business value. Lead the research, development, and fine-tuning of state-of-the-art LLMs and NLP models for real-world applications. Perform in-depth data analysis to extract actionable insights, improve model performance, and inform AI strategy. Design, implement, and evaluate LLM based systems to ensure model performance, efficiency, and scalability. Mentor and coach junior data scientists, fostering best practices in NLP, deep learning, and MLOps. Deploy and monitor models in production, ensuring robust performance, fairness, and explainability in AI applications. Stay ahead of advancements in NLP, generative AI, and ethical AI practices, incorporating them into our solutions. Ensure compliance with Responsible AI principles, aligning with industry standards and regulations such as the EU AI Act and Canada’s Voluntary Code of Conduct on Responsible AI. Collaborate with engineering and product teams to integrate AI-driven features into SaaS products. 
Be curious and not afraid to try unconventional ideas to find solutions to difficult problems Apply engineering principles to proactively identify issues, develop solutions, and recommend improvements Be self-motivated and highly proactive at exploring new technologies Find creative solutions to challenges involving data that is difficult to obtain, complex or ambiguous. Manage multiple concurrent projects, priorities and timelines Qualifications PhD (preferred) or Masters in Machine Learning, AI, NLP, or a related field, with a strong publication record in top-tier conferences and journals. Industry experience (2+ for PhD, 5+ for Masters) in building and deploying NLP/LLM solutions at scale. Proven ability to analyze large datasets, extract meaningful insights, and drive data-informed decision-making. Strong expertise in preparing data for fine-tuning and optimizing LLMs. Solid understanding of data engineering concepts, including data pipelines, feature engineering, and vector databases. Proficiency in deep learning frameworks (e.g., PyTorch, TensorFlow) and NLP libraries (e.g., Hugging Face Transformers, spaCy). Solid working knowledge of AWS systems and services; comfort working with SageMaker, Bedrock, EC2, S3, Lambda, Terraform. Familiar with MLOps best practices, including model versioning, monitoring, and CI/CD pipelines. Excellent organizational skills and ability to manage multiple priorities and timelines Additional Information Life at Assent Wellness: We believe that you and your family’s well being is important. As a result, we offer vacation time that increases with tenure, comprehensive benefits packages (details vary by country), life leave days and more. Financial Benefits: It’s not all about the money – well, it’s a little about the money. We understand that financial health is important and we offer a competitive base salary, a corporate bonus program, retirement savings options and more. Life at Assent: There is purpose beyond your work. We provide our team members with flexible work options, volunteer days and opportunities to get involved in corporate giving initiatives. Lifelong Learning: At Assent, curiosity is not only valued but encouraged. You will receive professional development days that are available to you the day you start. At Assent, we are committed to growing and sustaining an environment where our team members feel included, valued, and heard. Our diversity and equal opportunity practices are guided and championed by our Diversity and Inclusion Working Group and our Employee Resource Groups (ERGs). Our commitment to diversity, equity and inclusion includes recruiting and retaining team members from diverse backgrounds and experiences, and fostering a culture of belonging where all team members are included, treated with dignity and respect, promoted on their merits, and placed in positions to contribute to business success. If you require assistance or accommodation throughout any part of the interview and selection process, please contact talent@assent.com and we will be happy to help. Show more Show less
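As a small illustration of the fine-tuning data preparation this role emphasizes, the sketch below tokenizes labelled text with a Hugging Face tokenizer into padded, truncated tensors. The checkpoint name and the two example records are illustrative assumptions, not the team's actual data or model.

```python
# Minimal sketch of preparing labelled text for model fine-tuning:
# tokenize, truncate, and pad into fixed-length tensors.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

records = [  # toy records standing in for a real training set
    {"text": "Supplier declaration received and verified.", "label": 1},
    {"text": "Missing substance data for component X.", "label": 0},
]

encodings = tokenizer(
    [r["text"] for r in records],
    truncation=True,
    padding="max_length",
    max_length=64,
    return_tensors="pt",
)
labels = [r["label"] for r in records]
print(encodings["input_ids"].shape)  # (num_examples, 64)
```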

Posted 6 days ago

Apply

Featured Companies