
102 ML Deployment Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

3 - 8 Lacs

Bengaluru, Karnataka, India

Remote

We are seeking a skilled MLOps Engineer to join our team and streamline the deployment, monitoring, and management of machine learning models at scale. You will collaborate closely with data scientists, engineers, and DevOps to build robust, scalable ML infrastructure and CI/CD pipelines for AI/ML workflows.

Project Description: As an MLOps engineer / ML engineer, you will participate in various AI projects such as Demand Sensing and Forecasting, Price and Promotion Optimization, and others.

Tech Stack: Proficiency in Python. Competent knowledge of software development best practices. Strong understanding of data science concepts such as supervised and unsupervised learning, feature engineering and ETL processes, classical model types and neural network architectures, hyperparameter tuning, and model evaluation and selection. Proficiency with the relevant cloud services (AWS/GCP/Azure) for building end-to-end ML pipelines, e.g. GCP Vertex AI, BigQuery, Dataflow, Cloud SQL, Dataproc, Cloud Functions, Google Kubernetes Engine. Competent knowledge of the MLOps paradigm and practices. Experience with MLOps tools (or equivalent cloud services), including model and data versioning and experiment tracking (e.g., DVC, MLflow, Weights & Biases) and pipeline orchestration (e.g., Apache Airflow, Kubeflow, Dagster). Understanding of deployment strategies for different types of models and inference (batch/online). Knowledge of and experience with big data processing frameworks (e.g., Apache Spark, Apache Kafka, Apache Hadoop, Dask). Competent SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL). Experience in developing and integrating RESTful APIs for ML model serving (e.g., Flask, FastAPI). Experience with containerization technologies like Docker and orchestration tools (e.g., Kubernetes).

Nice to Have: Knowledge of monitoring and logging tools (e.g., Grafana, ELK Stack, or equivalent cloud services). Understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI) for automating the testing and deployment of machine learning models and applications. Experience with Cloud Identity and Access Management. Experience with Cloud Load Balancing. Knowledge of Infrastructure as Code (IaC) tools such as Terraform and Ansible. Experience with observability tools like Evidently, Arize, and Weights & Biases.

Perks & Benefits: Competitive salary and performance-based bonuses. Remote work flexibility / hybrid options. Continuous learning budget and GenAI certifications. Opportunity to work on cutting-edge AI projects. Dynamic and collaborative team environment.
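For illustration, the REST-based model serving the listing mentions (Flask/FastAPI) might look roughly like the sketch below; this is a minimal sketch, not the employer's actual service, and the model artifact, feature shape, and endpoint name are assumptions.

```python
# Minimal FastAPI sketch for online ML model serving (illustrative only).
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model


class PredictRequest(BaseModel):
    features: List[float]


@app.post("/predict")
def predict(req: PredictRequest):
    # Online (real-time) inference: score one feature vector per request.
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}
```

Such a service would typically be run with uvicorn, containerized with Docker, and deployed on Kubernetes, matching the rest of the stack the listing describes.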

Posted 5 days ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Chennai, Tamil Nadu, India

Remote

We are seeking a skilled MLOps Engineer to join our team and streamline the deployment, monitoring, and management of machine learning models at scale. You will collaborate closely with data scientists, engineers, and DevOps to build robust, scalable ML infrastructure and CI/CD pipelines for AI/ML workflows.

Project Description: As an MLOps engineer / ML engineer, you will participate in various AI projects such as Demand Sensing and Forecasting, Price and Promotion Optimization, and others.

Tech Stack: Proficiency in Python. Competent knowledge of software development best practices. Strong understanding of data science concepts such as supervised and unsupervised learning, feature engineering and ETL processes, classical model types and neural network architectures, hyperparameter tuning, and model evaluation and selection. Proficiency with the relevant cloud services (AWS/GCP/Azure) for building end-to-end ML pipelines, e.g. GCP Vertex AI, BigQuery, Dataflow, Cloud SQL, Dataproc, Cloud Functions, Google Kubernetes Engine. Competent knowledge of the MLOps paradigm and practices. Experience with MLOps tools (or equivalent cloud services), including model and data versioning and experiment tracking (e.g., DVC, MLflow, Weights & Biases) and pipeline orchestration (e.g., Apache Airflow, Kubeflow, Dagster). Understanding of deployment strategies for different types of models and inference (batch/online). Knowledge of and experience with big data processing frameworks (e.g., Apache Spark, Apache Kafka, Apache Hadoop, Dask). Competent SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL). Experience in developing and integrating RESTful APIs for ML model serving (e.g., Flask, FastAPI). Experience with containerization technologies like Docker and orchestration tools (e.g., Kubernetes).

Nice to Have: Knowledge of monitoring and logging tools (e.g., Grafana, ELK Stack, or equivalent cloud services). Understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI) for automating the testing and deployment of machine learning models and applications. Experience with Cloud Identity and Access Management. Experience with Cloud Load Balancing. Knowledge of Infrastructure as Code (IaC) tools such as Terraform and Ansible. Experience with observability tools like Evidently, Arize, and Weights & Biases.

Perks & Benefits: Competitive salary and performance-based bonuses. Remote work flexibility / hybrid options. Continuous learning budget and GenAI certifications. Opportunity to work on cutting-edge AI projects. Dynamic and collaborative team environment.
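As a brief illustration of the experiment-tracking tools this listing names (DVC, MLflow, Weights & Biases), here is a minimal MLflow sketch; the experiment name, parameters, and metric value are illustrative assumptions.

```python
# Minimal MLflow experiment-tracking sketch (illustrative only).
import mlflow

mlflow.set_experiment("demand-forecasting")

with mlflow.start_run():
    mlflow.log_param("model_type", "random_forest")
    mlflow.log_param("n_estimators", 200)
    # In a real pipeline this metric would come from evaluating the trained model.
    mlflow.log_metric("validation_mape", 0.12)
```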

Posted 5 days ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Hyderabad, Telangana, India

Remote

We are seeking a skilled MLOps Engineer to join our team and streamline the deployment, monitoring, and management of machine learning models at scale. You will collaborate closely with data scientists, engineers, and DevOps to build robust, scalable ML infrastructure and CI/CD pipelines for AI/ML workflows.

Project Description: As an MLOps engineer / ML engineer, you will participate in various AI projects such as Demand Sensing and Forecasting, Price and Promotion Optimization, and others.

Tech Stack: Proficiency in Python. Competent knowledge of software development best practices. Strong understanding of data science concepts such as supervised and unsupervised learning, feature engineering and ETL processes, classical model types and neural network architectures, hyperparameter tuning, and model evaluation and selection. Proficiency with the relevant cloud services (AWS/GCP/Azure) for building end-to-end ML pipelines, e.g. GCP Vertex AI, BigQuery, Dataflow, Cloud SQL, Dataproc, Cloud Functions, Google Kubernetes Engine. Competent knowledge of the MLOps paradigm and practices. Experience with MLOps tools (or equivalent cloud services), including model and data versioning and experiment tracking (e.g., DVC, MLflow, Weights & Biases) and pipeline orchestration (e.g., Apache Airflow, Kubeflow, Dagster). Understanding of deployment strategies for different types of models and inference (batch/online). Knowledge of and experience with big data processing frameworks (e.g., Apache Spark, Apache Kafka, Apache Hadoop, Dask). Competent SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL). Experience in developing and integrating RESTful APIs for ML model serving (e.g., Flask, FastAPI). Experience with containerization technologies like Docker and orchestration tools (e.g., Kubernetes).

Nice to Have: Knowledge of monitoring and logging tools (e.g., Grafana, ELK Stack, or equivalent cloud services). Understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI) for automating the testing and deployment of machine learning models and applications. Experience with Cloud Identity and Access Management. Experience with Cloud Load Balancing. Knowledge of Infrastructure as Code (IaC) tools such as Terraform and Ansible. Experience with observability tools like Evidently, Arize, and Weights & Biases.

Perks & Benefits: Competitive salary and performance-based bonuses. Remote work flexibility / hybrid options. Continuous learning budget and GenAI certifications. Opportunity to work on cutting-edge AI projects. Dynamic and collaborative team environment.
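Pipeline orchestration with Apache Airflow, one of the orchestrators the listing names, might look roughly like this minimal sketch; the DAG id, schedule, and task bodies are placeholders, not a prescribed design.

```python
# Minimal Airflow DAG sketch: a daily feature-extraction step followed by training.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features():
    pass  # e.g., read raw sales data and build features


def train_model():
    pass  # e.g., fit the forecasting model on the prepared features


with DAG(
    dag_id="demand_forecasting_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    extract >> train  # train runs only after extraction succeeds
```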

Posted 5 days ago

Apply

9.0 - 14.0 years

35 - 70 Lacs

Gurugram

Hybrid

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail, one of the world's most competitive markets with a deluge of multi-dimensional data, dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We are seeking a talented Engineering Manager with MLOps expertise to lead a team of engineers in developing products that help Retailers transform their Retail Media business in a way that helps them achieve maximum ad revenue and enables massive scale. As an Engineering Manager, you will play a pivotal role in designing and delivering high-quality software solutions. You will be responsible for leading a team, mentoring engineers, contributing to system architecture, and ensuring adherence to engineering best practices. Your technical expertise, leadership skills, and ability to drive results will be key to the success of our products.

What you will be doing: You will lead the charge in ensuring operational efficiency and delivering high-value solutions. You'll mentor and develop a high-performing team of Big Data and MLOps engineers, driving best practices in software development, data management, and model deployment. With a focus on robust technical design, you'll ensure solutions are secure, scalable, and efficient. Your role will involve hands-on development to tackle complex challenges, collaborating across teams to define requirements, and delivering innovative solutions. You'll keep stakeholders and senior management informed on progress, risks, and opportunities while staying ahead of advancements in AI/ML technologies and driving their application. With an agile mindset, you will overcome challenges and deliver impactful solutions that make a difference.

Technical Expertise: Proven experience in microservices architecture, with hands-on knowledge of Docker and Kubernetes for orchestration. Proficiency in MLOps and Machine Learning workflows using tools like Spark. Strong command of SQL and PySpark programming. Expertise in Big Data solutions such as Spark and Hive, with advanced Spark optimization and tuning skills. Hands-on experience with Big Data orchestrators like Airflow. Proficiency in Python programming, particularly with frameworks like FastAPI or equivalent API development tools. Experience in data engineering is an added advantage. Experience in unit testing, code quality assurance, and the use of Git or other version control systems.

Cloud and Infrastructure: Practical knowledge of cloud-based data stores, such as Redshift and BigQuery (preferred). Experience in cloud solution architecture, especially with GCP and Azure. Familiarity with GitLab CI/CD pipelines is a bonus.

Monitoring and Scalability: Solid understanding of logging, monitoring, and alerting systems for production-level big data pipelines. Prior experience with scalable architectures and distributed processing frameworks.

Soft Skills and Additional Plus Points: A collaborative approach to working within cross-functional teams. Ability to troubleshoot complex systems and provide innovative solutions. Familiarity with GitLab for CI/CD and infrastructure automation tools is an added advantage.

What you can expect from us: We won't just meet your expectations; we'll defy them. You'll enjoy the comprehensive rewards package you'd expect from a leading technology company, but also a degree of personal flexibility you might not expect, plus thoughtful perks like flexible working hours and your birthday off. You'll also benefit from an investment in cutting-edge technology that reflects our global ambition, but with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don't just talk about diversity and inclusion; we live it every day, with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you.

Our approach to Flexible Working: At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.
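To illustrate the PySpark and Spark-tuning skills this posting emphasizes, here is a minimal batch-aggregation sketch; the table paths, column names, and partition setting are illustrative assumptions, not dunnhumby's actual pipeline.

```python
# Minimal PySpark sketch: aggregate ad impressions per campaign per day.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("retail-media-aggregation")
    .config("spark.sql.shuffle.partitions", "200")  # tune to cluster size
    .getOrCreate()
)

impressions = spark.read.parquet("s3://bucket/impressions/")  # hypothetical path
daily = (
    impressions.groupBy("campaign_id", F.to_date("event_ts").alias("day"))
    .agg(F.count("*").alias("impressions"), F.sum("spend").alias("spend"))
    .cache()  # reused downstream, so keep it in memory
)
daily.write.mode("overwrite").partitionBy("day").parquet("s3://bucket/daily_agg/")
```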

Posted 6 days ago

Apply

3.0 - 6.0 years

10 - 18 Lacs

Bengaluru

Work from Office

Role & responsibilities: Design and manage end-to-end ML pipelines for deployment and monitoring. Automate model training, retraining, and deployment workflows. Implement CI/CD for ML models to ensure reliability and scalability. Build real-time dashboards and APIs for model outputs and business consumption. Optimize cloud resources (Azure Databricks, ADF, Azure ML) for performance and cost efficiency. Manage the end-to-end model lifecycle from development to deployment. Collaborate with data scientists, engineers, and business teams to operationalize ML solutions.

Preferred candidate profile: 3+ years of experience in MLOps / Data Science and model deployment. Strong skills in Python, PySpark, SQL, and automation frameworks. Hands-on experience with Azure ML, Databricks, and Azure Data Factory. Knowledge of containerization (Docker, Kubernetes) and model serving frameworks (MLflow, FastAPI, Flask). Retail or e-commerce industry experience required. Experience in end-to-end model development and deployment.
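The monitoring responsibility above can be illustrated with a simple, generic input-drift check using the Population Stability Index (PSI); this is one common approach, not a method prescribed by the posting, and the bin count and threshold are assumptions.

```python
# Simple PSI sketch for monitoring feature drift between training and live data.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a live feature distribution against its training distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


# A PSI above roughly 0.2 is a common rule of thumb for flagging drift and
# triggering an automated retraining workflow like the one described above.
```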

Posted 6 days ago

Apply

5.0 - 8.0 years

12 - 16 Lacs

Chennai

Work from Office

Role Purpose: The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do:
1. Demand generation through support in solution development
   a. Support Go-To-Market strategy
      i. Collaborate with sales, pre-sales & consulting teams to assist in creating solutions and propositions for proactive demand generation
      ii. Contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales
   b. Collaborate with different colleges and institutes for recruitment, joint research initiatives, and to provide data science courses
2. Revenue generation through building & operationalizing Machine Learning and Deep Learning solutions
   a. Develop Machine Learning / Deep Learning models for decision augmentation or automation solutions
   b. Collaborate with ML engineers, data engineers and IT to evaluate ML deployment options
   c. Integrate model performance management tools into the current business infrastructure
3. Team Management
   a. Resourcing
      i. Support the recruitment process to onboard the right resources for the team
   b. Talent Management
      i. Support onboarding and training for team members to enhance capability & effectiveness
      ii. Manage team attrition
   c. Performance Management
      i. Conduct timely performance reviews and provide constructive feedback to own direct reports
      ii. Be a role model to the team for the five habits
      iii. Ensure that Performance Nxt is followed for the entire team
   d. Employee Satisfaction and Engagement
      i. Lead and drive engagement initiatives for the team

Mandatory Skills: Python for Data Science. Experience: 5-8 Years.

Posted 6 days ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Chennai

Work from Office

Role Purpose: The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do:
1. Demand generation through support in solution development
   a. Support Go-To-Market strategy
      i. Contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales
   b. Collaborate with different colleges and institutes for research initiatives and to provide data science courses
2. Revenue generation through building & operationalizing Machine Learning and Deep Learning solutions
   a. Develop Machine Learning / Deep Learning models for decision augmentation or automation solutions
   b. Collaborate with ML engineers, data engineers and IT to evaluate ML deployment options
3. Team Management
   a. Talent Management
      i. Support onboarding and training to enhance capability & effectiveness

Mandatory Skills: Python for Data Science. Experience: 3-5 Years.

Posted 6 days ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune

Work from Office

MLOps Engineer (Exempt) – Enterprise AI/ML Organization

OVERVIEW: We are looking for an experienced MLOps Engineer to support our AI and ML initiatives, including GenAI platform development, deployment automation, and infrastructure optimization. You will play a critical role in building and maintaining scalable, secure, and observable systems that power scalable RAG solutions, model training platforms, and agentic AI workflows across the enterprise.

RESPONSIBILITIES: Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector databases, embedding and reranking models, governance and observability systems, and guardrails). Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform). Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute. Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas). Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings). Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production. Implement security best practices including secrets management, model access control, data encryption, and audit logging for AI pipelines. Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves: 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments. Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management. Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.). Proficiency in scripting languages like Python and Bash (Go or similar is a nice plus). Experience with monitoring, logging, and alerting systems for AI/ML workloads. Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes: Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines. Familiarity with prompt engineering, model fine-tuning, and inference serving. Experience with secure AI deployment and compliance frameworks.

Interested candidates can share their updated CV at shashwat.pa@peoplefy.com or reach out at +91 8660547469.
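The retrieval step of the RAG systems this role supports can be sketched in a self-contained way: embed documents, then return the nearest ones to a query by cosine similarity. The embedding function below is a toy stand-in; a real deployment would use an embedding model and a vector database such as pgvector or Pinecone, as the listing mentions.

```python
# Minimal, dependency-light sketch of RAG-style vector retrieval.
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy embedding: hashed bag of words, purely for illustration.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)


docs = ["refund policy for sellers", "gpu quota limits", "how to rotate api keys"]
doc_vecs = np.stack([embed(d) for d in docs])


def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vecs @ embed(query)  # cosine similarity (vectors are unit norm)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]


print(retrieve("api key rotation"))  # retrieved chunks would then be passed to the LLM
```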

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Remote

Role & responsibilities: Understand workflows/processes and devise strategies for the areas applicable to AI-based implementation. Translate eLearning requirements into AI-driven automation strategies. Define data requirements and provide datasets for AI model training specific to eLearning. Design and maintain data pipelines (Azure Data Factory, Databricks, Synapse Analytics) to ensure clean, structured content for training. Apply governance frameworks (Azure Policy, Purview, DLP) to ensure responsible AI practices. Ensure fairness and transparency in AI models using Fairlearn and InterpretML. Develop and optimize AI models for automating eLearning processes, including interactive storyboards, content creation/development, and adaptive/customised learning paths. Leverage Azure Cognitive Services, Machine Learning Studio, and Azure OpenAI for scalable eLearning solutions. Use Adobe and other AI tools for automating audio narration, translation, video editing, and image creation for eLearning modules. Implement AI-driven workflows for converting text-based content into engaging eLearning assets. Design guardrails for ethical AI use in eLearning, ensuring compliance with GDPR and ISO 27001. Implement secure environments with Azure Key Vault, Security Center, and RBAC. Deploy AI models using Azure Machine Learning Pipelines and monitor performance. Set up CI/CD workflows for continuous optimization and scaling of AI solutions. Integrate AI into LMS and CMS platforms, primarily through APIs and custom integrations, with extensibility in mind for other platforms such as project management tools and dashboards. Build AI-powered plugins for automating course creation, learner analytics, and feedback. Document AI workflows and provide training, architecture diagrams, and code snippets for team members.

Preferred candidate profile: At least 5-8 years of experience deploying Machine Learning and Generative AI models into production with operational support. Advanced knowledge of Machine Learning and LLMs (e.g., GPT, Llama), with experience in LangChain, RAG, and model tuning. Proficient in MLOps tools (e.g., MLflow) for lifecycle management of machine learning models. Strong programming skills in Python, JavaScript, and RESTful APIs, with clean, efficient coding for AI assistant and platform integrations. Experience with cloud computing platforms (AWS, Google Cloud, Azure) and cloud-based application deployment for model scaling. Deep knowledge of AI, NLP, and Conversational AI platforms (e.g., Bot Framework, PVA, Cognigy). Proficient in the Azure AI suite, including Cognitive Services, Machine Learning Studio, and other tools. Experience with Fairlearn, InterpretML, and adherence to GDPR, HIPAA, and ISO 27001 standards. Exposure to multimedia automation tools (e.g., Adobe AI tools) for video/image editing workflows. Exposure to eLearning development tools and processes.

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Mandatory key skills:
Python Programming – strong proficiency is essential for building and deploying ML models.
Cloud Platforms (Azure, GCP, AWS) – hands-on experience with at least one major cloud provider.
Linux & Docker – solid understanding of containerization and operating systems for scalable deployment.
MLOps Tools (e.g., Azure ML) – experience in deploying and managing machine learning models in production.

Posted 1 week ago

Apply

6.0 - 11.0 years

30 - 45 Lacs

Hyderabad, Pune

Work from Office

Hello Candidate, greetings from Hungry Bird IT Consulting Services Pvt Ltd. We are hiring a Senior AI Infrastructure Management Engineer for our client.

Job Title: Senior AI Infrastructure Management Engineer. Location: Hyderabad. Job Type: Full-Time. Work Mode: Hybrid. Experience Required: 6+ Years.

Job Summary: We are seeking a highly skilled Senior AI Infrastructure Management Engineer with expertise in Azure, AWS, and AI/ML deployment environments. This role demands deep technical knowledge of Linux, DevOps practices, cloud architecture, and AI/ML operations (MLOps/AIOps). The ideal candidate will be responsible for architecting, deploying, and maintaining scalable and secure infrastructure for enterprise AI applications.

Key Responsibilities:
Linux System Expertise: Manage and optimize Linux systems (CentOS, Ubuntu, Red Hat). Perform kernel tuning, file system configuration, and network optimization. Develop shell scripts for automation and system management.
Cloud Infrastructure (AWS & Azure): Design and implement secure, scalable cloud architectures on AWS and Azure. Use services like EC2, S3, Lambda, Azure VMs, Blob Storage, and Functions. Manage hybrid and multi-cloud environments and ensure seamless integration.
Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, CloudFormation, or similar tools. Maintain infrastructure versioning and ensure traceability of changes. Enforce DevSecOps best practices and secure configurations.
AI/ML Infrastructure Management: Deploy and manage cloud infrastructure for AI/ML workloads, including GPUs. Scale resources (GPU/CPU) for training and inference workloads. Deploy AI/ML apps using Docker and Kubernetes. Ensure high availability, performance, and reliability of AI applications. Work on MLOps/AIOps pipelines for model deployment and monitoring.

Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 6+ years of experience in Infrastructure/Cloud/DevOps roles. Strong experience with AWS, Azure, and Linux systems. Experience in AI/ML infrastructure setup and management. Proficient in scripting: Python, Bash, PowerShell. Hands-on with Kubernetes, Docker, and cloud-native services. Experience with DevSecOps principles and CI/CD tools. Certifications (preferred): AWS Solutions Architect Associate / Cloud Practitioner, Azure DevOps Engineer / Administrator, Certified Kubernetes Administrator (CKA).

Preferred Skills: Experience with GPU cluster management for AI workloads. Strong knowledge of cloud security and compliance. Familiarity with real-time monitoring and logging tools. Exposure to modern data stacks and AI lifecycle management.

What We Offer: Work on high-impact, global AI infrastructure projects. Access to continuous learning via online platforms and sponsored certifications. Participation in Tech Talks, Hackathons, and R&D initiatives. Comprehensive benefits: health insurance, retirement plans, flexible hours. A supportive work environment promoting innovation and personal growth.

Interested candidates can share their CV with us or reach us at aradhana@hungrybird.in. PLEASE MENTION THE RELEVANT POSITION IN THE SUBJECT LINE OF THE EMAIL. Example: KRISHNA, HR MANAGER, 7 YEARS, 20 DAYS NOTICE. Please include: Name, Position applying for, Total experience, Notice period, Current Salary, Expected Salary.

Thanks and Regards, Aradhana, +91 9959417171, aradhana@hungrybird.in
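As a small example of the automation scripting this role involves, the sketch below uses boto3 to inventory GPU instances in an AWS account; the region and instance-type filters are illustrative assumptions.

```python
# Sketch: list GPU-class EC2 instances and their states with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

resp = ec2.describe_instances(
    Filters=[{"Name": "instance-type", "Values": ["p3.*", "g5.*"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["InstanceType"], inst["State"]["Name"])
```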

Posted 1 week ago

Apply

9.0 - 14.0 years

40 - 65 Lacs

Gurugram

Hybrid

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail, one of the world's most competitive markets with a deluge of multi-dimensional data, dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We are seeking a talented Engineering Manager with MLOps expertise to lead a team of engineers in developing products that help Retailers transform their Retail Media business in a way that helps them achieve maximum ad revenue and enables massive scale. As an Engineering Manager, you will play a pivotal role in designing and delivering high-quality software solutions. You will be responsible for leading a team, mentoring engineers, contributing to system architecture, and ensuring adherence to engineering best practices. Your technical expertise, leadership skills, and ability to drive results will be key to the success of our products.

What you will be doing: You will lead the charge in ensuring operational efficiency and delivering high-value solutions. You'll mentor and develop a high-performing team of Big Data and MLOps engineers, driving best practices in software development, data management, and model deployment. With a focus on robust technical design, you'll ensure solutions are secure, scalable, and efficient. Your role will involve hands-on development to tackle complex challenges, collaborating across teams to define requirements, and delivering innovative solutions. You'll keep stakeholders and senior management informed on progress, risks, and opportunities while staying ahead of advancements in AI/ML technologies and driving their application. With an agile mindset, you will overcome challenges and deliver impactful solutions that make a difference.

Technical Expertise: Proven experience in microservices architecture, with hands-on knowledge of Docker and Kubernetes for orchestration. Proficiency in MLOps and Machine Learning workflows using tools like Spark. Strong command of SQL and PySpark programming. Expertise in Big Data solutions such as Spark and Hive, with advanced Spark optimization and tuning skills. Hands-on experience with Big Data orchestrators like Airflow. Proficiency in Python programming, particularly with frameworks like FastAPI or equivalent API development tools. Experience in data engineering is an added advantage. Experience in unit testing, code quality assurance, and the use of Git or other version control systems.

Cloud and Infrastructure: Practical knowledge of cloud-based data stores, such as Redshift and BigQuery (preferred). Experience in cloud solution architecture, especially with GCP and Azure. Familiarity with GitLab CI/CD pipelines is a bonus.

Monitoring and Scalability: Solid understanding of logging, monitoring, and alerting systems for production-level big data pipelines. Prior experience with scalable architectures and distributed processing frameworks.

Soft Skills and Additional Plus Points: A collaborative approach to working within cross-functional teams. Ability to troubleshoot complex systems and provide innovative solutions. Familiarity with GitLab for CI/CD and infrastructure automation tools is an added advantage.

What you can expect from us: We won't just meet your expectations; we'll defy them. You'll enjoy the comprehensive rewards package you'd expect from a leading technology company, but also a degree of personal flexibility you might not expect, plus thoughtful perks like flexible working hours and your birthday off. You'll also benefit from an investment in cutting-edge technology that reflects our global ambition, but with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don't just talk about diversity and inclusion; we live it every day, with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you.

Our approach to Flexible Working: At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

Posted 1 week ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you: Be part of a global company and deliver solutions at enterprise scale. Collaborate with a hands-on, technically strong team (including leadership). Solve high-complexity, high-impact problems end-to-end. Build, test, deploy, and maintain production-ready pipelines from ideation through deployment.

Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production. Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring). Lead critical stages of the data engineering lifecycle, including: end-to-end delivery of complex extraction, transformation, and ML deployment projects; scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS); designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration; implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback); writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage). Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts. Define and evolve platform standards and best practices for code, testing, and deployment. Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs. Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines.

Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs. Hands-on experience with task queues and orchestration: Celery, Redis, Airflow. Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch. Containerization and orchestration. Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints). Proficient in writing tests (unit, integration, load) and enforcing high coverage. Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines. Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB). Strong debugging, performance tuning, and automation skills. Openness to evaluate and adopt emerging tools and languages as needed.

Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field. 2-6 years of relevant experience in data engineering, automation, or ML deployment. Prior contributions on GitHub, technical blogs, or open-source projects. Basic familiarity with GenAI model integration (calling LLM or embedding APIs).

What's In It For You. Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People. Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
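The Celery/Redis task queues mentioned in the responsibilities can be illustrated with a minimal worker task; the broker URL, task name, and retry policy are illustrative assumptions rather than S&P Global's actual configuration.

```python
# Minimal Celery sketch: an extraction task with retries, using Redis as the broker.
from celery import Celery

app = Celery("extraction", broker="redis://localhost:6379/0")


@app.task(bind=True, max_retries=3)
def extract_document(self, url: str) -> dict:
    try:
        # Placeholder for the real fetch-and-parse logic.
        return {"url": url, "status": "extracted"}
    except Exception as exc:
        # Retry with exponential backoff instead of dropping the job.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
```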

Posted 1 week ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Chennai

Hybrid

What you will do: ACV's Machine Learning (ML) team is looking to grow its MLOps team. Multiple ACV operations and product teams rely on the ML team's solutions. Current deployments drive opportunities in the marketplace, in operations, and in sales, to name a few. As ACV has experienced hyper-growth over the past few years, the volume, variety, and velocity of these deployments have grown considerably, and so have the training, deployment, and monitoring needs of the ML team as we've gained traction. MLOps is a critical function that helps us continue to deliver value to our partners and our customers.

Successful candidates will demonstrate excellent skill and maturity, be self-motivated as well as team-oriented, and have the ability to support the development and implementation of end-to-end ML-enabled software solutions to meet the needs of their stakeholders. Those who will excel in this role will be those who listen with an ear to the overarching goal, not just the immediate concern that started the query. They will be able to show that their recommendations are contextually grounded in an understanding of the practical problem, the data, and theory, as well as what product and software solutions are feasible and desirable.

The core responsibilities of this role are: Working with fellow machine learning engineers to build, automate, deploy, and monitor ML applications. Developing data pipelines that feed ML models. Deploying new ML models into production. Building REST APIs to serve ML model predictions. Monitoring the performance of models in production.

Required Qualifications: Graduate education in a computationally intensive domain or equivalent work experience. 9+ years of prior relevant work or lab experience in ML projects/research. Advanced proficiency with Python, SQL, etc. Experience with building and deploying REST APIs (FastAPI, Flask). Experience with distributed caching technologies (Redis). Experience with real-time data streaming and processing (Kafka). Experience with cloud services (AWS/GCP), Kubernetes, Docker, and CI/CD.

Preferred Qualifications: Experience with MLOps-specific tooling like Vertex AI, Ray, Feast, Kubeflow, or MLflow is a plus. Experience with building data pipelines. Experience with training ML models.
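The distributed-caching requirement above (Redis) often shows up as caching model predictions so repeated requests skip re-scoring; the sketch below assumes that pattern, with an illustrative key format and TTL.

```python
# Sketch: Redis-backed cache in front of a model's predict call.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)


def cached_predict(vehicle_id: str, features: list[float], model) -> dict:
    key = f"pred:{vehicle_id}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)  # cache hit: return the stored prediction
    result = {"vehicle_id": vehicle_id, "price": float(model.predict([features])[0])}
    cache.setex(key, 300, json.dumps(result))  # expire after 5 minutes
    return result
```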

Posted 1 week ago

Apply

8.0 - 13.0 years

16 - 20 Lacs

Pune

Work from Office

Project description: A new settlements and confirmation system for FX trades.

Responsibilities: We are seeking an AI/ML Tech Engineer with a minimum of 8+ years of experience and a strong background in machine learning, data science, and software engineering. As a Machine Learning Engineer, you will develop and deploy machine learning models, work with large datasets, and collaborate with cross-functional teams to solve real-world problems.

Skills – Must have: Proficient in building ML (Machine Learning) and NLP (Natural Language Processing) solutions using common ML libraries and frameworks. Proficient with the Python language and experienced with ML toolkits such as TensorFlow, PyTorch, Keras, and scikit-learn. Theoretical understanding of statistical models such as regression and clustering, and ML algorithms such as decision trees, random forests, and neural networks. Strong understanding of cloud computing and cloud AI services. Experience in deploying AI/ML models in production environments. Experience in data analytics, feature creation, model selection and ensemble methods, performance metrics, and visualization. Experience working with large datasets and distributed computing systems. Experience in fine-tuning DL models, including LLMs and SLMs. Knowledge of large language models from OpenAI such as GPT-3.5, GPT-4, Codex, etc. Experience with vector stores and RAG pipelines. Proficient with data-modelling tools. Proficient with multiple ML deployment strategies, including static and dynamic. Excellent knowledge of CI/CD pipelines for ML algorithms, training, and prediction pipelines. Experience in translating ML-based outcomes into business-digestible insights. Excellent communication skills and experience in managing stakeholders at various levels.

Nice to have: Knowledge of MLOps practices such as continuous integration and continuous deployment. Knowledge of full-stack development. Knowledge of Databricks and Data Mesh. Knowledge of ETL (Extract, Transform, Load) pipelines. Familiarity with agile methodologies, such as Scrum or Kanban, and project management tools such as JIRA/GitLab. Experience with frameworks like Flask/Django. Knowledge of Docker containers.
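As a compact illustration of the classical ML skills listed (random forests, model evaluation), here is a scikit-learn sketch on synthetic data; it is generic, not tied to the FX settlements system described above.

```python
# Train and evaluate a random forest on synthetic classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("F1 score:", f1_score(y_test, model.predict(X_test)))
```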

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do:
1. Demand generation through support in solution development
   a. Support Go-To-Market strategy
      i. Collaborate with sales, pre-sales & consulting teams to assist in creating solutions and propositions for proactive demand generation
      ii. Contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales
   b. Collaborate with different colleges and institutes for recruitment, joint research initiatives, and to provide data science courses
2. Revenue generation through building & operationalizing Machine Learning and Deep Learning solutions
   a. Develop Machine Learning / Deep Learning models for decision augmentation or automation solutions
   b. Collaborate with ML engineers, data engineers and IT to evaluate ML deployment options
   c. Integrate model performance management tools into the current business infrastructure
3. Team Management
   a. Resourcing
      i. Support the recruitment process to onboard the right resources for the team
   b. Talent Management
      i. Support onboarding and training for team members to enhance capability & effectiveness
      ii. Manage team attrition
   c. Performance Management
      i. Conduct timely performance reviews and provide constructive feedback to own direct reports
      ii. Be a role model to the team for the five habits
      iii. Ensure that Performance Nxt is followed for the entire team
   d. Employee Satisfaction and Engagement
      i. Lead and drive engagement initiatives for the team

Mandatory Skills: Data Science. Experience: 5-8 Years.

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Pune, Bengaluru

Work from Office

Greetings from BMW TechWorks!

Experience: 3 to 8 years. Location: Bangalore, Pune. Notice period: immediate to 60 days.

Job Description: We are seeking a skilled Software Engineer with expertise in AWS cloud technologies to design, develop, and deploy scalable applications using core AWS services. Familiarity with computer vision/deep learning is a plus and will enhance your ability to contribute to cutting-edge projects in automated driving technologies.

Key Responsibilities: Design, develop, and maintain scalable, high-performance applications using AWS services such as EC2, S3, Lambda, RDS, DynamoDB, Batch, and ECS/EKS. Monitor and optimize application performance using AWS tools like CloudWatch and X-Ray. Integrate microservices, RESTful APIs, and serverless architectures into applications. Collaborate with product managers and DevOps engineers to deliver end-to-end solutions.

Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). 5+ years of experience as a Software Engineer. Strong proficiency in AWS core services (e.g., EC2, S3, Lambda, RDS, DynamoDB, Batch, CloudFormation). Experience with programming languages such as Python and Java. Knowledge of cloud architecture principles, microservices, and serverless computing. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes or ECS. Understanding of security best practices for cloud-based applications (IAM, VPC). Experience with CI/CD tools and version control systems (e.g., Git, AWS CodePipeline). Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment. Excellent communication and teamwork skills.

Nice-to-Have Skills: Experience with computer vision technologies (e.g., OpenCV, TensorFlow, PyTorch). Familiarity with AWS AI/ML services such as SageMaker. Knowledge of machine learning pipelines and integrating computer vision models into production environments.
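The serverless pattern named in the listing (AWS Lambda with S3) can be sketched as a minimal handler; the event parsing follows the standard S3 notification format, while the inference step is a placeholder assumption.

```python
# Minimal AWS Lambda handler sketch triggered by an S3 object-created event.
import json
import urllib.parse


def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    # Placeholder: download the object from S3 and run the vision model here.
    return {"statusCode": 200, "body": json.dumps({"bucket": bucket, "key": key})}
```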

Posted 1 week ago

Apply

8.0 - 10.0 years

12 - 17 Lacs

Pune

Work from Office

Role Purpose: The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do:
1. Demand generation through support in solution development
   a. Support Go-To-Market strategy
      i. Collaborate with sales, pre-sales & consulting teams to assist in creating solutions and propositions for proactive demand generation
      ii. Contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales
   b. Collaborate with different colleges and institutes for recruitment, joint research initiatives, and to provide data science courses
2. Revenue generation through building & operationalizing Machine Learning and Deep Learning solutions
   a. Develop Machine Learning / Deep Learning models for decision augmentation or automation solutions
   b. Collaborate with ML engineers, data engineers and IT to evaluate ML deployment options
   c. Integrate model performance management tools into the current business infrastructure
3. Team Management
   a. Resourcing
      i. Support the recruitment process to onboard the right resources for the team
   b. Talent Management
      i. Support onboarding and training for team members to enhance capability & effectiveness
      ii. Manage team attrition
   c. Performance Management
      i. Conduct timely performance reviews and provide constructive feedback to own direct reports
      ii. Be a role model to the team for the five habits
      iii. Ensure that Performance Nxt is followed for the entire team
   d. Employee Satisfaction and Engagement
      i. Lead and drive engagement initiatives for the team

Mandatory Skills: Python for Data Science. Experience: 8-10 Years.

Posted 1 week ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Pune

Work from Office

Role Purpose: The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do:
1. Demand generation through support in solution development
   a. Support Go-To-Market strategy
      i. Contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales
   b. Collaborate with different colleges and institutes for research initiatives and to provide data science courses
2. Revenue generation through building & operationalizing Machine Learning and Deep Learning solutions
   a. Develop Machine Learning / Deep Learning models for decision augmentation or automation solutions
   b. Collaborate with ML engineers, data engineers and IT to evaluate ML deployment options
3. Team Management
   a. Talent Management
      i. Support onboarding and training to enhance capability & effectiveness

Mandatory Skills: Data Analysis. Experience: 3-5 Years.

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do:
1. Demand generation through support in solution development
   a. Support Go-To-Market strategy
      i. Collaborate with sales, pre-sales & consulting teams to assist in creating solutions and propositions for proactive demand generation
      ii. Contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales
   b. Collaborate with different colleges and institutes for recruitment, joint research initiatives, and to provide data science courses
2. Revenue generation through building & operationalizing Machine Learning and Deep Learning solutions
   a. Develop Machine Learning / Deep Learning models for decision augmentation or automation solutions
   b. Collaborate with ML engineers, data engineers and IT to evaluate ML deployment options
   c. Integrate model performance management tools into the current business infrastructure
3. Team Management
   a. Resourcing
      i. Support the recruitment process to onboard the right resources for the team
   b. Talent Management
      i. Support onboarding and training for team members to enhance capability & effectiveness
      ii. Manage team attrition
   c. Performance Management
      i. Conduct timely performance reviews and provide constructive feedback to own direct reports
      ii. Be a role model to the team for the five habits
      iii. Ensure that Performance Nxt is followed for the entire team
   d. Employee Satisfaction and Engagement
      i. Lead and drive engagement initiatives for the team

Mandatory Skills: Data Analysis. Experience: 5-8 Years.

Posted 1 week ago

Apply

8.0 - 13.0 years

0 - 0 Lacs

Chennai

Work from Office

Role & responsibilities: Be an expert in Python, with solid working knowledge of at least one Python web framework such as Django or Flask. Good familiarity with ORM (Object Relational Mapper) libraries. Ability to integrate multiple data sources and databases into one system. Understanding of the threading limitations of Python and multi-process architecture. Good understanding of server-side templating languages such as Jinja2, Mako, etc. Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3. Understanding of accessibility and security compliance. Good knowledge of user authentication and authorization between multiple systems, servers, and environments. Familiarity with event-driven programming in Python. Understanding of the differences between multiple delivery platforms, such as mobile vs desktop, and optimizing output to match the specific platform. Able to create database schemas that represent and support business processes. Proficient understanding of code versioning tools such as Git, Mercurial, or SVN.

Preferred Qualifications: Degree in Computer Science, Engineering, or a related field. 6 years of experience, with 5+ years bringing to life web applications, mobile applications, and machine learning frameworks. Hands-on work experience with: Python, R, Django, Flask, JavaScript (ES6), React, Node.js, MongoDB, Elasticsearch, Azure, Docker, Kubernetes, Microservices. Good knowledge of caching technologies like Redis and queues like Kafka, SQS, MQ, etc. Knowledge of Big Data and Hadoop would be a plus. You should be a creative problem-solver who demonstrates clear and thoughtful approaches to challenging technical problems that solve real business needs.
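Several of the skills above (a Python web framework, server-side Jinja2 templating, an ORM) come together in a small Flask sketch like the one below; the model, route, and template names are illustrative assumptions.

```python
# Minimal Flask + SQLAlchemy sketch: an ORM model rendered via a Jinja2 template.
from flask import Flask, render_template
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db = SQLAlchemy(app)


class Product(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)


@app.route("/products")
def products():
    # "products.html" would be a Jinja2 template iterating over the passed list.
    return render_template("products.html", products=Product.query.all())
```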

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Accel portfolio company. Position: Senior AI Engineer. Location: Bangalore (preferred) or Remote.

Role Overview: We're looking for an exceptional AI/ML Technical Lead to spearhead the development of our advanced AI shopping assistant. You'll be responsible for building and scaling our AI capabilities while working directly with the founding team.

Key Responsibilities: Architect and develop our core AI shopping assistant. Lead the implementation of natural language processing for shopping interactions. Design and implement recommendation systems for products and deals. Develop AI models for price optimization and reward maximization. Work with data providers and embed data into RAG vector stores.

Requirements: 5-10 years of experience. Strong expertise in NLP, recommendation systems, and ML deployment. Experience with large language models (LLMs) and conversational AI. Experience in building RAG systems. Strong programming skills in Python and modern AI frameworks. Experience with cloud infrastructure (AWS/GCP) and MLOps.

Technical Skills: Deep Learning: PyTorch/TensorFlow. NLP: Transformers, LLMs, BERT/GPT models. MLOps: model deployment, monitoring, and scaling. Cloud: AWS/GCP AI services. Version Control: Git.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

10 - 15 Lacs

Gurugram

Work from Office

Hiring a Senior GenAI Engineer with 7-12 years of experience in Python, Machine Learning, and Large Language Models (LLMs) for a 6-month engagement based in Gurugram. This hands-on role involves building intelligent systems using LangChain and RAG, developing agent workflows, and defining technical roadmaps. The ideal candidate will be proficient in LLM architecture, prompt engineering, vector databases, and cloud platforms (AWS, Azure, GCP). The position demands strong collaboration skills, a system design mindset, and a focus on production-grade AI/ML solutions.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Kolkata

Work from Office

We're Hiring | Python Developer – AI/ML (OCR/NLP)

Are you passionate about building intelligent systems that solve real-world challenges? Join our team and work on cutting-edge AI/ML solutions with expertise in Python, OCR/NLP, and MLOps.

Role: Python Developer – AI/ML (OCR/NLP). Experience: 4-5 years. Location: [Insert Location / Remote / Hybrid].

What you'll work on: Developing Python-based applications and microservices. Building and deploying OCR/NLP solutions (OpenCV, Tesseract, spaCy, Hugging Face). Integrating AI/ML models (TensorFlow, PyTorch, scikit-learn) into production. Working with APIs, Flask/Django, Docker/Kubernetes, and cloud platforms (AWS/Azure/GCP).

What we're looking for: 4-5 years of Python development experience. Hands-on experience with OCR/NLP and AI/ML frameworks. Familiarity with MLOps, CI/CD, Git, and databases (PostgreSQL, MySQL, MongoDB). Strong problem-solving and collaboration skills.

If you're excited about working with unstructured data, building scalable AI systems, and collaborating with a dynamic team, we'd love to hear from you!
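The OCR stack named in the posting (OpenCV plus Tesseract) can be illustrated with a short sketch; the file path and the Otsu-threshold preprocessing step are illustrative assumptions.

```python
# Minimal OCR sketch: preprocess a scanned page with OpenCV, then read it with Tesseract.
import cv2
import pytesseract

image = cv2.imread("invoice.png")  # hypothetical scanned document
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Light binarization usually improves Tesseract accuracy on scans.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(pytesseract.image_to_string(binary))
```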

Posted 3 weeks ago

Apply

2.0 - 3.0 years

3 - 4 Lacs

Hyderabad

Work from Office

We are hiring an AI/ML Engineer (6-month contract) to develop, deploy, and optimize ML/DL models. Work with TensorFlow, PyTorch, Hugging Face, and cloud platforms. Apply MLOps, explore LLMs & generative AI, and drive innovation.

Posted 3 weeks ago

Apply
Page 1 of 5

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies