2 - 7 years

4 - 8 Lacs

Mumbai, Delhi / NCR, Bengaluru

Posted: 22 hours ago | Platform: Naukri


Skills Required

MLOps, Airflow, MLflow, Hugging Face, GenAI Deployment, DVC, Kubeflow, Model Drift, Docker, GPU Infrastructure, CI/CD, Model Versioning, Kubernetes

Work Mode

Work from Office

Job Type

Full Time

Job Description

Job Summary: We are looking for a highly capable, automation-driven MLOps Engineer with 2+ years of experience building and managing end-to-end ML infrastructure. This role focuses on operationalizing ML pipelines using tools like DVC, MLflow, Kubeflow, and Airflow, while ensuring efficient deployment, versioning, and monitoring of machine learning and Generative AI models across GPU-based cloud infrastructure (AWS/GCP). The ideal candidate will also have experience in multi-modal orchestration, model drift detection, and CI/CD for ML systems.

Key Responsibilities:

  • Develop, automate, and maintain scalable ML pipelines using tools such as Kubeflow, MLflow, Airflow, and DVC.
  • Set up and manage CI/CD pipelines tailored to ML workflows, ensuring reliable model training, testing, and deployment.
  • Containerize ML services using Docker and orchestrate them with Kubernetes in both development and production environments.
  • Manage GPU infrastructure and cloud-based deployments (AWS, GCP) for high-performance training and inference.
  • Integrate Hugging Face models and multi-modal AI systems into robust deployment frameworks.
  • Monitor deployed models for drift, performance degradation, and inference bottlenecks, enabling continuous feedback and retraining.
  • Ensure proper model versioning, lineage, and reproducibility for audit and compliance.
  • Collaborate with data scientists, ML engineers, and DevOps teams to build reliable and efficient MLOps systems.
  • Support Generative AI model deployment with scalable architecture and automation-first practices.

Qualifications:

  • 2+ years of experience in MLOps, DevOps for ML, or Machine Learning Engineering.
  • Hands-on experience with MLflow, DVC, Kubeflow, Airflow, and CI/CD tools for ML.
  • Proficiency in containerization and orchestration using Docker and Kubernetes.
  • Experience with GPU infrastructure, including setup, scaling, and cost optimization on AWS or GCP.
  • Familiarity with model monitoring, drift detection, and production-grade deployment pipelines.
  • Good understanding of model lifecycle management, reproducibility, and compliance.

Preferred Qualifications:

  • Experience deploying Generative AI or multi-modal models in production.
  • Knowledge of Hugging Face Transformers, model quantization, and resource-efficient inference.
  • Familiarity with MLOps frameworks and observability stacks.
  • Experience with security, governance, and compliance in ML environments.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
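To make the pipeline-automation and model-versioning responsibilities above more concrete, here is a minimal sketch (not part of the original posting) of the kind of workflow the role describes: an Airflow DAG that trains a toy scikit-learn model and registers it in MLflow. The tracking URI, experiment name, DAG name, and registered model name are illustrative assumptions, and the snippet presumes Airflow 2.x plus a reachable MLflow tracking server.

from datetime import datetime

import mlflow
import mlflow.sklearn
from airflow import DAG
from airflow.operators.python import PythonOperator
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def train_and_log():
    # Train a toy classifier and record params, metrics, and the model in MLflow.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_tracking_uri("http://mlflow:5000")  # assumed tracking server URL
    mlflow.set_experiment("daily-training")        # illustrative experiment name

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=100, random_state=42)
        model.fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))

        mlflow.log_param("n_estimators", 100)
        mlflow.log_metric("accuracy", accuracy)
        # Registering the model creates a new version, which is what gives the
        # lineage and reproducibility the posting asks for.
        mlflow.sklearn.log_model(model, "model", registered_model_name="iris-rf")


with DAG(
    dag_id="ml_training_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    train_task = PythonOperator(task_id="train_and_log", python_callable=train_and_log)

In a production setting the DAG would also cover data versioning with DVC, containerized training images, and drift checks before promoting a new model version, but the same log-and-register pattern underpins the versioning and lineage duties listed above.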

Lericon Informatics

Information Technology

Tech City

50-100 Employees


