MLOps Engineer — AWS SageMaker

Client: A large global enterprise (name not disclosed)
Location: India
Work Model: 100% Remote
Contract: 6 months (initial) with possibility of extension
Start Date: ASAP
Engagement: Full-time / Long-term contract

Role Overview
You will work within a global data & analytics team to design, deploy, and maintain robust ML pipelines using AWS SageMaker and associated cloud services. The role requires strong experience in production-grade MLOps, automation, and cloud engineering.

Key Responsibilities
- Build, deploy, and maintain ML models using AWS SageMaker (Pipelines, Endpoints, Model Registry); an illustrative pipeline sketch follows this listing
- Develop automated CI/CD workflows using CodePipeline, CodeBuild, or GitHub Actions
- Implement model monitoring, logging, and drift detection (CloudWatch, SageMaker Model Monitor)
- Create and maintain infrastructure using Terraform or CloudFormation
- Manage secure, scalable, cost-optimized AWS environments (IAM, VPC, networking)
- Collaborate with data scientists, cloud engineering teams, and solution architects
- Troubleshoot issues in high-availability production ML setups

Required Experience
- 4–8 years total experience in MLOps / ML engineering
- Hands-on experience with SageMaker in enterprise-scale environments
- Strong Python skills and familiarity with ML frameworks
- Experience with Docker and Kubernetes (EKS preferred)
- Experience building CI/CD pipelines
- Deep practical knowledge of the AWS ecosystem

Nice to Have
- Experience implementing model governance
- Experience with multi-model endpoints
- Familiarity with enterprise security standards and compliance
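For context on the SageMaker Pipelines responsibility above, here is a minimal sketch of defining and starting a single-step training pipeline with the SageMaker Python SDK. This is illustrative only: the role ARN, bucket URIs, and pipeline name are placeholders, not client specifics, and a production pipeline would add processing, evaluation, and model-registration steps.

```python
# Minimal sketch: one-step SageMaker training pipeline.
# All ARNs, S3 paths, and names below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Built-in XGBoost container for the current region.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.7-1"
    ),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # placeholder
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=50)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/data/train/")},  # placeholder
)

pipeline = Pipeline(
    name="demo-training-pipeline",  # placeholder
    steps=[train_step],
    sagemaker_session=session,
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # kick off an execution
```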
MLOps Engineer — Databricks

Client: A large global enterprise (name not disclosed)
Location: India
Work Model: 100% Remote
Contract: 6 months (initial) with possibility of extension
Start Date: ASAP
Engagement: Full-time / Long-term contract

Role Overview
We are seeking an experienced Databricks MLOps developer to design, build, and manage scalable machine learning operations on the Databricks Lakehouse Platform. The role involves automating ML workflows, operationalizing models, enabling reproducible pipelines, and ensuring governance and monitoring across the ML lifecycle.

Key Responsibilities

1. Develop Scalable MLOps Pipelines
- Build automated ML pipelines for training, validation, deployment, and batch/real-time inference.
- Use Databricks Workflows, Jobs, Repos, and Delta Live Tables where applicable.
- Implement distributed training and inference pipelines using MLflow + PySpark.

2. Model Lifecycle Management
- Manage model versioning and promotion across dev → staging → production using the MLflow Model Registry (a minimal sketch follows this listing).
- Create reproducible workflows for model packaging, deployment, and rollback.

3. CI/CD Integration
- Build and integrate ML pipelines with CI/CD using Azure DevOps, GitHub Actions, or Jenkins.
- Automate testing, validation, and deployment for ML artifacts, notebooks, and infrastructure.

4. Feature Engineering & Data Pipelines
- Collaborate with data engineering teams to build optimized Delta Lake pipelines (Bronze/Silver/Gold architecture).
- Implement feature engineering workflows and support feature reuse at scale.

5. Monitoring & Governance
- Set up model monitoring for performance, drift, data quality, and lineage.
- Use Databricks-native tools, MLflow metrics, and cloud monitoring services (Azure/AWS).
- Ensure compliance through logging, auditing, permissions, and environment governance.

6. Cross-Functional Collaboration
- Work closely with data scientists, data engineers, cloud teams, and product teams.
- Document workflows, best practices, and reusable MLOps components.

Required Skills & Qualifications
- Strong hands-on experience with Databricks (Workflows, Repos, Jobs, Compute)
- Proficiency with MLflow (Tracking, Registry, Model Deployment)
- Expertise in Delta Lake, PySpark, and distributed data pipelines
- Solid programming skills in Python and SQL
- Experience with CI/CD tools: Azure DevOps, GitHub Actions, Jenkins
- Familiarity with cloud platforms: Azure, AWS, or GCP
- Understanding of containerization (Docker) and orchestration (Kubernetes)
- Background in ML model training, serving, and observability

Preferred Qualifications
- Databricks certifications: Databricks Certified Machine Learning Professional; Databricks Certified Data Engineer Associate/Professional
- Experience with Unity Catalog for governance
- Experience implementing feature stores
- Knowledge of ML observability tools (WhyLabs, Monte Carlo, Arize AI, etc.)
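For context on the model lifecycle responsibility above, here is a minimal sketch of registering a model and promoting it through stages with the MLflow Model Registry. It assumes MLflow 2.x with stage-based promotion (newer releases favor aliases) and uses a local SQLite backing store plus a placeholder model name purely for illustration; it is not the client's setup.

```python
# Minimal sketch: MLflow Model Registry promotion, dev -> Staging -> Production.
# Tracking URI and model name are placeholders for demonstration.
import mlflow
from mlflow import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# The registry needs a database-backed store; SQLite suffices for a local demo.
mlflow.set_tracking_uri("sqlite:///mlflow.db")

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Train a toy model and log it as a run artifact.
with mlflow.start_run() as run:
    model = LogisticRegression().fit(X, y)
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the logged model, creating a new version under the given name.
version = mlflow.register_model(f"runs:/{run.info.run_id}/model", "demo-classifier")

# Walk the new version through the stages, archiving older Production versions.
client = MlflowClient()
client.transition_model_version_stage("demo-classifier", version.version, stage="Staging")
client.transition_model_version_stage(
    "demo-classifier",
    version.version,
    stage="Production",
    archive_existing_versions=True,
)
```

In a Databricks CI/CD setup, the promotion calls above would typically run as an automated job gated on validation tests rather than interactively.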