Industry & Sector:
Enterprise AI / Cloud Data Platforms, focused on building scalable, production-grade machine-learning pipelines and data workflows for analytics and decisioning. We operate in a cloud-first environment supporting real-time model deployment, monitoring, and operational governance on AWS-powered infrastructure.
Role & Responsibilities
- Design, build, and operate end-to-end MLOps pipelines on Databricks: data ingestion, model training, model registry, and production deployment.
- Author and maintain CI/CD pipelines for ML code and infrastructure using Jenkins, Bitbucket, and infrastructure-as-code (IaC) patterns to enable repeatable, auditable releases.
- Integrate code-quality and security gates using SonarQube and enforce branching and release strategies in Bitbucket for collaborative delivery.
- Manage model artifact storage, versioning, and lifecycle on AWS S3 and the Databricks Model Registry; automate promotion from staging to production (see the sketch after this list).
- Operationalize model monitoring and alerting—metric collection, drift detection, and automated retraining triggers to ensure SLA-driven model reliability.
- Troubleshoot operational issues, optimize performance of Spark/Databricks jobs, and collaborate closely with Data Science and Data Engineering to productionize models.
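For illustration only, here is a minimal sketch of the staging-to-production promotion step referenced above, assuming MLflow (listed under Good to Have) is used as the Databricks Model Registry client; the model name churn_model is hypothetical:

```python
# Minimal sketch: promote the newest Staging model version to Production.
# Assumes MLflow is configured against the Databricks workspace
# (e.g. via MLFLOW_TRACKING_URI); "churn_model" is a hypothetical model name.
from mlflow.tracking import MlflowClient

client = MlflowClient()
MODEL_NAME = "churn_model"  # hypothetical registered model

# Find the latest version currently in Staging.
staging_versions = client.get_latest_versions(MODEL_NAME, stages=["Staging"])
if staging_versions:
    version = staging_versions[0].version
    # Promote to Production and archive whatever was serving before,
    # so only one Production version is live at a time.
    client.transition_model_version_stage(
        name=MODEL_NAME,
        version=version,
        stage="Production",
        archive_existing_versions=True,
    )
```

In practice a step like this would run from a Jenkins stage after tests and quality gates pass, which is what makes the promotion repeatable and auditable.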
Skills & Qualifications
Must-Have
- Databricks (Mandatory): Strong hands-on experience with data pipelines, model training, and deployment workflows.
- Jenkins: Practical knowledge of CI/CD pipeline setup, configuration, and maintenance.
- AWS (S3): Experience managing model artifacts, datasets, and configurations in S3 buckets (see the sketch after this list).
- SonarQube: Understanding of code-quality metrics and best practices for remediating reported issues.
- Bitbucket (Important): Proficiency in version control and branching strategies for collaborative projects.
- Python (Optional): Basic ability to read and modify ML-related scripts.
- Workflow Modification: Ability to analyze and adapt existing workflows, pipelines, or configurations to project needs.
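As a rough illustration of the S3 artifact-management skill above, a minimal sketch assuming boto3; the bucket name, key layout, and metadata are hypothetical:

```python
# Minimal sketch: store a trained model artifact under a versioned S3 key.
# "ml-artifacts-prod" and the key layout are hypothetical examples.
import boto3

s3 = boto3.client("s3")

BUCKET = "ml-artifacts-prod"              # hypothetical bucket
key = "models/churn_model/v42/model.pkl"  # hypothetical versioned key layout

# Upload the artifact; object metadata records which commit produced it,
# which keeps staging-to-production promotions traceable.
s3.upload_file(
    Filename="model.pkl",
    Bucket=BUCKET,
    Key=key,
    ExtraArgs={"Metadata": {"git-commit": "abc123"}},
)
```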
Good to Have
- Familiarity with ML lifecycle management tools such as MLflow.
- Understanding of containerization technologies (e.g., Docker, Kubernetes) for ML model deployment.
- Experience working in cloud-based MLOps environments (AWS / Azure).
Qualifications
- Bachelor's degree in Computer Science, IT, Engineering or equivalent practical experience.
- Proven experience delivering production ML systems and CI/CD for ML in a cloud environment (AWS preferred).
- Strong understanding of ML lifecycle, model governance, observability, and reproducible training pipelines.
Benefits & Culture Highlights
- Fast-paced, collaborative engineering culture with strong emphasis on automation, observability and engineering excellence.
- Opportunity to work on large-scale Databricks/Spark workloads and shape MLOps best practices across product lines.
- Competitive compensation, upskilling budget, and flexible hybrid work options (role-dependent).
Skills: python, pipelines, docker, bitbucket, aws, s3, sonarqube, databricks, jenkins, kubernetes, mlops, mlflow