MLOps Engineer (NVIDIA Containerization Specialist) - Contractual

Experience: 0 years

Salary: 0 Lacs

Posted: 3 weeks ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Temporary

Job Description

About the Role:
We are seeking an experienced MLOps Engineer with a strong background in NVIDIA GPU-based containerization and scalable ML infrastructure (contractual, assignment basis). You will work closely with data scientists, ML engineers, and DevOps teams to build, deploy, and maintain robust, high-performance machine learning pipelines using NVIDIA NGC containers, Docker, Kubernetes, and modern MLOps practices.

Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines for training, validation, deployment, and monitoring of ML models.
- Implement GPU-accelerated workflows using NVIDIA NGC containers, CUDA, and RAPIDS.
- Containerize ML workloads using Docker and deploy them on Kubernetes (preferably with GPU support via the NVIDIA device plugin for K8s).
- Integrate model versioning, reproducibility, CI/CD, and automated model retraining using tools like MLflow, DVC, Kubeflow, or similar (see the MLflow sketch at the end of this description).
- Optimize model deployment for inference on NVIDIA hardware using TensorRT, Triton Inference Server, or ONNX Runtime-GPU (see the Triton client sketch at the end of this description).
- Manage cloud/on-prem GPU infrastructure and monitor resource utilization and model performance in production.
- Collaborate with data scientists to transition models from research to production-ready pipelines.

Required Skills:
- Proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Strong experience with Docker, Kubernetes, and NVIDIA GPU containerization (NGC, nvidia-docker).
- Familiarity with NVIDIA Triton Inference Server, TensorRT, and CUDA.
- Experience with CI/CD for ML (GitHub Actions, GitLab CI, Jenkins, etc.).
- Deep understanding of ML lifecycle management, monitoring, and retraining.
- Experience working with cloud platforms (AWS/GCP/Azure) or on-prem GPU clusters.

Preferred Qualifications:
- Experience with Kubeflow, Seldon Core, or similar orchestration tools.
- Exposure to Airflow, MLflow, Weights & Biases, or DVC.
- Knowledge of NVIDIA RAPIDS and distributed GPU workloads.
- MLOps certifications or NVIDIA Deep Learning Institute training (preferred but not mandatory).
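For candidates less familiar with the serving stack named above, here is a minimal, non-authoritative sketch of a client-side inference request against NVIDIA Triton Inference Server using the tritonclient Python package. The server URL, model name, and tensor names (resnet50, INPUT__0, OUTPUT__0) are hypothetical placeholders; the real values come from the deployed model's config.pbtxt.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server (assumed to be listening on the default HTTP port).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical model and tensor names; replace with the deployed model's actual config.
inp = httpclient.InferInput("INPUT__0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))
out = httpclient.InferRequestedOutput("OUTPUT__0")

# Send the request and read the output tensor back as a NumPy array.
response = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
predictions = response.as_numpy("OUTPUT__0")
print(predictions.shape)
```

Likewise, a brief sketch of model versioning with MLflow's tracking and model-registry API. The tracking URI, experiment name, and registered model name below are assumptions for illustration only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # assumed tracking server URL
mlflow.set_experiment("gpu-pipeline-demo")              # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registers a new model version in the MLflow Model Registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="iris-classifier")
```

Logging with registered_model_name creates a new version in the MLflow Model Registry, which is the hook CI/CD and automated-retraining jobs typically use to promote a model to staging or production.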
