Posted: 6 days ago
On-site | Full Time
We are opening a priority requirement for a Senior AI/Machine Learning Engineer. Please note that we need quality over quantity: only share profiles that strictly match the requirements below. Bulk submissions without alignment will not be considered.
Position: Senior AI/Machine Learning Engineer (LLM Fine-Tuning & Training)
Experience: 7+ years in ML/AI engineering with recent (last 8–12 months) hands-on expertise in fine-tuning and large-scale training of LLMs/VLMs
Budget: 1.6 LPM
Skills:
• Strong expertise in PyTorch, Hugging Face Transformers, PEFT (LoRA/QLoRA)
• Hands-on experience with distributed training (DDP, FSDP, DeepSpeed, Accelerate)
• Familiarity with evaluation frameworks (lm-eval-harness, custom metrics)
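To make the PEFT (LoRA/QLoRA) requirement concrete, here is a minimal arithmetic sketch of why LoRA cuts trainable parameters: for a d×k weight matrix, full fine-tuning updates d·k values, while a rank-r LoRA adapter trains two small factors A (r×k) and B (d×r) instead. The dimensions below are hypothetical, chosen to resemble a projection layer in a 7B-class LLM.

```python
# Illustrative arithmetic only (no framework dependencies): the
# parameter-count savings behind LoRA, a common PEFT method.

def full_trainable_params(d: int, k: int) -> int:
    """Trainable parameters when fully fine-tuning a d x k layer."""
    return d * k

def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on a d x k layer."""
    return r * (d + k)

# Hypothetical 4096 x 4096 projection with rank r = 8.
full = full_trainable_params(4096, 4096)     # 16,777,216
lora = lora_trainable_params(4096, 4096, 8)  # 65,536
print(f"LoRA trains {100 * lora / full:.2f}% of the full parameters")
```

At rank 8 the adapter trains well under 1% of the layer's parameters, which is why LoRA/QLoRA make single-node fine-tuning of large models practical.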
Engagement: Full-time Contract (2 months)
Locations: India
Availability: Full-time, with 4 hours overlap in PST timezone
Start Date: Within 1 week
Evaluation Process: Technical Interview (Flocareer) + Delivery Round
Important Note:
We are looking for active ICs who have directly worked on model fine-tuning/training in the last 8–12 months. Managerial profiles will not be considered.
Kindly prioritize this role and share only qualified profiles with immediate availability as soon as possible.
Role Overview:
We are looking for a Senior AI/Machine Learning Engineer (7+ years of experience) who is a stellar individual contributor with recent hands-on expertise in fine-tuning and large-scale training of modern models (LLMs/VLMs).
This is a hands-on role where you will:
- Lead fine-tuning workflows, large-scale training runs, and evaluation design.
- Collaborate closely with researchers to bring cutting-edge approaches from papers into production.
- Work directly with customers to align performance metrics, validation, and deployment readiness.
Note: This is not a managerial role. We are seeking candidates who are currently active ICs, heavily involved in model fine-tuning/training in the last 8–12 months.
What You’ll Do Day-to-Day
• Fine-tune and train LLMs/VLMs at scale (LoRA/QLoRA, PEFT methods).
• Build reproducible training pipelines with distributed training and mixed precision.
• Design and run robust evaluation frameworks (task-specific + lm-eval-harness).
• Translate research papers into working implementations, collaborating with researchers.
• Work with customers to validate models against business and performance needs.
• Optimize training runs with profiling, performance tuning, and efficiency improvements.
• Maintain experiment tracking, reproducibility, and structured model artifacts.
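The fine-tuning work above centers on the LoRA update itself, which can be sketched numerically without any framework: the adapted layer computes h = Wx + (alpha/r)·B(Ax), and after training the product BA can be merged into W so inference costs nothing extra. The tiny matrices below are hypothetical, purely to show the adapter and merged paths agree.

```python
# Minimal numeric sketch of a LoRA update (plain Python, no dependencies).

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def matmul(a, b):
    """Multiply matrices a and b."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Hypothetical base weight W (2x2), rank-1 factors B (2x1) and A (1x2),
# and scaling factor alpha / r = 2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
scale = 2.0
x = [1.0, 3.0]

# Adapter path: h = W x + scale * B (A x)
delta = matvec(B, matvec(A, x))
h_adapter = [wx + scale * d for wx, d in zip(matvec(W, x), delta)]

# Merged path: (W + scale * B A) x must give the same result.
BA = matmul(B, A)
W_merged = [[W[i][j] + scale * BA[i][j] for j in range(2)] for i in range(2)]
h_merged = matvec(W_merged, x)

assert all(abs(a - b) < 1e-9 for a, b in zip(h_adapter, h_merged))
```

This merge-equivalence is what lets LoRA-trained models ship without adapter overhead at serving time.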
Requirements
• 7+ years in ML/AI engineering, with a strong recent focus on fine-tuning and training large models.
• Expert in PyTorch, Hugging Face Transformers, and PEFT (LoRA/QLoRA).
• Strong experience with distributed training (DDP, FSDP, DeepSpeed, Accelerate).
• Skilled with evaluation frameworks (lm-eval-harness, custom metrics, task benchmarks).
• Proven ability to reproduce and improve results from recent research papers.
• Strong coding practices in Python, with modular, clean implementations.
• Familiarity with experiment tracking tools (Weights & Biases, MLflow).
• Ability to interact with customers and researchers to translate requirements into engineering solutions.
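As one example of the "custom metrics" called out above, here is a hedged sketch of a normalized exact-match metric of the kind used in lm-eval-harness-style evaluation design. The function names and normalization rules are illustrative, not taken from any particular framework.

```python
# Illustrative custom evaluation metric: normalized exact match.
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def exact_match(predictions, references) -> float:
    """Fraction of predictions matching their reference after normalization."""
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

score = exact_match(["Paris.", " rome "], ["paris", "Berlin"])
print(score)  # 0.5
```

Real evaluation suites layer task-specific normalization and aggregation on top of primitives like this; the point is simply that candidates are expected to design and defend such metrics, not only run off-the-shelf harnesses.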
[Bonus] Preferred Qualifications:
• Experience with ML frameworks beyond PyTorch (e.g., JAX/Flax)
• Background in ML research or scientific computing
• Experience with production model monitoring and governance
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Work Location: In person
Prodigy Placement LLP