Posted: 5 days ago | Hybrid | Full Time
Job Description

As an LLM (Large Language Model) Engineer, you will design, optimize, and standardize the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission is to modernize legacy machine learning codebases (including 40+ models) for a major retail client, enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You will work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI workloads.

Responsibilities:
- Lead the standardization and modernization of legacy ML codebases by aligning them with current LLM architecture best practices.
- Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns.
- Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open-source stacks (e.g., LLaMA, Mistral, Falcon).
- Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in-context learning methods.
- Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking.
- Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure).
- Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance.
- Mentor engineering teams in code best practices, versioning, and LLM lifecycle maintenance.
Key Skills:
- Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization.
- Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs.
- Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, K8s, Weights & Biases, and model-serving platforms.
- Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases such as FAISS, pgvector, or Milvus.
- Proficiency in Python (required), with working knowledge of shell scripting, YAML, and JSON schema standardization.
- Experience managing the compute, memory, and storage requirements of LLMs across GCP, Azure, or AWS environments.

Qualifications & Experience:
- 5+ years in ML/AI engineering, with at least 2 years working on LLMs or NLP-heavy systems.
- Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind.
- Clear communicator who collaborates well with business, data science, and DevOps teams.
- Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing.
- Curious and future-facing, always exploring new techniques and pushing the envelope on GenAI innovation.
- Passionate about data ethics, responsible AI, and building inclusive systems that scale.
Factspan Analytics
Location: India
Experience: Not specified
Salary: Not disclosed