Mid-Level Machine Learning Engineer

Job Role: Mid-Level Machine Learning Engineer
Total Experience: 6+ years
Relevant Experience: Minimum 1 year in Machine Learning / LLMs / Agentic Workflows
Job Location: Kolkata / Indore (hybrid)
Education: Bachelor’s or Master’s in Computer Science, Engineering, AI, Data Science, or equivalent

Job Description
We are seeking a hands-on, mid-level Machine Learning Engineer to join our core AI platform team, which builds scalable, reusable AI services and agentic automation frameworks that power products across multiple industries. The ideal candidate is strong in ML/LLM fundamentals, comfortable working in a fast-paced environment, and able to mentor junior engineers. This role contributes to the design, development, and deployment of multi-modal ML pipelines, LLM-driven workflows, and agent-based systems while keeping the platform generic, extensible, and robust.

AI/ML Roles and Responsibilities
• Design, develop, and deploy machine learning models, including NLP/LLM-based components, classification pipelines, embeddings, and generative models.
• Build and optimize LLM-powered features such as summarization, reasoning, classification, extraction, recommendations, and conversational flows.
• Develop agentic workflows using frameworks such as LangChain, LangGraph, DSPy, or custom orchestration mechanisms.
• Implement multi-step reasoning, tool calling, memory structures, and task automation using agent-based architectures.

Reusable AI Services Engineering
• Build scalable, reusable ML microservices that can be consumed by multiple product teams.
• Design generic APIs, schemas, prompts, and configuration-driven patterns to support multi-vertical use cases.
• Build shared components that abstract domain-specific logic behind configurable templates and rules.
• Collaborate with architects to ensure services follow best practices for performance, reliability, and maintainability.
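To give candidates a concrete feel for the agentic work above, here is a minimal, illustrative sketch of a tool-calling agent loop with a simple memory structure. It deliberately uses no framework and stubs the LLM planner with a fixed plan (the names run_agent, TOOLS, and fake_model are all hypothetical); in production this planner would be an LLM behind LangChain, LangGraph, or a custom orchestrator.

```python
from typing import Callable

# Tool registry: the "tools" the agent can call.
TOOLS: dict[str, Callable[[str], str]] = {
    "upper": lambda text: text.upper(),
    "word_count": lambda text: str(len(text.split())),
}

def fake_model(task: str, memory: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM planner: picks the next tool to call.

    A real system would prompt an LLM here; this stub walks a fixed
    plan so the loop stays runnable and deterministic.
    """
    plan = ["upper", "word_count"]
    step = len(memory)  # one plan step per recorded observation
    if step < len(plan):
        # Feed the latest observation (or the raw task) to the next tool.
        return plan[step], (memory[-1] if memory else task)
    return "finish", memory[-1]

def run_agent(task: str, max_steps: int = 5) -> str:
    """Multi-step loop: plan -> call tool -> record observation -> repeat."""
    memory: list[str] = []  # simple append-only memory structure
    for _ in range(max_steps):
        action, arg = fake_model(task, memory)
        if action == "finish":
            return arg
        memory.append(TOOLS[action](arg))
    return memory[-1]

print(run_agent("hello agent world"))  # uppercases, then counts words -> "3"
```

The loop structure (planner decides, tools execute, observations accumulate) is the same pattern ReAct-style frameworks implement with an LLM in place of the stub.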
Data Pipelines & Model Lifecycle
• Design and maintain data processing pipelines for training, evaluation, and inference.
• Participate in model evaluation, fine-tuning, benchmarking, and experimentation.
• Implement confidence scoring, model monitoring, drift detection, and quality-assurance practices.
• Collaborate with MLOps engineers to package, deploy, scale, and monitor ML models in production.

Collaboration & Mentorship
• Work closely with backend, DevOps, product, and domain teams to integrate ML capabilities into the platform.
• Mentor junior developers and help them grow in ML engineering best practices.
• Participate actively in design reviews, code reviews, and platform-level architecture discussions.
• Communicate technical ideas clearly and work collaboratively in a cross-functional environment.

Documentation & Process
• Document models, prompts, APIs, workflows, experiments, and platform components.
• Follow best practices in version control, testing, evaluation, and observability for ML components.
• Contribute to continuous improvement of engineering processes, coding standards, and platform guidelines.

Must Have (Core Skills)
• Strong hands-on experience in Python, PyTorch/TensorFlow, and ML model development.
• Experience with LLMs (OpenAI, Llama, Mistral, DeepSeek, etc.) for building intelligent features or workflows.
• Practical exposure to agentic frameworks (LangChain, LangGraph, ReAct, DSPy, or similar).
• Solid understanding of NLP techniques and experience working with embeddings, RAG, or prompt engineering.
• Ability to build ML-driven microservices and APIs for consumption by other teams.
• Familiarity with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes).
• Strong analytical, debugging, and problem-solving skills.
• Ability to guide and mentor junior team members on technical tasks.

Good to Have
• Experience with multi-modal models (text, audio, image, video).
• Hands-on experience with vector databases (Pinecone, Weaviate, Milvus, etc.).
• Knowledge of distributed systems, event-driven architectures, or real-time inference pipelines.
• Exposure to MLOps tools such as MLflow, Weights & Biases, KServe, or Triton Inference Server.
• Basic knowledge of domain-driven design or building platform-level shared services.
• Experience in designing evaluation frameworks or automated testing for LLMs/agents.
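As a flavor of the evaluation-framework work mentioned above, here is a minimal, hedged sketch of an automated eval harness for LLM/agent outputs. The scoring is a simple keyword overlap and the model is a stub (EvalCase, keyword_score, run_eval, and echo_model are all hypothetical names); real frameworks layer in richer metrics and LLM-as-judge setups.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected: str

def keyword_score(output: str, expected: str) -> float:
    """Fraction of expected keywords that appear in the output."""
    want = set(expected.lower().split())
    got = set(output.lower().split())
    return len(want & got) / len(want) if want else 0.0

def run_eval(model, cases: list[EvalCase], threshold: float = 0.5) -> dict:
    """Run every case through `model` and report the pass rate."""
    passed = sum(
        1 for c in cases
        if keyword_score(model(c.prompt), c.expected) >= threshold
    )
    return {"passed": passed, "total": len(cases), "pass_rate": passed / len(cases)}

# Usage with a stub "model" that just echoes the payload after the colon:
cases = [
    EvalCase("summarize: ml drift", "drift summary"),
    EvalCase("classify: invoice", "invoice"),
]

def echo_model(prompt: str) -> str:
    return prompt.split(": ", 1)[1]

print(run_eval(echo_model, cases))  # -> {'passed': 2, 'total': 2, 'pass_rate': 1.0}
```

Swapping echo_model for a real LLM call turns this into a regression suite that can gate deployments.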
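Similarly, the embeddings/RAG skills listed under the must-haves reduce to a small core idea: embed documents, embed the query, rank by similarity. The sketch below is illustrative only, with a toy bag-of-words "embedding" standing in for a real embedding model and an in-memory list standing in for a vector database (embed, cosine, and retrieve are hypothetical names).

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (a real system
    would call an embedding model here)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query; this is the step a
    vector database (Pinecone, Weaviate, Milvus) performs at scale."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoice totals and payment terms",
    "model drift detection in production",
    "customer support conversation summaries",
]
print(retrieve("how do we detect drift in deployed models", docs))
```

In a full RAG pipeline the retrieved passages would then be stuffed into an LLM prompt as grounding context.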