
78 Model Monitoring Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate with hands-on experience in GPT-4, transformer models, and deep learning frameworks, along with a deep understanding of model fine-tuning, deployment, and inference.

Your key responsibilities will include designing, developing, and maintaining Python applications tailored to AI/ML and generative AI. You will build and refine transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks, and work with extensive datasets for training and evaluation. You will also implement model inference pipelines and scalable APIs using FastAPI, Flask, or similar technologies, collaborate closely with data scientists and ML engineers to create end-to-end AI solutions, and stay current with the latest research and advancements in generative AI and ML.

From a technical standpoint, you should demonstrate strong proficiency in Python and relevant libraries such as NumPy, Pandas, and Scikit-learn. With 7+ years of experience in AI/ML development, hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models, is required. Experience with frameworks such as Hugging Face Transformers, the OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable, and expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous. Strong problem-solving and algorithmic thinking are crucial for this role. Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Contributions to open-source AI/ML projects, experience with vector databases, experience building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring are added advantages.

In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required; a Master's degree in Science (M.Sc), Technology (M.Tech), or Computer Applications (MCA) would be an added benefit.
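Where the description mentions model inference pipelines and scalable APIs, a minimal sketch of such a service might look like the following, assuming FastAPI and a Hugging Face text-generation pipeline; the model name, route, and request schema are illustrative rather than anything specified in the posting.

```python
# Minimal model-inference API sketch (illustrative; not part of the posting).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="generation-service")
generator = pipeline("text-generation", model="gpt2")  # placeholder model


class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64


@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    # Run inference and return only the generated text.
    outputs = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": outputs[0]["generated_text"]}
```

Served with `uvicorn app:app`, this exposes a single POST endpoint that a larger pipeline or front end could call.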

Posted 2 months ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Design, develop, and implement AI and Generative AI solutions to address business problems and achieve objectives. Gather, clean, and prepare large datasets to ensure readiness for AI model training. Train, fine-tune, evaluate, and optimize AI models for specific use cases, ensuring accuracy, performance, cost-effectiveness, and scalability. Seamlessly integrate AI models and autonomous agent solutions into cloud-based and on-prem products to drive smarter workflows and improved productivity. Develop reusable tools, libraries, and components that standardize and accelerate the development of AI solutions across the organization. Monitor and maintain deployed models, ensuring consistent performance and reliability in production environments. Stay up to date with the latest AI/ML advancements, exploring new technologies, algorithms, and methodologies to enhance product capabilities. Effectively communicate technical concepts, research findings, and AI solution strategies to both technical and non-technical stakeholders. Understand the IBM tool and model landscape and work closely with cross-functional teams to leverage these tools, driving innovation and alignment. Lead and mentor team members to improve performance. Collaborate with operations, architects, and product teams to resolve issues and define product designs. Exercise best practices in agile development and software engineering. Code, unit test, debug, and perform integration tests of software components. Participate in software design reviews, code reviews, and project planning. Write and review documentation and technical blog posts. Contribute to department attainment of organizational objectives and high customer satisfaction.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Minimum 6 years of hands-on experience developing AI-based applications using Python. 2+ years in performance testing and reliability testing. 2+ years of experience using deep learning frameworks (TensorFlow, PyTorch, or Keras). Solid understanding of ML/AI concepts: EDA, preprocessing, algorithm selection, machine learning frameworks, model efficiency metrics, and model monitoring. Familiarity with Natural Language Processing (NLP) techniques. Deep understanding of Large Language Model (LLM) architectures, their capabilities, and their limitations. Proven expertise in integrating and working with LLMs to build robust AI solutions. Skilled in crafting effective prompts to guide LLMs toward desired outputs. Hands-on experience with LLM frameworks such as LangChain, LangGraph, CrewAI, etc. Experience in LLM application development based on Retrieval-Augmented Generation (RAG), familiarity with vector databases, and experience fine-tuning large language models (LLMs) to enhance performance and accuracy. Proficient in microservices development using Python (Django/Flask or similar technologies). Experience in Agile development methodologies. Familiarity with platforms like Kubernetes and experience building on top of native platforms. Experience with cloud-based data platforms and services (e.g., IBM, AWS, Azure, Google Cloud). Experience designing, building, and maintaining data processing systems working in containerized environments (Docker, OpenShift, k8s). Excellent communication skills with the ability to effectively collaborate with technical and non-technical stakeholders.

Preferred technical and professional experience: Experience in MLOps frameworks (BentoML, Kubeflow, or similar technologies) and exposure to LLMOps. Experience in cost optimisation initiatives. Experience with end-to-end chatbot development, including design, deployment, and ongoing optimization, leveraging NLP and integrating with backend systems and APIs. Understanding of security and ethical best practices for data and model development. Contributions to open source projects.
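As a sketch of the retrieval step behind the RAG experience this role asks for, the snippet below uses sentence-transformers and NumPy with a tiny in-memory corpus; the corpus, model name, and top-k value are illustrative, and a production system would index embeddings in a vector database instead.

```python
# Minimal RAG-style retrieval sketch (illustrative corpus and model name).
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Model monitoring tracks drift, latency, and accuracy in production.",
    "RAG grounds LLM answers in retrieved documents.",
    "Kubernetes orchestrates containerized AI services.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_vecs = encoder.encode(corpus, normalize_embeddings=True)


def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity reduces to a dot product on normalized vectors.
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]


print(retrieve("How do teams keep deployed models reliable?"))
```

The retrieved passages would then be inserted into the LLM prompt as context.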

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

At OP, we are a people-first, high-touch organization committed to delivering cutting-edge AI solutions with integrity and passion. We are looking for a Senior AI Developer with expertise in AI model development, Python, AWS, and scalable tool-building. In this role, you will be responsible for designing and implementing AI-driven solutions, developing AI-powered tools and frameworks, and integrating them into enterprise environments, including mainframe systems.

As a Senior AI Developer at OP, your key responsibilities will include developing and deploying AI models using Python and AWS for enterprise applications, building scalable AI-powered tools, designing and optimizing machine learning pipelines, implementing NLP and GenAI models, developing and integrating RAG systems for enterprise knowledge retrieval, maintaining AI frameworks and APIs, architecting cloud-based AI solutions using AWS services, writing high-performance Python code for AI applications, and ensuring scalability, security, and performance of AI solutions in production.

The required qualifications for this role include 5+ years of experience in AI/ML development with expertise in Python and AWS, a strong background in machine learning and deep learning, experience in LLMs, NLP, and RAG systems, hands-on experience in building and deploying AI models in production, proficiency in cloud-based AI solutions, experience in developing AI-powered tools and frameworks, knowledge of mainframe integration and enterprise AI applications, and strong coding skills with a focus on software development best practices. Preferred qualifications include familiarity with MLOps, CI/CD pipelines, and model monitoring; a background in developing AI-based enterprise tools and automation; and experience with vector databases and AI-powered search technologies.

At OP, we offer health insurance, accident insurance, and competitive salaries based on various factors including location, education, qualifications, experience, technical skills, and business needs. In addition to the core responsibilities, you will be expected to participate in OP monthly team meetings, contribute to technical discussions and peer reviews, collaborate via the OP Wiki/Knowledge Base, and provide status reports to OP Account Management as requested.

OP is a technology consulting and solutions company that offers advisory and managed services, innovative platforms, and staffing solutions across various fields, including AI, cyber security, and enterprise architecture. Our team consists of dynamic, creative thinkers who are passionate about quality work, and as a member of the OP team, you will have access to industry-leading consulting practices, strategies, technologies, training, and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us on our website and social channels.

Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities: Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these). Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform - Azure or AWS. Package and deploy AI/GenAI models on AWS services (SageMaker, Lambda, API Gateway). Write Python scripts for automation, deployment, and monitoring. Engage in the design, development, and maintenance of data pipelines for various AI use cases. Actively contribute to key deliverables as part of an agile development team. Set up model monitoring, logging, and alerting (e.g., drift, latency, failures). Ensure model governance, versioning, and traceability across environments. Collaborate with others to source, analyse, test, and deploy data processes. Experience in GenAI projects.

Qualifications we seek in you!

Minimum Qualifications: Experience with MLOps practices. Degree/qualification in Computer Science or a related field, or equivalent work experience. Experience developing, testing, and deploying data pipelines. Strong Python programming skills. Hands-on experience in deploying 2-3 AI/GenAI models in AWS. Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases. Clear and effective communication skills to interact with team members, stakeholders, and end users.

Preferred Qualifications/Skills: Experience with Docker-based deployments. Exposure to model monitoring tools (Evidently, CloudWatch). Familiarity with RAG stacks or fine-tuning LLMs. Understanding of GitOps practices. Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
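For the model monitoring and drift alerting responsibilities above, a minimal statistical drift check might look like the sketch below; the synthetic feature data, threshold, and print-based alert are illustrative assumptions, and a production pipeline would more likely publish the result to CloudWatch or a tool like Evidently.

```python
# Minimal feature-drift check sketch (synthetic data, illustrative threshold).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
current = rng.normal(loc=0.3, scale=1.0, size=5_000)    # production feature


def drift_alert(ref: np.ndarray, cur: np.ndarray, p_threshold: float = 0.01) -> bool:
    # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
    # production distribution no longer matches the reference distribution.
    stat, p_value = ks_2samp(ref, cur)
    print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
    return p_value < p_threshold


if drift_alert(reference, current):
    print("Drift detected: raise an alert and consider triggering retraining.")
```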

Posted 2 months ago

Apply

9.0 - 12.0 years

60 - 80 Lacs

Pune

Hybrid

Staff Engineer

Pattern is a leading e-commerce company that helps brands grow their business on platforms like Amazon. We are committed to innovation and excellence, leveraging data-driven insights to drive partner success. Our team is composed of passionate individuals who are dedicated to making a difference in the e-commerce landscape.

The Staff Engineer leads and oversees the engineering function in developing, releasing, and maintaining AI workflows and agentic systems according to business needs. You will play a crucial role in setting and promoting engineering standards and practices used throughout the company. As a Staff Engineer, together with Pattern's Data Science and Engineering teams, you will lead a team that creates and maintains impactful solutions for our brands across the world. From traditional machine learning to large language models, you will work and lead throughout the model lifecycle.

Responsibilities:
• Pipeline Management: Architect, implement, and maintain scalable ML pipelines, with seamless integration from data ingestion to production deployment.
• Model Monitoring: Lead the operationalization of machine learning models, ensuring hundreds of models are continuously monitored, retrained, and optimized in real-time environments.
• Deployment: Deploy machine learning solutions in the cloud, securely and cost-effectively.
• Reporting: Effectively communicate actionable insights across teams using both automatic (e.g., alerts) and non-automatic methods.
• Leadership: MLOps is a team sport, and we require a leader who can elevate everyone in the MLOps organization. While technical skills and vision are required, your leadership skills will take AI and machine learning from theoretical to operational, delivering tangible value to both customers and internal teams.

The type of game-changing candidate we are looking for:
• Seasoned: Demonstrated experience successfully leading teams both formally and informally.
• Transparent: Willingness to identify and admit errors and seek out opportunities to continually improve, both in their own work and across the team.
• Communication: MLOps is a central node in a complex system. Clear, actionable, and concise communication, both written and verbal, is a must.
• Coaching and Team Advancement: An MLOps leader is continually developing team members and fostering a constant flow of communication and improvement across the team.
• Master's/PhD degree or a strong demonstration of technical expertise in Computer Science, Machine Learning, Data Science, or a related field.
• Multiple years of direct, extensive experience with AWS.
• Multiple years of experience with MLOps monitoring and testing tools.
• Ability to prioritize projects effectively once a clear vision and goals are identified.
• Excited to empower DS with tools, practices, and training that simplify MLOps enough for Data Science to increasingly practice MLOps on their own and own products in production.

Our Core Values
• Data Fanatics: Our edge is always found in the data.
• Partner Obsessed: We are obsessed with partner success.
• Team of Doers: We have a bias for action.
• Gamechangers: We encourage innovation.

Join Us at Pattern
At Pattern, we believe in fostering a culture of innovation and growth. We are looking for talented individuals who are passionate about making an impact in the e-commerce industry. If you are ready to take on new challenges and be part of a dynamic team, we invite you to apply and join us on our journey to redefine e-commerce success.

Posted 2 months ago

Apply

5.0 - 10.0 years

22 - 30 Lacs

Pune

Hybrid

We are looking for a Machine Learning Engineer with expertise in MLOps (Machine Learning Operations) or LLMOps (Large Language Model Operations) to design, deploy, and maintain scalable AI/ML systems. You will work on automating ML workflows, optimizing model deployment, and managing large-scale AI applications, including LLMs (Large Language Models), ensuring they run efficiently in production.

Key Responsibilities: Design and implement end-to-end MLOps pipelines for training, validation, deployment, monitoring, and retraining of ML models. Optimize and fine-tune large language models (LLMs) for various applications, ensuring performance and efficiency. Develop CI/CD pipelines for ML models to automate deployment and monitoring in production. Monitor model performance, detect drift, and implement automated retraining mechanisms. Work with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes) for scalable deployments. Implement best practices in data engineering, feature stores, and model versioning. Collaborate with data scientists, engineers, and product teams to integrate ML models into production applications. Ensure compliance with security, privacy, and ethical AI standards in ML deployments. Optimize inference performance and cost of LLMs using quantization, pruning, and distillation techniques. Deploy LLM-based APIs and services, integrating them with real-time and batch processing pipelines.

Key Requirements:

Technical Skills: Strong programming skills in Python, with experience in ML frameworks (TensorFlow, PyTorch, Hugging Face, JAX). Experience with MLOps tools (MLflow, Kubeflow, Vertex AI, SageMaker, Airflow). Deep understanding of LLM architectures, prompt engineering, and fine-tuning. Hands-on experience with containerization (Docker, Kubernetes) and orchestration tools. Proficiency in cloud services (AWS/GCP/Azure) for ML model training and deployment. Experience with monitoring ML models (Prometheus, Grafana, Evidently AI). Knowledge of feature stores (Feast, Tecton) and data pipelines (Kafka, Apache Beam). Strong background in distributed computing (Spark, Ray, Dask).

Soft Skills: Strong problem-solving and debugging skills. Ability to work in cross-functional teams and communicate complex ML concepts to stakeholders. Passion for staying updated with the latest ML and LLM research and technologies.

Preferred Qualifications: Experience with LLM fine-tuning, Reinforcement Learning with Human Feedback (RLHF), or LoRA/PEFT techniques. Knowledge of vector databases (FAISS, Pinecone, Weaviate) for retrieval-augmented generation (RAG). Familiarity with LangChain, LlamaIndex, and other LLMOps-specific frameworks. Experience deploying LLMs in production (ChatGPT, LLaMA, Falcon, Mistral, Claude, etc.).
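Since the requirements above name MLflow among the MLOps tools, here is a minimal tracking sketch; the experiment name, dataset, and metric are illustrative assumptions, and with a registry-backed tracking server the same call could also register a model version for deployment.

```python
# Minimal MLflow tracking sketch (illustrative experiment name and data).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("accuracy", accuracy)
    # Each run stores the model artifact under a unique run ID, which a
    # CI/CD job could later pull and deploy.
    mlflow.sklearn.log_model(model, "model")
```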

Posted 2 months ago

Apply

2.0 - 7.0 years

7 - 17 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Model Monitoring/Model Validation

EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning methodologies that integrate advanced analytics, data management, digital, BPO, consulting, industry best practices, and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation, and logistics industries. Headquartered in New York, New York, EXL has more than 30,000 professionals in locations throughout the United States, Europe, Asia (primarily India and the Philippines), Latin America, Australia, and South Africa.

EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques, and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients' decision-making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assists client organizations with complex risk minimization methods; advanced marketing, pricing, and CRM strategies; internal cost analysis; and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation, and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics. EXL Service is a global analytics and digital solutions company serving industries including insurance, healthcare, banking and financial services, media, retail, and others.

Role Details: We are seeking a strong credit risk model professional with experience in model monitoring, validation, implementation, and maintenance of regulatory models.

Responsibilities: Help with various aspects of model validation. Perform all required tests (e.g., model performance, sensitivity, back-testing). Interact with the model governance team on model build and model monitoring. Work closely with cross-functional teams, including business stakeholders and model validation and governance teams. Deliver high-quality client services, including model documentation, within expected timeframes.

Requirements: Minimum 2+ years of experience in executing end-to-end monitoring/validation/production/implementation of risk models, with an understanding of marketing/general analytics problems. Manage assigned projects in a timely manner, ensuring accuracy and that deliverables are met. Train, coach, and develop team members.

Qualifications: Previous experience (2+ years) in analytics, preferably in BFSI. Good knowledge of general analytics and fraud analytics. Past experience in problem-solving roles and strategic initiatives. Good problem-solving skills.
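Because the role centres on model monitoring and performance testing, here is a minimal sketch of one common monitoring metric, the Population Stability Index (PSI); the synthetic score distributions are illustrative, and the 0.1/0.25 thresholds are the usual industry rule of thumb rather than anything specified in the posting.

```python
# Minimal Population Stability Index (PSI) sketch with synthetic scores.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin the development-sample scores, then compare the share of recent
    # scores falling in each bin against the development-sample shares.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 5, size=10_000)       # development-sample scores
recent_scores = rng.beta(2.5, 5, size=10_000)  # recent portfolio scores

value = psi(dev_scores, recent_scores)
print(f"PSI = {value:.3f}  (<0.1 stable, 0.1-0.25 monitor, >0.25 investigate)")
```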

Posted 2 months ago

Apply

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

About the Team and Our Scope
We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen enterprise-grade solutions. Our team is built on a startup mindset - bias to action, fast iterations, and a ruthless focus on value delivery. We're not only shaping the future of AI in business - we're shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions.

Role Overview
As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions. Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You'll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction.

Key Responsibilities
Build Next-Gen GenAI Pipelines: Design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing.
Prototype, Validate, Deploy: Rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns.
Ingest, Enrich, Embed: Construct ingestion workflows including OCR, chunking, embeddings, and indexing into vector databases to unlock unstructured data.
Integrate Seamlessly: Embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability.
Explore Hybrid Strategies: Combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures.
Drive Impact with Product Thinking: Collaborate with product managers and UX designers to shape user-centric solutions and understand business context.
Ensure Enterprise-Grade Quality: Deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient - especially in regulated environments.

What Makes You a Fit

Must-Have Technical Expertise
Proven experience with GenAI techniques and LLMs, including RAG, long-context inference, prompt tuning, and multi-modal integration.
Strong hands-on skills with Python, embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents).
Comfort with MLOps practices, including version control, CI/CD pipelines, model monitoring, and reproducibility.
Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight.
Understanding of vector search optimization and retrieval tuning.
Exposure to multi-modal models.

Nice-To-Have Qualifications
Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare).
Familiarity with the Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments.
Experience with agentic AI architectures and tools like AutoGen or prompt chaining frameworks.
Familiarity with data privacy and auditability principles in enterprise AI.

Bonus: You Think Like a Product Manager
While this role is technical at its core, we highly value candidates who are curious about how AI features become products. If you're excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery, we'll give you space to grow into it. This is a role where engineering and product are not silos. If you're keen to move in that direction, we'll mentor and support your evolution.

Why Join Us
You'll be part of a team that's pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You'll prototype fast, deliver often, and see your work shape real-world outcomes - whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership, this role is a launchpad.

Swiss Re is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Reference Code: 134317
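As a sketch of the ingestion step described above (chunking documents before embedding and indexing), the snippet below shows a simple fixed-window chunker; the chunk size, overlap, and placeholder text are illustrative assumptions, and a real pipeline would embed each chunk into a vector database afterward.

```python
# Minimal fixed-window text chunker (illustrative parameters).
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    # Slide a fixed-size window over the text with a small overlap so that
    # sentences split at a boundary still appear intact in at least one chunk.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


sample = "Policy documents contain clauses, exclusions, and definitions. " * 50  # placeholder text
pieces = chunk_text(sample)
print(f"{len(pieces)} chunks, first chunk is {len(pieces[0])} characters")
```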

Posted 3 months ago

Apply

1.0 - 2.0 years

3 - 4 Lacs

Pune

Work from Office

- Hands-on experience with Jupyter Notebooks, Google Colab, Git & GitHub.
- Solid understanding of data visualization tools and dashboard creation.
- Prior teaching/training experience (online/offline) is a plus.
- Excellent communication and presentation skills.

Posted 3 months ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Design, develop, and implement AI and Generative AI solutions to address business problems and achieve objectives. Gather, clean, and prepare large datasets to ensure readiness for AI model training. Train, fine-tune, evaluate, and optimize AI models for specific use cases, ensuring accuracy, performance, cost-effectiveness, and scalability. Seamlessly integrate AI models and autonomous agent solutions into cloud-based and on-prem products to drive smarter workflows and improved productivity. Develop reusable tools, libraries, and components that standardize and accelerate the development of AI solutions across the organization. Monitor and maintain deployed models, ensuring consistent performance and reliability in production environments. Stay up to date with the latest AI/ML advancements, exploring new technologies, algorithms, and methodologies to enhance product capabilities. Effectively communicate technical concepts, research findings, and AI solution strategies to both technical and non-technical stakeholders. Understand the IBM tool and model landscape and work closely with cross-functional teams to leverage these tools, driving innovation and alignment. Lead and mentor team members to improve performance. Collaborate with operations, architects, and product teams to resolve issues and define product designs. Exercise best practices in agile development and software engineering. Code, unit test, debug, and perform integration tests of software components. Participate in software design reviews, code reviews, and project planning. Write and review documentation and technical blog posts. Contribute to department attainment of organizational objectives and high customer satisfaction.

Required education: Bachelor's Degree. Preferred education: Bachelor's Degree.

Required technical and professional expertise: Minimum 6 years of hands-on experience developing AI-based applications using Python. 2+ years in performance testing and reliability testing. 2+ years of experience using deep learning frameworks (TensorFlow, PyTorch, or Keras). Solid understanding of ML/AI concepts: EDA, preprocessing, algorithm selection, machine learning frameworks, model efficiency metrics, and model monitoring. Familiarity with Natural Language Processing (NLP) techniques. Deep understanding of Large Language Model (LLM) architectures, their capabilities, and their limitations. Proven expertise in integrating and working with LLMs to build robust AI solutions. Skilled in crafting effective prompts to guide LLMs toward desired outputs. Hands-on experience with LLM frameworks such as LangChain, LangGraph, CrewAI, etc. Experience in LLM application development based on Retrieval-Augmented Generation (RAG), familiarity with vector databases, and experience fine-tuning large language models (LLMs) to enhance performance and accuracy. Proficient in microservices development using Python (Django/Flask or similar technologies). Experience in Agile development methodologies. Familiarity with platforms like Kubernetes and experience building on top of native platforms. Experience with cloud-based data platforms and services (e.g., IBM, AWS, Azure, Google Cloud). Experience designing, building, and maintaining data processing systems working in containerized environments (Docker, OpenShift, k8s). Excellent communication skills with the ability to effectively collaborate with technical and non-technical stakeholders.

Preferred technical and professional experience: Experience in MLOps frameworks (BentoML, Kubeflow, or similar technologies) and exposure to LLMOps. Experience in cost optimisation initiatives. Experience with end-to-end chatbot development, including design, deployment, and ongoing optimization, leveraging NLP and integrating with backend systems and APIs. Understanding of security and ethical best practices for data and model development. Contributions to open source projects.
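For the prompt-crafting skill listed above, a minimal prompt-template sketch is shown below; the template wording and the downstream LLM call are assumptions, since the posting does not prescribe any particular framework.

```python
# Minimal prompt-template sketch (template wording is illustrative).
from string import Template

ANSWER_PROMPT = Template(
    "You are a support assistant.\n"
    "Answer the question using only the context below. "
    "If the context is insufficient, say so.\n\n"
    "Context:\n$context\n\nQuestion: $question\nAnswer:"
)


def build_prompt(context: str, question: str) -> str:
    # Keeping prompts in templates makes them versionable and testable,
    # which matters once many prompts are maintained in production.
    return ANSWER_PROMPT.substitute(context=context, question=question)


prompt = build_prompt(
    context="Model monitoring tracks drift, latency, and accuracy after deployment.",
    question="Why do teams monitor deployed models?",
)
print(prompt)  # pass this string to the LLM client of your choice
```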

Posted 3 months ago

Apply

4.0 - 8.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Role & Responsibilities

Job Title: AI Engineer (AI-Powered Agents, Knowledge Graphs, & MLOps)
Location: Hyderabad
Job Type: Full-time
Hands-on GenAI development in the GCP and Azure stacks.

Job Summary: We seek an AI Engineer with deep expertise in building AI-powered agents, designing and implementing knowledge graphs, and optimizing business processes through AI-driven solutions. The role also requires hands-on experience in AI Operations (AI Ops), including continuous integration/deployment (CI/CD), model monitoring, and retraining. The ideal candidate will have experience working with open-source or commercial large language models (LLMs) and be proficient in using platforms like Azure Machine Learning Studio or Google Vertex AI to scale AI solutions effectively.

Key Responsibilities:
AI Agent Development: Design, build, and deploy AI-powered agents for applications such as virtual assistants, customer service bots, and task automation systems using LLMs and other AI models.
Knowledge Graph Implementation: Develop and implement knowledge graphs for enterprise data integration, enhancing the retrieval, structuring, and management of large datasets to support decision-making.
AI-Driven Process Optimization: Collaborate with business units to optimize workflows using AI-driven solutions, automating decision-making processes and improving operational efficiency.
AI Ops (MLOps): Implement robust AI/ML pipelines that follow CI/CD best practices to ensure continuous integration and deployment of AI models across different environments.
Model Monitoring and Maintenance: Establish processes for real-time model monitoring, including tracking performance, drift detection, and accuracy of models in production environments.
Model Retraining and Optimization: Develop automated or semi-automated pipelines for model retraining based on changes in data patterns or model performance. Implement processes to ensure continuous improvement and accuracy of AI solutions.
Cloud and ML Platforms: Utilize platforms such as Azure Machine Learning Studio, Google Vertex AI, and open-source frameworks for end-to-end model development, deployment, and monitoring.
Collaboration: Work closely with data scientists, software engineers, and business stakeholders to deploy scalable AI solutions that deliver business impact.
MLOps Tools: Leverage MLOps tools for version control, model deployment, monitoring, and automated retraining processes to ensure operational stability and scalability of AI systems.
Performance Optimization: Continuously optimize models for scalability and performance, identifying bottlenecks and improving efficiencies.

Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
3+ years of experience as an AI Engineer, focusing on AI-powered agent development, knowledge graphs, AI-driven process optimization, and MLOps practices.
Proficiency in working with large language models (LLMs) such as GPT-3/4, GPT-J, BLOOM, or similar, including both open-source and commercial variants.
Experience with knowledge graph technologies, including ontology design and graph databases (e.g., Neo4j, AWS Neptune).
AI Ops/MLOps expertise: Hands-on experience with AI/ML CI/CD pipelines, automated model deployment, and continuous model monitoring in production environments. Familiarity with tools and frameworks for model lifecycle management, such as MLflow, Kubeflow, or similar.
Strong skills in Python, Java, or similar languages, and proficiency in building, deploying, and monitoring AI models.
Solid experience in natural language processing (NLP) techniques, including building conversational AI, entity recognition, and text generation models.
Model monitoring and retraining: Expertise in setting up automated pipelines for model retraining, monitoring for drift, and ensuring the continuous performance of deployed models.
Experience in using cloud platforms like Azure Machine Learning Studio, Google Vertex AI, or similar cloud-based AI/ML tools.

Preferred Skills:
Experience with building or integrating conversational AI agents using platforms like Microsoft Bot Framework, Rasa, or Dialogflow.
Familiarity with AI-driven business process automation and RPA integration using AI/ML models.
Knowledge of advanced AI-driven process optimization tools and techniques, including AI orchestration for enterprise workflows.
Experience with containerization technologies (e.g., Docker, Kubernetes) to support scalable AI/ML model deployment.
Certification as an Azure AI Engineer Associate or Google Professional Machine Learning Engineer, or relevant MLOps certifications, is a plus.
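As a small illustration of the knowledge-graph work described above, the sketch below uses networkx as a stand-in for a graph database such as Neo4j; the entities and relations are invented for the example.

```python
# Tiny knowledge-graph sketch with networkx (illustrative entities/relations).
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("Customer", "Order", relation="PLACED")
kg.add_edge("Order", "Product", relation="CONTAINS")
kg.add_edge("Product", "Supplier", relation="SUPPLIED_BY")

# Answer a simple multi-hop question: which entities are reachable from a customer?
reachable = nx.descendants(kg, "Customer")
print("Entities reachable from Customer:", sorted(reachable))

# Print the triples, which is roughly what a graph database would store.
for source, target, data in kg.edges(data=True):
    print(f"{source} -[{data['relation']}]-> {target}")
```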

Posted 3 months ago

Apply

12.0 - 16.0 years

40 - 50 Lacs

Pune, Chennai, Bengaluru

Hybrid

AI Ops Senior Architect | 12-17 years | Work Location: Pune / Bengaluru / Hyderabad / Chennai / Gurugram

Tredence is a data science, engineering, and analytics consulting company that partners with some of the leading global Retail, CPG, Industrial, and Telecom companies. We deliver business impact by enabling last-mile adoption of insights, uniting our strengths in business analytics, data science, and data engineering. Headquartered in the San Francisco Bay Area, we partner with clients in the US, Canada, and Europe. Bangalore is our largest Centre of Excellence, with skilled analytics and technology teams serving our growing base of Fortune 500 clients.

JOB DESCRIPTION
At Tredence, you will lead the evolution of "Industrializing AI" solutions for our clients by implementing ML/LLM/GenAI and Agent Ops best practices. You will lead the architecture, design, and development of large-scale ML/LLMOps platforms for our clients. You'll build and maintain tools for deployment, monitoring, and operations. You'll be a trusted advisor to our clients in the ML/GenAI/Agent Ops space and a coach to ML engineering practitioners, helping them build effective solutions to industrialize AI.

THE IDEAL CANDIDATE WILL BE RESPONSIBLE FOR

AI Ops Strategy, Innovation, Research and Technical Standards
1. Conduct research and experiment with emerging AI Ops technologies and trends. Create POVs and POCs and present Proofs of Technology using the latest tools, technologies, and services from hyperscalers, focused on ML, GenAI, and Agent Ops.
2. Define and propose new technical standards and best practices for the organization's AI Ops environment.
3. Lead the evaluation and adoption of innovative MLOps solutions to address critical business challenges.
4. Conduct meetups and attend and present at industry events, conferences, etc.
5. Ideate and develop accelerators to strengthen the service offerings of the AI Ops practice.

Solution Design & Architectural Development
6. Lead the design and architecture of scalable model training and deployment pipelines for large-scale deployments.
7. Architect and design large-scale ML and GenAI Ops platforms.
8. Collaborate with the Data Science and GenAI practices to define and implement strategies for model explainability and interpretability in AI solutions.
9. Mentor and guide senior architects in crafting cutting-edge AI Ops solutions.
10. Lead architecture reviews and identify opportunities for significant optimizations and improvements.

Documentation and Best Practices
11. Develop and maintain comprehensive documentation of AI Ops architectures, designs, and best practices.
12. Lead the development and delivery of training materials and workshops on AI Ops tools and techniques.
13. Actively participate in sharing knowledge and expertise with the MLOps team through internal presentations and code reviews.

Qualifications and Skills:
1. Bachelor's or Master's degree in Computer Science, Data Science, or a related field, with a minimum of 12 years of experience.
2. Proven experience in architecting and developing AI Ops solutions to streamline the Machine Learning and GenAI development lifecycle.
3. Proven experience as an AI Ops Architect (ML & GenAI) in the architecture and design of ML and GenAI platforms.
4. Hands-on experience with model deployment strategies, designing ML and GenAI model pipelines that scale in production, and model observability techniques used to monitor the performance of ML models and LLMs.
5. Strong coding skills with experience in implementing best coding practices.

Technical Skills & Expertise
Python, PySpark, PyTorch, Java, microservices, APIs
LLMOps: vector DBs, RAG, LLM orchestration tools, LLM observability, LLM guardrails, Responsible AI
MLOps: MLflow, ML/DL libraries, model and data drift detection libraries and techniques
Real-time and batch streaming
Container orchestration platforms
Cloud platforms: Azure / AWS / GCP; data platforms: Databricks / Snowflake

Nice to Have: Understanding of Agent Ops. Exposure to the Databricks platform.

You can expect to: work with the world's biggest Retail, CPG, Healthcare, Banking, and Manufacturing customers and help them solve some of their most critical problems; create multi-million-dollar business opportunities by leveraging an impact mindset, cutting-edge solutions, and industry best practices; work in a diverse environment that keeps evolving; and hone your entrepreneurial skills as you contribute to the growth of the organization.

Posted 3 months ago

Apply

3.0 - 5.0 years

15 - 18 Lacs

Mumbai, Pune

Work from Office

Job Description
Designation: AI Engineer
Experience range: 2-5 years of relevant experience
Location: Pune

Key Responsibilities: Develop and deploy scalable AI-powered applications using the Python stack. Leverage cutting-edge AI/ML technologies such as LangChain, LangGraph, AutoGen, Phidata, CrewAI, Hugging Face, OpenAI APIs, PyTorch, TensorFlow, and other advanced frameworks to build innovative AI applications. Write clean, efficient, and well-documented code adhering to best practices. Build and manage robust APIs for integrating AI solutions into applications. Research and experiment with emerging technologies to discover new AI-driven use cases. Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP), ensuring security, scalability, and performance. Collaborate with product managers, engineers, and UX/UI designers to define AI application requirements and align them with business objectives. Apply MLOps principles to streamline AI model deployment, monitoring, and optimization. Solve complex problems using foundational knowledge of generative AI, machine learning, and data processing techniques. Contribute to continuous improvement of development processes and practices. Resolve production issues by conducting effective troubleshooting and root cause analysis (RCA) within SLA. Work with operations teams to support product deployment and issue resolution.

Requirements

Educational Background: Bachelor's or Master's degree in Computer Science or related fields with a strong academic track record. Must have graduated from NIT, IIIT, IIT, or BITS Pilani colleges only.

Experience & Technical Skills: 3+ years of hands-on experience in building and deploying AI applications on the Python stack. Strong knowledge of Python and related frameworks. Good knowledge of several AI/ML and agentic frameworks and platforms, such as LangChain, LangGraph, AutoGen, Hugging Face, CrewAI, OpenAI APIs, PyTorch, and TensorFlow. Experience with AI/ML workflows, including data preparation, model deployment, and optimization. Proficiency in building and consuming RESTful APIs for connecting AI models with web applications. Knowledge of MLOps tools and practices, including model lifecycle management, CI/CD for AI, and model monitoring. Familiarity with cloud platforms like AWS, Azure, or Google Cloud, including containerization (Docker) and orchestration (Kubernetes). Experience with CI/CD pipelines, version control (Git), and automation frameworks. Strong understanding of algorithms, AI/ML fundamentals, and data preprocessing techniques.

Soft Skills: Passion for exploring, experimenting with, and implementing emerging AI technologies. Self-starter who can work independently and collaboratively in a fast-paced environment. Excellent problem-solving and analytical abilities to tackle complex challenges. Effective communication skills to explain AI concepts and strategies to stakeholders.

Why Join Us? Be part of a forward-thinking company revolutionizing air and port cargo logistics with AI. Collaborate with a team passionate about innovation and excellence. Gain exposure to cutting-edge AI technologies and frameworks. Enjoy opportunities for professional growth and upskilling in the latest tech stack. Receive a competitive compensation and benefits package.

Posted 3 months ago

Apply

3.0 - 4.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Key Responsibilities

AI Model Deployment & Integration: Deploy and manage AI/ML models, including traditional machine learning and GenAI solutions (e.g., LLMs, RAG systems). Implement automated CI/CD pipelines for seamless deployment and scaling of AI models. Ensure efficient model integration into existing enterprise applications and workflows in collaboration with AI Engineers. Optimize AI infrastructure for performance and cost efficiency in cloud environments (AWS, Azure, GCP).

Monitoring & Performance Management: Develop and implement monitoring solutions to track model performance, latency, drift, and cost metrics. Set up alerts and automated workflows to manage performance degradation and retraining triggers. Ensure responsible AI by monitoring for issues such as bias, hallucinations, and security vulnerabilities in GenAI outputs. Collaborate with Data Scientists to establish feedback loops for continuous model improvement.

Automation & MLOps Best Practices: Establish scalable MLOps practices to support the continuous deployment and maintenance of AI models. Automate model retraining, versioning, and rollback strategies to ensure reliability and compliance. Utilize infrastructure-as-code (Terraform, CloudFormation) to manage AI pipelines.

Security & Compliance: Implement security measures to prevent prompt injections, data leakage, and unauthorized model access. Work closely with compliance teams to ensure AI solutions adhere to privacy and regulatory standards (HIPAA, GDPR). Regularly audit AI pipelines for ethical AI practices and data governance.

Collaboration & Process Improvement: Work closely with AI Engineers, Product Managers, and IT teams to align AI operational processes with business needs. Contribute to the development of AI Ops documentation, playbooks, and best practices. Continuously evaluate emerging GenAI operational tools and processes to drive innovation.

Qualifications & Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related field. Relevant certifications in cloud platforms (AWS, Azure, GCP) or MLOps frameworks are a plus.

Experience: 3+ years of experience in AI/ML operations, MLOps, or DevOps for AI-driven solutions. Hands-on experience deploying and managing AI models, including LLMs and GenAI solutions, in production environments. Experience working with cloud AI platforms such as Azure AI, AWS SageMaker, or Google Vertex AI.

Technical Skills: Proficiency in MLOps tools and frameworks such as MLflow, Kubeflow, or Airflow. Hands-on experience with monitoring tools (Prometheus, Grafana, ELK Stack) for AI performance tracking. Experience with containerization and orchestration tools (Docker, Kubernetes) to support AI workloads. Familiarity with automation scripting using Python, Bash, or PowerShell. Understanding of GenAI-specific operational challenges such as response monitoring, token management, and prompt optimization. Knowledge of CI/CD pipelines (Jenkins, GitHub Actions) for AI model deployment. Strong understanding of AI security principles, including data privacy and governance considerations.
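Given the monitoring stack named above (Prometheus, Grafana), a minimal metrics-export sketch might look like the following; the metric names, port, and simulated inference loop are illustrative assumptions.

```python
# Minimal Prometheus metrics exporter sketch for a model-serving process.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("model_requests_total", "Total inference requests", ["model"])
LATENCY = Histogram("model_latency_seconds", "Inference latency in seconds", ["model"])

if __name__ == "__main__":
    # Metrics are exposed at http://localhost:8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    while True:
        with LATENCY.labels(model="demo-llm").time():
            time.sleep(random.uniform(0.05, 0.3))  # stand-in for real inference
        REQUESTS.labels(model="demo-llm").inc()
```

Grafana dashboards and alert rules would then be built on top of these exported series.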

Posted 3 months ago

Apply

6.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Curious about the role? Here is what your typical day would look like: 6+ years of relevant DS experience. Proficient in writing structured Python. Proficiency in at least one cloud technology (AWS/Azure/GCP) is mandatory. Follows good software engineering practices and has an interest in building reliable and robust software. Good understanding of DS concepts and the DS model lifecycle. Working knowledge of Linux or Unix environments, ideally in a cloud environment. Working knowledge of Spark/PySpark is desirable. Model deployment/model monitoring experience is mandatory. CI/CD pipeline creation is good to have. Excellent written and verbal communication skills.

Posted 3 months ago

Apply

7.0 - 12.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Build and deploy scalable ML models and MLOps pipelines in collaboration with data scientists.

Required Candidate Profile: 6-12 years in ML development, Python, model tuning, and enterprise AI deployment.

Posted 3 months ago

Apply

10 - 20 years

25 - 40 Lacs

Bengaluru

Work from Office

We are looking for an AI Advisor with 10+ years of experience, based in Bangalore.

Key Responsibilities:

Strategic AI Guidance: Advise leadership and project teams on the responsible and effective integration of AI across engagements. Provide deep insights into the AI and GenAI model landscape, including open-source and commercial offerings. Assess risks related to AI implementation (e.g., bias, misuse, data privacy) and develop mitigation strategies.

Model Evaluation & Use Case Realization: Support model selection and evaluation for specific use cases, especially in low-resource or domain-specific contexts (e.g., agriculture, governance). Offer guidance on data strategies for model fine-tuning, including training data sufficiency, preprocessing, and adaptation. Work closely with technical teams to help translate domain needs into technical requirements, and vice versa. Help conceptualize and refine real-world use cases from ideation to implementation, including AI workflows and impact pathways.

Cross-Functional Collaboration: Engage with an ecosystem of partners - governments, development agencies, academic institutions - to drive AI thinking across projects. Communicate complex AI concepts clearly to non-technical stakeholders, enabling better alignment and decision-making. Collaborate with cross-functional teams to define requirements for AI components in DPGs and platforms.

Ethical AI & Data Governance: Ensure all AI solutions adhere to ethical AI principles, including fairness, transparency, explainability, and accountability. Provide strategic inputs on data governance, especially in contexts involving sensitive or multilingual datasets. Align recommendations with emerging AI regulations and standards, both global and India-specific.

Qualifications & Skills: Bachelor's or Master's degree in a relevant field (e.g., Computer Science, Data Science, AI, NLP, or related). 10+ years of experience in AI, consulting, or technology roles, with a strong foundation in language technology and NLP. Proven ability to evaluate and fine-tune models, especially in low-resource or emerging domain contexts. Strong understanding of AI model lifecycles, including data sourcing, model training, validation, deployment, and feedback. Excellent communication skills and experience working in multi-stakeholder environments, especially in public sector or mission-driven settings. Familiarity with data privacy frameworks, ethical AI standards, and responsible AI deployment practices. Ability to think strategically, act hands-on, and operate independently in a fast-moving, collaborative environment.

Posted 4 months ago

Apply

3 - 8 years

15 - 30 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Location: PAN India. Immediate joiner required.

Experienced in end-to-end capital or impairment process management.
RWA calculation engine using the Standardized Approach (Basel 1/2/3) for credit risk.
Asset classification for retail and wholesale products, collateral adjustment, and on-balance/off-balance exposure calculation.
Migration of RWA calculation from Basel 3 to 3.1 using the Standardized or AIRB approach.
Knowledge of the banking domain and banking products such as retail, corporate, banks, SME, and sovereign.
Project management / stakeholder management.

Keywords: Standardized Approach (Basel 1/2/3), Capital Reporting (Standardized Approach), RWA Calculation, Credit Risk (RWA), capital management
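For context on the standardized-approach RWA calculation named above, a toy sketch follows; the exposures and risk weights are illustrative only, and a production engine would apply the full Basel rule set (credit risk mitigation, credit conversion factors, asset-class mappings, and so on) rather than a flat per-exposure weight.

```python
# Toy standardized-approach RWA sketch (illustrative exposures and weights).
exposures = [
    {"name": "Retail mortgage", "ead": 1_000_000, "risk_weight": 0.35},
    {"name": "Corporate loan", "ead": 2_500_000, "risk_weight": 1.00},
    {"name": "Sovereign bond", "ead": 500_000, "risk_weight": 0.00},
]

# RWA = exposure at default (EAD) x risk weight, summed across the book.
total_rwa = sum(e["ead"] * e["risk_weight"] for e in exposures)
minimum_capital = total_rwa * 0.08  # 8% minimum capital ratio under Basel rules

for e in exposures:
    print(f"{e['name']}: RWA = {e['ead'] * e['risk_weight']:,.0f}")
print(f"Total RWA = {total_rwa:,.0f}; minimum capital = {minimum_capital:,.0f}")
```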

Posted 4 months ago

Apply

6 - 10 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

MLE/Sr. MLE - Chennai, Bangalore, Hyderabad

Who we are: Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services and solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow. Our team of 4000+ technologists and consultants is based in the US, Canada, the UK, India, Singapore, and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are Great Place to Work-Certified (2022-25), recognized by analyst firms such as Forrester, Gartner, HFS, Everest, and ISG, and have been ranked among the best and fastest-growing analytics firms by Inc., Financial Times, Economic Times, and Analytics India Magazine.

Curious about the role? What would your typical day look like? We are looking for a Machine Learning Engineer/Sr. MLE who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will engage with clients to understand their business context, translate business problems and technical constraints into technical requirements for the desired analytics solution, and collaborate with a team of data scientists and engineers to embed AI and analytics into business decision processes.

What do we expect? 6+ years of experience, with at least 4+ years of relevant MLOps experience. Proficient in writing structured Python (mandatory). Proficient in Azure Databricks. Follows good software engineering practices and has an interest in building reliable and robust software. Good understanding of DS concepts and the DS model lifecycle. Working knowledge of Linux or Unix environments, ideally in a cloud environment. Working knowledge of Spark/PySpark is desirable. Model deployment/model monitoring experience is desirable. CI/CD pipeline creation is good to have. Excellent written and verbal communication skills. B.Tech from a Tier-1 college / M.S. or M.Tech is preferred.

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable or unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry. Additional benefits: health insurance (self & family), a virtual wellness platform, and knowledge communities.

Posted 4 months ago

Apply

5 - 10 years

25 - 27 Lacs

Coimbatore, Bengaluru

Work from Office

- TensorFlow, PyTorch
- Multi-GPU/TPU, distributed data pipelines
- Advanced techniques, RAG systems
- AWS, GCP, or Azure
- Terraform, Airflow, Kubeflow
- Prometheus, Grafana, or similar tools
- Version control & CI/CD: Git, Jenkins, GitHub Actions, etc.
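
Since Prometheus/Grafana appear in the skill list, a minimal sketch of exposing model-serving metrics for Prometheus to scrape; the metric names, port, and the stand-in inference step are illustrative assumptions:

```python
# Minimal sketch: expose model-serving metrics for Prometheus to scrape.
# Metric names and the port are illustrative; inference is a stand-in sleep.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")

def predict(features):
    with LATENCY.time():                         # record how long inference takes
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for real inference
        PREDICTIONS.inc()
        return 0.5

if __name__ == "__main__":
    start_http_server(8000)                      # metrics served at :8000/metrics
    while True:
        predict([1.0, 2.0, 3.0])
```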

Posted 4 months ago

Apply

4 - 9 years

30 - 45 Lacs

Noida, Bengaluru

Work from Office

Minimum 3 years of experience in PD/LGD models. IFRS 9 / IRB model validation, development, or monitoring experience is mandatory. Package up to 55 LPA, depending on experience. Required candidate profile: Bangalore/Noida location. Please send CVs to supreetbakshi@imaginators.co or call 7042331616 (CVs by email only).
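
For context on the modelling terms above: under IFRS 9, the expected credit loss for an exposure is commonly written as PD × LGD × EAD, discounted over the relevant horizon. A minimal illustrative sketch with made-up inputs (not any bank's actual methodology):

```python
# Illustrative 12-month expected credit loss: ECL = PD * LGD * EAD, discounted
# at the effective interest rate. All inputs below are made up.
def expected_credit_loss(pd_12m: float, lgd: float, ead: float,
                         eir: float = 0.05, t_years: float = 1.0) -> float:
    return pd_12m * lgd * ead / (1 + eir) ** t_years

# Example: 2% probability of default, 45% loss given default, 100,000 exposure.
print(f"ECL: {expected_credit_loss(0.02, 0.45, 100_000):,.2f}")
```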

Posted 4 months ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Design, develop, and implement AI and generative AI solutions to address business problems and achieve objectives. Gather, clean, and prepare large datasets to ensure readiness for AI model training. Train, fine-tune, evaluate, and optimize AI models for specific use cases, ensuring accuracy, performance, cost-effectiveness, and scalability. Seamlessly integrate AI models and autonomous agent solutions into cloud-based and on-prem products to drive smarter workflows and improved productivity. Develop reusable tools, libraries, and components that standardize and accelerate the development of AI solutions across the organization. Monitor and maintain deployed models, ensuring consistent performance and reliability in production environments. Stay up to date with the latest AI/ML advancements, exploring new technologies, algorithms, and methodologies to enhance product capabilities. Effectively communicate technical concepts, research findings, and AI solution strategies to both technical and non-technical stakeholders. Understand the IBM tool and model landscape and work closely with cross-functional teams to leverage these tools, driving innovation and alignment. Lead and mentor team members to improve performance. Collaborate with operations, architects, and product teams to resolve issues and define product designs. Exercise best practices in agile development and software engineering. Code, unit test, debug, and perform integration tests of software components. Participate in software design reviews, code reviews, and project planning. Write and review documentation and technical blog posts. Contribute to department attainment of organizational objectives and high customer satisfaction.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
- Minimum 6 years of hands-on experience developing AI-based applications using Python.
- 2+ years in performance testing and reliability testing.
- 2+ years of experience using deep learning frameworks (TensorFlow, PyTorch, or Keras).
- Solid understanding of ML/AI concepts: EDA, preprocessing, algorithm selection, machine learning frameworks, model efficiency metrics, model monitoring.
- Familiarity with natural language processing (NLP) techniques.
- Deep understanding of large language model (LLM) architectures, their capabilities, and limitations.
- Proven expertise in integrating and working with LLMs to build robust AI solutions.
- Skilled in crafting effective prompts to guide LLMs to produce desired outputs.
- Hands-on experience with LLM frameworks such as LangChain, LangGraph, CrewAI, etc.
- Experience in LLM application development based on the retrieval-augmented generation (RAG) concept, familiarity with vector databases, and fine-tuning large language models (LLMs) to enhance performance and accuracy.
- Proficient in microservices development using Python (Django/Flask or similar technologies).
- Experience in agile development methodologies.
- Familiarity with platforms like Kubernetes and experience building on top of native platforms.
- Experience with cloud-based data platforms and services (e.g., IBM, AWS, Azure, Google Cloud).
- Experience designing, building, and maintaining data processing systems and working in containerized environments (Docker, OpenShift, k8s).
- Excellent communication skills with the ability to effectively collaborate with technical and non-technical stakeholders.

Preferred technical and professional experience:
- Experience with MLOps frameworks (BentoML, Kubeflow, or similar technologies) and exposure to LLMOps.
- Experience in cost optimization initiatives.
- Experience with end-to-end chatbot development, including design, deployment, and ongoing optimization, leveraging NLP and integrating with backend systems and APIs.
- Understanding of security and ethical best practices for data and model development.
- Contributions to open source projects.
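
As a rough illustration of the RAG pattern this posting refers to: retrieve the most relevant documents by embedding similarity, then pass them to the LLM as context. The embed() and generate() functions below are placeholders for whatever embedding model and LLM the actual stack uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch. `embed` and `generate`
# are placeholders for the real embedding model and LLM in the stack.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def generate(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"[LLM answer based on a prompt of {len(prompt)} chars]"

DOCS = [
    "Kubernetes pods restart automatically when a liveness probe fails.",
    "RAG grounds an LLM's answer in documents retrieved at query time.",
    "Vector databases index embeddings for fast similarity search.",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def answer(question: str, k: int = 2) -> str:
    q = embed(question)
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCS[i] for i in np.argsort(sims)[::-1][:k])
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(answer("What does RAG do?"))
```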

Posted Date not available

Apply

6.0 - 9.0 years

30 - 45 Lacs

Pune

Hybrid

Senior Machine Learning Engineer

As a Senior MLOps Engineer, together with Pattern's Data Science and Engineering teams, you will create and maintain impactful solutions for our brands across the world. From traditional machine learning to cutting-edge AI, you will work and lead throughout the model lifecycle.

Responsibilities:
- Teamwork: MLOps is a team sport, and we require a contributor who can elevate everyone in the MLOps organization. While technical skills are required, your communication and teamwork skills will deliver tangible value to our teams as well as elevate the team's capacity.
- Collaboration: Work directly with data science, engineering, and machine learning teams around the world, including evening IST hours.
- Pipeline management: Architect, implement, and maintain scalable ML pipelines, with seamless integration from data ingestion to production deployment.
- Model monitoring: Lead the operationalization of machine learning models, ensuring hundreds of models are continuously monitored, retrained, and optimized in real-time environments.
- Deployment: Deploy machine learning platform solutions in the cloud, securely and cost-effectively.
- Reporting: Effectively communicate actionable insights across teams using both automatic (e.g., alerts) and non-automatic methods.

The type of game-changing candidate we are looking for:
- Hungry: This position is for those who want to become an MLOps thought leader, mastering and incorporating the latest best practices into an ML platform.
- Transparent: Willing to identify and admit errors and to seek out opportunities to continually improve, both in their own work and across the team.
- Clear communicator: MLOps is a central node in a complex system. Clear, actionable, and concise communication, both written and verbal, is a must.
- Solution-oriented: We focus on consistently delivering the best solutions with a solution-oriented, positive attitude.
- Strong demonstration of technical expertise in computer science, machine learning, data science, or a related field.
- Multiple years of direct and extensive experience with AWS.
- Multiple years of experience building and managing MLOps platform tools.
- Excited to empower data scientists with tools, practices, and training that simplify MLOps enough for Data Science to increasingly practice MLOps on their own and own products in production.
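
Monitoring hundreds of models, as described above, is often reduced to a simple per-feature drift statistic such as the population stability index (PSI). A minimal sketch; the 0.2 alert threshold is a common rule of thumb, not a value from this posting:

```python
# Minimal population stability index (PSI) check for feature/score drift.
# The 0.2 alert threshold is a common rule of thumb, not a fixed standard.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) / division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train_scores = rng.normal(0.0, 1.0, 10_000)   # training-time distribution
live_scores = rng.normal(0.3, 1.1, 10_000)    # shifted production data

value = psi(train_scores, live_scores)
print(f"PSI = {value:.3f}", "-> drift alert" if value > 0.2 else "-> ok")
```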

Posted Date not available

Apply

5.0 - 10.0 years

19 - 34 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Curious about the role? What would your typical day look like? We are looking for a Senior Machine Learning Engineer who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
- Engage with clients to understand their business context.
- Translate business problems and technical constraints into technical requirements for the desired analytics solution.
- Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes.

What do we expect?
- 6+ years of experience, with at least 4+ years of relevant MLOps experience.
- Proficiency in structured Python.
- Proficiency in at least one cloud technology (AWS/Azure/GCP) is mandatory.
- Model deployment experience is mandatory.
- Follows good software engineering practices and has an interest in building reliable and robust software.
- Good understanding of data science concepts and the DS model lifecycle.
- Working knowledge of Linux or Unix environments, ideally in the cloud.
- Working knowledge of Spark/PySpark is desirable.
- CI/CD pipeline creation is good to have.
- Excellent written and verbal communication skills.

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable or unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
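
Model deployment is called out as mandatory above; a minimal sketch of wrapping a trained model in a small Flask scoring service. The /predict route, the JSON payload shape, and the stand-in model are illustrative assumptions:

```python
# Minimal model-serving sketch with Flask. The /predict route and the JSON
# payload shape ({"features": [[...]]}) are illustrative assumptions.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

app = Flask(__name__)

# Train a stand-in model at startup; a real service would load a saved artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # list of feature rows
    preds = model.predict(features).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```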

Posted Date not available

Apply

5.0 - 8.0 years

15 - 27 Lacs

Kolkata, Pune, Bengaluru

Hybrid

Role & responsibilities Design and implement end-to-end ML Ops pipelines for training, validation, deployment, monitoring, and retraining of ML models. Optimize and fine-tune large language models (LLMs) for various applications, ensuring performance and efficiency. Develop CI/CD pipelines for ML models to automate deployment and monitoring in production. Monitor model performance, detect drift , and implement automated retraining mechanisms. Work with cloud platforms ( AWS, GCP, Azure ) and containerization technologies ( Docker, Kubernetes ) for scalable deployments. Implement best practices in data engineering , feature stores, and model versioning. Collaborate with data scientists, engineers, and product teams to integrate ML models into production applications. Ensure compliance with security, privacy, and ethical AI standards in ML deployments. Optimize inference performance and cost of LLMs using quantization, pruning, and distillation techniques . Deploy LLM-based APIs and services, integrating them with real-time and batch processing pipelines. Key Requirements: Technical Skills: Strong programming skills in Python, with experience in ML frameworks ( TensorFlow, PyTorch, Hugging Face, JAX ). Experience with MLOps tools (MLflow, Kubeflow, Vertex AI, SageMaker, Airflow). Deep understanding of LLM architectures , prompt engineering, and fine-tuning. Hands-on experience with containerization (Docker, Kubernetes) and orchestration tools . Proficiency in cloud services (AWS/GCP/Azure) for ML model training and deployment. Experience with monitoring ML models (Prometheus, Grafana, Evidently AI). Knowledge of feature stores (Feast, Tecton) and data pipelines (Kafka, Apache Beam). Strong background in distributed computing (Spark, Ray, Dask) . Soft Skills: Strong problem-solving and debugging skills. Ability to work in cross-functional teams and communicate complex ML concepts to stakeholders. Passion for staying updated with the latest ML and LLM research & technologies . Preferred Qualifications: Experience with LLM fine-tuning , Reinforcement Learning with Human Feedback ( RLHF ), or LoRA/PEFT techniques . Knowledge of vector databases (FAISS, Pinecone, Weaviate) for retrieval-augmented generation ( RAG ). Familiarity with LangChain, LlamaIndex , and other LLMOps-specific frameworks. Experience deploying LLMs in production (ChatGPT, LLaMA, Falcon, Mistral, Claude, etc.) .

Posted Date not available

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies