
717 MLflow Jobs - Page 21

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

9.0 years

0 Lacs

Greater Chennai Area

On-site

Source: LinkedIn

The AI, Data, and Analytics (AIDA) organization, part of Pfizer Digital, is responsible for the development and management of all data and analytics tools and platforms across the enterprise – from global product development, to manufacturing, to commercial, to point of patient care in over 100 countries. One of the team's top priorities is the development of Business Intelligence (BI), Reporting, and Visualization products that enable the company's digital transformation to bring innovative therapeutics to patients.

Role Summary
We are looking for a technically skilled and experienced Reporting Engineering Senior Manager who is passionate about developing BI and data visualization products for our Customer Facing and Sales Enablement colleagues, totaling over 20,000 individuals. This role works across multiple business segments globally to deliver best-in-class BI Reporting and Visualization capabilities that enable impactful business decisions and cohesive, high-engagement user experiences. In this position, you will be accountable for a thorough understanding of data, business, and analytic requirements in order to deliver high-impact, relevant, interactive data visualization products that drive company performance through continuous monitoring and measurement, root-cause identification, and proactive detection of patterns and triggers across the company. The role also drives best practices and standards for BI & Visualization, works closely with stakeholders to understand their needs, and ensures that reporting assets are created with a focus on Customer Experience. It requires working with complex and advanced data environments, employing the right architecture to build scalable semantic layers and contemporary reporting visualizations. The Reporting Manager will ensure data quality and integrity by validating the accuracy of KPIs and insights, resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data- and analytics-driven solutions for the pharmaceutical industry.

Role Responsibilities
• Serve as the engineering expert for business intelligence and data visualization products in service of field force and HQ enabling functions.
• Act as lead Technical BI & Visualization developer on projects and collaborate with global team members (e.g., other engineers, regional delivery and activation teams, vendors) to architect, design, and create BI & Visualization products at scale.
• Own BI solution architecture design and implementation.
• Maintain a thorough understanding of data, business, and analytic requirements (incl. BI Product Blueprints such as SMART) to deliver high-impact, relevant data visualization products while respecting project or program budgets and timelines.
• Deliver quality Functional Requirements and Solution Designs, adhering to established standards and best practices.
• Follow Pfizer processes for Portfolio Management, Project Management, and the Product Management Playbook, using Agile, Hybrid, or Enterprise Solution Life Cycle approaches.
• Bring extensive technical and implementation knowledge of a multitude of BI and Visualization platforms, including but not limited to Tableau, MicroStrategy, Business Objects, and MS SSRS.
• Experience with cloud-based architectures, cloud analytics products/solutions, and data products/solutions (e.g., AWS Redshift, MS SQL, Snowflake, Oracle, Teradata).

Qualifications
• Bachelor's degree in a technical area such as computer science, engineering, or management information science.
• 9+ years of relevant experience or knowledge in areas such as database management, data quality, master data management, metadata management, performance tuning, collaboration, and business process management.
• Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred; domain knowledge in the pharmaceutical industry is preferred.
• Good knowledge of data governance and data cataloging best practices.
• Strong business analysis acumen to meet or exceed business requirements following User-Centered Design (UCD).
• Strong experience with testing of BI and analytics applications: unit testing (e.g., phased or Agile sprints or MVP), System Integration Testing (SIT), and User Acceptance Testing (UAT).
• Experience with technical solution management tools such as JIRA or GitHub.
• Stays abreast of customer, industry, and technology trends in enterprise Business Intelligence (BI) and visualization tools.

Technical Skillset
• 9+ years of hands-on experience developing BI capabilities using MicroStrategy; proficiency in other common BI tools, such as Tableau and Power BI, is a plus.
• Common Data Model (logical and physical) and conceptual data model validation to create a consumption layer for reporting (dimensional model, semantic layer, direct database aggregates, or OLAP cubes).
• Development using a design system for reporting as well as ad-hoc analytics templates.
• BI product scalability and performance tuning; platform administration and security; BI platform tenant management (licensing, capacity, vendor access, vulnerability testing).
• Experience working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable.
• Experience with AWS services (EC2, EMR, RDS, Spark) is preferred.
• Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow.
• Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred.
• Great communication skills; great business influencing and stakeholder management skills.

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech

Posted 3 weeks ago

Apply

9.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

The AI, Data, and Analytics (AIDA) organization, part of Pfizer Digital, is responsible for the development and management of all data and analytics tools and platforms across the enterprise – from global product development, to manufacturing, to commercial, to point of patient care in over 100 countries. One of the team's top priorities is the development of Business Intelligence (BI), Reporting, and Visualization products that enable the company's digital transformation to bring innovative therapeutics to patients.

Role Summary
We are looking for a technically skilled and experienced Reporting Engineering Senior Manager who is passionate about developing BI and data visualization products for our Customer Facing and Sales Enablement colleagues, totaling over 20,000 individuals. This role works across multiple business segments globally to deliver best-in-class BI Reporting and Visualization capabilities that enable impactful business decisions and cohesive, high-engagement user experiences. In this position, you will be accountable for a thorough understanding of data, business, and analytic requirements in order to deliver high-impact, relevant, interactive data visualization products that drive company performance through continuous monitoring and measurement, root-cause identification, and proactive detection of patterns and triggers across the company. The role also drives best practices and standards for BI & Visualization, works closely with stakeholders to understand their needs, and ensures that reporting assets are created with a focus on Customer Experience. It requires working with complex and advanced data environments, employing the right architecture to build scalable semantic layers and contemporary reporting visualizations. The Reporting Manager will ensure data quality and integrity by validating the accuracy of KPIs and insights, resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data- and analytics-driven solutions for the pharmaceutical industry.

Role Responsibilities
• Serve as the engineering expert for business intelligence and data visualization products in service of field force and HQ enabling functions.
• Act as lead Technical BI & Visualization developer on projects and collaborate with global team members (e.g., other engineers, regional delivery and activation teams, vendors) to architect, design, and create BI & Visualization products at scale.
• Own BI solution architecture design and implementation.
• Maintain a thorough understanding of data, business, and analytic requirements (incl. BI Product Blueprints such as SMART) to deliver high-impact, relevant data visualization products while respecting project or program budgets and timelines.
• Deliver quality Functional Requirements and Solution Designs, adhering to established standards and best practices.
• Follow Pfizer processes for Portfolio Management, Project Management, and the Product Management Playbook, using Agile, Hybrid, or Enterprise Solution Life Cycle approaches.
• Bring extensive technical and implementation knowledge of a multitude of BI and Visualization platforms, including but not limited to Tableau, MicroStrategy, Business Objects, and MS SSRS.
• Experience with cloud-based architectures, cloud analytics products/solutions, and data products/solutions (e.g., AWS Redshift, MS SQL, Snowflake, Oracle, Teradata).

Qualifications
• Bachelor's degree in a technical area such as computer science, engineering, or management information science.
• 9+ years of relevant experience or knowledge in areas such as database management, data quality, master data management, metadata management, performance tuning, collaboration, and business process management.
• Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred; domain knowledge in the pharmaceutical industry is preferred.
• Good knowledge of data governance and data cataloging best practices.
• Strong business analysis acumen to meet or exceed business requirements following User-Centered Design (UCD).
• Strong experience with testing of BI and analytics applications: unit testing (e.g., phased or Agile sprints or MVP), System Integration Testing (SIT), and User Acceptance Testing (UAT).
• Experience with technical solution management tools such as JIRA or GitHub.
• Stays abreast of customer, industry, and technology trends in enterprise Business Intelligence (BI) and visualization tools.

Technical Skillset
• 9+ years of hands-on experience developing BI capabilities using MicroStrategy; proficiency in other common BI tools, such as Tableau and Power BI, is a plus.
• Common Data Model (logical and physical) and conceptual data model validation to create a consumption layer for reporting (dimensional model, semantic layer, direct database aggregates, or OLAP cubes).
• Development using a design system for reporting as well as ad-hoc analytics templates.
• BI product scalability and performance tuning; platform administration and security; BI platform tenant management (licensing, capacity, vendor access, vulnerability testing).
• Experience working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable.
• Experience with AWS services (EC2, EMR, RDS, Spark) is preferred.
• Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow.
• Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred.
• Great communication skills; great business influencing and stakeholder management skills.

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

We're seeking a Generative AI Engineer with 3-6 years of experience to design, develop, and deploy AI models that enhance our travel platforms. You'll work on LLMs, diffusion models, NLP, and other generative techniques to build solutions like dynamic content creation, conversational AI, and predictive travel recommendations.

Skills & Qualifications
• 3-6 years of hands-on experience in AI/ML, with at least 1-2 years focused on generative AI.
• Proficiency in Python, PyTorch/TensorFlow, and frameworks like LangChain, Hugging Face, or LlamaIndex.
• Experience with LLM fine-tuning, prompt engineering, and RAG architectures.
• Knowledge of cloud platforms (AWS/GCP/Azure) and MLOps tools (MLflow, Kubeflow).
• Familiarity with travel industry data (e.g., booking systems, customer reviews) is a plus.
• Strong problem-solving skills and the ability to translate business needs into AI solutions.

Key Responsibilities
• Design, train, and fine-tune generative AI models (e.g., GPT, Llama, Stable Diffusion) for travel-specific use cases.
• Implement NLP pipelines for chatbots, personalized recommendations, and automated content generation.
• Optimize models for performance, scalability, and cost-efficiency (e.g., quantization, distillation).
• Collaborate with product teams to integrate AI into customer-facing applications (e.g., dynamic itineraries, virtual travel assistants).
• Stay ahead of industry trends (e.g., multimodal AI, RAG, autonomous agents) and prototype innovative solutions.
• Ensure ethical AI practices, bias mitigation, and compliance with data privacy regulations.

Nice-to-Have
• Publications or projects in generative AI (GitHub, blogs, research papers).
• Experience with multimodal models (text + image/video generation).
• Exposure to graph neural networks (GNNs) for recommendation systems.

What We Offer
• A fast-paced, collaborative, and growth-oriented environment.
• Direct impact on products used by millions of global travelers.
• Work on real-world AI challenges in a dynamic travel-tech environment.
• Competitive salary and travel perks.
• Flexible work culture with a focus on innovation.

Why Join Us?
At Thrillophilia, you will be part of a team that is dedicated to redefining the future of travel. We have millions of users, but to reach the next milestone, we need fresh perspectives and bold ideas to perfect every product and process. Here, you won't find the typical startup clichés: there's no excess, no fluff, just the raw, exhilarating challenge of creating the future of travel. At Thrillophilia, we don't just offer a job, we offer an experience! From Holi's vibrant colors to Diwali's festive lights, every moment here is a celebration of life, energy, and creativity. We believe in empowering young minds to think big, innovate, and grow, because passion drives progress. Whether it's our grand festivals or recognizing and celebrating our top performers at the RnR, we make sure success never goes unnoticed. Forget the robotic 9-to-5; at Thrillophilia, we thrive on spontaneity, collaboration, and making every day feel like a grand event. (ref:hirist.tech)
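For context on the RAG architectures this posting asks for: the core retrieval step can be sketched in a few lines of Python. The snippet below is illustrative only and not part of the posting; the sentence-transformers model name and the travel snippets are placeholder assumptions.

```python
# Minimal retrieval step for a RAG pipeline: embed documents, embed the query,
# and return the most similar snippets to stuff into an LLM prompt.
# Assumes `pip install sentence-transformers numpy`; the model choice is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model

documents = [
    "Refund policy: bookings cancelled 48 hours in advance are fully refundable.",
    "The Bali sunrise trek starts at 3 AM and includes hotel pickup.",
    "Group discounts apply to parties of six or more travellers.",
]

doc_vecs = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # normalized vectors, so dot product equals cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

if __name__ == "__main__":
    context = retrieve("Can I get my money back if I cancel?")
    prompt = "Answer using only this context:\n" + "\n".join(context)
    print(prompt)  # this prompt would then be sent to the LLM of choice
```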

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

About The Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
• Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval-Augmented Generation), prompt engineering, model evaluation, and LLM integration.
• Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
• Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
• Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
• Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
• Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
• Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
• Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
• Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
• Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
• Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
• Proven experience with batch data pipelines and training/inference orchestration.
• Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
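As a rough illustration of the FastAPI-based GenAI serving described above (not taken from the posting), the sketch below wires a pydantic request model to a stubbed answer generator; the endpoint name and the `generate_answer` stub are assumptions.

```python
# Minimal FastAPI sketch of a GenAI-serving endpoint.
# `generate_answer` stands in for a real RAG/LLM call and is purely illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="genai-demo")

class AskRequest(BaseModel):
    question: str
    top_k: int = 3  # how many retrieved chunks to feed the model

class AskResponse(BaseModel):
    answer: str

def generate_answer(question: str, top_k: int) -> str:
    # Placeholder: a real service would retrieve context from a vector store
    # and call an LLM; kept as a stub so the example stays self-contained.
    return f"(stub) would answer '{question}' using {top_k} retrieved chunks"

@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    return AskResponse(answer=generate_answer(req.question, req.top_k))

# Run locally with, for example:  uvicorn app:app --reload
```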

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Key Responsibilities
• Build robust document data extraction pipelines using NLP and OCR techniques.
• Develop and optimize end-to-end workflows for parsing scanned/image-based documents (PDFs, JPGs, TIFFs) and structured files (MS Excel, MS Word).
• Leverage LLMs (OpenAI GPT, Claude, Gemini, etc.) for advanced entity extraction, summarization, and classification tasks.
• Design and implement Python-based scripts for parsing, cleaning, and transforming data.
• Integrate with Azure services for document storage, compute, and secured API hosting (e.g., Azure Blob, Azure Functions, Key Vault, Azure Cognitive Services).
• Deploy and orchestrate workflows in Azure Databricks (including Spark and ML pipelines).
• Build and manage API calls for model integration, rate limiting, and token control using AI gateways.
• Automate export of results into SQL/Oracle databases and enable downstream access for analytics/reporting.
• Handle diverse metadata requirements and create reusable, modular code for different document types.
• Optionally visualize and report data using Power BI and export data into Excel for stakeholder review.

Required Skills & Qualifications
• Strong programming skills in Python (pandas, regex, pytesseract, spaCy, LangChain, Transformers, etc.).
• Experience with Azure Cloud (Blob Storage, Function Apps, Key Vaults, Logic Apps).
• Hands-on experience with Azure Databricks (PySpark, Delta Lake, MLflow).
• Familiarity with OCR tools like Tesseract, Azure OCR, AWS Textract, or Google Vision API.
• Proficiency in SQL and experience with Oracle database integration (using cx_Oracle, SQLAlchemy, etc.).
• Experience working with LLM APIs (OpenAI, Anthropic, Google, or Hugging Face models).
• Knowledge of API development and integration (REST, JSON, API rate limits, authentication handling).
• Excel data manipulation using Python (e.g., openpyxl, pandas, xlrd).
• Understanding of Power BI dashboards and integration with structured data sources.

Nice To Have
• Experience with LangChain, LlamaIndex, or similar frameworks for document Q&A and retrieval-augmented generation (RAG).
• Background in data science or machine learning.
• CI/CD and version control (Git, Azure DevOps).
• Familiarity with data governance and PII handling in document processing.

Soft Skills
• Strong problem-solving skills and an analytical mindset.
• Attention to detail and the ability to work with messy/unstructured data.
• Excellent communication skills to interact with technical and non-technical stakeholders.
• Ability to work independently and manage priorities in a fast-paced environment.
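To make the OCR-plus-parsing workflow above concrete, here is a minimal, illustrative sketch using pytesseract and regular expressions; the file names and the invoice-field patterns are hypothetical examples, not requirements from the posting.

```python
# Rough sketch of the OCR + parsing step: OCR a scanned page with pytesseract,
# then pull out fields with regular expressions.
# Requires the Tesseract binary plus `pip install pytesseract pillow pandas`.
import re
import pandas as pd
import pytesseract
from PIL import Image

def extract_fields(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    invoice = re.search(r"Invoice\s*No[.:]?\s*(\w+)", text, flags=re.IGNORECASE)
    total = re.search(r"Total\s*:?\s*\$?\s*([\d,]+\.\d{2})", text, flags=re.IGNORECASE)
    return {
        "source_file": image_path,
        "invoice_no": invoice.group(1) if invoice else None,
        "total": total.group(1) if total else None,
    }

if __name__ == "__main__":
    rows = [extract_fields(p) for p in ["page1.jpg", "page2.jpg"]]  # placeholder paths
    df = pd.DataFrame(rows)
    print(df)
    # From here the frame could be written to a SQL/Oracle database or exported
    # to Excel for review, as the responsibilities above describe.
```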

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

🚨 We are Hiring 🚨
https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Job Type: Full-Time

Company Overview:
GRhombus Technologies Pvt Ltd is a pioneer in software solutions, especially in test automation, cyber security, full stack development, DevOps, Salesforce, performance testing, and manual testing. GRhombus delivery centres are located in India at Hyderabad, Chennai, Bengaluru and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands.

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
• Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
• Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
• Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
• Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
• Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
• Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
• Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
• Lead the implementation of end-to-end automation for web interfaces (React, Angular, or other SPA frameworks), backend services (REST, GraphQL, WebSockets), and ML model integration endpoints (real-time inference APIs, batch pipelines).
• Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
• Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
• Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
• Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
• Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
• Lead and mentor a team of automation and QA engineers across multiple projects.
• Act as the quality champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
• Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
• Collaborate across disciplines (frontend, backend, DevOps, MLOps, and product management) to ensure complete testability.
• Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills
• Strong hands-on expertise with Playwright (TypeScript/JavaScript).
• Experience building custom automation frameworks and utilities from scratch.
• Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
• Solid knowledge of HTTP protocols and API testing (Postman, SuperTest, REST Assured).
• Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
• Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge
• Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
• Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
• Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication
• Proven experience leading QA/automation teams (4+ engineers).
• Strong documentation, code review, and stakeholder communication skills.
• Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
• Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
• Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
• Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
• Experience with GraphQL, Kafka, or event-driven architecture testing.
• QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
• Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
• Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a test automation lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
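One way to read the "tolerance ranges or golden datasets" requirement above: assert on aggregate accuracy over a small golden set rather than exact per-example equality. The pytest sketch below is illustrative only; `classify` is a stand-in for the real inference call (whether hit through an API or driven through the UI with Playwright), and the data and threshold are made up.

```python
# Sketch of testing non-deterministic model output against a golden dataset:
# the suite fails only if aggregate accuracy drops below a tolerance threshold.
GOLDEN = [
    ("I loved the product", "positive"),
    ("Terrible support experience", "negative"),
    ("It arrived on time", "positive"),
    ("The app keeps crashing", "negative"),
]

MIN_ACCURACY = 0.75  # tolerance: fail only when quality regresses past this

def classify(text: str) -> str:
    # Placeholder for a call to the model under test.
    return "positive" if "loved" in text or "on time" in text else "negative"

def test_sentiment_against_golden_dataset():
    hits = sum(1 for text, label in GOLDEN if classify(text) == label)
    accuracy = hits / len(GOLDEN)
    assert accuracy >= MIN_ACCURACY, f"accuracy {accuracy:.2f} fell below {MIN_ACCURACY}"
```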

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Why MResult?
Founded in 2004, MResult is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. MResult's expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges.

What We Offer:
At MResult, you can leave your mark on projects at the world's most recognized brands, access opportunities to grow and upskill, and do your best work with the flexibility of hybrid work models. Great work is rewarded, and leaders are nurtured from within. Our values (Agility, Collaboration, Client Focus, Innovation, and Integrity) are woven into our culture, guiding every decision.

What This Role Requires
In the role of Data Engineer, you will be a key contributor to MResult's mission of empowering our clients with data-driven insights and innovative digital solutions. Each day brings exciting challenges and growth opportunities. Here is what you will do:
• Project solutioning, including scoping and estimation.
• Data sourcing, investigation, and profiling.
• Prototyping and design thinking.
• Developing data pipelines and complex data workflows.
• Actively contributing to project documentation and the playbook, including but not limited to physical models, conceptual models, data dictionaries and data cataloguing.

Key Skills to Succeed in This Role:
• Overall 8+ years of experience, with 4+ years of hands-on experience working with Python to build data pipelines and processes.
• Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views.
• 4+ years of hands-on experience delivering data lake/data warehousing projects.
• Experience working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable.
• Experience with AWS services (EC2, EMR, RDS, Spark) is preferred.
• Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow.

Manage, Master, and Maximize with MResult
MResult is an equal-opportunity employer committed to building an inclusive environment free of discrimination and harassment. Take the next step in your career with MResult, where your ideas help shape the future.

Thanks & Regards,
Surendra Singh

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

About The Chatterjee Group
The Chatterjee Group (TCG) has an enviable track record as a strategic investor, with businesses in many sectors. Founded by Dr. Purnendu Chatterjee in 1989, the Group specializes in the Petrochemicals, Pharmaceuticals, Biotech, Financial Services, Real Estate and Technology sectors in the US, Europe and South Asia. It provides end-to-end product and service capabilities through its investments and companies in these sectors. TCG is one of the biggest PE groups in the country, with significant brand presence abroad and in India.

About The Company – First Livingspaces
First Livingspaces is a fully owned subsidiary of the TCG group. First Livingspaces is an AI-first company that intends to create an ecosystem for daily living needs and necessities. We want to simplify the life of an individual who intends to rent, buy or co-live, and provide all amenities for happy, seamless living. We are building a large universe, powered by technology at the core.

About The Role
In this senior role, you will lead the development and implementation of MLOps strategies for our AI/ML systems within India's first real estate ecosystem. You will manage the entire ML model lifecycle, ensuring scalability, reliability, and performance from development to production. You will collaborate with data science and engineering teams, define and implement MLOps best practices, and take ownership of the ML infrastructure. Your technical expertise will be crucial in building robust MLOps systems that deliver reliable results at scale.

Responsibilities
• Architect and implement CI/CD pipelines for ML models built with PyTorch or TensorFlow.
• Design and build infrastructure for model versioning, deployment, monitoring, and scaling.
• Implement and maintain feature stores (Feast, Tecton) and experiment tracking (MLflow, Weights & Biases).
• Enable continuous training and monitoring with tools like SageMaker Model Monitor, Evidently, etc.
• Develop and deploy ML model registries and automated model serving solutions.
• Collaborate with data science and engineering teams to transition from experimentation to production.
• Establish infrastructure-as-code practices using Terraform, Pulumi, or CloudFormation.

Technical Requirements
• 5+ years of experience in MLOps, DevOps, or related fields, with at least 2 years in a senior role.
• MLOps tools: MLflow, Kubeflow.
• Mastery of CI/CD tools (GitHub Actions, Jenkins, ArgoCD, Tekton).
• Expertise with infrastructure-as-code (Terraform, Pulumi, CloudFormation) and GitOps practices.
• Knowledge of containerization (Docker) and orchestration (Kubernetes, EKS, GKE, AKS).
• Proficiency in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face).
• Experience with ML deployment frameworks (TensorFlow Serving, TorchServe, KServe, BentoML, NVIDIA Triton).

Why Join Us?
Work on impactful projects leveraging cutting-edge AI technologies. Your work here won't sit in a sandbox: you will see your models deployed at scale, making a tangible difference in the real world. We offer competitive compensation, perks, and a flexible, inclusive work environment.

Inclusive and Equal Workplace
First Livingspaces is dedicated to creating an inclusive workplace that promotes equality and fairness. We celebrate diversity and ensure that no one is discriminated against based on gender, caste, race, religion, sexual orientation, protected veteran status, disability, age, or any other characteristic, fostering an environment where everyone can thrive.
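For reference, the experiment-tracking and model-registry workflow named above typically looks like the MLflow sketch below. The experiment and model names are placeholders, and registering a model assumes a registry-capable (database-backed) MLflow tracking server; this is an illustration, not the company's actual setup.

```python
# Minimal MLflow experiment-tracking sketch: log parameters and metrics for a run,
# then log the trained model and register it in the model registry.
# Assumes `pip install mlflow scikit-learn`.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-rental-pricing")  # placeholder experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    # Passing `registered_model_name` also creates a new version in the model
    # registry, which downstream deployment jobs can then promote.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="demo-classifier")
```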

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 15 Lacs

Hyderabad, Mumbai (All Areas)

Work from Office

Source: Naukri

Job Description:
We are seeking a skilled and innovative AI/ML Engineer to design, build, and deploy machine learning solutions that solve complex business challenges in mission-critical industries. You will work with multidisciplinary teams to apply AI, ML, and deep learning techniques to large-scale datasets and real-time operational systems.

Key Responsibilities:
• Design and implement end-to-end ML pipelines: data ingestion, model training, evaluation, and deployment.
• Build predictive and prescriptive models using structured, unstructured, and real-time data.
• Develop and fine-tune deep learning models for NLP, computer vision, or time-series forecasting.
• Integrate ML models into enterprise platforms, APIs, and dashboards.
• Work closely with domain experts, data engineers, and DevOps teams to ensure production-grade performance.
• Conduct model validation, bias testing, and post-deployment monitoring.
• Document workflows, architecture, and results for reproducibility and audits.
• Research and evaluate new AI tools, techniques, and trends.

Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
• 5-7 years of hands-on experience in developing and deploying ML models.
• Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, XGBoost).
• Experience with NLP (BERT, spaCy) or CV frameworks (OpenCV, YOLO, MMDetection).
• Familiarity with data processing tools (pandas, Dask, Apache Spark).
• Proficiency in SQL and experience with time-series or sensor data.
• Knowledge of MLOps practices and tools (Docker, MLflow, Airflow, Kubernetes).
• Industry experience in Oil & Gas, Power Systems, or Urban Analytics.
• Hands-on experience with edge AI (Jetson, Coral), GPU compute, and model optimization.
• Cloud services: AWS SageMaker, Azure ML, or Google AI Platform.
• Familiarity with REST APIs, OPC UA integration, or SCADA data sources.
• Knowledge of responsible AI practices (explainability, fairness, privacy).
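A compact, illustrative version of the "end-to-end ML pipeline" responsibility above might look like the following sketch; a bundled scikit-learn dataset stands in for the real ingestion step (e.g., a warehouse query or sensor feed), and the artifact path is a placeholder.

```python
# End-to-end sketch: ingest data, preprocess, train, evaluate, and persist a model
# artifact that a deployment step can pick up.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer(as_frame=True)          # stand-in for real data ingestion
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

pipeline = Pipeline([
    ("scale", StandardScaler()),                  # preprocessing travels with the model
    ("clf", LogisticRegression(max_iter=5000)),
])
pipeline.fit(X_train, y_train)

print(classification_report(y_test, pipeline.predict(X_test)))
joblib.dump(pipeline, "model.joblib")             # artifact handed to deployment
```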

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

We are looking for a highly skilled Senior Machine Learning Engineer with expertise in Deep Learning, Large Language Models (LLMs), and MLOps/LLMOps to design, optimize, and deploy cutting-edge AI solutions. The ideal candidate will have hands-on experience in developing and scaling deep learning models, fine-tuning LLMs (e.g., GPT, Llama), and implementing robust deployment pipelines for production environments.

Responsibilities

Model Development & Fine-Tuning:
• Design, train, fine-tune, and optimize deep learning models (CNNs, RNNs, Transformers) for NLP, computer vision, or multimodal applications.
• Fine-tune and adapt Large Language Models (LLMs) for domain-specific tasks (e.g., text generation, summarization, semantic similarity).
• Experiment with RLHF (Reinforcement Learning from Human Feedback) and other alignment techniques.

Deployment & Scalability (MLOps/LLMOps):
• Build and maintain end-to-end ML pipelines for training, evaluation, and deployment.
• Deploy LLMs and deep learning models in production environments using frameworks like FastAPI, vLLM, or TensorRT.
• Optimize models for low-latency, high-throughput inference (e.g., quantization, distillation).
• Implement CI/CD workflows for ML systems using tools like MLflow and Kubeflow.

Monitoring & Optimization:
• Set up logging, monitoring, and alerting for model performance (drift, latency, accuracy).
• Work with DevOps teams to ensure scalability, security, and cost-efficiency of deployed models.

Required Skills & Qualifications:
• 5-7 years of hands-on experience in Deep Learning, NLP, and LLMs.
• Strong proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers, and LLM frameworks.
• Experience with model deployment tools (Docker, Kubernetes, FastAPI).
• Knowledge of MLOps/LLMOps best practices (model versioning, A/B testing, canary deployments).
• Familiarity with cloud platforms (AWS, GCP, Azure).

Preferred Qualifications:
• Contributions to open-source LLM projects.
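As an illustration of the quantization point above, PyTorch dynamic quantization can convert a model's Linear layers to int8 for faster CPU inference. The tiny network below is a stand-in for a real trained model, not anything from the posting.

```python
# Dynamic quantization sketch: quantize Linear layers to int8 weights; activations
# are quantized on the fly at inference time. This usually shrinks the model and
# speeds up CPU inference at a small accuracy cost.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier().eval()
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
with torch.no_grad():
    print("fp32 logits:", model(x))
    print("int8 logits:", quantized(x))
```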

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

We are looking for an ML Ops Engineer to join our Technology team at Clarivate. You will get the opportunity to work in a cross-cultural environment on the latest web technologies, with an emphasis on user-centered design.

About You (Skills & Experience Required)
• Bachelor's or master's degree in computer science, engineering, or a related field.
• 5+ years of experience in machine learning, data engineering, or software development.
• Good experience in building data pipelines, data cleaning, and feature engineering is essential for preparing data for model training.
• Knowledge of programming languages (Python, R) and version control systems (Git) is necessary for building and maintaining MLOps pipelines.
• Experience with MLOps-specific tools and platforms (e.g., Kubeflow, MLflow, Airflow) can streamline MLOps workflows.
• DevOps principles, including CI/CD pipelines, infrastructure as code (IaC), and monitoring, are helpful for automating ML workflows.
• Experience with at least one of the cloud platforms (AWS, GCP, Azure) and their associated services (e.g., compute, storage, ML platforms) is essential for deploying and scaling ML models.
• Familiarity with container orchestration tools like Kubernetes can help manage and scale ML workloads efficiently.

It would be great if you also had:
• Experience with big data technologies (Hadoop, Spark).
• Knowledge of data governance and security practices.
• Familiarity with DevOps practices and tools.

What will you be doing in this role?

Model Deployment & Monitoring
• Oversee the deployment of machine learning models into production environments.
• Ensure continuous monitoring and performance tuning of deployed models.
• Implement robust CI/CD pipelines for model updates and rollbacks.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Communicate project status, risks, and opportunities to stakeholders.
• Provide technical guidance and support to team members.

Infrastructure & Automation
• Design and manage scalable infrastructure for model training and deployment.
• Automate repetitive tasks to improve efficiency and reduce errors.
• Ensure the infrastructure meets security and compliance standards.

Innovation & Improvement
• Stay updated on the latest trends and technologies in MLOps.
• Identify opportunities for process improvements and implement them.
• Drive innovation within the team to enhance MLOps capabilities.

About The Team
You would be part of our incredible data science team in the Intellectual Property (IP) group and work closely with product and technology teams spread across various locations worldwide. You would work on interesting IP data and interesting challenges to create insights and drive business acumen, adding value to our world-class products and services.

Hours of Work
This is a permanent position with Clarivate. 9 hours per day, including a lunch break. You should be flexible with working hours to align with globally distributed teams and stakeholders.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
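For context on the "continuous monitoring and performance tuning" responsibility above, a minimal drift check can compare a feature's production distribution against its training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test; the synthetic data and the significance threshold are illustrative assumptions only.

```python
# Toy drift check: flag drift when the live feature distribution differs
# significantly from the reference (training) distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)   # reference window
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)       # shifted production window

def drift_detected(reference, live, alpha: float = 0.01) -> bool:
    statistic, p_value = ks_2samp(reference, live)
    print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")
    return p_value < alpha

if drift_detected(training_feature, live_feature):
    # In a real pipeline this would raise an alert or trigger a retraining job.
    print("Drift detected: schedule retraining / investigate upstream data.")
```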

Posted 3 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Walk-in interview only (Bengaluru); no virtual interviews.
• Python and libraries: scikit-learn, TensorFlow, PyTorch/XGBoost
• Experience in supervised, unsupervised, and deep learning techniques
• Deployment tools: Flask, FastAPI, Docker
• Cloud and data: AWS, GCP, Azure; ML and SQL
Contact: Maya, 9880516218

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Summary:
We are looking for a skilled and innovative Machine Learning Engineer to develop, implement, and optimize intelligent systems that leverage data to drive business decisions and enhance product functionality. The ideal candidate will have strong programming skills, a solid understanding of machine learning algorithms, and experience in deploying models into production environments.

Key Responsibilities:
• Design and develop scalable ML models and algorithms to solve real-world problems.
• Analyze large and complex datasets to extract actionable insights.
• Train, test, and validate models to ensure performance and accuracy.
• Work closely with data engineers, product teams, and stakeholders to integrate ML models into applications.
• Research and stay updated on state-of-the-art techniques in machine learning and AI.
• Optimize models for speed, scalability, and interpretability.
• Document processes, experiments, and results clearly.
• Deploy models into production using MLOps tools and practices.

Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, or a related field.
• Strong proficiency in Python and ML libraries like scikit-learn, TensorFlow, PyTorch, and Keras.
• Solid understanding of statistical modeling, classification, regression, clustering, and deep learning.
• Experience with data handling tools (e.g., pandas, NumPy) and data visualization (e.g., Matplotlib, Seaborn).
• Proficiency with SQL and working knowledge of big data technologies (e.g., Spark, Hadoop).
• Familiarity with cloud platforms (AWS, Azure, GCP) and MLOps tools (e.g., MLflow, SageMaker).

Preferred Qualifications:
• Experience with NLP, computer vision, or recommendation systems.
• Knowledge of Docker, Kubernetes, and CI/CD for model deployment.
• Published research or contributions to open-source ML projects.
• Exposure to agile environments and collaborative workflows.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

India

On-site

Source: LinkedIn

CSQ326R28

The Machine Learning (ML) Practice team is a highly specialized, customer-facing ML team at Databricks facing an increasing demand for Large Language Model (LLM)-based solutions. We deliver professional services engagements to help our customers build, scale, and optimize ML pipelines, as well as put those pipelines into production. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates and fueling your curiosity for the latest trends in LLMs, MLOps, and ML more broadly.

The Impact You Will Have
• Develop LLM solutions on customer data, such as RAG architectures on enterprise knowledge repos, querying structured data with natural language, and content generation.
• Build, scale, and optimize customer data science workloads and apply best-in-class MLOps to productionize these workloads across a variety of domains.
• Advise data teams on data science topics such as architecture, tooling, and best practices.
• Present at conferences such as Data+AI Summit.
• Provide technical mentorship to the larger ML SME community in Databricks.
• Collaborate cross-functionally with the product and engineering teams to define priorities and influence the product roadmap.

What We Look For
• Experience with the latest techniques in natural language processing, including vector databases, fine-tuning LLMs, and deploying LLMs with tools such as Hugging Face, LangChain, and OpenAI.
• 4+ years of hands-on industry data science experience, leveraging machine learning and data science tools including pandas, scikit-learn, gensim, NLTK, and TensorFlow/PyTorch.
• Experience building production-grade machine learning deployments on AWS, Azure, or GCP.
• Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience.
• Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike.
• Passion for collaboration, lifelong learning, and driving business value through ML.

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 3 weeks ago

Apply

0.0 - 1.0 years

3 - 5 Lacs

Bengaluru

Remote

Source: Naukri

AI/ML Engineer
• Work on cutting-edge technologies like Natural Language Processing (NLP), Generative AI, and Large Language Models (LLMs)
• Assist in building intelligent systems for real-world applications
• Collaborate with data scientists and product teams on end-to-end ML workflows

Cloud DevOps Engineer
• Get hands-on with AWS, Azure, or GCP cloud platforms
• Automate infrastructure, CI/CD pipelines, monitoring, and deployments
• Learn Infrastructure as Code (IaC) using tools like Terraform or CloudFormation

Software Engineer (Backend/Frontend)
• Build scalable applications using Python or Node.js for the backend and React.js for the frontend
• Contribute to the full-stack development lifecycle
• Participate in design, coding, testing, and debugging of software components

MLOps Engineer
• Automate and scale ML workflows from development to production
• Work with tools like MLflow, Kubeflow, Docker, Kubernetes
• Ensure reproducibility, monitoring, and performance of ML models in production

Data Engineer
• Develop robust ETL pipelines, data ingestion, and transformation logic
• Work with databases, data warehouses, and data lakes
• Handle real-time and batch data workflows using tools like Apache Airflow, Spark, Kafka

Eligibility: BE/B.Tech/MCA/MSc, 2023 or 2024 pass-outs
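For candidates new to the orchestration tools named in the Data Engineer track above, a minimal Apache Airflow DAG looks like the sketch below; the task logic, IDs, and schedule are placeholder examples, not part of the listing.

```python
# Minimal Airflow 2.x DAG: three stubbed ETL steps chained into a daily workflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw records from the source system")

def transform():
    print("clean and reshape the records")

def load():
    print("write the curated records to the warehouse")

with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```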

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

Job Title: AI/ML Developer (5 Years Experience)
Location: Remote
Job Type: Full-time
Experience: 5 years

Job Summary:
We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, and model evaluation, and experience with production-level ML pipelines.

Key Responsibilities
• Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
• Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets.
• ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker.
• Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure).
• Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques.
• Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications.
• Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable.

Required Skills & Qualifications
• Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
• Minimum 5 years of experience in AI/ML development.
• Proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
• Strong understanding of statistics, data structures, and ML/DL algorithms.
• Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production.
• Experience with CI/CD tools and containerization (Docker, Kubernetes).
• Familiarity with SQL and NoSQL databases.
• Excellent problem-solving and communication skills.

Preferred Qualifications
• Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK).
• Knowledge of MLOps best practices and tools.
• Experience with version control systems like Git.
• Familiarity with big data technologies (Spark, Hadoop).
• Contributions to open-source AI/ML projects or publications in relevant fields.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role: Software Engineer – Backend (Python)
Experience: 6 to 8 years
Location: Pan India (Hybrid)
Notice Period: Immediate to 30 days

Must-have skills:
• Experience with web development frameworks such as Flask, Django or FastAPI.
• Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn etc.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with unit and functional testing frameworks.
• Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS
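To illustrate the Kafka-client and AsyncIO points above, here is a minimal asynchronous consumer sketch. The aiokafka library, topic name, and broker address are assumptions chosen for illustration; the posting does not prescribe a specific client library.

```python
# Minimal async Kafka consumer (requires `pip install aiokafka` and a reachable broker).
import asyncio

from aiokafka import AIOKafkaConsumer

async def consume() -> None:
    consumer = AIOKafkaConsumer(
        "events",                            # placeholder topic
        bootstrap_servers="localhost:9092",  # placeholder broker address
        group_id="demo-group",
    )
    await consumer.start()
    try:
        # The async for loop yields messages without blocking the event loop,
        # so other coroutines (e.g. an ASGI app) can keep running.
        async for msg in consumer:
            print(msg.topic, msg.partition, msg.offset, msg.value)
    finally:
        await consumer.stop()

if __name__ == "__main__":
    asyncio.run(consume())
```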

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

🚀 AppTestify is Hiring - ML Engineer (MLOps)
📍 Locations: Pune | Bangalore | Indore
🧠 Experience: 7+ Years | Level: Advanced

Join AppTestify as a seasoned ML Engineer (MLOps) and help us build and scale high-impact machine learning solutions!

🔧 Key Responsibilities:
• Work in an Agile/Scrum environment
• Scope technical issues and write clean, scalable code
• Handle pull requests and resolve linter/scanner issues
• Conduct code reviews and maintain best practices

🛠️ Tech Stack & Skills: Python | PostgreSQL | AWS S3 (ECS) | LLMs | MLflow Hub | GitHub

📩 Apply Now: anamika.p@apptestify.com

#AppTestify #Hiring #MLEngineer #MLOps #Python #LLMs #TechJobs #PuneJobs #BangaloreJobs #IndoreJobs

Posted 3 weeks ago

Apply

10.0 - 20.0 years

35 - 75 Lacs

Chennai, Coimbatore

Work from Office

Source: Naukri

Job Description:
• Designs and implements AI architectures based on client needs
• Strategically utilizes the in-house data center as a platform for customer AI workloads
• Collaborates closely with international teams of data scientists, engineers, and infrastructure specialists
• Aligns AI solutions with compute, storage, and network capacities across private, hybrid, and public infrastructures
• Makes strategic choices in AI tools and technologies, with a focus on open source and scalability
• Supports MLOps implementations, CI/CD, data pipelines, and lifecycle management
• Contributes to AI governance, reliability, and solution scalability
• Keeps up with the latest developments in AI, hardware acceleration, and infrastructure
• Assists clients in leveraging AI at an enterprise level

Requirements:
• At least 10 years of experience as an AI Architect or in a comparable technical AI role
• Experience developing AI solutions for clients in an enterprise environment
• Experience working in an international context with cross-functional teams
• Experience with AI hardware and solutions from NVIDIA, IBM, and HPE is a plus
• In-depth knowledge of open-source AI frameworks such as TensorFlow, PyTorch, Hugging Face, MLflow, Kubeflow, etc.
• Familiarity with containers and orchestration tools like Docker and Kubernetes
• Understanding of data governance, security, and compliance in AI projects
• Strong communication skills and a client-oriented mindset
• Excellent command of the English language, both spoken and written

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Position: Sr. MLOps Engineer
Location: Ahmedabad, Pune
Required Experience: 5+ years
Immediate joiners preferred

Job Overview
Building the machine learning production infrastructure (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. We are looking for a highly skilled MLOps Engineer to join our team. As an MLOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure that supports the deployment, monitoring, and scaling of machine learning models in production. You will work closely with data scientists, software engineers, and DevOps teams to ensure seamless integration of machine learning models into our production systems.

The job is NOT for you if:
• You don't want to build a career in AI/ML. Becoming an expert in this technology and staying current will require significant self-motivation.
• You like the comfort and predictability of working on the same problem or code base for years. The tools, best practices, architectures, and problems are all going through rapid change; you will be expected to learn new skills quickly and adapt.

Key Responsibilities:
• Model Deployment: Design and implement scalable, reliable, and secure pipelines for deploying machine learning models to production.
• Infrastructure Management: Develop and maintain infrastructure as code (IaC) for managing cloud resources, compute environments, and data storage.
• Monitoring and Optimization: Implement monitoring tools to track the performance of models in production, identify issues, and optimize performance.
• Collaboration: Work closely with data scientists to understand model requirements and ensure models are production-ready.
• Automation: Automate the end-to-end process of training, testing, deploying, and monitoring models.
• Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines for machine learning projects.
• Version Control: Implement model versioning to manage different iterations of machine learning models.
• Security and Governance: Ensure that the deployed models and data pipelines are secure and comply with industry regulations.
• Documentation: Create and maintain detailed documentation of all processes, tools, and infrastructure.

Qualifications:
• 5+ years of experience in a similar role (DevOps, DataOps, MLOps, etc.).
• Bachelor's or master's degree in computer science, engineering, or a related field.
• Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
• Strong understanding of the machine learning lifecycle, data pipelines, and model serving.
• Proficiency in programming languages such as Python and shell scripting, and familiarity with ML frameworks (TensorFlow, PyTorch, etc.).
• Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.).
• Experience with CI/CD tools like Jenkins, GitLab CI, or similar.
• Experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer (or equivalent).
• Strong software engineering skills in complex, multi-language systems.
• Comfort with Linux administration.
• Experience working with cloud computing and database systems.
• Experience building custom integrations between cloud-based systems using APIs.
• Experience developing and maintaining ML systems built with open-source tools.
• Experience developing with containers and Kubernetes in cloud computing environments.
• Familiarity with one or more data-oriented workflow orchestration frameworks (MLflow, Kubeflow, Airflow, Argo, etc.).
• Ability to translate business needs into technical requirements.
• Strong understanding of software testing, benchmarking, and continuous integration.
• Exposure to machine learning methodology and best practices.
• Understanding of regulatory requirements for data privacy and model governance.

Preferred Skills:
• Excellent problem-solving skills and the ability to troubleshoot complex production issues.
• Strong communication skills and the ability to collaborate with cross-functional teams.
• Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack).
• Knowledge of database systems (SQL, NoSQL).
• Experience with Generative AI frameworks.
• Cloud-based or MLOps/DevOps certification (AWS, GCP, or Azure) preferred.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are looking for a hands-on MLOps & CI/CD Engineer to join our fast-paced engineering team. This role is ideal for professionals passionate about deploying and maintaining AI/ML pipelines while enabling robust CI/CD practices across engineering projects. You will be responsible for managing the full lifecycle of machine learning models, from experimentation to deployment and monitoring, while also playing a critical role in designing and managing end-to-end CI/CD pipelines for all software components, ensuring high developer productivity and system reliability.

Key Responsibilities

MLOps Responsibilities
  • Build and maintain scalable MLOps pipelines using tools like MLflow, Azure ML, or Kubeflow.
  • Operationalise AI agents and LLM-based solutions into production-ready workflows.
  • Deploy and maintain Retrieval-Augmented Generation (RAG) pipelines and manage vector databases.
  • Implement model monitoring, drift detection, and retraining workflows.
  • Collaborate with AI leads to integrate ML models via secure, optimised APIs.

CI/CD & DevOps Responsibilities
  • Design and implement CI/CD pipelines for web applications, APIs, agents, and bots.
  • Automate build, test, and deployment processes using Azure DevOps or GitHub Actions.
  • Manage infrastructure through Infrastructure as Code (IaC) using tools like Terraform, Bicep, or ARM templates.
  • Standardise deployment practices across dev, staging, and production environments.
  • Implement containerisation strategies using Docker and orchestrate using Kubernetes (AKS).
  • Ensure CI/CD processes include automated testing, version control, rollback strategies, and promotion policies.
  • Enhance system observability with robust logging, monitoring, and alerting using Azure Monitor, Prometheus, or Grafana.

Required Qualifications
  • Minimum 3 years of experience in MLOps, DevOps, or CI/CD-focused roles.
  • Proficiency in Python and YAML scripting for workflow automation.
  • Strong hands-on experience with Azure Cloud (preferred) or AWS/GCP.
  • Familiarity with ML tools such as MLflow, Azure ML, or Kubeflow.
  • Experience in building and maintaining CI/CD pipelines using Azure DevOps or GitHub Actions.
  • Proficient in Docker, Kubernetes (preferably AKS), and secure container practices.
  • Good understanding of RESTful APIs, web services, and microservices architecture.
  • Experience with Git workflows; GitOps is a plus.

Bonus Points
  • Experience with LangChain, LangGraph, or other agentic AI frameworks.
  • Exposure to CI/CD for front-end frameworks like React or Next.js.
  • Knowledge of security, compliance, and observability tools.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

20 - 30 Lacs

Ahmedabad

Work from Office


Senior DevOps Engineer
Experience: 5 - 9 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Onsite (Ahmedabad)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Azure (Microsoft Azure), Docker/Terraform, TensorFlow, Python, AWS
Good-to-have skills: Kubeflow, MLflow

Attri (one of Uplers' clients) is looking for a Senior DevOps Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

About Attri
Attri is an AI organization that helps businesses initiate and accelerate their AI efforts. We offer the industry's first end-to-end enterprise machine learning platform, empowering teams to focus on ML development rather than infrastructure. From ideation to execution, our global team of AI experts supports organizations in building scalable, state-of-the-art ML solutions. Our mission is to redefine businesses by harnessing cutting-edge technology and a unique, value-driven approach. With team members across continents, we celebrate diversity, curiosity, and innovation. We're now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we'd love to connect.

What You'll Do (Responsibilities):
  • Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
  • Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
  • Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
  • Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
  • Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild, and CodePipeline.
  • Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
  • Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch.
  • Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
  • Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
  • Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
  • Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
  • Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
  • Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
  • Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
  • Participate in incident response, performance tuning, and continuous system improvement.

Good to Have:
  • Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
  • Previous involvement in production-grade AI/ML projects or data-intensive systems
  • Startup or high-growth tech company experience

Qualifications:
  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
  • Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
  • Strong communication and collaboration skills to work across engineering, data science, and product teams.

Benefits:
  • Competitive salary
  • Support for continual learning (free books and online courses)
  • Leveling-up opportunities
  • Diverse team environment

How to apply for this opportunity (easy 3-step process):
  1. Click on Apply and register or log in on our portal.
  2. Upload your updated resume and complete the screening form.
  3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client:
Attri, an AI organization, leads the way in enterprise AI, offering advanced solutions and services driven by AI agents and powered by foundation models. Our comprehensive suite of AI-enabled tools drives business impact, enhances quality, mitigates risk, and helps unlock growth opportunities.

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site


About the Role:
We are seeking an experienced MLOps Engineer with a strong background in NVIDIA GPU-based containerization and scalable ML infrastructure (contractual, assignment basis). You will work closely with data scientists, ML engineers, and DevOps teams to build, deploy, and maintain robust, high-performance machine learning pipelines using NVIDIA NGC containers, Docker, Kubernetes, and modern MLOps practices.

Key Responsibilities:
  • Design, develop, and maintain end-to-end MLOps pipelines for training, validation, deployment, and monitoring of ML models.
  • Implement GPU-accelerated workflows using NVIDIA NGC containers, CUDA, and RAPIDS.
  • Containerize ML workloads using Docker and deploy on Kubernetes (preferably with GPU support, e.g., the NVIDIA device plugin for K8s).
  • Integrate model versioning, reproducibility, CI/CD, and automated model retraining using tools like MLflow, DVC, Kubeflow, or similar.
  • Optimize model deployment for inference on NVIDIA hardware using TensorRT, Triton Inference Server, or ONNX Runtime-GPU.
  • Manage cloud/on-prem GPU infrastructure and monitor resource utilization and model performance in production.
  • Collaborate with data scientists to transition models from research to production-ready pipelines.

Required Skills:
  • Proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
  • Strong experience with Docker, Kubernetes, and NVIDIA GPU containerization (NGC, nvidia-docker).
  • Familiarity with NVIDIA Triton Inference Server, TensorRT, and CUDA.
  • Experience with CI/CD for ML (GitHub Actions, GitLab CI, Jenkins, etc.).
  • Deep understanding of ML lifecycle management, monitoring, and retraining.
  • Experience working with cloud platforms (AWS/GCP/Azure) or on-prem GPU clusters.

Preferred Qualifications:
  • Experience with Kubeflow, Seldon Core, or similar orchestration tools.
  • Exposure to Airflow, MLflow, Weights & Biases, or DVC.
  • Knowledge of NVIDIA RAPIDS and distributed GPU workloads.
  • MLOps certifications or NVIDIA Deep Learning Institute training (preferred but not mandatory).

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About TwoSD (2SD Technologies Limited)
TwoSD is the innovation engine of 2SD Technologies Limited, a global leader in product engineering, platform development, and advanced IT solutions. Backed by two decades of leadership in technology, our team brings together strategy, design, and data to craft transformative solutions for global clients. Our culture is built around cultivating talent, curiosity, and collaboration. Whether you're a career technologist, a self-taught coder, or a domain expert with a passion for real-world impact, TwoSD is where your journey accelerates. Join us and thrive. At 2SD Technologies, we push past the expected, with insight, integrity, and a passion for making things better.

Role Overview
We are hiring a Solution Architect with a proven track record in SaaS platform architecture, AI-driven solutions, and CRM/enterprise systems such as Microsoft Dynamics 365. This is a full-time position based in Gurugram, India, for professionals who thrive on solving complex problems across cloud, data, and application layers. You'll design and orchestrate large-scale platforms that blend intelligence, automation, and multi-tenant scalability, powering real-time customer experiences, operational agility, and cross-system connectivity.

Key Responsibilities
  • Architect cloud-native SaaS solutions with scalability, modularity, and resilience at the core
  • Design end-to-end technical architectures spanning CRM systems, custom apps, AI services, and data pipelines
  • Lead technical discovery, solution workshops, and architecture governance with internal and client teams
  • Drive the integration of Microsoft Dynamics 365 with other platforms, including AI/ML services and business applications
  • Create architectural blueprints and frameworks for microservices, event-driven systems, and intelligent automation
  • Collaborate with engineers, data scientists, UX/UI designers, and DevOps teams to deliver platform excellence
  • Oversee security, identity, compliance, and performance in high-scale environments
  • Evaluate and introduce modern tools, frameworks, and architectural patterns for enterprise innovation

Required Qualifications
  • Bachelor's degree in Computer Science, Engineering, or a related field (Master's is a plus)
  • 7+ years of experience in enterprise application architecture
  • Hands-on expertise in Microsoft Dynamics 365 CE/CRM with complex integrations
  • Experience architecting and delivering SaaS applications on cloud platforms (preferably AWS/Azure/GCP)
  • Familiarity with LLM APIs, AI orchestration tools, or machine learning workflows
  • Proven ability to lead multi-team and multi-technology architecture efforts
  • Deep understanding of security, multi-tenancy, data privacy, and compliance standards

Preferred Qualifications
  • Microsoft Certified: Dynamics 365 plus Azure/AWS Architect certifications
  • Experience with AI platform components like OpenAI, LangChain, or Azure/AWS services
  • Experience designing or re-architecting legacy monoliths into cloud-native microservices
  • Familiarity with DevOps and Infrastructure as Code (IaC) practices using Terraform or Bicep
  • Experience integrating event-based systems using AWS, Azure Event Grid, Service Bus, or Kafka
  • Exposure to enterprise observability tools and performance monitoring strategies

Core Competencies
  • Enterprise SaaS architecture
  • Cloud-native platform design (Azure preferred)
  • CRM + AI integration strategy
  • End-to-end system thinking
  • Cross-functional collaboration and mentorship
  • Future-proof solution design and documentation

Tools & Platforms
  • CRM/ERP: Microsoft Dynamics 365 CE, Power Platform, Dataverse
  • AI & Data: OpenAI, AWS SageMaker, AWS Bedrock, Azure Cognitive Services, LangChain, MLflow
  • Cloud: Azure (App Services, API Management, Logic Apps, Functions, Cosmos DB)
  • DevOps & IaC: GitHub Actions, Azure DevOps, Terraform, Bicep
  • Integration: REST/GraphQL APIs, Azure Service Bus, Event Grid, Kafka
  • Modeling & Docs: Lucidchart, Draw.io, ArchiMate, PlantUML
  • Agile & Collaboration: Jira, Confluence, Slack, MS Teams

Why Join TwoSD?
At TwoSD, innovation isn't a department; it's a mindset. Here, your voice matters, your expertise is valued, and your growth is supported by a collaborative culture that blends mentorship with autonomy. With access to cutting-edge tools, meaningful projects, and a global knowledge network, you'll do work that counts and evolve with every challenge.

Position: Solution Architect – SaaS Platforms, AI Solutions & Enterprise CRM
Location: Gurugram, India (Onsite/Hybrid)
Company: TwoSD (2SD Technologies Limited)
Industry: Enterprise Software / CRM / Cloud Platforms
Employment Type: Permanent
Date Posted: 26 May 2025

How to Apply
To apply, send your resume and technical portfolio or project overview to hr@2sdtechnologies.com or visit our LinkedIn careers page.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


This role is for one of Weekday's clients.
Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA)
Min Experience: 3 years
Location: Bengaluru
Job Type: Full-time

About the Role
We are seeking a passionate and skilled AI Engineer to join our innovative engineering team. In this role, you will play a pivotal part in designing, developing, and deploying cutting-edge artificial intelligence solutions with a focus on natural language processing (NLP), computer vision, and machine learning models using TensorFlow and related frameworks. You will work on challenging projects that leverage large-scale data, deep learning, and advanced AI techniques, helping transform business problems into smart, automated, and scalable solutions. If you're someone who thrives in a fast-paced, tech-driven environment and loves solving real-world problems with AI, we'd love to hear from you.

Key Responsibilities
  • Design, develop, train, and deploy AI/ML models using frameworks such as TensorFlow, Keras, and PyTorch.
  • Implement solutions across NLP, computer vision, and deep learning domains, using advanced techniques such as transformers, CNNs, LSTMs, OCR, image classification, and object detection.
  • Collaborate closely with product managers, data scientists, and software engineers to identify use cases, define architecture, and integrate AI solutions into products.
  • Optimize model performance for speed, accuracy, and scalability, using industry best practices in model tuning, validation, and A/B testing.
  • Deploy AI models to cloud platforms such as AWS, GCP, and Azure, leveraging their native AI/ML services for efficient and reliable operation.
  • Stay up to date with the latest AI research, trends, and technologies, and propose how they can be applied within the company's context.
  • Ensure model explainability, reproducibility, and compliance with ethical AI standards.
  • Contribute to the development of MLOps pipelines for managing model versioning, CI/CD for ML, and monitoring deployed models in production.

Required Skills & Qualifications
  • 3+ years of hands-on experience building and deploying AI/ML models in production environments.
  • Proficiency in TensorFlow and deep learning workflows; experience with PyTorch is a plus.
  • Strong foundation in natural language processing (e.g., NER, text classification, sentiment analysis, transformers) and computer vision (e.g., image processing, object recognition).
  • Experience deploying and managing AI models on AWS, Google Cloud Platform (GCP), and Microsoft Azure.
  • Skilled in Python and relevant libraries such as NumPy, Pandas, OpenCV, Scikit-learn, Hugging Face Transformers, etc.
  • Familiarity with model deployment tools such as TensorFlow Serving, Docker, and Kubernetes.
  • Experience working in cross-functional teams and agile environments.
  • Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.

Preferred Qualifications
  • Experience with MLOps tools and pipelines (MLflow, Kubeflow, SageMaker, etc.).
  • Knowledge of data privacy and ethical AI practices.
  • Exposure to edge AI or real-time inference systems.

Posted 3 weeks ago

Apply

Exploring mlflow Jobs in India

The mlflow job market in India is rapidly growing as companies across various industries are increasingly adopting machine learning and data science technologies. mlflow, an open-source platform for the machine learning lifecycle, is in high demand in the Indian job market. Job seekers with expertise in mlflow have a plethora of opportunities to explore and build a rewarding career in this field.
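
For readers new to the tool, the sketch below shows the core tracking workflow most of these roles build on: logging parameters, a metric, and a model artifact from a training run. This is a minimal illustration using the standard mlflow Python API and scikit-learn; the experiment name and hyperparameters are placeholders, not a prescribed setup.

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and a model artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("iris-demo")  # illustrative experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 3}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)  # hyperparameters for this run
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # serialized model artifact
```

Run locally, this records everything under the default `./mlruns` store, which the MLflow UI can then browse and compare across runs.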

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for mlflow professionals.

Average Salary Range

The average salary range for mlflow professionals in India varies based on experience:
  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

Career Path

A typical career path in mlflow may include roles such as:
  1. Junior Machine Learning Engineer
  2. Machine Learning Engineer
  3. Senior Machine Learning Engineer
  4. Tech Lead
  5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.

Related Skills

In addition to mlflow, professionals in this field are often expected to have skills in:
  • Python programming
  • Data visualization
  • Statistical modeling
  • Deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Cloud computing platforms (e.g., AWS, Azure)

Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.

Interview Questions

  • What is mlflow and how does it help in the machine learning lifecycle? (basic)
  • Explain the difference between tracking, projects, and models in mlflow. (medium)
  • How do you deploy a machine learning model using mlflow? (medium)
  • Can you explain the concept of model registry in mlflow? (advanced; a minimal sketch follows this list)
  • What are the benefits of using mlflow in a machine learning project? (basic)
  • How do you manage experiments in mlflow? (medium)
  • What are some common challenges faced when using mlflow in a production environment? (advanced)
  • How can you scale mlflow for large-scale machine learning projects? (advanced)
  • Explain the concept of artifact storage in mlflow. (medium)
  • How do you compare different machine learning models using mlflow? (medium)
  • Describe a project where you successfully used mlflow to streamline the machine learning process. (advanced)
  • What are some best practices for versioning machine learning models in mlflow? (advanced)
  • How does mlflow support hyperparameter tuning in machine learning models? (medium)
  • Can you explain the role of mlflow tracking server in a machine learning project? (medium)
  • What are some limitations of mlflow that you have encountered in your projects? (advanced)
  • How do you ensure reproducibility in machine learning experiments using mlflow? (medium)
  • Describe a situation where you had to troubleshoot an issue with mlflow and how you resolved it. (advanced)
  • How do you manage dependencies in a mlflow project? (medium)
  • What are some key metrics to track when using mlflow for machine learning experiments? (medium)
  • Explain the concept of model serving in the context of mlflow. (advanced)
  • How do you handle data drift in machine learning models deployed using mlflow? (advanced)
  • What are some security considerations to keep in mind when using mlflow in a production environment? (advanced)
  • How do you integrate mlflow with other tools in the machine learning ecosystem? (medium)
  • Describe a situation where you had to optimize a machine learning model using mlflow. (advanced)
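
As a reference point for the model registry and deployment questions above, here is a hedged, minimal sketch of registering a model version and loading it back for scoring. It assumes a database-backed tracking server reachable at an illustrative local URI (the registry is not available with plain file storage), and the model name "IrisClassifier" is a placeholder.

```python
# Sketch: register a model version and load it back through the registry.
# Assumes a database-backed MLflow tracking server; the URI and model name
# below are illustrative, not part of any real deployment.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # assumed tracking server

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    # Log and register in one step; this creates version 1 (or the next version).
    mlflow.sklearn.log_model(model, "model", registered_model_name="IrisClassifier")

# Later, e.g. in a serving or batch-scoring job, load a specific version
# through the registry rather than by run ID.
loaded = mlflow.pyfunc.load_model("models:/IrisClassifier/1")
print(loaded.predict(X[:5]))
```

From there, a command such as `mlflow models serve -m models:/IrisClassifier/1` is one common way to expose the registered version behind a local REST scoring endpoint, which is a reasonable starting answer to the deployment question as well.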

Closing Remark

As you explore opportunities in the mlflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies