
713 MLflow Jobs - Page 8

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description

Location: Mumbai / Bengaluru
Experience: 3-5 years
Industry: Banking / Financial Services (mandatory)

Why would you like to join us?
TransOrg Analytics specializes in Data Science, Data Engineering and Generative AI, providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, the US, APAC and the Middle East. We leverage data science to streamline, optimize, and accelerate our clients' businesses. Visit www.transorg.com to learn more about us.

What do we expect from you?
- Build and validate credit risk models, including application scorecards and behavior scorecards (B-score), aligned with business and regulatory requirements.
- Use machine learning algorithms such as Logistic Regression, XGBoost, and Clustering to develop interpretable, high-performance models.
- Translate business problems into data-driven solutions using robust statistical and analytical methods.
- Collaborate with cross-functional teams including credit policy, risk strategy, and data engineering to ensure effective model implementation and monitoring.
- Maintain clear, audit-ready documentation for all models and comply with internal model governance standards.
- Track and monitor model performance, proactively suggesting recalibrations or enhancements as needed.

What do you need to excel at?
- Writing efficient and scalable code in Python, SQL, and PySpark for data processing, feature engineering, and model training.
- Working with large-scale structured and unstructured data in a fast-paced banking or fintech environment.
- Deploying and managing models using MLflow, with a strong understanding of version control and model lifecycle management.
- Understanding retail banking products, especially credit card portfolios, customer behavior, and risk segmentation.
- Communicating complex technical outcomes clearly to non-technical stakeholders and senior management.
- Applying a structured problem-solving approach and delivering insights that drive business value.

What are we looking for?
- Bachelor's or master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field.
- 3-5 years of experience in credit risk modelling, preferably in retail banking or credit cards.
- Hands-on expertise in Python, SQL, and PySpark, and experience with MLflow or equivalent MLOps tools.
- Deep understanding of machine learning techniques including Logistic Regression, XGBoost, and Clustering.
- Proven experience developing application scorecards and behavior scorecards using real-world banking data.
- Strong documentation and compliance orientation, with an ability to work within regulatory frameworks.
- Curiosity, accountability, and a passion for solving real-world problems using data.
- Cloud knowledge, JIRA, GitHub (good to have).
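For readers unfamiliar with scorecards: an application or behavior scorecard typically maps a logistic-regression model's predicted probability of default onto a points scale. A minimal sketch of that mapping, using the conventional points-to-double-the-odds (PDO) scaling (the base score, base odds, and PDO values below are common illustrative defaults, not any bank's actual calibration):

```python
import math

def scorecard_points(prob_bad, base_score=600, base_odds=50, pdo=20):
    """Map a model's probability of default to scorecard points.

    Standard PDO scaling: base_score points correspond to base_odds
    (good:bad), and every pdo points doubles the odds.
    """
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    odds_good = (1 - prob_bad) / prob_bad   # good:bad odds
    return offset + factor * math.log(odds_good)

# A lower probability of default yields a higher score.
low_risk = scorecard_points(0.01)
high_risk = scorecard_points(0.20)
```

With these defaults, a probability of default of 1/51 (odds of exactly 50:1) lands exactly on the base score of 600, and halving the default probability's odds ratio moves the score by one PDO step.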

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description

Role: AI Engineer
Employment Type: Full-Time, Permanent
Location: On-site, Bengaluru
Vacancies: Multiple positions

Company Overview
EmployAbility.AI is an AI-driven career enablement platform dedicated to transforming how individuals and organizations navigate the future of work. We are committed to democratizing access to meaningful employment by using advanced artificial intelligence, real-time labor market data, and intelligent career pathways to bridge the gap between skills and opportunities. Our platform empowers job seekers with personalized career insights, learning recommendations, and job-matching tools, while enabling organizations to make smarter hiring and workforce development decisions. By aligning talent capabilities with market demand, we help create a more inclusive, adaptive, and future-ready workforce. At EmployAbility.AI, we're not just building software; we're building solutions that make employability equitable, data-driven, and scalable.

Job Role: AI Engineer
As an AI Engineer at EmployAbility.AI, you will be at the forefront of building intelligent systems that power the platform's core functionality, from LLM-based recommendations to intelligent search and contextual assistants. You will develop and deploy state-of-the-art AI models and pipelines using LLMs, LangChain, and Retrieval-Augmented Generation (RAG) to deliver impactful, real-world solutions. You will work closely with a cross-functional team of developers, data scientists, and product managers to create scalable, production-ready AI features that enhance user experiences and drive measurable value across industries and regions.

Key Responsibilities
- Design and develop AI-driven features including conversational agents, recommendation engines, and smart search using LLMs.
- Build and integrate LangChain-based applications that leverage RAG pipelines for improved reasoning and contextual understanding.
- Fine-tune, evaluate, and optimize transformer models (BERT, GPT, LLaMA, etc.) for domain-specific use cases.
- Work with unstructured and semi-structured data (e.g., resumes, job descriptions, labor market datasets).
- Develop embedding-based search using tools like FAISS, Pinecone, or Weaviate.
- Collaborate with backend and frontend teams to integrate AI services via scalable APIs.
- Perform data preprocessing, feature engineering, and model evaluation.
- Monitor performance of deployed models and iterate based on feedback and metrics.
- Participate in prompt engineering, experiment tracking, and continuous optimization of AI systems.
- Stay updated on the latest trends in AI/ML and contribute to internal knowledge sharing.

Education Requirements
B.Tech/B.E., M.Tech, MCA, M.Sc., MS, or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Core Technical Skills - AI Engineering

Large Language Models & NLP
- Experience with LLMs and transformer-based architectures (e.g., GPT, BERT, LLaMA).
- Hands-on with the LangChain framework and RAG (Retrieval-Augmented Generation) workflows.
- Proficiency in prompt engineering, embedding models, and semantic search.
- Experience using Hugging Face Transformers, the OpenAI API, or open-source equivalents.

Vector Stores & Knowledge Retrieval
- Experience with FAISS, Pinecone, or Weaviate for similarity search.
- Implementation of document chunking, embedding pipelines, and vector indexing.

ML/AI Development
- Strong skills in Python and ML libraries (PyTorch, TensorFlow, Scikit-learn).
- Familiarity with NLP tasks like named entity recognition, text classification, and summarization.
- Experience with API development and deploying AI models into production environments.

Tooling & Development Practices
- Version control with Git; collaborative workflows via GitHub.
- Experiment tracking with MLflow, Weights & Biases, or equivalent.
- API testing tools (Postman, Swagger) and JSON schema validation.
- Use of Jupyter notebooks for experimentation and prototyping.

Deployment & DevOps (Basic Understanding)
- Containerization using Docker; basic orchestration knowledge is a plus.
- Cloud environments: familiarity with AWS, GCP, or Azure.
- CI/CD workflows (GitHub Actions, Jenkins).
- Monitoring tools for model performance and error tracking (Sentry, Prometheus, etc.).

Soft Skills & Work Habits
- Strong problem-solving and analytical thinking.
- Ability to work cross-functionally with technical and non-technical teams.
- Clear and concise communication of complex AI concepts.
- Team collaboration and willingness to mentor peers or juniors.
- Agile/Scrum practices using tools like Jira, Trello, and Confluence.

Bonus Skills (Good to Have)
- TypeScript or JavaScript for frontend or integration work.
- Knowledge of GraphQL, chatbot development, or multi-modal AI.
- Familiarity with AutoML, RLHF, or explainable AI.
- Experience with knowledge graphs, ontologies, or custom taxonomies.
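The embedding-based search this role describes reduces, at its core, to ranking stored vectors by similarity to a query vector. A minimal sketch using plain cosine similarity in place of FAISS, Pinecone, or Weaviate; the three-dimensional "embeddings" and document names are invented for illustration (real ones come from an embedding model and have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" for three documents (invented for illustration).
index = {
    "python_developer": [0.9, 0.1, 0.0],
    "data_scientist":   [0.7, 0.6, 0.1],
    "truck_driver":     [0.0, 0.1, 0.9],
}

def search(query_vec, index, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]),
                    reverse=True)
    return ranked[:k]

top = search([0.8, 0.3, 0.0], index)  # a query "near" the developer docs
```

Dedicated vector stores implement the same ranking idea but add approximate-nearest-neighbor indexing so it scales past brute-force comparison.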

Posted 1 week ago


5.0 years

0 Lacs

India

On-site


We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and deployment of cutting-edge AI/ML solutions across our products and platforms. As a senior member of the team, you will be responsible for solving complex problems using AI technologies, mentoring junior engineers, and collaborating cross-functionally with product, data, and engineering teams to deliver scalable and impactful AI-driven features.

Key Responsibilities:
- Design and develop advanced machine learning models and deep learning architectures for real-world problems.
- Lead AI/ML system design, including data pipelines, model training workflows, evaluation strategies, and deployment frameworks.
- Conduct research and experimentation with new algorithms, tools, and technologies to enhance the company's AI capabilities.
- Collaborate with product managers, data scientists, and software engineers to translate business requirements into AI solutions.
- Mentor and guide junior AI engineers and data scientists, fostering best practices in coding, experimentation, and model deployment.
- Optimize and monitor model performance in production, including addressing issues of bias, drift, and latency.
- Stay current with AI research, tools, and industry trends, and bring innovative ideas into the team.
- Contribute to technical strategy and help define the roadmap for AI initiatives.

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (Ph.D. preferred).
- 5+ years of experience in AI/ML engineering with a strong portfolio of production-level projects.
- Deep knowledge of machine learning and deep learning algorithms (e.g., CNNs, RNNs, Transformers, LLMs).
- Proficient in Python and libraries such as TensorFlow, PyTorch, Scikit-learn, and Hugging Face.
- Strong experience with MLOps tools (e.g., MLflow, Kubeflow, Airflow) and cloud platforms (e.g., AWS, GCP, Azure).
- Experience in deploying and maintaining models in production at scale.
- Strong understanding of data structures, algorithms, and software engineering principles.
- Excellent problem-solving skills and communication abilities.

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

Job Profile: Senior Engineer - Python + AWS Cloud
Experience: 4-7 years
Job Location: Bangalore

We're looking for a top-notch Software Engineer who always sweats the small stuff and cares about impeccable code. If you always strive towards sustainable, scalable and secure code for every application you develop, while keeping the customer's needs in mind, then you are what we are looking for. You'll get the chance to work with experienced engineers across our enterprise, with a chance to move across varying automation technologies in the future. As a Python developer in the AI team, you will have the opportunity to work on multiple complex assignments such as physics-based models, Lambda functions, automation pipelines, and object-oriented programming.

Responsibilities
- Analyze and translate business requirements into scalable and resilient designs.
- Own parts of the application and continuously improve them in an agile environment.
- Create high-quality, maintainable products and applications using best engineering practices.
- Pair with other developers and share design philosophy and goals across the team.
- Work in cross-functional teams (DevOps, Data, UX, Testing, etc.).
- Build and manage fully automated build/test/deployment environments.
- Ensure high availability and provide quick turnaround on production issues.
- Contribute to the design of useful, usable, and desirable products in a team environment.
- Adapt to new programming languages, methodologies, platforms, and frameworks to support business needs.

Qualifications
- Degree in computer science or a similar field.
- Four or more years of experience architecting, designing, developing, and implementing cloud solutions on AWS and/or Azure platforms.
- Technologies: Python, Azure, AWS, MLflow, Kubernetes, Terraform, AWS SageMaker, Lambda, Step Functions.
- Development experience with configuration management tools (Terraform, Ansible, CloudFormation).
- Experience developing and maintaining continuous integration and continuous deployment pipelines (Jenkins Groovy scripts).
- Experience developing containerized solutions and orchestration (Docker, Kubernetes, ECS, ECR).
- Experience with serverless architecture, cloud computing, cloud-native applications, and scalability.
- Understanding of core cloud concepts such as infrastructure as code, IaaS, PaaS and SaaS.
- Relevant Azure or AWS certification preferred.
- Troubleshooting and analytical skills.
- Knowledge of AI & ML technologies and ML model management.
- Strong verbal and written communication: able to articulate concisely and clearly.

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts as new job postings become available that match your interests!

Posted 1 week ago


10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


🚨 We are Hiring 🚨
https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/

Job Description

Job Title: Senior Test Automation Lead - Playwright (AI/ML Focus)
Location: Hyderabad
Experience: 10-12 years
Job Type: Full-Time

Company Overview:
GRhombus Technologies Pvt Ltd is a pioneer in software solutions, especially test automation, cyber security, full-stack development, DevOps, Salesforce, performance testing and manual testing. GRhombus delivery centres in India are located at Hyderabad, Chennai, Bengaluru and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands.

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
- Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
- Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
- Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
- Partner with data scientists and ML engineers to understand model behaviors, inference workflows, and output formats.
- Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
- Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
- Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
- Lead the implementation of end-to-end automation for:
  - Web interfaces (React, Angular, or other SPA frameworks)
  - Backend services (REST, GraphQL, WebSockets)
  - ML model integration endpoints (real-time inference APIs, batch pipelines)
- Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
- Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
- Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
- Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
- Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
- Lead and mentor a team of automation and QA engineers across multiple projects.
- Act as the quality champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
- Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
- Collaborate across disciplines (frontend, backend, DevOps, MLOps, and product management) to ensure complete testability.
- Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
- Strong hands-on expertise with Playwright (TypeScript/JavaScript).
- Experience building custom automation frameworks and utilities from scratch.
- Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
- Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
- Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
- Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
- Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
- Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
- Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
- Proven experience leading QA/automation teams (4+ engineers).
- Strong documentation, code review, and stakeholder communication skills.
- Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
- Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
- Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
- Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
- Experience with GraphQL, Kafka, or event-driven architecture testing.
- QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
- Certifications or formal training in Machine Learning, Data Science, or MLOps are a bonus.

Why Join Us?
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a test automation lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
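The golden-dataset strategy for non-deterministic model outputs mentioned in this posting can be sketched in a few lines: rather than asserting exact outputs, a test gates on aggregate accuracy against a curated golden set with a tolerance threshold. The keyword-heuristic "model", the example sentences, and the 75% threshold below are all illustrative assumptions, not GRhombus code:

```python
# Golden-dataset check for a non-deterministic classifier: instead of
# exact-match assertions, require aggregate accuracy above a tolerance.
golden = [
    ("great product, works perfectly", "positive"),
    ("arrived broken, waste of money", "negative"),
    ("does what it says on the box", "positive"),
    ("stopped working after two days", "negative"),
]

def toy_sentiment(text):
    """Stand-in for a real model endpoint (illustrative keyword heuristic)."""
    negative_cues = ("broken", "waste", "stopped", "refund")
    return "negative" if any(cue in text for cue in negative_cues) else "positive"

def golden_accuracy(predict, dataset):
    """Fraction of golden examples the predictor labels correctly."""
    hits = sum(1 for text, label in dataset if predict(text) == label)
    return hits / len(dataset)

accuracy = golden_accuracy(toy_sentiment, golden)
# Tolerance-based gate: pass if the model agrees with the golden set on
# at least 75% of cases, leaving headroom for non-determinism.
passed = accuracy >= 0.75
```

In a real suite the predictor would call the inference API, and the threshold would be chosen from the model's historical regression rates.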

Posted 1 week ago


5.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Job Title: MLOps Engineer
Location: [Insert Location – e.g., Gurugram / Remote / On-site]
Experience: 2-5 years
Type: Full-Time

Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines for seamless deployment and monitoring of ML models.
- Implement and manage CI/CD workflows using modern tools (e.g., GitHub Actions, Azure DevOps, Jenkins).
- Orchestrate ML services using Kubernetes for scalable and reliable deployments.
- Develop and maintain FastAPI-based microservices to serve machine learning models via RESTful APIs.
- Collaborate with data scientists and ML engineers to productionize models in Azure and AWS cloud environments.
- Automate infrastructure provisioning and configuration using Infrastructure-as-Code (IaC) tools.
- Ensure observability, logging, monitoring, and model drift detection in deployed solutions.

Required Skills:
- Strong proficiency in Kubernetes for container orchestration.
- Experience with CI/CD pipelines and tools like Jenkins, GitHub Actions, or Azure DevOps.
- Hands-on experience with FastAPI for developing ML-serving APIs.
- Proficiency in deploying ML workflows on Azure and AWS.
- Knowledge of containerization (Docker optional, if used during local development).
- Familiarity with model versioning, reproducibility, and experiment tracking tools (e.g., MLflow, DVC).
- Strong scripting skills (Python, Bash).

Preferred Qualifications:
- B.Tech/M.Tech in Computer Science, Data Engineering, or related fields.
- Experience with Terraform, Helm, or other IaC tools.
- Understanding of DevOps practices and security in ML workflows.
- Good communication skills and a collaborative mindset.
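Model drift detection, listed among the responsibilities above, is commonly implemented with a population stability index (PSI) computed over binned score or feature distributions. A minimal sketch; the four-bin distributions and the widely used ~0.2 alert threshold are illustrative conventions, not requirements from this posting:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population stability index between two binned distributions.

    `expected` and `actual` are lists of bin proportions that each sum
    to 1. Values above roughly 0.2 are commonly treated as significant
    drift; eps guards against log(0) for empty bins.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time score distribution
stable   = [0.24, 0.26, 0.25, 0.25]   # production, barely changed
shifted  = [0.05, 0.15, 0.30, 0.50]   # production after a real shift

low_drift = psi(baseline, stable)
high_drift = psi(baseline, shifted)
```

In a deployed pipeline this check would run on a schedule over recent inference logs, with breaches surfaced through the monitoring stack.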

Posted 1 week ago


7.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Greetings from TCS!!

TCS is hiring for: Databricks Architect
Interview Mode: Virtual
Required Experience: 7-15 years
Work Location: Chennai, Kolkata, Hyderabad

Must have: Hands-on experience in ADF, Azure Databricks, PySpark, Azure Data Factory, Unity Catalog, data migrations, data security.
Good to have: Spark SQL, Spark Streaming, Kafka.
Hands-on in Databricks on AWS: Apache Spark, AWS S3 (Data Lake), AWS Glue, AWS Redshift / Athena, AWS Data Catalog, Amazon Redshift, Amazon Athena, AWS RDS, AWS EMR (Spark/Hadoop), CI/CD (CodePipeline, CodeBuild).
Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch.

If interested, kindly send your updated CV and the details below via DM/e-mail to srishti.g2@tcs.com:
- Name:
- E-mail ID:
- Contact number:
- Highest qualification (full-time):
- Preferred location:
- Highest qualification university:
- Current organization:
- Total years of experience:
- Relevant years of experience:
- Any gap (career/education)? If yes, mention number of months/years:
- If any, reason for the gap:
- Is it a rebegin:
- Previous organization name:
- Current CTC:
- Expected CTC:
- Notice period:
- Have you worked with TCS before (permanent/contract):
- If shortlisted, will you be available for a virtual interview on 13-Jun-25 (Friday)?:

Posted 1 week ago


9.0 years

0 Lacs

Greater Chennai Area

On-site


The Data Engineering team within the AI, Data, and Analytics (AIDA) organization is the backbone of our data-driven sales and marketing operations. We provide the essential foundation for transformative insights and data innovation. By focusing on integration, curation, quality, and data expertise across diverse sources, we power world-class solutions that advance Pfizer's mission. Join us in shaping a data-driven organization that makes a meaningful global impact.

Role Summary
We are seeking a technically adept and experienced Data Solutions Engineering Senior Manager who is passionate about and skilled in designing and developing robust, scalable data models. This role focuses on optimizing the consumption of data sources to generate unique insights from Pfizer's extensive data ecosystems. A strong technical design and development background is essential to ensure effective collaboration with engineering and developer team members.

As a Senior Data Solutions Engineer in our data lake/data warehousing team, you will play a crucial role in designing and building data pipelines and processes that support data transformation, workload management, data structures, dependencies, and metadata management. Your expertise will be pivotal in creating and maintaining the data capabilities that enable advanced analytics and data-driven decision-making.

In this role, you will work closely with stakeholders to understand their needs and collaborate with them to create end-to-end data solutions. This process starts with designing data models and pipelines and establishing robust CI/CD procedures. You will work with complex and advanced data environments, design and implement the right architecture to build reusable data products and solutions, and support various analytics use cases, including business reporting, production data pipelines, machine learning, optimization models, statistical models, and simulations.

As the Data Solutions Engineering Senior Manager, you will develop sound data quality and integrity standards and controls. You will enable data engineering communities with standard protocols to validate and cleanse data, resolve data anomalies, implement data quality checks, and conduct system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data-driven solutions for the pharmaceutical industry.

Role Responsibilities
- Project solutioning, including scoping and estimation.
- Data sourcing, investigation, and profiling.
- Prototyping and design thinking.
- Designing and developing data pipelines and complex data workflows.
- Creating standard procedures to ensure efficient CI/CD.
- Responsibility for project documentation and playbooks, including but not limited to physical models, conceptual models, data dictionaries and data cataloging.
- Technical issue debugging and resolution.
- Accountability for engineering development of both internal- and external-facing data solutions conforming to EDSE and Digital technology standards.
- Partnering with internal/external partners to design, build and deliver best-in-class data products globally, improving the quality of our customer analytics and insights and supporting commercial growth in its role in helping patients.
- Demonstrating outstanding collaboration and operational excellence.
- Driving best practices and world-class product capabilities.

Qualifications
- Bachelor's degree in a technical area such as computer science, engineering, or management information science; Master's degree preferred.
- 9+ years of combined data warehouse/data lake experience as a data lake/warehouse developer or data engineer.
- 9+ years of experience developing data products and data features servicing analytics and AI use cases.
- Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred.
- Domain knowledge in the pharmaceutical industry preferred.
- Good knowledge of data governance and data cataloging best practices.

Technical Skillset
- 9+ years of hands-on experience working with SQL, Python, and object-oriented scripting languages (e.g. Java, C++) to build data pipelines and processes.
- Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views.
- 9+ years of hands-on experience designing and delivering data lake/data warehousing projects.
- Minimum of 5 years of hands-on experience designing data models.
- Proven ability to effectively assist the team in resolving technical issues.
- Proficiency with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable.
- Experience with AWS services (EC2, EMR, RDS, Spark) is preferred.
- Solid understanding of Scrum/Agile preferred, and working knowledge of CI/CD, GitHub, and MLflow.
- Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred.
- Great communication skills.
- Great business influencing and stakeholder management skills.

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
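The data-quality work this role describes (validating data, resolving anomalies, implementing quality checks) often boils down to running rule queries against staging tables and counting violations. A self-contained sketch using SQLite from the Python standard library; the `sales` table, its columns, and the three rules are invented for illustration and are not Pfizer's schema:

```python
import sqlite3

# Hypothetical staging table with two deliberate quality problems:
# a NULL region and a negative sales amount.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, "APAC", 120.0), (2, None, 75.5), (3, "EU", -10.0), (4, "US", 300.0)],
)

# Each check maps a rule name to SQL returning the count of violating rows.
checks = {
    "region_not_null": "SELECT COUNT(*) FROM sales WHERE region IS NULL",
    "amount_non_negative": "SELECT COUNT(*) FROM sales WHERE amount < 0",
    "id_unique": "SELECT COUNT(*) - COUNT(DISTINCT id) FROM sales",
}

violations = {name: conn.execute(sql).fetchone()[0]
              for name, sql in checks.items()}
```

In a warehouse setting the same pattern runs as scheduled queries against the lake or Snowflake, with non-zero counts raising alerts before downstream analytics consume the data.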

Posted 1 week ago


2.0 - 5.0 years

0 Lacs

Delhi, India

Remote


Job Title: AI Engineer
Location: Remote
Employment Type: Full-time

About the Role:
We are seeking a skilled and motivated AI Engineer to help us build intelligent, agentic systems that drive real-world impact. In this role, you will develop, deploy, and maintain AI models and pipelines, working with large language models (LLMs), vector databases, and orchestration frameworks like LangChain. You will collaborate across teams to build robust, scalable AI-driven solutions.

Key Responsibilities
- Design and develop intelligent systems using LLMs, retrieval-augmented generation (RAG), and agentic frameworks.
- Build and deploy AI pipelines using LangChain, vector stores, and custom tools.
- Integrate models with production APIs and backend systems.
- Monitor, fine-tune, and improve performance of deployed AI systems.
- Collaborate with data engineers, product managers, and UX designers to deliver AI-first user experiences.
- Stay up to date with advancements in generative AI, LLMs, and orchestration frameworks.

Required Qualifications
- 2-5 years of experience building and deploying machine learning or AI-based systems.
- Hands-on experience with LangChain for building agent workflows or RAG pipelines.
- Proficiency in Python and frameworks such as PyTorch, TensorFlow, or Scikit-learn.
- Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Strong understanding of prompt engineering, embeddings, and vector database operations (e.g., FAISS, Pinecone, Weaviate).
- Familiarity with MLOps tools such as MLflow, SageMaker, or Vertex AI.

Preferred Qualifications
- Experience with large language models (e.g., GPT, Claude, LLaMA) and GenAI platforms (e.g., OpenAI, Bedrock, Anthropic).
- Background in NLP, RAG architectures, or autonomous agents.
- Experience deploying AI applications via APIs and microservices.
- Contributions to open-source LangChain or GenAI ecosystems.

Why Join Us?
- Remote-first company working on frontier AI systems.
- Opportunity to shape production-grade AI experiences used globally.
- Dynamic, collaborative, and intellectually curious team.
- Competitive compensation with fast growth potential.

Posted 1 week ago


9.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


The Data Engineering team within the AI, Data, and Analytics (AIDA) organization is the backbone of our data-driven sales and marketing operations. We provide the essential foundation for transformative insights and data innovation. By focusing on integration, curation, quality, and data expertise across diverse sources, we power world-class solutions that advance Pfizer’s mission. Join us in shaping a data-driven organization that makes a meaningful global impact. Role Summary We are seeking a technically adept and experienced Data Solutions Engineering Senior Manager who is passionate about and skilled in designing and developing robust, scalable data models. This role focuses on optimizing the consumption of data sources to generate unique insights from Pfizer’s extensive data ecosystems. A strong technical design and development background is essential to ensure effective collaboration with engineering and developer team members. As a Senior Data Solutions Engineer in our data lake/data warehousing team, you will play a crucial role in designing and building data pipelines and processes that support data transformation, workload management, data structures, dependencies, and metadata management. Your expertise will be pivotal in creating and maintaining the data capabilities that enables advanced analytics and data-driven decision-making. In this role, you will work closely with stakeholders to understand their needs and collaborate with them to create end-to-end data solutions. This process starts with designing data models and pipelines and establishing robust CI/CD procedures. You will work with complex and advanced data environments, design and implement the right architecture to build reusable data products and solutions, and support various analytics use cases, including business reporting, production data pipelines, machine learning, optimization models, statistical models, and simulations. 
As the Data Solutions Engineering Senior Manager, you will develop sound data quality and integrity standards and controls. You will enable data engineering communities with standard protocols to validate and cleanse data, resolve data anomalies, implement data quality checks, and conduct system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data-driven solutions for the pharmaceutical industry. Role Responsibilities Project solutioning, including scoping and estimation. Data sourcing, investigation, and profiling. Prototyping and design thinking. Designing and developing data pipelines & complex data workflows. Create standard procedures to ensure efficient CI/CD. Responsible for project documentation and playbook, including but not limited to physical models, conceptual models, data dictionaries and data cataloging. Technical issue debugging and resolutions. Accountable for engineering development of both internal- and external-facing data solutions by conforming to EDSE and Digital technology standards. Partner with internal / external partners to design, build and deliver best-in-class data products globally to improve the quality of our customer analytics and insights and the growth of commercial in its role in helping patients. Demonstrate outstanding collaboration and operational excellence. Drive best practices and world-class product capabilities. Qualifications Bachelor’s degree in a technical area such as computer science, engineering, or management information science. Master’s degree is preferred. 9+ years of combined data warehouse/data lake experience as a data lake/warehouse developer or data engineer. 9+ years developing data products and data features to serve analytics and AI use cases. Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred. 
Domain knowledge in the pharmaceutical industry preferred. Good knowledge of data governance and data cataloging best practices. Technical Skillset 9+ years of hands-on experience working with SQL, Python, and object-oriented scripting languages (e.g. Java, C++, etc.) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 9+ years of hands-on experience designing and delivering data lake/data warehousing projects. Minimum of 5 years of hands-on data model design. Proven ability to effectively assist the team in resolving technical issues. Proficient in working with cloud-native SQL and NoSQL database platforms. Snowflake experience is desirable. Experience with AWS services (EC2, EMR, RDS) and Spark is preferred. Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow. Familiarity with data privacy standards, governance principles, data protection, pharma industry practices/GDPR compliance is preferred. Great communication skills. Great business influencing and stakeholder management skills. Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
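The SQL skillset above (stored procedures, functions, and views) centers on building consumption-layer objects over raw tables. A minimal, hedged illustration follows; SQLite is used purely for portability, and the `sales` table and `region_revenue` view are invented examples, not objects from any real warehouse:

```python
import sqlite3

# In-memory database with a hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("NA", "A", 100.0), ("NA", "B", 50.0), ("EU", "A", 75.0)],
)

# A view that pre-aggregates revenue by region: the kind of
# consumption-layer object a warehouse developer maintains.
conn.execute("""
    CREATE VIEW region_revenue AS
    SELECT region, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM sales
    GROUP BY region
""")

rows = dict(
    (region, total)
    for region, total, _ in conn.execute(
        "SELECT * FROM region_revenue ORDER BY region"
    )
)
print(rows)  # {'EU': 75.0, 'NA': 150.0}
```

The same pattern, with warehouse-specific syntax for procedures and functions, carries over to platforms like Snowflake.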

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Role Summary The AI, Data, and Analytics (AIDA) team, part of Pfizer Digital, is responsible for the development and management of all data and analytics tools and platforms across the enterprise – from global product development, to manufacturing, to commercial, to point of patient care across over 100 countries. One of the team’s top priorities is the development of Business Intelligence (BI), Reporting, and Visualization products which will serve as an enabler for the company’s digital transformation to bring innovative therapeutics to patients. We are looking for a technically skilled and experienced Reporting Engineering Manager who is passionate about developing BI and data visualization products for our Customer Facing and Sales Enablement Colleagues, totaling over 20,000 individuals. This role involves working across multiple business segments globally to deliver top-tier BI Reporting and Visualization capabilities that enable impactful business decisions and high-engagement user experiences. In this position, you will be accountable for having a thorough understanding of data, business, and analytic requirements to deliver high-impact, relevant, interactive data visualization products that drive company performance through continuous monitoring, measurement, root-cause identification, and proactive detection of patterns and triggers across the company. This role will also drive best practices and standards for BI & Visualization. This role will work closely with stakeholders to understand their needs and ensure that reporting assets are created with a focus on Customer Experience. 
This role requires working with complex and advanced data environments, employing the right architecture to build scalable semantic layers and contemporary reporting visualizations. The Reporting Manager will ensure data quality and integrity by validating the accuracy of KPIs and insights, resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data- and analytics-driven solutions for the pharmaceutical industry. Role Responsibilities Engineering expert in business intelligence and data visualization products in service of field force and HQ enabling functions. Act as a Technical BI & Visualization developer on projects and collaborate with global team members (e.g. other engineers, regional delivery and activation teams, vendors) to architect, design and create BI & Visualization products at scale. Thorough understanding of data, business, and analytic requirements (incl. BI Product Blueprints such as SMART) to deliver high-impact, relevant data visualization products while respecting project or program budgets and timelines. Deliver quality Functional Requirements and Solution Design, adhering to established standards and best practices. Follow Pfizer Process in Portfolio Management, Project Management, and Product Management Playbook following Agile, Hybrid or Enterprise Solution Life Cycle. Extensive technical and implementation knowledge of a multitude of BI and Visualization platforms, including but not limited to Tableau, MicroStrategy, Business Objects, and MS-SSRS. Experience of cloud-based architectures, cloud analytics products / solutions, and data products / solutions (e.g. AWS Redshift, MS SQL, Snowflake, Oracle, Teradata). Qualifications Bachelor’s degree in a technical area such as computer science, engineering, or management information science. 
Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred. Domain knowledge in the pharmaceutical industry preferred. Good knowledge of data governance and data cataloging best practices. Relevant experience or knowledge in areas such as database management, data quality, master data management, metadata management, performance tuning, collaboration, and business process management. Strong Business Analysis acumen to meet or exceed business requirements following User-Centered Design (UCD). Strong experience with testing of BI and Analytics applications – Unit Testing (e.g. Phased or Agile Sprints or MVP), System Integration Testing (SIT) and User Acceptance Testing (UAT). Experience with technical solution management tools such as JIRA or GitHub. Stay abreast of customer, industry, and technology trends with enterprise Business Intelligence (BI) and visualization tools. Technical Skillset 5+ years of hands-on experience in developing BI capabilities using MicroStrategy. Proficiency in common BI tools, such as Tableau, Power BI, etc., is a plus. Common Data Model (Logical & Physical), Conceptual Data Model validation to create a Consumption Layer for Reporting (Dimensional Model, Semantic Layer, Direct Database Aggregates or OLAP Cubes). Develop using a Design System for Reporting as well as Ad-hoc Analytics Templates. BI Product Scalability, Performance-tuning. Platform Admin and Security, BI Platform tenant (licensing, capacity, vendor access, vulnerability testing). Experience in working with cloud-native SQL and NoSQL database platforms. Snowflake experience is desirable. Experience in AWS services EC2, EMR, RDS, Spark is preferred. Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow. Familiarity with data privacy standards, governance principles, data protection, pharma industry practices/GDPR compliance is preferred. Great communication skills. 
Great business influencing and stakeholder management skills. Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Python Developer (GCP) Location: Chennai Experience: 7-12 Years Job Summary We are seeking a Python Developer (GCP) with deep expertise in Python, Google Cloud Platform (GCP), and MLOps to lead end-to-end development and deployment of machine learning solutions. The ideal candidate is a versatile engineer capable of building both front-end and back-end systems, while also managing the automation and scalability of ML workflows and models in production. Required Skills & Experience Strong proficiency in Python, including OOP, data processing, and backend development. 3+ years of experience with Google Cloud Platform (GCP) and services relevant to ML & application deployment. Proven experience with MLOps practices and tools such as Vertex AI, MLflow, Kubeflow, TensorFlow Extended (TFX), Airflow, or similar. Hands-on experience in front-end development (React.js, Angular, or similar). Experience in building RESTful APIs and working with Flask/FastAPI/Django frameworks. Familiarity with Docker, Kubernetes, and CI/CD pipelines in cloud environments. Experience with Terraform, Cloud Build, and monitoring tools (e.g., Stackdriver, Prometheus). Understanding of version control (Git), Agile methodologies, and collaborative software development.
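Orchestration tools named above, such as Airflow and Kubeflow, model a pipeline as a dependency DAG and run each task only after its upstream tasks finish. A framework-free sketch of that idea (the task names and the dependency graph are invented, not tied to any real pipeline):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
deps = {
    "extract": set(),
    "validate": {"extract"},
    "train": {"validate"},
    "deploy": {"train"},
    "report": {"train"},
}

def run(task: str) -> str:
    # Stand-in for real work (API calls, Spark jobs, model training).
    return f"ran {task}"

# A topological order guarantees dependencies run before dependents,
# which is exactly what a scheduler like Airflow enforces.
order = list(TopologicalSorter(deps).static_order())
results = [run(t) for t in order]
print(order)
```

Real orchestrators add retries, scheduling, and parallel execution of independent tasks ("deploy" and "report" here), but the dependency resolution is the core.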

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Please find the JD below: 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS / Azure cloud infrastructure and functions. 3+ years of experience in Power BI and Data Warehousing, performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience with AWS (e.g. S3, Athena, Glue, Lambda, etc.) preferred. Deep understanding of data warehousing concepts (Dimensional (star-schema), SCD2, Data Vault, Denormalized, OBT), implementing highly performant data ingestion pipelines from multiple sources. Strong proficiency in Python and SQL. Deep understanding of Databricks platform features (Delta Lake, Databricks SQL, MLflow). Experience with CI/CD on Databricks using tools such as BitBucket, GitHub Actions, and Databricks CLI. Integrating the end-to-end Databricks pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained. Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), MLflow. Basic working knowledge of API or Stream based data extraction processes like Salesforce API, Bulk API. Understanding of Data Management principles (quality, governance, security, privacy, life cycle management, cataloguing). Nice to have: Databricks certifications and AWS Solution Architect certification. Nice to have: experience with building data pipelines from various business applications like Salesforce, Marketo, NetSuite, Workday etc.
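Among the warehousing concepts listed above, SCD2 (Slowly Changing Dimension Type 2) keeps history by closing the current row and inserting a new one whenever a tracked attribute changes. A minimal, framework-free sketch of the mechanic (the column names, the list-of-dicts "table", and the 9999-12-31 open-end convention are illustrative assumptions, not Databricks specifics):

```python
OPEN_END = "9999-12-31"  # conventional sentinel for the current row

def scd2_upsert(dim_rows, key, new_attrs, today):
    """Close the current row for `key` if its attributes changed,
    then append a new current row (SCD Type 2)."""
    current = [r for r in dim_rows
               if r["key"] == key and r["end_date"] == OPEN_END]
    if current and current[0]["attrs"] == new_attrs:
        return dim_rows  # no change: keep history as-is
    if current:
        current[0]["end_date"] = today  # close the old version
    dim_rows.append({"key": key, "attrs": new_attrs,
                     "start_date": today, "end_date": OPEN_END})
    return dim_rows

dim = []
dim = scd2_upsert(dim, "cust-1", {"segment": "retail"}, "2024-01-01")
dim = scd2_upsert(dim, "cust-1", {"segment": "premium"}, "2024-06-01")
print(len(dim))  # 2 versions: one closed, one current
```

On Databricks the same logic is typically expressed as a Delta Lake `MERGE INTO` statement rather than row-by-row Python.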

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description: 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS / Azure cloud infrastructure and functions. 3+ years of experience in Power BI and Data Warehousing, performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience with AWS (e.g. S3, Athena, Glue, Lambda, etc.) preferred. Deep understanding of data warehousing concepts (Dimensional (star-schema), SCD2, Data Vault, Denormalized, OBT), implementing highly performant data ingestion pipelines from multiple sources. Strong proficiency in Python and SQL. Deep understanding of Databricks platform features (Delta Lake, Databricks SQL, MLflow). Experience with CI/CD on Databricks using tools such as BitBucket, GitHub Actions, and Databricks CLI. Integrating the end-to-end Databricks pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained. Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), MLflow. Basic working knowledge of API or Stream based data extraction processes like Salesforce API, Bulk API. Understanding of Data Management principles (quality, governance, security, privacy, life cycle management, cataloguing). Nice to have: Databricks certifications and AWS Solution Architect certification. Nice to have: experience with building data pipelines from various business applications like Salesforce, Marketo, NetSuite, Workday etc.
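The requirement above that "the quality and consistency of data is always maintained" is usually enforced with explicit validation rules inside the pipeline. A minimal hand-rolled sketch (real Databricks pipelines would more likely use Delta constraints or Delta Live Tables expectations; the rules and row shape below are invented):

```python
def check_batch(rows):
    """Return a list of human-readable data-quality failures for a batch."""
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Rule 1: primary key must be present and unique.
        if row.get("id") is None:
            failures.append(f"row {i}: missing id")
        elif row["id"] in seen_ids:
            failures.append(f"row {i}: duplicate id {row['id']}")
        else:
            seen_ids.add(row["id"])
        # Rule 2: amount must be a non-negative number.
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            failures.append(f"row {i}: invalid amount {row.get('amount')}")
    return failures

batch = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": -5.0},    # duplicate id, negative amount
    {"id": None, "amount": 3.0},  # missing id
]
problems = check_batch(batch)
print(problems)
```

A pipeline would typically quarantine failing rows or fail the run, and log the failure counts to a monitoring system.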

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Job Information Number of Positions 1 Industry Engineering Date Opened 06/09/2025 Job Type Permanent Work Experience 2-3 years City Bangalore State/Province Karnataka Country India Zip/Postal Code 560037 Location Bangalore About Us CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, Cloud transformation, and end-to-end DevOps Workflows. We are a proud Advanced Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions. We are passionate about what we do. The novelty and the excitement of helping our customers accomplish their goals drives us to become excellent at what we do. Job Description Culture at CloudifyOps: Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. About the Role: We are seeking a proactive and technically skilled AI/ML Engineer with 2–3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, Agentic AI, and Generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges. Key Responsibilities: 1. AWS-Based Machine Learning Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2. Implement serverless ML workflows using Lambda, Step Functions, and EventBridge. Optimize models for cost/performance using AWS Inferentia/Trainium. 2. MLOps & Productionization Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow. Containerize models with Docker and deploy via AWS EKS/ECS/Fargate. 
Monitor models in production using AWS CloudWatch, SageMaker Model Monitor. 3. Agentic AI Development Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation. Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services. Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement. 4. Generative AI & LLMs Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA. Build Generative AI apps (chatbots, content generators) with LangChain, LlamaIndex. Optimize prompts and evaluate LLM performance using AWS Bedrock/Amazon Titan. 5. Collaboration & Innovation Work with cross-functional teams to translate business needs into AI solutions. Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems. Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML). 6. Governance & Documentation Implement model governance frameworks to ensure ethical AI/ML deployments. Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring). Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, ReadTheDocs). Create runbooks for model deployment, troubleshooting, and scaling. Technical Skills Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers). AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM. MLOps: MLflow, Kubeflow, Docker, GitHub Actions/GitLab CI. Generative AI: Prompt engineering, LLM fine-tuning, RAG, LangChain. Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration. Data Engineering: SQL, PySpark, AWS Glue/EMR. Soft Skills Strong problem-solving and analytical thinking. Ability to explain complex AI concepts to non-technical stakeholders. What We’re Looking For Bachelor’s/Master’s in CS, AI, Data Science, or related field. 2-3 years of industry experience in AI/ML engineering. Portfolio of deployed ML/AI projects (GitHub, blog, case studies). 
Good to have an AWS Certified Machine Learning Specialty certification. Why Join Us? Innovative Projects: Work on cutting-edge AI applications that push the boundaries of technology. Collaborative Environment: Join a team of passionate engineers and researchers committed to excellence. Career Growth: Opportunities for professional development and advancement in the rapidly evolving field of AI. Equal opportunity employer CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
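The RAG (Retrieval-Augmented Generation) responsibility listed above reduces to two steps: retrieve the documents most relevant to a query, then pass them to the model as context. A deliberately tiny sketch that scores documents by word overlap instead of real embeddings (the documents and the scoring function are illustrative assumptions):

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by word overlap with the query (a stand-in for
    embedding similarity) and return the top-k."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "Invoices are processed within 30 days of receipt.",
    "The cafeteria is open from 8am to 3pm.",
]
question = "how long are invoices processed"
context = retrieve(question, docs)

# The retrieved passage is injected into the prompt sent to the LLM.
prompt = f"Context: {context[0]}\nQuestion: {question}"
print(context[0])  # the invoice policy document wins
```

Production RAG replaces the overlap score with vector-store lookups over embeddings, but the retrieve-then-prompt shape is the same.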

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site


Compensation: INR 2 crore per year including incentives Please do NOT apply if you have not built 1-2 SLMs for clients before. Multiplier AI is a leader in AI accelerators for life sciences and is due for listing. About the Role We are seeking a seasoned and forward-thinking Head of AI and SLM to spearhead Small Language Model (SLM) implementation projects across enterprise and industry-specific use cases. This is a high-impact leadership role that combines deep technical expertise with strategic consulting to deliver scalable, efficient, and secure SLM solutions. Key Responsibilities Lead end-to-end design and deployment of Small Language Models (SLMs) in production environments. Define architecture for on-device or private-cloud SLM deployments, optimizing for latency, token cost, and privacy. Collaborate with cross-functional teams (data, MLOps, product, security) to integrate SLMs into existing systems and workflows. Select and fine-tune open-source or custom SLMs (e.g., Phi-3, TinyLlama, Mistral) for targeted business use cases. Mentor engineering and data science teams on best practices in efficient prompt engineering, RAG pipelines, quantization, and distillation techniques. Act as a thought partner to leadership and clients on GenAI roadmap, risk management, and responsible AI design. Required Skills & Experience Proven experience in deploying Small Language Models in production (not just large-scale LLMs); this is essential, so do not apply if you have not done it. Strong understanding of transformer architecture, tokenizer design, and parameter-efficient fine-tuning (LoRA, QLoRA). Hands-on with HuggingFace, ONNX, GGUF, and GPU/CPU/edge model optimization techniques. Experience integrating SLMs into real-world systems—mobile apps, secure enterprise workflows, or embedded devices. Background in Python, PyTorch/TensorFlow, and familiarity with MLOps tools like Weights & Biases, MLflow, and LangChain. Strategic mindset to balance model performance vs. cost vs. 
explainability. Preferred Qualifications Prior consulting experience with AI/ML deployments in pharma, finance, or regulated sectors. Familiarity with privacy-preserving AI, federated learning, or differential privacy. Contributions to open-source LLM/SLM projects. What We Offer Leadership in shaping the future of lightweight AI. Exposure to cutting-edge GenAI applications across industries. Competitive compensation and equity options (for permanent roles).
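The performance vs. cost tradeoff behind choosing an SLM over a large hosted LLM is often anchored in simple per-token arithmetic. A hedged sketch with entirely made-up prices (real per-token rates vary by model, provider, and deployment):

```python
def monthly_token_cost(requests_per_day: int,
                       tokens_in: int, tokens_out: int,
                       price_in_per_1k: float,
                       price_out_per_1k: float) -> float:
    """Estimated monthly spend (30 days), given per-1k-token prices."""
    per_request = (tokens_in / 1000 * price_in_per_1k
                   + tokens_out / 1000 * price_out_per_1k)
    return round(requests_per_day * per_request * 30, 2)

# Hypothetical comparison: a hosted LLM vs. a much cheaper small model,
# both serving 10k requests/day at 800 input + 200 output tokens each.
llm = monthly_token_cost(10_000, 800, 200, 0.01, 0.03)
slm = monthly_token_cost(10_000, 800, 200, 0.001, 0.002)
print(llm, slm)
```

Latency and privacy push the same direction as cost here, which is why the role above emphasizes on-device and private-cloud SLM deployments.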

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


What You’ll Do Handle data: pull, clean, and shape structured & unstructured data. Manage pipelines: Airflow / Step Functions / ADF… your call. Deploy models: build, tune, and push to production on SageMaker, Azure ML, or Vertex AI. Scale: Spark / Databricks for the heavy lifting. Automate processes: Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow. Collaborate effectively: work with engineers, architects, and business professionals to solve real problems promptly. What You Bring 3+ years hands-on MLOps (4-5 yrs total software experience). Proven experience with one hyperscaler (AWS, Azure, or GCP). Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn. Extensive experience handling and troubleshooting Kubernetes and proficiency in Dockerfile management. Prototyping with open-source tools, selecting the appropriate solution, and ensuring scalability. Analytical thinker, team player, with a proactive attitude. Nice-to-Haves SageMaker, Azure ML, or Vertex AI in production. Dedication to clean code, thorough documentation, and precise pull requests. Skills: mlflow,ml ops,scikit-learn,airflow,mlops,sql,pytorch,adf,step functions,kubernetes,gcp,kubeflow,python,databricks,tensorflow,aws,azure,docker,seldon,spark
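MLflow, listed above, records parameters and metrics per training run so runs can be compared later (in the real API via `mlflow.log_param` and `mlflow.log_metric`). Below is a dependency-free sketch of the underlying idea only; it is *not* the MLflow API:

```python
import json
import time
import uuid

class RunTracker:
    """Toy stand-in for an experiment tracker: one record per run."""
    def __init__(self):
        self.runs = []

    def log_run(self, params: dict, metrics: dict) -> str:
        run_id = uuid.uuid4().hex[:8]
        self.runs.append({"run_id": run_id, "time": time.time(),
                          "params": params, "metrics": metrics})
        return run_id

    def best(self, metric: str) -> dict:
        # Return the run with the highest value of the given metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = RunTracker()
tracker.log_run({"lr": 0.1}, {"auc": 0.81})
tracker.log_run({"lr": 0.01}, {"auc": 0.86})
print(json.dumps(tracker.best("auc")["params"]))  # {"lr": 0.01}
```

Real trackers persist runs to a server, attach artifacts (model files, plots), and version the model that gets promoted to production.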

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Key Accountabilities JOB DESCRIPTION Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to define ML problems and objectives. Research, design, and implement machine learning algorithms and models (e.g., supervised, unsupervised, deep learning, reinforcement learning). Analyse and preprocess large-scale datasets for training and evaluation. Train, test, and optimize ML models for accuracy, scalability, and performance. Deploy ML models in production using cloud platforms and/or MLOps best practices. Monitor and evaluate model performance over time, ensuring reliability and robustness. Document findings, methodologies, and results to share insights with stakeholders. Qualifications, Experience And Skills Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (graduation within the last 12 months or upcoming). Proficiency in Python or a similar language, with experience in frameworks like TensorFlow, PyTorch, or Scikit-learn. Strong foundation in linear algebra, probability, statistics, and optimization techniques. Familiarity with machine learning algorithms (e.g., decision trees, SVMs, neural networks) and concepts like feature engineering, overfitting, and regularization. Hands-on experience working with structured and unstructured data using tools like Pandas, SQL, or Spark. Ability to think critically and apply your knowledge to solve complex ML problems. Strong communication and collaboration skills to work effectively in diverse teams. Additional Skills (Good To Have) Experience with cloud platforms (e.g., AWS, Azure, GCP) and MLOps tools (e.g., MLflow, Kubeflow). Knowledge of distributed computing or big data technologies (e.g., Hadoop, Apache Spark). Previous internships, academic research, or projects showcasing your ML skills. Familiarity with deployment frameworks like Docker and Kubernetes.
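Concepts named above, such as overfitting and regularization, can be made concrete: L2 regularization adds a penalty proportional to the squared weight, which shrinks the fitted weight during gradient descent. A small sketch on a 1-D least-squares problem (the data points and the regularization strength are arbitrary illustrations):

```python
# Fit y ≈ w*x by gradient descent on MSE + lam * w^2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

def fit(lam: float, steps: int = 2000, lr: float = 0.01) -> float:
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # d/dw [ (1/n) * sum((w*x - y)^2) + lam * w^2 ]
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        grad += 2 * lam * w  # the L2 penalty term
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)
w_reg = fit(lam=5.0)
print(w_plain, w_reg)  # the regularized weight is pulled toward zero
```

The same shrinkage effect, applied across many weights, is what makes L2 regularization a standard defense against overfitting.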

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site

Job Req ID: 47375 Location: Hyderabad, IN Function: Technology/ IOT/Cloud About: Role Overview: We are seeking a highly skilled and motivated Senior Data Scientist with deep expertise in Generative AI, Machine Learning, Deep Learning, and advanced Data Analytics. The ideal candidate will have hands-on experience in building, deploying, and maintaining end-to-end ML solutions at scale, preferably within the Telecom domain. You will be part of our AI & Data Science team, working on high-impact projects ranging from customer analytics, network intelligence, and churn prediction, to generative AI applications in telco automation and customer experience. Key Responsibilities: Design, develop, and deploy advanced machine learning and deep learning models for Telco use cases such as: Network optimization Customer churn prediction Usage pattern modeling Fraud detection GenAI applications (e.g., personalized recommendations, customer service automation) Lead the design and implementation of Generative AI solutions (LLMs, transformers, text-to-text/image models) using tools like OpenAI, Hugging Face, LangChain, etc. Collaborate with cross-functional teams including network, marketing, IT, and business to define AI-driven solutions. Perform exploratory data analysis, feature engineering, model selection, and evaluation using real-world telecom datasets (structured and unstructured). Drive end-to-end ML solution deployment into production (CI/CD pipelines, model monitoring, scalability). Optimize model performance and latency in production, especially for real-time and edge applications. Evaluate and integrate new tools, platforms, and AI frameworks to advance Vi’s data science capabilities. Provide technical mentorship to junior data scientists and data engineers. Required Qualifications & Skills: 8+ years of industry experience in Machine Learning, Deep Learning, and Advanced Analytics. 
Strong hands-on experience with GenAI models and frameworks (e.g., GPT, BERT, Llama, LangChain, RAG pipelines). Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers, etc. Experience in end-to-end model lifecycle management, from data preprocessing to production deployment (MLOps). Familiarity with cloud platforms like AWS, GCP, or Azure, and ML deployment tools (Docker, Kubernetes, MLflow, FastAPI, etc.). Strong understanding of SQL, big data tools (Spark, Hive), and data pipelines. Excellent problem-solving skills with a strong analytical mindset and business acumen. Prior experience working on Telecom datasets or use cases is a strong plus. Preferred Skills: Experience with vector databases, embeddings, and retrieval-augmented generation (RAG) pipelines. Exposure to real-time ML inference and streaming data platforms (Kafka, Flink). Knowledge of network analytics, geo-spatial modeling, or customer behavior modeling in a Telco environment. Experience mentoring teams or leading small AI/ML projects.
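The "vector databases, embeddings, and RAG pipelines" skill above rests on one core operation: ranking stored vectors by cosine similarity to a query vector. A dependency-free sketch (the 3-dimensional vectors and document IDs are toy placeholders for real embedding-model output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": document id -> embedding.
store = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 0.8, 0.6],
}
query = [1.0, 0.0, 0.1]
best_id = max(store, key=lambda k: cosine(store[k], query))
print(best_id)  # doc_a
```

Real vector databases add approximate-nearest-neighbor indexes so the same ranking stays fast over millions of high-dimensional embeddings.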

Posted 1 week ago

Apply

8.0 years

2 - 6 Lacs

Hyderābād

On-site

Job Req ID: 47376 Location: Hyderabad, IN Function: Technology/ IOT/Cloud About: Role Overview: We are seeking a highly skilled and motivated Senior Data Scientist with deep expertise in Generative AI, Machine Learning, Deep Learning, and advanced Data Analytics. The ideal candidate will have hands-on experience in building, deploying, and maintaining end-to-end ML solutions at scale, preferably within the Telecom domain. You will be part of our AI & Data Science team, working on high-impact projects ranging from customer analytics, network intelligence, and churn prediction, to generative AI applications in telco automation and customer experience. Key Responsibilities: Design, develop, and deploy advanced machine learning and deep learning models for Telco use cases such as: Network optimization Customer churn prediction Usage pattern modeling Fraud detection GenAI applications (e.g., personalized recommendations, customer service automation) Lead the design and implementation of Generative AI solutions (LLMs, transformers, text-to-text/image models) using tools like OpenAI, Hugging Face, LangChain, etc. Collaborate with cross-functional teams including network, marketing, IT, and business to define AI-driven solutions. Perform exploratory data analysis, feature engineering, model selection, and evaluation using real-world telecom datasets (structured and unstructured). Drive end-to-end ML solution deployment into production (CI/CD pipelines, model monitoring, scalability). Optimize model performance and latency in production, especially for real-time and edge applications. Evaluate and integrate new tools, platforms, and AI frameworks to advance Vi’s data science capabilities. Provide technical mentorship to junior data scientists and data engineers. Required Qualifications & Skills: 8+ years of industry experience in Machine Learning, Deep Learning, and Advanced Analytics. 
Strong hands-on experience with GenAI models and frameworks (e.g., GPT, BERT, Llama, LangChain, RAG pipelines). Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers, etc. Experience in end-to-end model lifecycle management, from data preprocessing to production deployment (MLOps). Familiarity with cloud platforms like AWS, GCP, or Azure, and ML deployment tools (Docker, Kubernetes, MLflow, FastAPI, etc.). Strong understanding of SQL, big data tools (Spark, Hive), and data pipelines. Excellent problem-solving skills with a strong analytical mindset and business acumen. Prior experience working on Telecom datasets or use cases is a strong plus. Preferred Skills: Experience with vector databases, embeddings, and retrieval-augmented generation (RAG) pipelines. Exposure to real-time ML inference and streaming data platforms (Kafka, Flink). Knowledge of network analytics, geo-spatial modeling, or customer behavior modeling in a Telco environment. Experience mentoring teams or leading small AI/ML projects.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site

Job Req ID: 47377 Location: Hyderabad, IN Function: Technology/ IOT/Cloud About: Role Overview: We are looking for a hands-on Data Engineer with 8+ years of experience to build, manage, and scale data pipelines, deploy ML solutions, and enable advanced data visualizations and dashboards for business consumption. The ideal candidate will have a strong engineering mindset, deep understanding of data infrastructure, and prior experience working on self-managed or private cloud (VM-based) deployments. Candidates from premier institutes (IITs, NITs, or equivalent Tier-1/2 schools) are strongly preferred. Key Responsibilities: Design and build robust, scalable, and secure data pipelines (batch and real-time) to support AI/ML workloads and BI dashboards. Collaborate with data scientists to operationalize ML models, including containerization (Docker), CI/CD pipelines, model serving (FastAPI/Flask), and monitoring. Develop and maintain interactive dashboards using tools such as Plotly Dash, Power BI, or Streamlit to visualize key insights for business stakeholders. Manage deployments and orchestration on Vi’s local private cloud infrastructure (VM-based setups). Work closely with analytics, business, and DevOps teams to ensure reliable data availability and system health. Optimize ETL/ELT workflows for performance and scale across large telecom datasets. Implement data quality checks, governance, and logging/monitoring solutions for all production workloads. Required Qualifications & Skills: 8+ years of experience in data engineering, platform development, and/or ML deployment. Preferred: B.Tech/M.Tech from Tier-1 or Tier-2 institutes (IITs, NITs, IIITs, BITS, etc.). Strong proficiency in Python, SQL, and data pipeline frameworks (Airflow, Luigi, or similar). Solid experience with containerization (Docker), scripting, and deploying production-grade ML or analytics services. 
- Hands-on experience with dashboarding and visualization tools such as Power BI, Tableau, or Streamlit; custom front-end dashboards are a nice to have.
- Experience working on self-managed VMs, bare-metal servers, or local private clouds (not just public cloud services).
- Familiarity with ML deployment architectures, REST APIs, and performance tuning.

Preferred Skills:
- Experience with Kafka, Spark, or distributed processing systems.
- Exposure to MLOps tools (MLflow, DVC, Kubeflow).
- Understanding of telecom data and analytics use cases.
- Ability to lead and mentor junior engineers or analysts.
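The listing above asks for data quality checks with logging on production pipelines. As a rough illustration of that responsibility (the function name, field names, and 5% null threshold below are hypothetical, not from the listing), a minimal batch check in Python might look like this:

```python
import logging
import math

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq")

def check_batch(rows, required_fields, max_null_ratio=0.05):
    """Run simple data-quality checks on a batch of records (list of dicts).

    Returns per-field null ratios and a pass/fail flag. The threshold is
    illustrative; real pipelines tune it per field.
    """
    counts = {f: 0 for f in required_fields}
    for row in rows:
        for f in required_fields:
            v = row.get(f)
            # Treat missing keys, None, and NaN floats as nulls.
            if v is None or (isinstance(v, float) and math.isnan(v)):
                counts[f] += 1
    n = max(len(rows), 1)
    ratios = {f: c / n for f, c in counts.items()}
    passed = all(r <= max_null_ratio for r in ratios.values())
    for f, r in ratios.items():
        log.info("null ratio for %s: %.3f", f, r)
    return {"null_ratios": ratios, "passed": passed}
```

In a real pipeline a check like this would run as a task ahead of model scoring, failing the run or raising an alert whenever `passed` is False.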

Posted 1 week ago

Apply

5.0 years

8 - 10 Lacs

Thiruvananthapuram

Remote

5 - 7 Years | 1 Opening | Kochi, Trivandrum

Role description

Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems
Location: Any UST
Job Type: Full-Time
Experience Level: Senior / Lead

Role Overview: We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques, such as Retrieval-Augmented Generation (RAG) and LLM-based applications, into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred.

Key Responsibilities:

Model Development & Deployment
- Design, implement, and manage scalable ML pipelines using CI/CD practices.
- Operationalize ML and GenAI models, ensuring high availability, observability, and reliability.
- Automate data and model validation, versioning, and monitoring processes.

Technical Leadership & Mentorship
- Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices.
- Define architecture standards and promote engineering excellence across ML-Ops workflows.

Innovation & Generative AI Strategy
- Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications.
- Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks.

Governance & Compliance
- Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability.
- Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions.

Must-Have Skills:
- 5+ years of experience in ML-Ops, Data Engineering, or Machine Learning.
- Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow).
- Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools.
- Experience with LLM frameworks (e.g., LangChain, Hugging Face Transformers) and GenAI use cases like RAG.
- Strong understanding of responsible AI and MLOps governance best practices.
- Proven ability to work cross-functionally and lead technical discussions.

Good-to-Have Skills:
- Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks.
- Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration.
- Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks).
- Experience in mentoring, technical leadership, or project ownership roles.

Why Join Us?
- Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation.
- Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning.
- Competitive compensation, remote flexibility, and opportunities for career advancement.

Skills: Artificial Intelligence, Python, ML-Ops

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
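Drift detection, mentioned above under production ML evaluation metrics, is often done with the Population Stability Index (PSI) between a baseline feature distribution and the live one. A minimal, library-free sketch follows; the equal-width binning and the "PSI > 0.2 means significant drift" cutoff are common rules of thumb, not anything prescribed by the listing:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Both inputs are non-empty lists of floats; bins are equal-width over
    the combined range. A small epsilon avoids log(0). PSI > 0.2 is a
    widely used rule-of-thumb threshold for significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def dist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        eps = 1e-6
        return [c / len(xs) + eps for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would compute this per feature on each scoring batch and alert when the index crosses the chosen threshold.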

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 12 Lacs

Coimbatore

On-site

Job Type: Full-time, Permanent
Job mode: On-Site/Hybrid
Joining: Open to immediate joiners and candidates available within 1-2 weeks.

Sense7AI Data Solutions is seeking a highly skilled and forward-thinking AI/ML Engineer to join our dynamic team. You will play a critical role in designing, developing, and deploying state-of-the-art AI solutions using both classical machine learning and cutting-edge generative AI technologies. The ideal candidate is not only technically proficient but also deeply familiar with modern AI tools, frameworks, and prompt engineering strategies.

Key Responsibilities
- Design, build, and deploy end-to-end AI/ML solutions tailored to real-world business challenges.
- Leverage the latest advancements in Generative AI, LLMs (e.g., GPT, Claude, LLaMA), and multimodal models for intelligent applications.
- Develop, fine-tune, and evaluate custom language models using transfer learning and prompt engineering.
- Work with traditional ML models and deep learning architectures (CNNs, RNNs, Transformers) for diverse applications such as NLP, computer vision, and time-series forecasting.
- Create and maintain scalable ML pipelines using MLOps best practices.
- Collaborate with cross-functional teams (data engineers, product managers, business analysts) to understand domain needs and translate them into AI solutions.
- Stay current on the evolving AI landscape, including open-source tools, academic research, cloud-native AI services, and responsible AI practices.
- Ensure AI model transparency, fairness, bias mitigation, and compliance with data governance standards.

Required Skills & Qualifications
- Education: Any degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Experience: 3-5 years of hands-on experience in AI/ML solution development, model deployment, and experimentation.
- Technical Proficiency:
  - Programming: Python (strong); familiarity with Bash/CLI and Git.
  - Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, Scikit-learn.
  - GenAI & LLM Tools: LangChain, OpenAI APIs, Anthropic, Vertex AI, PromptLayer, Weights & Biases.
  - Prompt Engineering: Experience crafting, testing, and optimizing prompts for LLMs across multiple platforms.
  - Cloud & MLOps: AWS/GCP/Azure (SageMaker, Vertex AI, Azure ML), Docker, Kubernetes, MLflow.
  - Data: SQL, NoSQL, BigQuery, Spark, Hadoop; data wrangling, cleansing, and feature engineering.
- Strong grasp of model evaluation techniques, fine-tuning strategies, and A/B testing.

Preferred Qualifications
- Experience with AutoML, reinforcement learning, vector databases (e.g., Milvus, FAISS), or RAG (Retrieval-Augmented Generation).
- Familiarity with deploying LLMs and GenAI systems in production environments.
- Hands-on experience with open-source LLMs and fine-tuning (e.g., LLaMA, Mistral, Falcon, Open LLaMA).
- Understanding of AI compliance, data privacy, ethical AI, and explainability (XAI).
- Strong problem-solving skills and the ability to work in fast-paced, evolving tech landscapes.
- Excellent written and verbal communication in English.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Benefits:
- Flexible schedule
- Health insurance
- Paid time off
- Provident Fund
Experience:
- Machine learning: 3 years (Preferred)
- AI: 3 years (Preferred)
Work Location: In person
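"Crafting, testing, and optimizing prompts," as the listing above puts it, usually means keeping prompts as versioned templates with programmatic sanity checks rather than ad-hoc strings scattered through the code. A minimal library-free sketch of that practice; the template wording and variable names are invented purely for illustration:

```python
from string import Template

# A versioned prompt template; the wording here is purely illustrative.
SUMMARY_PROMPT_V2 = Template(
    "You are a support assistant for $product.\n"
    "Summarize the customer message below in at most $max_words words.\n"
    "Message: $message"
)

def render_prompt(product, message, max_words=50):
    """Fill the template and run cheap sanity checks before an LLM call."""
    prompt = SUMMARY_PROMPT_V2.substitute(
        product=product, message=message, max_words=max_words
    )
    # Cheap guards that catch broken renders before they reach the API
    # (the "$" check assumes inputs themselves contain no dollar signs).
    assert "$" not in prompt, "unsubstituted variable left in prompt"
    assert message in prompt
    return prompt
```

Versioning the template (here, the `_V2` suffix) lets prompt changes be A/B tested and rolled back like any other code change.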

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Agnito Technologies is Hiring: Machine Learning Ops Engineer (MLOps)

Location: Bhopal (Work From Office)
Vacancy: 1
Experience: 5+ years managing ML pipelines in production
Package: No bar for the right candidate

Key Responsibilities:
- Design, implement, and manage end-to-end machine learning pipelines in production environments
- Automate model deployment workflows using CI/CD tools
- Containerize ML models and services with Docker and orchestrate them using Kubernetes
- Work with platforms like MLflow, Kubeflow, and Seldon for experiment tracking and model management
- Deploy and monitor models in AWS SageMaker or similar cloud environments
- Collaborate with data scientists, DevOps, and software engineers to ensure smooth production rollouts

Key Skills:
- MLflow, Kubeflow, Seldon
- Docker, Kubernetes
- CI/CD for ML models
- AWS SageMaker or equivalent cloud ML platforms
- Strong understanding of MLOps principles and real-time production deployment

Eligibility:
- 5+ years of hands-on experience in managing ML pipelines and deployments
- Proven experience in MLOps tools and practices in production environments

Job Type: Full-time
Schedule: Day shift
Work Location: In person

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Overview

We are seeking a highly skilled and motivated Senior Data Scientist with deep expertise in Generative AI, Machine Learning, Deep Learning, and advanced Data Analytics. The ideal candidate will have hands-on experience in building, deploying, and maintaining end-to-end ML solutions at scale, preferably within the Telecom domain. You will be part of our AI & Data Science team, working on high-impact projects ranging from customer analytics, network intelligence, and churn prediction to generative AI applications in telco automation and customer experience.

Key Responsibilities
- Design, develop, and deploy advanced machine learning and deep learning models for Telco use cases such as:
  - Network optimization
  - Customer churn prediction
  - Usage pattern modeling
  - Fraud detection
  - GenAI applications (e.g., personalized recommendations, customer service automation)
- Lead the design and implementation of Generative AI solutions (LLMs, transformers, text-to-text/image models) using tools like OpenAI, Hugging Face, LangChain, etc.
- Collaborate with cross-functional teams including network, marketing, IT, and business to define AI-driven solutions.
- Perform exploratory data analysis, feature engineering, model selection, and evaluation using real-world telecom datasets (structured and unstructured).
- Drive end-to-end ML solution deployment into production (CI/CD pipelines, model monitoring, scalability).
- Optimize model performance and latency in production, especially for real-time and edge applications.
- Evaluate and integrate new tools, platforms, and AI frameworks to advance Vi’s data science capabilities.
- Provide technical mentorship to junior data scientists and data engineers.

Required Qualifications & Skills
- 8+ years of industry experience in Machine Learning, Deep Learning, and Advanced Analytics.
- Strong hands-on experience with GenAI models and frameworks (e.g., GPT, BERT, Llama, LangChain, RAG pipelines).
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers, etc.
- Experience in end-to-end model lifecycle management, from data preprocessing to production deployment (MLOps).
- Familiarity with cloud platforms like AWS, GCP, or Azure, and ML deployment tools (Docker, Kubernetes, MLflow, FastAPI, etc.).
- Strong understanding of SQL, big data tools (Spark, Hive), and data pipelines.
- Excellent problem-solving skills with a strong analytical mindset and business acumen.
- Prior experience working on Telecom datasets or use cases is a strong plus.

Preferred Skills
- Experience with vector databases, embeddings, and retrieval-augmented generation (RAG) pipelines.
- Exposure to real-time ML inference and streaming data platforms (Kafka, Flink).
- Knowledge of network analytics, geo-spatial modeling, or customer behavior modeling in a Telco environment.
- Experience mentoring teams or leading small AI/ML projects.

Posted 1 week ago

Apply

Exploring mlflow Jobs in India

The mlflow job market in India is rapidly growing as companies across various industries are increasingly adopting machine learning and data science technologies. mlflow, an open-source platform for the machine learning lifecycle, is in high demand in the Indian job market. Job seekers with expertise in mlflow have a plethora of opportunities to explore and build a rewarding career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for mlflow professionals.

Average Salary Range

The average salary range for mlflow professionals in India varies based on experience:

- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

Career Path

A typical career path in mlflow may include roles such as:

1. Junior Machine Learning Engineer
2. Machine Learning Engineer
3. Senior Machine Learning Engineer
4. Tech Lead
5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.

Related Skills

In addition to mlflow, professionals in this field are often expected to have skills in:

- Python programming
- Data visualization
- Statistical modeling
- Deep learning frameworks (e.g., TensorFlow, PyTorch)
- Cloud computing platforms (e.g., AWS, Azure)

Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.

Interview Questions

  • What is mlflow and how does it help in the machine learning lifecycle? (basic)
  • Explain the difference between tracking, projects, and models in mlflow. (medium)
  • How do you deploy a machine learning model using mlflow? (medium)
  • Can you explain the concept of model registry in mlflow? (advanced)
  • What are the benefits of using mlflow in a machine learning project? (basic)
  • How do you manage experiments in mlflow? (medium)
  • What are some common challenges faced when using mlflow in a production environment? (advanced)
  • How can you scale mlflow for large-scale machine learning projects? (advanced)
  • Explain the concept of artifact storage in mlflow. (medium)
  • How do you compare different machine learning models using mlflow? (medium)
  • Describe a project where you successfully used mlflow to streamline the machine learning process. (advanced)
  • What are some best practices for versioning machine learning models in mlflow? (advanced)
  • How does mlflow support hyperparameter tuning in machine learning models? (medium)
  • Can you explain the role of mlflow tracking server in a machine learning project? (medium)
  • What are some limitations of mlflow that you have encountered in your projects? (advanced)
  • How do you ensure reproducibility in machine learning experiments using mlflow? (medium)
  • Describe a situation where you had to troubleshoot an issue with mlflow and how you resolved it. (advanced)
  • How do you manage dependencies in an mlflow project? (medium)
  • What are some key metrics to track when using mlflow for machine learning experiments? (medium)
  • Explain the concept of model serving in the context of mlflow. (advanced)
  • How do you handle data drift in machine learning models deployed using mlflow? (advanced)
  • What are some security considerations to keep in mind when using mlflow in a production environment? (advanced)
  • How do you integrate mlflow with other tools in the machine learning ecosystem? (medium)
  • Describe a situation where you had to optimize a machine learning model using mlflow. (advanced)
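Several of the questions above (tracking, experiments, reproducibility, comparing models) come down to one idea: every training run records its parameters, metrics, and artifacts so that runs can be compared and reproduced later. The toy below mimics that idea with one JSON file per run. It is a conceptual sketch only, not the mlflow API; mlflow's actual tracking interface centers on `mlflow.start_run()`, `mlflow.log_param()`, and `mlflow.log_metric()`.

```python
import json
import time
import uuid
from pathlib import Path

class ToyTracker:
    """A toy stand-in for an experiment tracker: one JSON record per run."""

    def __init__(self, root="toy_runs"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def log_run(self, params, metrics):
        """Persist a run's parameters and metrics; return its run id."""
        run_id = uuid.uuid4().hex
        record = {"run_id": run_id, "time": time.time(),
                  "params": params, "metrics": metrics}
        (self.root / f"{run_id}.json").write_text(json.dumps(record))
        return run_id

    def best_run(self, metric, maximize=True):
        """Load all recorded runs and return the one with the best metric."""
        runs = [json.loads(p.read_text()) for p in self.root.glob("*.json")]
        pick = max if maximize else min
        return pick(runs, key=lambda r: r["metrics"][metric])
```

For example, after logging two runs with different learning rates and their AUC scores, `best_run("auc")` returns the winning run along with the parameters that produced it, which is the comparison workflow the interview questions about experiments are probing for.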

Closing Remark

As you explore opportunities in the mlflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies