14 ML Engineer Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

We are seeking an experienced Machine Learning Engineer / Databricks Architect to join our team and contribute to the development of state-of-the-art, scalable ML and big data solutions. The ideal candidate should have over 8 years of experience in designing ML infrastructure and big data systems, with a strong emphasis on Databricks. Databricks Architect Certification is mandatory for this role, along with at least 4 years of practical experience working with Databricks. The successful candidate should be well-versed in orchestration and ML workflow tools such as Airflow, Kubeflow, Dagster, and Optuna, and should have a solid understanding of CI/CD and DevOps tools like Jenkins, Terraform, and CloudFormation. Proficiency in distributed computing technologies including Apache Spark, EMR/Dataproc, and Glue is essential. Hands-on experience with containerization and orchestration tools like Docker and Kubernetes will be highly valued. If you are enthusiastic about building scalable ML platforms and have a knack for working with modern cloud-native technologies, we are eager to learn more about you! Feel free to reach out by sending a direct message or submitting your resume to ankita.gupta@mbww.com. This position is based in Pune and offers a hybrid working arrangement. Thank you for considering this exciting opportunity!
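To make the Spark/Databricks side of the role concrete, here is a minimal, hedged sketch of a Spark ML training pipeline of the kind such an engineer might maintain. The table and column names are hypothetical placeholders, not details from this posting.

# Illustrative only: a small Spark ML pipeline; the Delta table and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-training").getOrCreate()

df = spark.read.table("ml_demo.customer_features")   # hypothetical feature table

indexer = StringIndexer(inputCol="plan_type", outputCol="plan_idx")
assembler = VectorAssembler(inputCols=["tenure_months", "monthly_spend", "plan_idx"],
                            outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[indexer, assembler, lr])
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
predictions = model.transform(test)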

Posted 18 hours ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are looking for a highly experienced Voice AI/ML Engineer to take the lead in designing and deploying real-time voice intelligence systems. The position covers ASR, TTS, speaker diarization, wake word detection, and production-grade modular audio processing pipelines that support next-generation contact center solutions, intelligent voice agents, and high-quality audio systems. You will operate at the convergence of deep learning, streaming infrastructure, and speech/NLP technology, with a focus on scalable, low-latency systems that handle diverse audio formats and real-world applications.

Your responsibilities will include:
- Building, fine-tuning, and deploying ASR models such as Whisper, wav2vec 2.0, and Conformer for real-time transcription (see the sketch after this listing).
- Developing high-quality TTS systems using VITS, Tacotron, and FastSpeech for natural-sounding voice generation.
- Implementing speaker diarization to segment and identify speakers in multi-party conversations using embeddings and clustering techniques.
- Designing wake word detection models with ultra-low latency and high accuracy, even in noisy conditions.

In addition, you will be involved in:
- Architecting bi-directional real-time audio streaming pipelines using WebSocket, gRPC, Twilio Media Streams, or WebRTC.
- Integrating voice AI models into live voice agent solutions, IVR automation, and AI contact center platforms.
- Building scalable microservices for audio processing, encoding, and streaming across various codecs and containers.
- Leveraging deep learning and NLP techniques for speech and language tasks.

You will also be responsible for:
- Developing reusable modules for different voice tasks and system components.
- Designing APIs and interfaces for orchestrating voice tasks across multi-stage pipelines.
- Writing efficient Python code, optimizing models for real-time inference, and deploying them on cloud platforms.

Join us to be part of impactful work, tremendous growth opportunities, and an innovative environment at Tanla, where diversity is championed and inclusivity is valued.
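As a point of reference for the ASR work described above, here is a minimal offline transcription sketch using the open-source openai-whisper package (pip install openai-whisper). The audio file path is a hypothetical placeholder; a production system would run a streaming pipeline rather than this batch call.

# Illustrative sketch only: batch speech-to-text with Whisper.
import whisper

model = whisper.load_model("base")                 # small multilingual checkpoint
result = model.transcribe("call_recording.wav")    # hypothetical audio file
print(result["text"])                              # full transcript
for segment in result["segments"]:                 # per-segment timing for diarization/alignment
    print(segment["start"], segment["end"], segment["text"])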

Posted 6 days ago

Apply

6.0 - 11.0 years

30 - 45 Lacs

Valsad

Remote

Job Timing:
Monday-Friday: 3:00 PM to 12:00 AM (8:00 PM to 9:00 PM dinner break)
Saturday: 9:30 AM to 2:30 PM (1:00 PM to 1:30 PM lunch break)

Job Description: As a Data Scientist specializing in AI and Machine Learning, you will play a key role in developing and deploying state-of-the-art machine learning models. You will work closely with cross-functional teams to create solutions leveraging AI technologies, including OpenAI models, Google Gemini, Copilot, and other cutting-edge AI tools.

Key Responsibilities:
- Design, develop, and implement advanced AI and machine learning models, focusing on generative AI and NLP technologies.
- Work with large datasets, applying statistical and machine learning techniques to extract insights and develop predictive models.
- Collaborate with engineering teams to integrate models into production systems.
- Apply best practices for model training, tuning, evaluation, and optimization.
- Develop and maintain pipelines for data ingestion, feature engineering, and model deployment.
- Leverage tools like OpenAI's GPT models, Google Gemini, Microsoft Copilot, and other available platforms for AI-driven solutions.
- Build and experiment with large language models, recommendation systems, computer vision models, and reinforcement learning systems.
- Continuously stay up to date with the latest AI/ML technologies and research trends.

Qualifications:

Required:
- Proven experience as a Data Scientist, Machine Learning Engineer, or similar role.
- Strong expertise in building and deploying machine learning models across various use cases.
- In-depth experience with AI frameworks and tools such as OpenAI (e.g., GPT models), Google Gemini, Microsoft Copilot, and others.
- Proficiency in machine learning techniques, including supervised/unsupervised learning, reinforcement learning, and deep learning.
- Expertise in model training, fine-tuning, and hyperparameter optimization.
- Strong programming skills in Python, R, or similar languages.
- Solid understanding of model evaluation metrics and performance tuning.
- Familiarity with cloud platforms (AWS, Azure, Google Cloud) and ML frameworks such as TensorFlow, PyTorch, and Keras.
- Experience with MLOps tools such as MLflow, Kubeflow, and DataRobot.
- Strong experience with data wrangling, feature engineering, and preprocessing techniques.
- Excellent problem-solving skills and the ability to communicate complex ideas to non-technical stakeholders.

Preferred:
- PhD or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience with large-scale data processing frameworks (Hadoop, Spark, Databricks).
- Expertise in Natural Language Processing (NLP) techniques and frameworks like Hugging Face, BERT, T5, etc. (see the sketch after this listing).
- Familiarity with deploying AI solutions on cloud services, including AWS SageMaker, Azure ML, or Google AI Platform.
- Experience with distributed machine learning techniques, multi-GPU setups, and optimizing large-scale models.
- Knowledge of reinforcement learning (RL) algorithms and practical application experience.
- Familiarity with AI interpretability tools such as SHAP, LIME, and Fairness Indicators.
- Proficiency in collaboration tools such as Jupyter Notebooks, Git, and Docker for version control and deployment.

Additional Tools & Technologies (Preferred Experience):
- Natural Language Processing (NLP): OpenAI GPT, BERT, T5, spaCy, NLTK, Hugging Face
- Machine Learning Frameworks: TensorFlow, PyTorch, Keras, Scikit-Learn
- Big Data Processing: Hadoop, Spark, Databricks, Dask
- Cloud Platforms: AWS SageMaker, Google AI Platform, Microsoft Azure ML, IBM Watson
- Automation & Deployment: Docker, Kubernetes, Terraform, Jenkins, CircleCI, GitLab CI/CD
- Visualization & Analysis: Tableau, Power BI, Plotly, Matplotlib, Seaborn, NumPy, Pandas
- Databases: RDBMS, NoSQL
- Version Control: Git, GitHub, GitLab

Why Join Us:
- Innovative Projects: Work on groundbreaking AI solutions and cutting-edge technology.
- Collaborative Team: Join a passionate, highly skilled, and collaborative team that values creativity and new ideas.
- Growth Opportunities: Develop your career in an expanding AI-focused company with continuous learning opportunities.
- Competitive Compensation: We offer a competitive salary and benefits package.
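For orientation on the NLP tooling listed above, here is a hedged sketch using the Hugging Face transformers pipeline API for quick baselines. The model names are public checkpoints chosen for illustration, not anything specified by this employer.

# Illustrative sketch only: quick NLP baselines with Hugging Face pipelines.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default fine-tuned BERT-style model
print(classifier("The onboarding flow was fast and painless."))
# expected shape: [{'label': 'POSITIVE', 'score': ...}]

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
print(summarizer("Long policy document text goes here ...", max_length=60, min_length=20))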

Posted 1 week ago

Apply

5.0 - 8.0 years

17 - 25 Lacs

Bengaluru

Work from Office

We are hiring for the position of Machine Learning Engineer; the role is based out of Bangalore. Shift timings: general shift. Interested candidates can send their CV directly to Pratibha@myndsol.com

Responsibilities:

1. Machine Learning Development & Deployment
- Design and implement supervised and unsupervised models for predictive analytics, including churn prediction, demand forecasting, renewal risk scoring, and cross-sell/upsell opportunity identification.
- Translate business problems into ML frameworks and production solutions that improve efficiency, revenue, or customer experience.
- Build, optimize, and maintain ML pipelines using tools such as MLflow, Airflow, or Kubeflow (see the sketch after this listing).

2. Cross-Functional ML Use Cases
- Partner with teams across Sales (e.g., lead scoring, next-best action), Customer Service (e.g., case deflection, sentiment analysis), Finance (e.g., revenue forecasting, fraud detection), Supply Chain (e.g., inventory optimization, ETA prediction), and Order Fulfillment (e.g., delivery risk modeling) to define impactful ML use cases.
- Develop domain-specific models and continuously improve them using feedback loops and real-world performance data.

3. Model Governance and MLOps
- Ensure robust model monitoring, versioning, and retraining strategies to keep models reliable in dynamic environments.
- Work closely with DevOps and Data Engineering teams to automate deployment, CI/CD workflows, and cloud-native ML infrastructure (AWS/GCP/Azure).

4. Data Engineering and Feature Architecture
- Collaborate with data engineers to define feature stores, data quality checks, and model-ready datasets on platforms like Snowflake or Databricks.
- Perform feature selection, transformation, and engineering aligned with each domain's business logic.

5. Communication & Stakeholder Collaboration
- Present technical insights and model results to business and executive stakeholders in a clear, actionable format.
- Work with Product Owners and Program Managers to scope, prioritize, and plan delivery of ML projects.

Qualifications:

Required:
- 5+ years of experience in machine learning, data science, or AI engineering, with a strong software engineering foundation.
- Proficiency in Python and libraries such as scikit-learn, XGBoost, PyTorch, TensorFlow, or similar.
- Experience deploying models into production using ML pipelines and orchestration frameworks.
- Strong understanding of data structures, SQL, and cloud platforms (e.g., AWS SageMaker, Azure ML, or GCP Vertex AI).

Preferred:
- Experience supporting business functions such as Finance, Sales, or Operations with ML use cases.
- Familiarity with MLOps tools (MLflow, SageMaker Pipelines, Feature Store).
- Exposure to enterprise data platforms (e.g., Snowflake, Oracle Fusion, Salesforce).
- Background in statistics, forecasting, optimization, or recommendation systems.
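As a concrete illustration of the MLflow-based pipeline work mentioned above, here is a minimal experiment-tracking sketch for a churn model. It assumes a local MLflow tracking setup and a pandas DataFrame df with a binary "churned" column; both are assumptions for illustration only.

# Minimal sketch: track a churn-model run with MLflow (df is an assumed DataFrame).
import mlflow
import mlflow.sklearn
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["churned"]), df["churned"], test_size=0.2, random_state=42)

mlflow.set_experiment("churn-prediction")
with mlflow.start_run():
    model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for later deployment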

Posted 2 weeks ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Gurugram

Work from Office

About the Role
We are looking for a highly skilled and self-driven AI/LLM Engineer with a strong background in Artificial Intelligence and Machine Learning, specifically with hands-on experience in building LLM-based solutions from the ground up. This is a unique opportunity to lead the development of AI systems that power intelligent automation and personalization across our platform hosted on AWS. You will work on the full lifecycle of AI product development, from problem discovery, model selection, and data preparation through fine-tuning, evaluation, and deployment.

Key Responsibilities
- Design and develop AI/LLM-based solutions from scratch for fintech use cases such as underwriting, fraud detection, intelligent chatbots, and document processing.
- Fine-tune large language models (LLMs) for custom domain-specific tasks.
- Develop RAG (retrieval-augmented generation) systems using embeddings and vector databases (see the sketch after this listing).
- Build APIs to integrate AI/LLM capabilities into production-grade fintech applications.
- Optimize and deploy models in AWS cloud environments using services like SageMaker, Lambda, ECS, and ECR.
- Implement prompt engineering techniques to improve LLM output quality.
- Monitor model performance and continuously improve accuracy, latency, and robustness.
- Collaborate closely with product, data, and engineering teams to deliver business-impacting AI features.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or related fields.
- Minimum 3 years of experience in AI/ML, with proven hands-on experience building LLM-based solutions.
- Strong programming skills in Python and ML frameworks like PyTorch and TensorFlow.
- Experience with LLM platforms (OpenAI, Anthropic, Cohere, Hugging Face) and frameworks (LangChain, LlamaIndex).
- Expertise in prompt engineering, model tuning, tokenization, embeddings, and language model fine-tuning.
- Good understanding of vector databases (e.g., FAISS, Pinecone, Weaviate).
- Strong experience with AWS cloud services, especially SageMaker, Lambda, S3, ECS, ECR, and API Gateway.
- Ability to independently take a project from ideation to deployment.

Preferred Skills
- Experience with Docker, CI/CD, serverless architecture, and observability tools.
- Prior experience in the fintech domain or regulated environments.
- Knowledge of data privacy, AI governance, and model explainability.
- Familiarity with OCR, NLP pipelines, and generative AI use cases.
- Contributions to open-source AI projects or published research is a plus.
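To illustrate the retrieval half of the RAG systems mentioned above, here is a hedged sketch using sentence-transformers embeddings with a FAISS index. The embedding model and documents are placeholders chosen for illustration, not this employer's actual stack.

# Illustrative sketch: embed documents, index them, and retrieve context for an LLM prompt.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["KYC policy text ...", "Loan underwriting checklist ...", "Fraud escalation SOP ..."]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])          # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = encoder.encode(["What documents are needed for underwriting?"],
                           normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
context = "\n".join(docs[i] for i in ids[0])           # context to prepend to the LLM prompt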

Posted 2 weeks ago

Apply

7.0 - 10.0 years

25 - 37 Lacs

Chennai, Bengaluru

Work from Office

Role Overview:
Zolvit is looking for a highly skilled and self-driven Lead Machine Learning Engineer / Lead Data Scientist to lead the design and development of scalable, production-grade ML systems. This role is ideal for someone who thrives on solving complex problems using data, is deeply passionate about machine learning, and has a strong understanding of both classical techniques and modern AI systems like Large Language Models (LLMs). You will work closely with engineering, product, and business teams to identify impactful ML use cases, build data pipelines, design training workflows, and ensure the deployment of robust, high-performance models at scale.

Key Responsibilities:
- Design and implement scalable ML systems, from experimentation to deployment.
- Build and maintain end-to-end data pipelines for data ingestion, preprocessing, feature engineering, and monitoring.
- Lead the development and deployment of ML models across a variety of use cases, including classical ML and LLM-based applications such as summarization, classification, and document understanding.
- Define model training and evaluation pipelines, ensuring reproducibility and performance tracking.
- Apply statistical methods to interpret data, validate assumptions, and inform modeling decisions.
- Collaborate cross-functionally with engineers, data analysts, and product managers to solve high-impact business problems using ML.
- Ensure proper MLOps practices are in place for model versioning, monitoring, retraining, and performance management.
- Keep up to date with the latest advancements in AI/ML, and actively evaluate and incorporate LLM capabilities and frameworks into solutions.
- Mentor junior ML engineers and data scientists, and help scale the ML function across the organization.

Required Qualifications:
- 7+ years of hands-on experience in ML/AI, building real-world ML systems at scale.
- Proven experience with classical ML algorithms (e.g., regression, classification, clustering, ensemble models; see the sketch after this listing).
- Deep expertise in modern LLM frameworks (e.g., OpenAI, Hugging Face, LangChain) and their integration into production workflows.
- Strong experience with Python and frameworks such as scikit-learn, TensorFlow, PyTorch, or equivalent.
- Solid background in statistics and the ability to apply statistical thinking to real-world problems.
- Experience with data engineering tools and platforms (e.g., Spark, Airflow, SQL, Pandas, AWS Glue).
- Familiarity with cloud services (AWS preferred) and containerization tools (Docker, Kubernetes) is a plus.
- Strong communication and leadership skills, with experience mentoring and guiding junior team members.
- Self-starter attitude with a bias for action and the ability to thrive in fast-paced environments.
- Master's degree in Machine Learning, Artificial Intelligence, Statistics, or a related field is preferred.

Preferred Qualifications:
- Experience deploying ML systems in microservices or event-driven architectures.
- Hands-on experience with vector databases, embeddings, and retrieval-augmented generation (RAG) systems.
- Understanding of Responsible AI principles and practices.

Why Join Us?
- Lead the ML charter in a mission-driven company solving real-world challenges.
- Work on cutting-edge LLM use cases and platformize ML capabilities for scale.
- Collaborate with a passionate and technically strong team in a high-impact environment.
- Competitive compensation, flexible working model, and ample growth opportunities.
Work Location: Bangalore & Chennai. Interested candidates, please share your resume with lakshmi@vakilsearch.com
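For the classical ML side referenced above, here is a small, hedged scikit-learn sketch (not Zolvit's actual stack) combining preprocessing and an ensemble model in one Pipeline with cross-validated scoring. The feature names and the DataFrame df are hypothetical.

# Illustrative sketch: preprocessing + ensemble model in a single scikit-learn Pipeline.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

numeric = ["invoice_amount", "days_since_signup"]      # hypothetical columns
categorical = ["plan", "state"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
clf = Pipeline([("prep", preprocess),
                ("model", RandomForestClassifier(n_estimators=300, random_state=0))])

scores = cross_val_score(clf, df[numeric + categorical], df["label"], cv=5, scoring="f1")
print(scores.mean())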

Posted 2 weeks ago

Apply

5.0 - 10.0 years

17 - 32 Lacs

Pune, Thiruvananthapuram

Hybrid

Role & Responsibilities

Job Description: Lead Generative AI Engineer

Overview
We are seeking a highly skilled and experienced Generative AI Developer to lead our advanced AI solutions team. The ideal candidate will have a strong foundation in generative AI, machine learning, and artificial intelligence, with a proven ability to develop cutting-edge AI solutions that transform business processes and drive innovation. The Lead GenAI Engineer will develop and deploy advanced generative AI products, including fine-tuning V/LLM models and implementing (Graph) RAG (Retrieval-Augmented Generation) solutions. This is a hands-on role that requires deep technical expertise; as part of the GenAI team, you will implement and optimize GenAI agents.

Responsibilities
- Technical Leadership: Lead and mentor a team of AI developers, providing technical guidance and innovative problem-solving strategies. Drive multiple POC developments in parallel.
- Solution Design/Development: Architect and develop advanced generative AI solutions, including large language models, multimodal AI systems, and complex generative workflows. Implement state-of-the-art GenAI and agent technologies and best practices. Develop, experiment with, and validate GenAI applications aligned with business objectives.
- Model Development: Design, train, and optimize state-of-the-art generative AI models, including fine-tuning, prompt engineering, and model alignment techniques. Embed automated processes (LLMOps). Implement scalable, secure, and cost-effective GenAI solutions to meet current and future business needs. Prototype and benchmark on the AI/ML stack, LLMOps, and AgentOps frameworks. Troubleshoot AI application issues, working closely with infrastructure teams and application owners. Foster innovation within the team to support a collaborative work environment.
- Project Management: Oversee the entire project lifecycle from conceptualization through development, deployment, and continuous improvement.
- Collaboration: Identify opportunities to apply the latest advancements in Large Language Models (LLMs) and agents. Work with our cross-functional team to deliver features in an iterative manner. Educate the organization, from both IT and business perspectives, on generative AI.
- Research and Innovation: Stay at the forefront of generative AI advancements, exploring emerging technologies and implementing innovative AI solutions.
- Performance Optimization: Develop strategies to improve model efficiency, reduce computational costs, and enhance AI system performance.
- Ethical AI Implementation: Ensure responsible AI development, addressing bias, fairness, and ethical considerations in generative AI systems.
- Client Engagement: Translate complex technical concepts into actionable insights for stakeholders and clients.

Qualifications
- Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Minimum 6 years of experience in AI development, with a strong focus on generative AI technologies; 5+ years in at least two AI domains (NLP, NLG, computer vision, machine learning, etc.).
- Proven track record of leading and delivering successful AI development projects.
- Expert-level proficiency in programming languages such as Python, with strong software engineering skills.
- Deep expertise in machine learning frameworks and libraries (TensorFlow, PyTorch, Hugging Face).
Key Technical Competencies
- Generative AI Technologies: Large Language Models (LLMs), diffusion models, transformer architectures, multimodal AI systems.
- Extensive knowledge of LLM Python libraries: LangChain, LangGraph, Promptflow, Semantic Kernel, AutoGen, GraphRAG.
- Good software engineering background: application development, APIs, frontend integration, security best practices, and experience with FastAPI and asyncio (see the sketch after this listing).
- Distributed computing, GPU optimization, and containerization (Docker, Kubernetes) are a plus.
- Strong experience with at least one cloud AI service (AWS, GCP, Azure), preferably Azure.
- Strong experience with GenAI model deployment and monitoring (CI/CD; Weights & Biases and GitHub pipelines are a plus).
- Advanced understanding of security, compliance, and ethical considerations in AI: bias detection and mitigation, Responsible AI principles, model interpretability, and ethical AI frameworks.
- Advanced machine learning techniques: transfer learning, few-shot and zero-shot learning, prompt engineering, and model fine-tuning.

Preferred Qualifications
- Strong communication and collaboration abilities
- Experience with cutting-edge generative AI research
- Publications or contributions to open-source AI projects
- Experience in domain-specific AI applications (healthcare, finance, insurance)
- Demonstrated ability to explain complex AI concepts to non-technical stakeholders

Additional Skills
- Exceptional problem-solving and analytical thinking
- Ability to work in fast-paced, innovative environments
- Commitment to continuous learning and technological advancement

Preferred candidate profile: Please apply only if you are interested in a C2H/contract role with a large insurance company; the candidate should work from the office in Pune/Trivandrum at least 4 days per month.
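Since the posting calls out FastAPI and asyncio, here is a minimal, hedged sketch of serving a GenAI capability behind an async API. The generate_answer function is a hypothetical stand-in for whatever LLM or RAG client the team actually uses; nothing here is this employer's real service.

# Illustrative sketch: an async FastAPI endpoint wrapping a (stubbed) LLM call.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AskRequest(BaseModel):
    question: str

async def generate_answer(question: str) -> str:
    # Placeholder: call the real LLM / RAG pipeline here.
    await asyncio.sleep(0)                # keeps the endpoint non-blocking
    return f"(stub answer for: {question})"

@app.post("/ask")
async def ask(req: AskRequest):
    answer = await generate_answer(req.question)
    return {"answer": answer}

# Run locally with:  uvicorn app:app --reload   (assuming this file is saved as app.py)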

Posted 4 weeks ago

Apply

10.0 - 17.0 years

9 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Dear Candidate, please find the job description below.

Role: MLOps + ML Engineer

Role Overview: We are looking for a highly experienced MLOps and ML Engineer to lead the design, deployment, and optimization of machine learning systems at scale. This role requires deep expertise in MLOps practices, CI/CD automation, and AWS SageMaker, with a strong foundation in machine learning engineering and cloud-native development.

Key Responsibilities:
- Architect and implement robust MLOps pipelines for model development, deployment, monitoring, and governance.
- Lead the operationalization of ML models using AWS SageMaker and other AWS services.
- Build and maintain CI/CD pipelines for ML workflows using tools like GitHub Actions, Jenkins, or AWS CodePipeline.
- Automate model lifecycle management, including retraining, versioning, and rollback.
- Collaborate with data scientists, ML engineers, and DevOps teams to ensure seamless integration and scalability.
- Monitor production models for performance, drift, and reliability (see the drift-check sketch after this listing).
- Establish best practices for reproducibility, security, and compliance in ML systems.

Required Skills:
- 10+ years of experience in ML engineering, MLOps, or related fields.
- Deep hands-on experience with AWS SageMaker, Lambda, S3, CloudWatch, and related AWS services.
- Strong programming skills in Python and experience with Docker, Kubernetes, and Terraform.
- Expertise in CI/CD tools and infrastructure-as-code.
- Familiarity with model monitoring tools (e.g., Evidently, Prometheus, Grafana).
- Solid understanding of ML algorithms, data pipelines, and production-grade systems.

Preferred Qualifications:
- AWS Certified Machine Learning Specialty or DevOps Engineer certification.
- Experience with feature stores, model registries, and real-time inference systems.
- Leadership experience in cross-functional ML/AI teams.

Primary Skills: MLOps, ML Engineering, AWS services (SageMaker/S3/CloudWatch)

Regards,
Divya Grover
+91 8448403677
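One common form of the drift monitoring mentioned above is comparing a production feature's distribution against its training baseline. Here is a hedged sketch using a Kolmogorov-Smirnov test; the threshold, feature, and synthetic data are illustrative assumptions only.

# Illustrative sketch: flag feature drift between training data and live traffic.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, p_threshold=0.01):
    """Return True if the live distribution differs significantly from training."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

baseline = np.random.normal(50, 10, size=5000)   # stand-in for a training-time feature sample
recent = np.random.normal(58, 10, size=1000)     # stand-in for a week of production values
if feature_drifted(baseline, recent):
    print("Drift detected: raise an alert and consider triggering retraining")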

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 8 Lacs

Noida

Work from Office

Optum AI is UnitedHealth Group's enterprise AI team. We are AI/ML scientists and engineers with deep expertise in AI/ML engineering for health care. We develop AI/ML solutions for the highest-impact opportunities across UnitedHealth Group businesses, including UnitedHealthcare, Optum Financial, Optum Health, Optum Insight, and Optum Rx. In addition to transforming the health care journey through responsible AI/ML innovation, our charter also includes developing and supporting an enterprise AI/ML development platform.

Optum AI team members:
- Have impact at scale: We have the data and resources to make an impact at scale. When our solutions are deployed, they have the potential to make the health care system work better for everyone.
- Do ground-breaking work: Many of our current projects involve cutting-edge ML, NLP, and LLM techniques. Generative AI methods for working with structured and unstructured health care data are continuously being developed and improved. We are working on one of the most important frontiers of AI/ML research and development.
- Partner with world-class experts on innovative solutions: Our team members are developing novel AI/ML solutions to business challenges. In some cases, this includes the opportunity to file patents and publish papers about the methods we develop. We also collaborate with AI/ML researchers at some of the world's top universities.

Optum AI is looking for a Senior Machine Learning Engineer with deep subject matter expertise in text processing, NLU, and SLU (natural language understanding and spoken language understanding) who will be part of a team leading the technical development and inventions that allow Optum machine learning products to drive positive impact in the healthcare business. Your expertise will bring business and industry context to science and technology decisions. As part of the Optum AI AI/ML engineering group, you will set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. Your code, designs, and documents are exemplary and are used as references across the organization. A successful candidate is a hands-on AI/ML engineering expert who will tackle intrinsically hard problems, acquiring expertise as needed, and will help decompose complex problems into straightforward solutions.
Primary Responsibilities:
- Help design and develop the next generation of NLP, ML, and AI products and services for healthcare.
- Develop machine learning and deep learning models and systems in domains including, but not limited to, NLP, NLU, NLG, SLU, and multidimensional time series forecasting.
- Exposure to RAG, LangChain, and vector databases.
- Ability to quantize and optimize GenAI models (see the sketch after this listing).
- Manage the NLP and ML model lifecycle for a suite of products.
- Run large, complex proofs of concept for the healthcare business.
- Manage prioritization and technology work for building NLP, ML, and AI solutions.
- Lead the full end-to-end machine learning development process, including data ingestion and preparation, feature engineering, analysis and modeling, model deployment, performance tracking, and documentation.
- Establish best practices for the end-to-end deep learning and machine learning development cycle to ensure rigor in process and quality in outcome.
- Work with a great deal of autonomy to find solutions to complex problems.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Graduate degree in an applicable area of expertise or equivalent experience.
- Experience in deploying scalable solutions to complex problems, from defining the problem through implementing the solution and launching the new product successfully.
- Skill set: NLP, NLU, NLI; architectures: transformers, attention; models: GPT, Llama, Mistral; model quantization; model optimization; retrieval and ranking, RAG, RAGAS; statistics, machine learning models, model deployment.
- Proven excellent communication, writing, and presentation skills.
- Experience in the health care industry.

Preferred Qualification:
- Experience in the health care industry.
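As a small illustration of the model quantization mentioned above, here is a hedged sketch of post-training dynamic quantization in PyTorch, one simple way to shrink a transformer for CPU inference. The checkpoint is a public model chosen for illustration, not an Optum model.

# Illustrative sketch: dynamic int8 quantization of a transformer classifier for CPU inference.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english")
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)   # convert Linear-layer weights to int8

# The quantized model keeps the same forward() interface but has a smaller
# memory footprint and often lower CPU latency than the full-precision model.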

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Chennai

Work from Office

Prescience Decision Solutions is looking for an ML Engineer to join our dynamic team and embark on a rewarding career journey. We are seeking a highly skilled and motivated Machine Learning Engineer who will be responsible for designing, developing, and deploying machine learning models to solve complex problems and enhance our products and services. The ideal candidate will have a strong background in machine learning algorithms, programming, and data analysis.

Responsibilities:
- Problem Definition: Collaborate with cross-functional teams to define and understand business problems suitable for machine learning solutions. Translate business requirements into machine learning objectives.
- Data Exploration and Preparation: Analyze and preprocess large datasets to extract relevant features for model training. Address data quality issues and ensure data readiness for machine learning tasks.
- Model Development: Develop and implement machine learning models using state-of-the-art algorithms. Experiment with different models and approaches to achieve optimal performance.
- Training and Evaluation: Train machine learning models on diverse datasets and fine-tune hyperparameters. Evaluate model performance using appropriate metrics and iterate on improvements (see the sketch after this listing).
- Deployment: Deploy machine learning models into production environments. Collaborate with DevOps and IT teams to ensure smooth integration.
- Monitoring and Maintenance: Implement monitoring systems to track model performance in real time. Regularly update and retrain models to adapt to evolving data patterns.
- Documentation: Document the entire machine learning development pipeline, from data preprocessing to model deployment. Create user guides and documentation for end users and stakeholders.
- Collaboration: Collaborate with data scientists, software engineers, and domain experts to achieve project goals. Participate in cross-functional team meetings and knowledge-sharing sessions.
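For the training and evaluation step described above, here is a minimal, hedged sketch of hyperparameter tuning with cross-validation and a held-out report. The arrays X and y are assumed to come from the data-preparation step and are not part of this posting.

# Illustrative sketch: cross-validated hyperparameter search plus a held-out evaluation.
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

grid = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    cv=5, scoring="f1")
grid.fit(X_train, y_train)

print(grid.best_params_)
print(classification_report(y_test, grid.best_estimator_.predict(X_test)))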

Posted 1 month ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Pune, Bengaluru

Hybrid

Job Role & Responsibilities:
- Collaborate with different teams to propose AI solutions for various use cases across the insurance value chain, with a focus on AIOps and MLOps.
- Research, build, and deploy AI models as part of the broader AI team, leveraging AIOps and MLOps practices for efficient model management.
- Contribute to our DevOps practices using OpenShift or Azure ML DevOps.

Technical Skills, Experience & Qualifications Required:
- 6-9 years of progressive experience in AI and ML, with a focus on AIOps and MLOps.
- Experience with MLflow, Kubeflow, or Airflow and MLOps, particularly production deployment (see the DAG sketch after this listing).
- Experience deploying and managing AI models in production environments using Azure ML DevOps or OpenShift.
- Implementation of at least 5 AI projects, preferably with AIOps and MLOps experience.
- Experience with Azure, OpenShift, and MLflow DevOps for model deployment, monitoring, and management.
- Setting up CI/CD pipelines using Azure DevOps, Jenkins, etc.
- Hands-on experience with generative AI technologies: LLMs, RAG, prompt engineering.
- Broad understanding of machine learning algorithms and techniques, including LLMs/SLMs, CNNs, RNNs, transformers, and attention mechanisms.

Immediate joiners will be preferred.
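To ground the orchestration tooling mentioned above, here is a hedged sketch of an Airflow DAG for a scheduled retraining flow. The task bodies are stubs and the weekly schedule is an assumption made for illustration.

# Illustrative sketch: a two-task Airflow DAG for periodic model retraining.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    ...   # stub: pull and prepare training data

def train_and_register_model():
    ...   # stub: train, evaluate, and log the model (e.g. to MLflow)

with DAG(
    dag_id="weekly_model_retraining",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_and_register_model)
    extract >> train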

Posted 1 month ago

Apply

6.0 - 8.0 years

15 - 25 Lacs

Hyderabad

Hybrid

Role & Responsibilities: Data Scientist / ML Engineer with Python, SQL, Machine Learning, and Azure.

Posted 1 month ago

Apply

6.0 - 8.0 years

35 - 40 Lacs

Chennai, Bengaluru

Hybrid

Work closely with the ML Architect to develop with ML frameworks (TensorFlow, scikit-learn, PyTorch). Strong background in MLOps practices, including CI/CD, containerization (Docker), and orchestration frameworks (Kubernetes, Airflow).

Posted 1 month ago

Apply

4.0 - 9.0 years

20 - 32 Lacs

Noida

Work from Office

Dear Candidate,

Greetings from A2Z HR Consultants! We are hiring for a renowned web software company based in Noida.

Number of working days: 5
Shift timings: day shifts
Salary: up to 32 LPA
Profile: AI/ML Engineer
Experience required: minimum 5 years
Work from office only

Job Summary: Join our forward-thinking team to pioneer cutting-edge AI solutions that transform industries. We seek an AI expert with 5+ years of experience in Python, machine learning, and large language models (LLMs), paired with robust MLOps expertise. You will architect, optimize, and deploy scalable AI systems, focusing on LLM fine-tuning (e.g., Llama, GPT, Mistral), Retrieval-Augmented Generation (RAG), and production-grade deployment on AWS. If you thrive on solving complex challenges, driving ethical AI innovation, and leading cross-functional teams, this role is for you.

Key Responsibilities:
- Design, develop, and deploy AI/ML models using Python and relevant frameworks (TensorFlow, PyTorch, scikit-learn, etc.).
- Optimize and fine-tune machine learning algorithms for performance, scalability, and accuracy.
- Work with large datasets to extract insights, preprocess data, and build predictive models.
- Develop and integrate AI-powered solutions into applications, including natural language processing (NLP), computer vision, and deep learning systems.
- Architect, fine-tune, and deploy large language models (LLMs) for use cases such as chatbots, text generation, summarization, and document understanding.
- Implement retrieval-augmented generation (RAG) techniques to enhance LLM capabilities.
- Research and apply model compression techniques such as quantization and distillation to optimize LLM deployment.
- Leverage embeddings, knowledge graphs, and vector databases for efficient information retrieval and AI-driven insights.
- Develop robust MLOps/LLMOps pipelines for model versioning, monitoring, and CI/CD integration.
- Deploy AI models in cloud environments (GCP Vertex AI, AWS SageMaker, Azure ML) and optimize inference cost-performance trade-offs.
- Utilize containerization and orchestration tools such as Docker, Kubernetes, and Kubeflow for scalable AI deployments.
- Stay updated on the latest AI advancements and integrate emerging technologies into production systems.
- Ensure AI model interpretability, fairness, and adherence to ethical AI principles.
- Participate in code reviews, debugging, and troubleshooting of AI models and pipelines.

Required Qualifications & Skills:
- Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Experience: 5+ years of hands-on experience in AI, machine learning, or deep learning projects.
- Programming: Strong proficiency in Python and its AI/ML libraries (NumPy, Pandas, TensorFlow, PyTorch, scikit-learn, etc.).
- LLM Expertise: Hands-on experience with LLMs such as OpenAI's GPT, Llama, Mistral, Gemini, PaLM, or similar frameworks.
- Fine-Tuning & Optimization: Experience fine-tuning LLMs, optimizing for cost-performance balance, and using techniques like LoRA, PEFT, and RLHF (see the LoRA sketch after this listing).
- NLP & Deep Learning: Expertise in NLP model training, transformer-based architectures (BERT, T5, GPT, etc.), and model evaluation techniques.
- MLOps & LLMOps: Experience with model lifecycle management, monitoring, CI/CD pipelines, and cloud-based model deployment.
- Cloud & Deployment: Proficiency in deploying AI models on Google Cloud (Vertex AI), AWS, or Azure.
- Containerization & Orchestration: Experience with Docker, Kubernetes, and Kubeflow for AI model deployment.
- Data Engineering: Knowledge of data preprocessing, feature engineering, and handling large-scale datasets efficiently.
- Prompt Engineering: Strong understanding of prompt design, embedding generation, and model evaluation metrics for LLMs.
- Security & Ethics: Familiarity with AI security best practices, data privacy, and responsible AI principles.

Interested candidates can reach out at 9711831492 or share a resume at gaurav.a2zhrconsultants@gmail.com. Only candidates who are already on notice period or immediately available should apply.

Regards,
Gaurav Kumar
A2Z HR Consultants
9711831492
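For the LoRA/PEFT fine-tuning referenced in the qualifications above, here is a hedged sketch using the peft library with a small public base model. The checkpoint and hyperparameters are illustrative assumptions, not this employer's actual setup.

# Illustrative sketch: attach LoRA adapters to a causal LM for parameter-efficient fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"      # small public model used only for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],         # attention projections in Llama-style blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()               # typically well under 1% of total weights

# Training then proceeds with the usual transformers Trainer / SFT loop on the adapted model,
# and only the small adapter weights need to be stored and deployed.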

Posted 2 months ago

Apply