Jobs
Interviews

31 MLflow Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access; applications are completed directly on the original job portal.

3.0 - 8.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 20 to 35 LPA | Experience: 3 to 8 years | Location: Gurugram / Bangalore / Hyderabad | Notice: Immediate to 30 days

Roles & responsibilities:
- 3+ years of experience in Python, ML, and banking model development.
- Interact with clients to understand their requirements and communicate/brainstorm solutions.
- Model development: design, build, and implement credit risk models.
- Contribute to how the analytical approach is structured for the specification of analysis.
- Contribute insights from the conclusions of analysis that integrate with the initial hypothesis and business objective; independently address complex problems.
- 3+ years of experience in ML/Python (predictive modelling).
- Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate the business.
- Create experiments and prototype implementations of new learning algorithms and prediction techniques.
- Collaborate with product managers and stakeholders to design and implement software solutions for science problems.
- Use machine learning best practices to ensure a high standard of quality for all team deliverables.
- Experience working with unstructured (text) data: text cleaning, TF-IDF, text vectorization.
- Hands-on experience with IFRS 9 models and regulations.
- Data analysis: analyze large datasets to identify trends and risk factors, ensuring data quality and integrity.
- Statistical analysis: utilize advanced statistical methods to build robust models, leveraging expertise in R programming.
- Collaboration: work closely with data scientists, business analysts, and other stakeholders to align models with business needs.
- Continuous improvement: stay updated with the latest methodologies and tools in credit risk modeling and R programming.
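The unstructured-text requirement above (cleaning, TF-IDF, vectorization) can be sketched in plain Python. This is a toy implementation for illustration only; production work would typically use scikit-learn's `TfidfVectorizer`:

```python
import math
from collections import Counter

def tfidf(docs):
    """Return a list of {term: weight} dicts, one per document.

    tf  = raw count of the term in the document
    idf = log(N / df), where df is the number of documents containing the term
    """
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))  # count each term once per document
    return [
        {t: count * math.log(n / df[t]) for t, count in Counter(tokens).items()}
        for tokens in tokenized
    ]

docs = ["credit risk model", "credit scoring model", "text vectorization"]
weights = tfidf(docs)
# "credit" appears in 2 of 3 docs, so its idf is log(3/2); "text" appears in
# only 1 of 3, so its idf is log(3) — rarer terms get higher weight.
```

The same weighting is what `TfidfVectorizer` computes (with extra smoothing and normalization options) before the vectors feed a downstream credit-risk classifier.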

Posted 5 days ago

Apply

8.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Platform Development and Machine Learning expert at Adobe, you will play a crucial role in changing the world through digital experiences by building scalable AI platforms and designing ML pipelines. Your responsibilities will include: - Building scalable AI platforms that are customer-facing and evangelizing the platform with customers and internal stakeholders. - Ensuring platform scalability, reliability, and performance to meet business needs. - Designing ML pipelines for experiment management, model management, feature management, and model retraining. - Implementing A/B testing of models and designing APIs for model inferencing at scale. - Demonstrating proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. - Serving as a subject matter expert in LLM serving paradigms and possessing deep knowledge of GPU architectures. - Expertise in distributed training and serving of large language models and proficiency in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. - Demonstrating proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results. - Reducing training and resource requirements for fine-tuning LLM and LVM models. - Having extensive knowledge of different LLM models and providing insights on the applicability of each model based on use cases. - Delivering end-to-end solutions from engineering to production for specific customer use cases. - Showcasing proficiency in DevOps and LLMOps practices, including knowledge in Kubernetes, Docker, and container orchestration. - Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and Langgraph. 
Your skills matrix should include:
- LLMs: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
- LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
- Databases/data warehouses: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
- Cloud: knowledge of AWS/Azure/GCP
- DevOps: Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
- Cloud certifications (bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert

Proficiency in Python, SQL, and JavaScript is also required. Adobe is committed to creating exceptional employee experiences and values diversity. If you require accommodations to navigate the website or complete the application process, please contact accommodations@adobe.com or call (408) 536-3015.

Posted 5 days ago

Apply

8.0 - 13.0 years

0 Lacs

Pune, Maharashtra

On-site

The ML Solutions team within Markets Ops Technology is dedicated to developing solutions using Artificial Intelligence, Machine Learning, and Generative AI. The team is a leader in creating new ideas, innovative technology solutions, and ground-breaking solutions for Markets Operations and other lines of business. We work closely with our clients and business partners to progress solutions from ideation to production by leveraging an entrepreneurial spirit and technological excellence. The ML Solutions team is seeking a Data Scientist/Machine Learning Engineer to drive the design, development, and deployment of innovative AI/ML and GenAI-based solutions. In this hands-on role, you will leverage your expertise to create a variety of AI models, guiding a team from initial concept to successful production. A key aspect involves mentoring team members and fostering their growth. You will collaborate closely with business partners and stakeholders, championing the adoption of these advanced technologies to enhance client experiences, deliver tangible value to our customers, and ensure adherence to regulatory requirements through cutting-edge technical solutions. This position offers a unique opportunity to shape the future of our AI initiatives and make a significant impact on the organization.

Key Responsibilities:
- **Hands-On Execution and Delivery:** Actively contribute to the development and delivery of AI solutions, driving innovation and excellence within the team. Take a hands-on approach to ensure AI models are successfully deployed into production environments, meeting high quality standards and performance benchmarks.
- **Mentoring Young Talent:** Mentor the team, guiding data analysts/ML engineers from concept to production. This involves fostering technical growth, providing project oversight, and ensuring adherence to best practices, ultimately building a high-performing and innovative team.
- **Quality Control:** Ensure the quality and performance of generative AI models, conducting rigorous testing and evaluation.
- **Research and Development:** Participate in research activities to explore and advance state-of-the-art generative AI techniques. Stay actively engaged in monitoring ongoing research efforts, keeping abreast of emerging trends, and ensuring that the Generative AI team remains at the forefront of the field.
- **Cross-Functional Collaboration:** Collaborate effectively with various teams, including product managers, engineers, and data scientists, to integrate AI technologies into products and services.

Skills & Qualifications:
- 8 to 13 years of strong hands-on experience in Machine Learning, delivering complex solutions to production.
- Experience with Generative AI technologies is essential.
- Understanding of concepts such as supervised and unsupervised learning, clustering, and embeddings.
- Knowledge of NLP, Named Entity Recognition, Computer Vision, Transformers, and Large Language Models.
- In-depth knowledge of deep learning and Generative AI frameworks such as LangChain, LangGraph, CrewAI, or similar.
- Experience with other open-source frameworks/libraries/APIs such as Hugging Face Transformers, spaCy, Pandas, scikit-learn, NumPy, and OpenCV.
- Experience using Machine Learning/Deep Learning libraries: XGBoost, LightGBM, TensorFlow, PyTorch, Keras.
- Proficiency in Python software development, following object-oriented design patterns and best practices.
- Strong background in mathematics: linear algebra, probability, statistics, and optimization.
- Experience with evaluation and scoring using frameworks like MLflow.
- Experience with Docker containers and editing a Dockerfile; experience with Kubernetes is a plus.
- Experience with Postgres and vector databases is a plus.
- Excellent problem-solving skills and the ability to think creatively.
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Publications and contributions to the AI research community are a plus.
- Master's degree/Ph.D. or equivalent experience in Computer Science, Data Science, Statistics, or a related field.
- 8-12 years of experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
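The "rigorous testing and evaluation" responsibility in this listing usually begins with classification metrics. A self-contained sketch of precision, recall, and F1 in plain Python (in practice frameworks like scikit-learn compute these, and MLflow logs them per run):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# tp=2, fp=1, fn=1 on this toy data, so all three metrics come out to 2/3.
p, r, f1 = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```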

Posted 1 week ago

Apply

5.0 - 7.0 years

8 - 12 Lacs

Hyderabad

Hybrid

Location: Hyderabad. Kindly send your resume to +91 93619 12009.

Job Description:

Primary (mandatory) skills:
- AI and RAG
- AWS
- Python (advanced knowledge)
- FastAPI, Flask
- MLflow

Secondary (good-to-have) skills:
- AWS EKS/ECS
- Unit and functional testing
- Terraform, Jenkins
- Soft skills (communication, collaboration, etc.)
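Since the mandatory skills center on AI and RAG, here is a toy sketch of the retrieval step in a RAG system, using bag-of-words cosine similarity in plain Python. A real stack on AWS would use dense embeddings and a vector store; all passage contents and names are illustrative:

```python
import math
from collections import Counter

def retrieve(query, passages, k=2):
    """Rank passages by cosine similarity of bag-of-words term-count vectors."""
    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query)
    ranked = sorted(passages, key=lambda p: cosine(q, vec(p)), reverse=True)
    return ranked[:k]  # top-k passages would be stuffed into the LLM prompt

passages = [
    "MLflow tracks experiments and models",
    "FastAPI serves ML models over HTTP",
    "Terraform provisions cloud infrastructure",
]
top = retrieve("serve a model with FastAPI", passages, k=1)
```

The retrieved passages are then concatenated into the prompt of a generator model, which is the "augmented generation" half of RAG.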

Posted 1 week ago

Apply

5.0 - 10.0 years

12 - 19 Lacs

Vadodara

Work from Office

Job Title: Senior Data Engineer

Basic Function: The Senior Data Engineer should be an expert familiar with all areas of data warehousing technical components (e.g., ETL, reporting, data models), connected infrastructure, and their integrations. The ideal candidate will be responsible for developing the overall architecture and high-level design of the data schema environment, and must have extensive experience with star schemas, dimensional models, and data marts. The individual is expected to build efficient, flexible, extensible, and scalable ETL designs and mappings. Excellent written and verbal communication skills are required, as the candidate will work very closely with diverse teams. A wide degree of creativity and latitude is expected. This position reports to the Manager of Data Services.

Typical Requirements: Requires strong technical and analytical skills, data management expertise, and business acumen to achieve results. The ideal candidate should be able to deep-dive into data, perform advanced analysis, discover root causes, and design scalable long-term solutions using Databricks, Spark, and related technologies to address business questions. A strong understanding of business data needs and alignment with strategic goals will significantly enhance effectiveness. The role requires the ability to prepare high-level architectural frameworks for data services and present them to business leadership. Additionally, the candidate must work well in a collaborative environment while performing a variety of detailed tasks daily. Strong oral and written communication skills are essential, along with expertise in application design and a deep understanding of distributed computing, data lake architecture, and relational database concepts. This position requires the ability to leverage both business and technical capabilities regularly.
Essential Functions:
- Gather, structure, and process data from various sources (e.g., transactional systems, third-party applications, cloud-based financial systems, customer feedback) using Databricks and Apache Spark to enhance business insights.
- Develop and enforce standards, procedures, and quality control measures for data analytics in compliance with enterprise policies and best practices.
- Partner with business stakeholders to build scalable data models and infrastructure, leveraging Databricks' Delta Lake, MLflow, and Unity Catalog.
- Identify, analyze, and interpret complex data sets to develop insightful analytics and predictive models.
- Utilize Databricks to design and optimize data processing pipelines for large-scale data ingestion, transformation, and storage.
- Ensure data infrastructure completeness and compatibility to support system performance, availability, and reliability requirements.
- Architect and implement robust data pipelines using PySpark, SQL, and Databricks Workflows for automation.
- Provide input on technical challenges and recommend best practices for data engineering solutions within Databricks.
- Design and optimize data models for analytical and operational use cases.
- Develop and implement monitoring, alerting, and logging frameworks for data pipelines.
- Lead the architecture and implementation of next-generation cloud-based data solutions.
- Build scalable and reliable data integration pipelines using Databricks, SQL, Python, and Spark.
- Mentor and develop junior team members, fostering a data-driven culture within the organization.
- Develop high-quality, scalable data solutions to support business intelligence, analytics, and data science initiatives.
- Interface with technology teams to extract, transform, and load (ETL) data from diverse data sources into Databricks.
- Continuously improve data processes, automating and simplifying workflows for self-service analytics.
- Work with large, complex data sets to solve non-routine analysis problems, applying advanced machine learning and data processing techniques as needed.
- Prototype, iterate, and scale data analysis pipelines, advocating for improvements in Databricks data structures and governance.
- Collaborate cross-functionally to present findings effectively through data visualizations and executive-level presentations.
- Research and implement advanced analytics, forecasting, and optimization methods to drive business outcomes.
- Stay up to date with industry trends and emerging Databricks technologies to enhance data-driven capabilities.

Specialized Skills or Technical Knowledge:
- Bachelor's degree or higher in a quantitative/technical field (e.g., Computer Science, Statistics, Engineering). A Master's degree in Computer Science, Mathematics, Statistics, or Economics is preferred.
- 5+ years of experience in data engineering, business intelligence, or data analytics, with a focus on Databricks and Apache Spark.
- Extensive experience with SQL and Python for developing optimized queries and data transformations.
- Expertise in the Databricks ecosystem, including Delta Lake, MLflow, and Databricks SQL.
- Experience in designing and implementing ETL/ELT pipelines on Databricks using Spark and cloud-based data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage).
- Strong data modeling, data warehousing, and data governance knowledge.
- Experience working with structured and unstructured data, including real-time and batch processing solutions.
- Familiarity with data visualization tools such as Power BI, Tableau, or Looker.
- Deep understanding of distributed computing, scalable data architecture, and cloud computing frameworks.
- Hands-on experience with CI/CD pipelines, Infrastructure as Code (IaC), and DevOps practices in data engineering.
- Proven track record of working with cross-functional teams, stakeholders, and senior management to deliver high-impact data solutions.
- Knowledge of machine learning and AI-driven analytics is a plus.
- Strong problem-solving skills and the ability to work independently and in a team-oriented environment.
- Excellent communication skills and the ability to convey complex data concepts to non-technical stakeholders.
- Experience in a franchised organization is a plus.
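The star-schema and dimensional-modeling experience this role calls for boils down to joining fact tables to dimension tables on surrogate keys. A minimal sketch using Python's built-in sqlite3 as a stand-in warehouse (table and column names are invented for illustration; the real environment would be Databricks SQL over Delta Lake):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One dimension table and one fact table, linked by a surrogate key.
cur.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_key INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
INSERT INTO fact_sales VALUES (1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5);
""")
# A typical dimensional query: aggregate facts, grouped by a dimension attribute.
cur.execute("""
SELECT d.category, SUM(f.amount)
FROM fact_sales f JOIN dim_product d USING (product_key)
GROUP BY d.category
""")
rows = cur.fetchall()
# → [('Hardware', 32.5)]
```

The same shape scales up: facts stay narrow and additive, dimensions carry the descriptive attributes, and analytics queries are joins plus aggregations over that layout.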

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Sr. Executive / Assistant Manager - Data Scientist
Godrej Agrovet, Mumbai, Maharashtra, India

Job Title: Sr. Executive / Assistant Manager - Data Scientist
Job Type: Permanent, Full-time
Function: Digital
Business: Godrej Agrovet
Location: Mumbai, Maharashtra, India

About Godrej Industries Group: GIG is a holding company of the Godrej Group. We have significant interests in consumer goods, real estate, agriculture, chemicals, and financial services through our subsidiary and associate companies across 18 countries. https://www.godrejindustries.com/

About Godrej Agrovet: Godrej Agrovet Limited (GAVL) is a diversified, Research & Development focused agri-business company dedicated to improving the productivity of Indian farmers by innovating products and services that sustainably increase crop and livestock yields. GAVL holds leading market positions in the different businesses it operates: Animal Feed, Crop Protection, Oil Palm, Dairy, Poultry, and Processed Foods. GAVL has a pan-India presence, with annual sales of over a million tons of high-quality animal feed and cutting-edge nutrition products for cattle, poultry, aqua feed, and specialty feed. Our teams have worked closely with Indian farmers to develop large oil palm plantations, which is helping to bridge the demand and supply gap of edible oil in India. In the crop protection segment, the company meets the niche requirements of farmers through innovative agrochemical offerings. Through its subsidiary Astec Life Sciences Limited, GAVL is also a business-to-business (B2B) focused bulk manufacturer of fungicides and herbicides. In Dairy and Poultry and Processed Foods, the company operates through its subsidiaries Creamline Dairy Products Limited and Godrej Tyson Foods Limited. Apart from this, GAVL also has a joint venture with the ACI group of Bangladesh for the animal feed business in Bangladesh.
For more information on the Company, please log on to www.godrejagrovet.com

Roles & Responsibilities:
- Data cleaning, preprocessing & exploration: Prepare data for analysis, ensuring quality, consistency, and completeness by handling missing values and outliers and transforming data. Explore and analyze large and complex datasets to identify patterns, trends, and anomalies.
- Machine learning model development: Build, train, and deploy machine learning models on the Databricks platform, leveraging tools such as MLflow for experiment tracking, with techniques like regression, classification, clustering, and time series analysis.
- Model evaluation & deployment: Develop and select features to improve model performance, leveraging Databricks' distributed computing capabilities for efficient processing. Familiarity with CI/CD tools (e.g., Jenkins, GitLab) for automating deployment and testing of data pipelines.
- Collaboration: Collaborate with data engineers, analysts, and business stakeholders to understand business requirements and translate them into data-driven solutions.
- Data visualization and reporting: Create visualizations and dashboards within Databricks, Power BI, and other tools to communicate insights to technical and non-technical stakeholders.
- Continuous learning: Stay up to date with the latest developments in data science, machine learning, and industry best practices to continually enhance skills and processes.

Key Skills:
- Knowledge of statistical analysis techniques, hypothesis testing, and machine learning
- Familiarity with NLP, time series analysis, and computer vision or A/B testing
- Databricks and Apache Spark: proficiency with Databricks, Spark DataFrames, and MLlib
- Programming: proficiency in Python (TensorFlow, Pandas, scikit-learn, PySpark, NumPy), with experience writing scalable code for large datasets
- SQL: strong SQL skills for data extraction, manipulation, and analysis
- Familiarity with MLflow for tracking, model versioning, and reproducibility
Familiarity with cloud data storage and processing tools (e.g., Azure Data Lake, AWS S3).

Educational Qualification: Bachelor's degree in Statistics, Mathematics, Computer Science, or a related field
Experience: 3+ years of experience in a data science or analytical role

As a skilled Data Scientist, you will leverage your analytical skills to uncover insights from complex datasets, build predictive models, and inform data-driven decisions across the organization. You'll work closely with cross-functional teams, including business, engineering, and product, to apply advanced statistical methods, machine learning, and domain knowledge to solve real-world problems.

What's in it for you?

Be an equal parent
- Maternity support, including paid leave ahead of statutory guidelines, and flexible work options on return
- Paternity support, including paid leave
- New mothers can bring a caregiver and children under a year old on work travel
- Adoption support: gender neutral and based on the primary caregiver, with paid leave options

No place for discrimination at Godrej
- Gender-neutral anti-harassment policy
- Same-sex partner benefits at par with married spouses
- Gender transition support

We are selfish about your wellness
- Comprehensive health insurance plans, as well as accident coverage for you and your family, with top-up options
- Uncapped sick leave
- Mental wellness and self-care programmes, resources, and counselling

Celebrating wins, the Godrej Way
- Structured recognition platforms for individual, team, and business-level achievements
- Performance-based earning opportunities

https://www.godrejcareers.com/benefits/

An inclusive Godrej: Before you go, there is something important we want to highlight. There is no place for discrimination at Godrej. Diversity is the philosophy of who we are as a company, and has been for over a century. It's not just in our DNA and nice to do. Being more diverse, especially having our team members reflect the diversity of our businesses and communities, helps us innovate better and grow faster. We hope this resonates with you. We take pride in being an equal opportunities employer. We recognise merit and encourage diversity. We do not tolerate any form of discrimination on the basis of nationality, race, colour, religion, caste, gender identity or expression, sexual orientation, disability, age, or marital status, and we ensure equal opportunities for all our team members. If this sounds like a role for you, apply now! We look forward to meeting you.
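The regression work mentioned in this role's responsibilities can be illustrated with a closed-form ordinary-least-squares fit in plain Python. This is a toy single-feature sketch; in practice the role would use scikit-learn or Spark MLlib on Databricks:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form, one feature)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                       # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))     # covariance term
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Toy data lying exactly on y = 2x + 1, so the fit recovers slope 2, intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```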

Posted 2 weeks ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Pune

Work from Office

You will be a key member of the Data + AI Pipeline team, leading the integration of Kubeflow, Kubernetes, Docker, KEDA, and Python technologies. Your role will involve developing and maintaining AI pipelines that support various projects, ensuring seamless and efficient data processing, model training, and deployment. As part of a dynamic and interdisciplinary team, you will collaborate with experts in data engineering, AI, and software development to create robust and scalable solutions.

Job Description
• We are seeking a motivated and experienced Data + AI Pipeline Engineer to lead the development and maintenance of our KION Machine Vision AI pipeline infrastructure.
• As the Data + AI Pipeline Lead, you will provide technical leadership and strategic direction to the team and be responsible for designing and implementing scalable and efficient AI pipelines using Kubeflow, Kubernetes, Docker, KEDA, and Python.
• You will collaborate with cross-functional teams to understand project requirements, define data processing workflows, and ensure the successful deployment of machine learning models.
• Your role will involve integrating and optimizing data processing and machine learning components within our pipeline architecture.
• You will provide technical leadership and mentorship to junior team members, guiding them in the design and implementation of AI pipelines and fostering their professional growth.
• Conduct code reviews and provide constructive feedback to ensure the quality, readability, and maintainability of the codebase across the team.
• Collaborate with the software engineering team to ensure the seamless integration of AI pipelines with other software applications.
• Implement and maintain CI/CD pipelines to automate the deployment of AI models and ensure continuous integration and delivery.
• Work closely with external partners and vendors to leverage the latest advancements in AI and data processing technologies.
Qualifications:
• A university degree with a technical focus, preferably in computer science, data science, or a related field.
• 10+ years of experience in building and maintaining large-scale AI pipeline projects, with at least 2 years in a leadership or managerial role.
• Hands-on experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and services for data processing, storage, and deployment.
• Strong programming skills in Python and proficiency in using libraries and frameworks for data manipulation, analysis, and visualization (e.g., pandas, NumPy, matplotlib).
• Knowledge of containerization technologies (e.g., Docker, Kubernetes) and orchestration tools for deploying and managing machine learning pipelines at scale.
• Expertise in designing and optimizing data processing workflows and machine learning pipelines.
• Strong communication skills to collaborate effectively with cross-functional teams and present complex technical concepts in a clear manner.
• Experience with CI/CD, automation, and a strong understanding of software engineering best practices.
• Ability to oversee complex software architectures and contribute to future-oriented developments in the field of AI and data processing.
• Excellent problem-solving skills and the ability to work in a dynamic and fast-paced environment.
• Leading, guiding, and providing technical mentoring to the team.
• Very good English skills, both written and verbal, to facilitate effective communication within the global team.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

15 - 30 Lacs

Gurugram

Hybrid

Exciting opportunity for an ML Platform Specialist to join a leading technology-driven firm. You will be designing, deploying, and maintaining scalable machine learning infrastructure with a strong focus on Databricks, the model lifecycle, and MLOps practices.

Location: Gurugram (Hybrid)

Your Future Employer: Our client is a leading digital transformation partner driving innovation across industries. With a strong focus on data-driven solutions and cutting-edge technologies, they are committed to fostering a collaborative and growth-focused environment.

Responsibilities:
- Designing and implementing scalable ML infrastructure on the Databricks Lakehouse
- Building CI/CD pipelines and workflows for the machine learning lifecycle
- Managing model monitoring, versioning, and registry using MLflow and Databricks
- Collaborating with cross-functional teams to optimize machine learning workflows
- Driving continuous improvement in MLOps and automation strategies

Requirements:
- Bachelor's or Master's in Computer Science, ML, Data Engineering, or a related field
- 3-5 years of experience in MLOps, with strong expertise in Databricks and Azure ML
- Proficient in Python, PySpark, MLflow, Delta Lake, and Databricks Feature Store
- Hands-on experience with cloud platforms (Azure/AWS/GCP), CI/CD, and Git
- Knowledge of Terraform, Kubernetes, Azure DevOps, and distributed computing is a plus

What's in it for you:
- Competitive compensation with performance-driven growth opportunities
- Work on cutting-edge MLOps infrastructure and enterprise-scale ML solutions
- Collaborative, diverse, and innovation-driven work culture
- Continuous learning, upskilling, and career development support

Posted 3 weeks ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Pune

Hybrid

Job Description: We are seeking a Senior MLOps Engineer. In this role, you will collaborate across software engineering, machine learning, and DevOps to design, develop, and scale the core infrastructure that enables and supports AI initiatives throughout the organization. This position requires deep expertise in productionizing and operationalizing ML models.

Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines to automate the machine learning lifecycle, from training to deployment and monitoring.
- Collaborate with data scientists, ML engineers, and platform teams to operationalize ML models across cloud and hybrid environments.
- Build and manage containerized environments for training and inference using Docker and Kubernetes.
- Implement CI/CD workflows (e.g., GitHub Actions, Jenkins) for deploying ML models.
- Ensure observability and monitoring of models in production (latency, drift, performance, errors).
- Support model deployment to a variety of targets, including APIs, applications, dashboards, and edge devices.
- Implement model versioning, rollback strategies, governance, and traceability using tools like MLflow or Kubeflow.
- Drive best practices across teams and provide technical mentorship on MLOps topics.
- Continuously evaluate and integrate new tools and technologies to improve MLOps capabilities.

Required Skills & Qualifications:
- 8+ years of experience in software, data, or ML engineering, with 6+ years in MLOps.
- Strong programming experience in Python, SQL, and Spark/PySpark.
- Deep expertise in MLOps tools such as MLflow, Kubeflow, Airflow, etc.
- Experience with cloud platforms, preferably GCP (Vertex AI, GKE, Cloud Run); AWS is also acceptable.
- Hands-on with Databricks, FastAPI, Docker, Kubernetes.
- Proficient with CI/CD, Git, and Infrastructure as Code (Terraform, Ansible).
- Knowledge of monitoring frameworks like Prometheus and Grafana.
- Strong communication and stakeholder management skills.
Preferred Qualifications:
- Experience building scalable, self-service ML infrastructure.
- Familiarity with model governance, compliance, and security in production environments.
- Prior work building reusable and modular MLOps solutions for cross-functional teams.
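Model versioning and rollback, listed in this role's responsibilities, can be sketched with a toy in-memory registry. This is illustrative only; in production, MLflow's Model Registry provides version history and stage transitions, and the class and method names here are invented:

```python
class ModelRegistry:
    """Toy in-memory model registry with versioning and rollback."""

    def __init__(self):
        self._versions = {}  # name -> list of model artifacts (index = version - 1)
        self._current = {}   # name -> currently served version number

    def register(self, name, model):
        """Store a new version and make it the served one; return its version."""
        self._versions.setdefault(name, []).append(model)
        version = len(self._versions[name])
        self._current[name] = version
        return version

    def serve(self, name):
        """Return the artifact for the currently served version."""
        return self._versions[name][self._current[name] - 1]

    def rollback(self, name):
        """Step the served version back by one (if possible); return it."""
        if self._current[name] > 1:
            self._current[name] -= 1
        return self._current[name]

registry = ModelRegistry()
registry.register("churn", "model-v1")   # version 1 is served
registry.register("churn", "model-v2")   # version 2 is now served
current = registry.rollback("churn")     # a bad deploy: roll back to version 1
```

The real systems add what the toy omits: immutable artifact storage, lineage/traceability metadata, and audit logs for governance.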

Posted 3 weeks ago

Apply

6.0 - 9.0 years

30 - 37 Lacs

Bangalore Rural, Bengaluru

Hybrid

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Research: Lead applied research in LLMs, generative AI, and multimodal models. Evaluate and experiment with state-of-the-art architectures (e.g., Transformers, Diffusion Models, Retrieval-Augmented Generation). Publish internal whitepapers and contribute to external conferences where applicable.
Engineering & Implementation: Design and implement scalable AI/ML pipelines using frameworks like PyTorch, TensorFlow, Hugging Face, and MLflow. Collaborate with engineering teams to deploy models into production using MLOps best practices (CI/CD, model versioning, monitoring).
Tooling & Infrastructure: Evaluate and integrate advanced AI/ML tools such as GitHub Copilot, Windsurf, Vertex AI, Azure OpenAI, and Hugging Face Transformers. Work with cloud platforms (AWS, Azure, GCP) to ensure scalable and secure model deployment.
Architecture & Strategy: Architect end-to-end AI/ML systems including data ingestion, model training, inference, and feedback loops. Partner with enterprise architects and product leaders to align AI/ML capabilities with business goals.
Mentorship & Collaboration: Mentor junior engineers and researchers. Collaborate with cross-functional teams including data scientists, software engineers, and business stakeholders.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Bachelor's Degree in Computer Science, Machine Learning, or a related field
4+ years of experience in AI/ML engineering and research
Experience with experiment tracking tools (e.g., Weights & Biases, MLflow)
Hands-on experience with MLOps, model deployment, and monitoring
Proven expertise in LLMs, generative AI, and deep learning
Solid programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, Keras)
Familiarity with AI/ML governance, ethics, and responsible AI practices
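Several postings on this page lean on model versioning and registry workflows (MLflow-style). As a rough illustration of the concept only, the stdlib sketch below (all class and model names are hypothetical, not MLflow's API) shows registering model versions with metrics and promoting one to Production:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelVersion:
    version: int
    metrics: Dict[str, float]
    stage: str = "None"  # e.g. "Staging", "Production", "Archived"

@dataclass
class ModelRegistry:
    """Toy registry illustrating versioning + promotion; real systems use MLflow's Model Registry."""
    models: Dict[str, List[ModelVersion]] = field(default_factory=dict)

    def register(self, name: str, metrics: Dict[str, float]) -> ModelVersion:
        versions = self.models.setdefault(name, [])
        mv = ModelVersion(version=len(versions) + 1, metrics=metrics)
        versions.append(mv)
        return mv

    def promote(self, name: str, version: int, stage: str) -> None:
        # demote any version currently holding the target stage
        for mv in self.models[name]:
            if mv.stage == stage:
                mv.stage = "Archived"
        self.models[name][version - 1].stage = stage

    def production(self, name: str) -> ModelVersion:
        return next(mv for mv in self.models[name] if mv.stage == "Production")

registry = ModelRegistry()
registry.register("claims-risk", {"auc": 0.81})
registry.register("claims-risk", {"auc": 0.86})
registry.promote("claims-risk", 2, "Production")
print(registry.production("claims-risk").version)  # -> 2
```

The key property the sketch captures is that only one version holds a stage at a time, which is what makes "serve the Production model" an unambiguous lookup.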

Posted 3 weeks ago

Apply

6.0 - 8.0 years

20 - 27 Lacs

indore, pune

Work from Office

Design, develop, and deploy ML models for real-world applications. Implement and optimize ML pipelines using PySpark & MLflow. Process structured/unstructured data using Pandas and NumPy. Train models using scikit-learn, TensorFlow, PyTorch. Integrate LLMs. Required Candidate profile: Strong in Python, ML algorithms, model development, and evaluation techniques. Experience in PySpark, model lifecycle management, MLflow, Pandas, NumPy, and scikit-learn; LLMs (OpenAI, Hugging Face Transformers)
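The pipeline pattern this posting references (scikit-learn's Pipeline, or PySpark ML pipelines) chains fit/transform stages. The stdlib-only sketch below uses a hypothetical Standardize stage and Pipeline class as illustrative stand-ins, not the real library APIs:

```python
class Standardize:
    """Scale values to zero mean / unit variance (population std) - a typical fit/transform stage."""
    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        var = sum((x - self.mean) ** 2 for x in xs) / len(xs)
        self.std = var ** 0.5 or 1.0  # guard against constant input
        return self

    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

class Pipeline:
    """Run each stage's fit on the data, then feed its transform output to the next stage."""
    def __init__(self, steps):
        self.steps = steps

    def fit_transform(self, xs):
        for step in self.steps:
            xs = step.fit(xs).transform(xs)
        return xs

pipe = Pipeline([Standardize()])
print(pipe.fit_transform([1.0, 2.0, 3.0]))  # symmetric values around 0
```

The real libraries add persistence and column handling, but the fit-then-transform contract is the same.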

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

ahmedabad, gujarat

On-site

You should have hands-on experience with cloud platforms such as AWS, Azure, and GCP, including VPC, Load Balancer, VM, SQL, and Storage. Additionally, familiarity with configuration management tools like Puppet, Ansible, Chef, and Terraform is required. It is essential to have proven experience as a DevOps or MLOps Engineer, focusing on machine learning deployments. Strong proficiency in scripting languages like Python and Bash, knowledge of the Linux operating system, and familiarity with CI/CD tools such as Jenkins are necessary. Hands-on experience with containerization and orchestration tools like Docker and Kubernetes is also expected. Excellent problem-solving and communication skills are a must for this role.

Posted 1 month ago

Apply

4.0 - 9.0 years

0 - 0 Lacs

bangalore, noida, chennai

On-site

Exp: 4-8 yrs. Must-have skills: MLOps fundamentals (CI/CD/CT pipelines of ML with Azure DevOps), Snowflake, modeling fundamentals & linear regression, MLflow. Budget: 25 LPA. Location: Remote (permanent work from home). Details: Mode: Remote. Job timings: general shift. Total interview rounds: 2 technical rounds, 1 client round. Job description: ML Engineer. Ability and experience to deploy models on Snowflake. Experience in handling machine learning pipelines and ML engineering, e.g. managing the ML pipeline; an ML pipeline and DevOps experienced candidate is most relevant for the client. Focus on usage of MLflow to monitor performance of models. Some data engineering exposure is important, as this person will work as a bridge between MPG's internal data engineering team and the ML team. DE skills: experience with the Snowflake data platform and infrastructure; Snowpark should be known. Data science theory or a DS background. Primary skills: modeling fundamentals & linear regression, Snowflake, MLOps fundamentals (CI/CD/CT pipelines of ML with Azure DevOps), MLflow. Secondary skills: Stanford NER / CoreNLP. Interested? Share your resume at talent.acquisition@iqanalytic.com
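Using MLflow to monitor model performance, as this role emphasizes, boils down to logging a metric per scoring run and alerting when it degrades against a recent baseline. A stdlib-only sketch of that idea (MetricMonitor and its thresholds are hypothetical; MLflow would persist these as run metrics instead):

```python
import statistics

class MetricMonitor:
    """Log a metric per scoring run and flag degradation vs. a rolling baseline."""
    def __init__(self, window=5, max_drop=0.05):
        self.history = []
        self.window = window      # how many prior runs form the baseline
        self.max_drop = max_drop  # tolerated drop before alerting

    def log(self, value):
        self.history.append(value)

    def degraded(self):
        # need a full baseline window plus the latest point
        if len(self.history) <= self.window:
            return False
        baseline = statistics.mean(self.history[-self.window - 1:-1])
        return (baseline - self.history[-1]) > self.max_drop

monitor = MetricMonitor(window=3)
for auc in [0.84, 0.85, 0.83, 0.84, 0.70]:
    monitor.log(auc)
print(monitor.degraded())  # -> True (latest run fell well below the rolling baseline)
```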

Posted 1 month ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Your IT Future, Delivered. Senior Software Engineer (AI/ML Engineer) With a global team of 5600+ IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the world's biggest logistics company. All our offices have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences. Digitalization. Simply delivered. At DHL IT Services, we are designing, building and running IT solutions for the whole of DPDHL globally. Grow together. The AI & Analytics team builds and runs solutions to get much more value out of our data. We help our business colleagues all over the world with machine learning algorithms, predictive models and visualizations. We manage more than 46 AI & Big Data applications, 3,000 active users, 87 countries and up to 100,000,000 daily transactions. Integration of AI & Big Data into business processes to compete in a data-driven world needs state-of-the-art technology. Our infrastructure, hosted on-prem and in the cloud (Azure and GCP), includes MapR, Airflow, Spark, Kafka, Jupyter, Kubeflow, Jenkins, GitHub, Tableau, Power BI, Synapse (Analytics), Databricks and further interesting tools. We like to do everything in an Agile/DevOps way. No more throwing the problem code to support, no silos. Our teams are completely product oriented, having end-to-end responsibility for the success of our product. Ready to embark on the journey? Here's what we are looking for: Currently, we are looking for an AI / Machine Learning Engineer. In this role, you will have the opportunity to design and develop solutions, contribute to roadmaps of Big Data architectures and provide mentorship and feedback to more junior team members.
We are looking for someone to help us manage the petabytes of data we have and turn them into value. Does that sound a bit like you? Let's talk! Even if you don't tick all the boxes below, we'd love to hear from you; our new department is rapidly growing and we're looking for many people with the can-do mindset to join us on our digitalization journey. Thank you for considering DHL as the next step in your career; we do believe we can make a difference together! What will you need? University degree in Computer Science, Information Systems, Business Administration, or a related field. 2+ years of experience in a Data Scientist / Machine Learning Engineer role. Strong analytic skills related to working with structured, semi-structured and unstructured datasets. Advanced machine learning techniques: Decision Trees, Random Forest, Boosting Algorithms, Neural Networks, Deep Learning, Support Vector Machines, Clustering, Bayesian Networks, Reinforcement Learning, Feature Reduction / Engineering, Anomaly Detection, Natural Language Processing (incl. sentiment analysis, Topic Modeling), Natural Language Generation. Statistics / Mathematics: Data Quality Analysis, Data Identification, Hypothesis Testing, Univariate / Multivariate Analysis, Cluster Analysis, Classification/PCA, Factor Analysis, Linear Modeling, Time Series, distribution / probability theory, and/or strong experience in specialized analytics tools and technologies (including, but not limited to): Lead the integration of large language models into AI applications. Very good Python programming skills. Power BI, Tableau. Develop the application and deploy the model in production. Kubeflow, MLflow, Airflow, Jenkins, CI/CD pipelines. As an AI/ML Engineer, you will be responsible for developing applications and systems that leverage AI tools, Cloud AI services, and Generative AI models.
Your role includes designing cloud-based or on-premises application pipelines that meet production-ready standards, utilizing deep learning, neural networks, chatbots, and image processing technologies. Professional & Technical Skills: Essential Skills: Expertise in Large Language Models. Strong knowledge of statistical analysis and machine learning algorithms. Experience with data visualization tools such as Tableau or Power BI. Practical experience with various machine learning algorithms, including linear regression, logistic regression, decision trees, and clustering techniques. Proficient in data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Awareness of Apache Spark, Hadoop. Awareness of Agile / Scrum ways of working. Identify the right modeling approach(es) for a given scenario and articulate why the approach fits. Assess data availability and modeling feasibility. Review interpretation of model results. Experience in the logistics industry domain would be an added advantage. Roles & Responsibilities: Act as a Subject Matter Expert (SME). Collaborate with and manage team performance. Make decisions that impact the team. Work with various teams and contribute to significant decision-making processes. Provide solutions to challenges that affect multiple teams. Lead the integration of large language models into AI applications. Research and implement advanced AI techniques to improve system performance. Assist in the development and deployment of AI solutions across different domains. You should have: Certifications in some of the core technologies. Ability to collaborate across different teams/geographies/stakeholders/levels of seniority. Customer focus with an eye on continuous improvement. An energetic, enthusiastic and results-oriented personality. Ability to coach other team members; you must be a team player!
Strong will to overcome the complexities involved in developing and supporting data pipelines. Language requirements: English fluent spoken and written (C1 level). An array of benefits for you: Hybrid work arrangements to balance in-office collaboration and home flexibility. Annual Leave: 42 days off apart from Public / National Holidays. Medical Insurance: Self + Spouse + 2 children. An option to opt for Voluntary Parental Insurance (parents / parents-in-law) at a nominal premium, covering pre-existing diseases. In-house training programs: professional and technical training certifications.
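Among the techniques this posting lists, anomaly detection has the simplest baseline: flag points far from the mean in z-score terms. An illustrative stdlib sketch (the parcel-volume numbers are made up):

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose z-score exceeds the threshold.

    This is the simplest anomaly detector; production systems layer on
    seasonality handling and robust statistics, but the idea is the same.
    """
    mean = statistics.mean(values)
    std = statistics.pstdev(values) or 1.0  # avoid dividing by zero on constant series
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

daily_parcels = [100, 102, 98, 101, 99, 100, 500]
print(zscore_anomalies(daily_parcels, threshold=2.0))  # -> [6]
```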

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

You will be working as an AI Platform Engineer in Bangalore as part of the GenAI COE Team. Your key responsibilities will involve developing and promoting scalable AI platforms for customer-facing applications. It will be essential to evangelize the platform with customers and internal stakeholders, ensuring scalability, reliability, and performance to meet business needs. Your role will also entail designing machine learning pipelines for experiment management, model management, feature management, and model retraining. Implementing A/B testing of models and designing APIs for model inferencing at scale will be crucial. You should have proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. As an AI Platform Engineer, you will serve as a subject matter expert in LLM serving paradigms, with in-depth knowledge of GPU architectures. Expertise in distributed training and serving of large language models, along with proficiency in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM, will be required. Demonstrating proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results will be part of your responsibilities. Reducing training and resource requirements for fine-tuning LLM and LVM models will also be essential. Having extensive knowledge of different LLM models and providing insights on their applicability based on use cases is crucial. You should have proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. Your proficiency in DevOps and LLMOps practices, along with knowledge of Kubernetes, Docker, and container orchestration, will be necessary. A deep understanding of LLM orchestration frameworks such as Flowise, Langflow, and Langgraph is also required. 
In terms of skills, you should be familiar with LLM models like Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, and Llama, as well as LLM Ops tools like MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, and Azure AI. Additionally, knowledge of databases/data warehouse systems like DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, and Google BigQuery, as well as cloud platforms such as AWS, Azure, and GCP, is essential. Proficiency in DevOps tools like Kubernetes, Docker, FluentD, Kibana, Grafana, and Prometheus, along with cloud certifications like AWS Professional Solution Architect and Azure Solutions Architect Expert, will be beneficial. Strong programming skills in Python, SQL, and JavaScript are required for this full-time role, with an in-person work location.
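The A/B testing of models mentioned in this posting is often implemented with deterministic hash-based bucketing, so a given user always hits the same model version across requests. A sketch with hypothetical model names and split ratio:

```python
import hashlib

def route_model(user_id: str, treatment_share: float = 0.1) -> str:
    """Deterministically assign a user to the candidate or baseline model.

    Hashing the user id keeps assignment sticky across requests and servers,
    with no shared state needed. Model names here are illustrative.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 1000
    return "model-v2" if bucket < treatment_share * 1000 else "model-v1"

assignments = [route_model(f"user-{i}") for i in range(1000)]
print(assignments.count("model-v2"))  # roughly 100 of 1000 users see the candidate
```

Because the split is a pure function of the user id, inference replicas agree on routing without coordination, which is why this pattern scales well behind a model-serving API.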

Posted 1 month ago

Apply

8.0 - 13.0 years

0 Lacs

pune, maharashtra

On-site

The ML Solutions team within Markets OPS Technology is dedicated to developing solutions using Artificial Intelligence, Machine Learning, and Generative AI. This team is a leader in creating new ideas, innovative technology solutions, and ground-breaking solutions for Markets Operations and other lines of business. We work closely with our clients and business partners to progress solutions from ideation to production by leveraging an entrepreneurial spirit and technological excellence. The ML Solutions team is seeking a Data Scientist/Machine Learning Engineer to drive the design, development, and deployment of innovative AI/ML and GenAI-based solutions. In this hands-on role, you will leverage your expertise to create a variety of AI models, guiding a team from initial concept to successful production. A key aspect involves mentoring team members and fostering their growth. You will collaborate closely with business partners and stakeholders, championing the adoption of these advanced technologies to enhance client experiences, deliver tangible value to our customers, and ensure adherence to regulatory requirements through cutting-edge technical solutions. This position offers a unique opportunity to shape the future of our AI initiatives and make a significant impact on the organization. Key Responsibilities: - Hands-On Execution and Delivery: Actively contribute to the development and delivery of AI solutions, driving innovation and excellence within the team. Take a hands-on approach to ensure AI models are successfully deployed into production environments, meeting high-quality standards and performance benchmarks. - Mentoring Young Talent: Mentor the team, guiding data analysts/ML engineers from concept to production. This involves fostering technical growth, providing project oversight, and ensuring adherence to best practices, ultimately building a high-performing and innovative team.
- Quality Control: Ensure the quality and performance of generative AI models, conducting rigorous testing and evaluation. - Research and Development: Participate in research activities to explore and advance state-of-the-art generative AI techniques. Stay actively engaged in monitoring ongoing research efforts, keeping abreast of emerging trends, and ensuring that the Generative AI team remains at the forefront of the field. - Cross-Functional Collaboration: Collaborate effectively with various teams, including product managers, engineers, and data scientists, to integrate AI technologies into products and services. Skills & Qualifications: - 8 to 13 years of strong hands-on experience in Machine Learning, delivering complex solutions to production. - Experience with Generative AI technologies essential. - Understanding of concepts like supervised and unsupervised learning, clustering, and embeddings. - Knowledge of NLP, Named Entity Recognition, Computer Vision, Transformers, Large Language Models. - In-depth knowledge of deep learning and Generative AI frameworks such as LangChain, LangGraph, CrewAI or similar. - Experience with other open-source frameworks/libraries/APIs like Hugging Face Transformers, spaCy, Pandas, scikit-learn, NumPy, OpenCV. - Experience in using Machine Learning/Deep Learning: XGBoost, LightGBM, TensorFlow, PyTorch, Keras. - Proficiency in Python software development, following Object-Oriented design patterns and best practices. - Strong background in mathematics: linear algebra, probability, statistics, and optimization. - Experience with evaluation and scoring using a framework like MLflow. - Experience with Docker containers and authoring Dockerfiles; experience with K8s is a plus. - Experience with Postgres and vector DBs a plus. - Excellent problem-solving skills and the ability to think creatively.
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams. - Publications and contributions to the AI research community are a plus. - Master's degree/Ph.D. or equivalent experience in Computer Science, Data Science, Statistics, or a related field. - 8-12 years of experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
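The embedding concepts in the qualifications above usually reduce to comparing vectors by cosine similarity. A minimal stdlib version (the 3-d vectors are toy stand-ins for real transformer embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity - the standard way to compare embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# toy 3-d "embeddings"; real ones have hundreds of dimensions
doc_a = [0.9, 0.1, 0.0]
doc_b = [0.8, 0.2, 0.1]
doc_c = [0.0, 0.1, 0.9]
print(round(cosine(doc_a, doc_b), 3))  # close to 1: similar documents
print(round(cosine(doc_a, doc_c), 3))  # close to 0: dissimilar documents
```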

Posted 1 month ago

Apply

1.0 - 6.0 years

8 - 12 Lacs

Chennai

Hybrid

Min 1-3 yrs of exp in data science, NLP & Python. Exp in PyTorch, scikit-learn, and NLP libraries (NLTK, spaCy, Hugging Face). Help deploy AI/ML solutions on AWS, GCP/Azure. Exp in SQL for data manipulation & analysis. Exp in Big Data processing: Spark, Pandas, Dask

Posted 2 months ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Pune

Work from Office

You will be a key member of the Data + AI Pipeline team, leading the integration of Kubeflow, Kubernetes, Docker, Keda, and Python technologies. Your role will involve developing and maintaining AI pipelines that support various projects, ensuring seamless and efficient data processing, model training, and deployment. As part of a dynamic and interdisciplinary team, you will collaborate with experts in data engineering, AI, and software development to create robust and scalable solutions. Job Description • We are seeking a motivated and experienced Data + AI Pipeline Engineer to lead the development and maintenance of our KION Machine Vision AI pipeline infrastructure. • As a Data + AI Pipeline Lead, you will provide technical leadership and strategic direction to the team, and be responsible for designing and implementing scalable and efficient AI pipelines using Kubeflow, Kubernetes, Docker, Keda, and Python. • You will collaborate with cross-functional teams to understand project requirements, define data processing workflows, and ensure the successful deployment of machine learning models. • Your role will involve integrating and optimizing data processing and machine learning components within our pipeline architecture. • You will provide technical leadership and mentorship to junior team members, guiding them in the design and implementation of AI pipelines and fostering their professional growth. • Conduct code reviews and provide constructive feedback to ensure the quality, readability, and maintainability of the codebase across the team. • Collaborate with the software engineering team to ensure the seamless integration of AI pipelines with other software applications. • Implement and maintain CI/CD pipelines to automate the deployment of AI models and ensure continuous integration and delivery. • Work closely with external partners and vendors to leverage the latest advancements in AI and data processing technologies.
Qualifications: • A university degree with a technical focus, preferably in computer science, data science, or a related field. • 10+ years of experience in building and maintaining large-scale AI pipeline projects, with at least 2 years in a leadership or managerial role. • Hands-on experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and services for data processing, storage, and deployment. • Strong programming skills in Python and proficiency in using libraries and frameworks for data manipulation, analysis, and visualization (e.g., pandas, NumPy, matplotlib). • Knowledge of containerization technologies (e.g., Docker, Kubernetes) and orchestration tools for deploying and managing machine learning pipelines at scale. • Expertise in designing and optimizing data processing workflows and machine learning pipelines. • Strong communication skills to collaborate effectively with cross-functional teams and present complex technical concepts in a clear manner. • Experience with CI/CD, automation, and a strong understanding of software engineering best practices. • Ability to oversee complex software architectures and contribute to future-oriented developments in the field of AI and data processing. • Excellent problem-solving skills and the ability to work in a dynamic and fast-paced environment. • Leading, guiding, and technically mentoring the team. • Very good English skills, both written and verbal, to facilitate effective communication within the global team.
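Pipeline orchestrators like Kubeflow and Airflow, which this role centers on, execute tasks in dependency order over a DAG. Python's stdlib graphlib can sketch the core scheduling idea (task names and bodies below are illustrative, not any orchestrator's API):

```python
from graphlib import TopologicalSorter

# Each "task" is a callable; the deps mapping expresses "depends on",
# mirroring how orchestrators decide what can run and in what order.
results = {}
tasks = {
    "ingest":    lambda: results.setdefault("ingest", "raw"),
    "transform": lambda: results.setdefault("transform", results["ingest"] + "->clean"),
    "train":     lambda: results.setdefault("train", results["transform"] + "->model"),
}
deps = {"transform": {"ingest"}, "train": {"transform"}}

# static_order() yields a valid topological ordering of the DAG
for name in TopologicalSorter(deps).static_order():
    tasks[name]()
print(results["train"])  # -> raw->clean->model
```

Real orchestrators add retries, parallel execution of independent branches, and persistence, but the topological-sort core is the same.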

Posted 2 months ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Skills required for an ML engineering expert: 5+ years of experience in Python programming, performance tuning and optimization. Experience in ML engineering; knowledge of Feast, Kubeflow and MLflow. Deep understanding of NumPy, Pandas, DataFrames, etc. Working knowledge of big data environments and data science models is an added advantage. Strong analytical and problem-solving skills, with attention to detail and the ability to work in a fast-paced environment. Excellent communication and collaboration skills, with the ability to work with cross-functional teams. Knowledge of machine learning and data science concepts.
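Feast, named in this posting, is a feature store; its online-serving core - latest feature values keyed by entity - can be sketched in a few lines of stdlib Python (class, method, and feature names here are illustrative, not Feast's actual API):

```python
from collections import defaultdict

class FeatureStore:
    """Toy online feature store: latest value per (entity, feature)."""
    def __init__(self):
        self._store = defaultdict(dict)

    def ingest(self, entity_id, features):
        # newer values overwrite older ones, as in an online store
        self._store[entity_id].update(features)

    def get_online_features(self, entity_id, names):
        row = self._store[entity_id]
        return {n: row.get(n) for n in names}  # None for missing features

fs = FeatureStore()
fs.ingest("customer-42", {"avg_txn_7d": 310.5, "num_logins_30d": 12})
fs.ingest("customer-42", {"avg_txn_7d": 295.0})  # refresh overwrites
print(fs.get_online_features("customer-42", ["avg_txn_7d", "num_logins_30d"]))
# -> {'avg_txn_7d': 295.0, 'num_logins_30d': 12}
```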

Posted 2 months ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Pune, Bengaluru, Greater Noida

Work from Office

Role & responsibilities: Looking for 5 to 8 years of experience as an ML Engineer with strong Azure Cloud DevOps skills and strong DABs (Databricks Asset Bundles) implementation knowledge. 1 Azure Cloud Engineer 2 Azure DevOps CI/CD experienced in DABs 3 ML Engineer for deployment. Translate business requirements into technical solutions. Implementation of MLOps: scalable solutions using AI/ML to reduce the risk of fraud and other financial crime. Creating MLOps architecture and implementing it for multiple models in a scalable and automated way. Designing and implementing end-to-end ML solutions. Operationalize and monitor machine learning models using high-end tools and technologies. Design and implementation of DevOps principles in machine learning and data science; quality assurance and testing. Collaborate with data scientists, engineers, and other key stakeholders. Preferred candidate profile: Azure Cloud Engineering: design, implement, and manage scalable cloud infrastructure on Microsoft Azure; ensure high availability, performance, and security of cloud-based applications; collaborate with cross-functional teams to define and implement cloud solutions. Azure DevOps CI/CD: develop and maintain CI/CD pipelines using Azure DevOps; automate deployment processes to ensure efficient and reliable software delivery; monitor and troubleshoot CI/CD pipelines to ensure smooth operation. DABs (Databricks Asset Bundles) implementation: lead the implementation and management of Databricks Asset Bundles; optimize data workflows and ensure seamless integration with existing systems; provide expertise in DABs to enhance data processing and analytics capabilities. Machine learning deployment: deploy machine learning models into production environments; monitor and maintain ML models to ensure optimal performance; collaborate with data scientists and engineers to integrate ML solutions into applications.

Posted 2 months ago

Apply

7.0 - 10.0 years

1 Lacs

Hyderabad, Telangana, India

On-site

As a Senior Software Engineer on the AI Engineering Team at Cotiviti, you will be a leading force in developing robust, scalable machine learning solutions for healthcare applications. This senior-level position involves significant responsibility, including leading design and development efforts, mentoring junior engineers, and ensuring the delivery of high-quality solutions. Basic Qualifications: Bachelor's degree in Computer Science, Engineering, Math, or a related field, or equivalent experience. 7+ years of experience with the Hadoop tech stack (Spark, Kafka). Should have experience with batch processing of large-scale data with Spark, and real-time processing without Spark. Proficiency in programming languages such as Scala or Python. Extensive experience with Kafka and data streaming platforms. Advanced knowledge of Databricks on AWS or similar cloud platforms. Proven experience building and maintaining microservices. Deep understanding of data architecture principles. Experience leading design and development of large systems. Proficiency with CI/CD tools like Jenkins. Experience with Unix/Linux operating systems. Familiarity with Agile processes and tools like Jira and Confluence. Strong drive to learn and advocate for development best practices. Strong knowledge of troubleshooting and optimizing Spark applications. Preferred Qualifications: Experience with Databricks on Azure/AWS. Experience with Kafka, DataStream/DataFrame/DataSet. Advanced proficiency with containerization tools like Docker, Kubernetes. Knowledge of machine learning frameworks and tools such as DataRobot, H2O, MLflow. Experience with big data tools like Spark, Scala, Oozie, Hive or similar. Streaming technologies: Kafka, Spark Streaming, RabbitMQ. Experience with Continuous Integration and Delivery, unit testing, and functional automation testing. API development experience is a good addition. Healthcare domain experience is a plus.
Responsibilities: Lead the development and implementation of machine learning solutions for healthcare applications. Guide and mentor a team of developers and testers. Collaborate with data scientists and other engineers to design and build scalable solutions. Write, test, and maintain high-quality code with solid code coverage. Lead design and code review sessions. Troubleshoot and resolve complex technical issues. Document your work and share knowledge with the team. Advocate for and implement development best practices. Train and mentor junior engineers and software engineers.
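Much of the Spark/Kafka work this posting describes is micro-batch processing: consuming an unbounded stream in fixed-size chunks. A stdlib generator captures the shape (heavily simplified; real engines like Spark Structured Streaming add triggers, watermarks, and checkpointing):

```python
from itertools import islice

def micro_batches(stream, batch_size):
    """Group an event stream into fixed-size micro-batches.

    Each batch is processed as a small bounded job, which is how
    micro-batch engines turn streaming into repeated batch work.
    """
    it = iter(stream)
    while batch := list(islice(it, batch_size)):
        yield batch

events = range(7)  # stand-in for an unbounded Kafka topic
print([sum(b) for b in micro_batches(events, 3)])  # -> [3, 12, 6]
```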

Posted 2 months ago

Apply

3.0 - 5.0 years

18 - 30 Lacs

Noida, Delhi / NCR

Work from Office

Role & responsibilities Key Responsibilities: Design, develop, and optimize machine learning models for various business applications. Build and maintain scalable AI feature pipelines for efficient data processing and model training. Develop robust data ingestion, transformation, and storage solutions for big data. Implement and optimize ML workflows, ensuring scalability and efficiency. Monitor and maintain deployed models, ensuring performance and reliability, retraining when necessary. Qualifications and Experience: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3.5 to 5 years of experience in machine learning, deep learning, or data science roles. Proficiency in Python and ML frameworks/tools such as PyTorch and LangChain. Experience with data processing frameworks like Spark, Dask, Airflow and Dagster. Hands-on experience with cloud platforms (AWS, GCP, Azure) and ML services. Experience with MLOps tools like MLflow and Kubeflow. Familiarity with containerisation and orchestration tools like Docker and Kubernetes. Excellent problem-solving skills and ability to work in a fast-paced environment. Strong communication and collaboration skills.
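A recurring step in the feature pipelines described above is validating ingested rows against a schema before they reach training. A hedged stdlib sketch (the schema, column names, and rows are all made up for illustration):

```python
def validate_rows(rows, schema):
    """Split ingested rows into valid/rejected against a simple type schema.

    Production pipelines use richer tools (e.g. expectation suites), but the
    gatekeeping idea - quarantine bad rows before training - is the same.
    """
    valid, rejected = [], []
    for row in rows:
        ok = all(isinstance(row.get(col), typ) for col, typ in schema.items())
        (valid if ok else rejected).append(row)
    return valid, rejected

schema = {"user_id": str, "amount": float}
rows = [
    {"user_id": "u1", "amount": 9.5},
    {"user_id": "u2", "amount": "9.5"},  # wrong type -> rejected
    {"user_id": "u3"},                   # missing column -> rejected
]
valid, rejected = validate_rows(rows, schema)
print(len(valid), len(rejected))  # -> 1 2
```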

Posted 3 months ago

Apply

5.0 - 8.0 years

25 - 30 Lacs

Indore, Chennai

Work from Office

We are seeking a Senior Python DevOps Engineer to develop Python services and build CI/CD pipelines for AI/data platforms. Must have strong cloud, container, and ML workflow deployment experience. Required Candidate profile: Experienced Python DevOps engineer with expertise in CI/CD, cloud, and AI platforms. Skilled in Flask/FastAPI, Airflow, MLflow, and model deployment on Dataiku and OpenShift.

Posted 3 months ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Chennai

Work from Office

We are seeking a highly experienced and technically adept Lead AI/ML Engineer to spearhead the development and deployment of cutting-edge AI solutions, with a focus on Generative AI and Natural Language Processing (NLP). The ideal candidate will be responsible for leading a high-performing team, architecting scalable ML systems, and driving innovation across AI/ML projects using modern toolchains and cloud-native technologies. Key Responsibilities Team Leadership: Lead, mentor, and manage a team of data scientists and ML engineers; drive technical excellence and foster a culture of innovation. AI/ML Solution Development: Design and deploy end-to-end machine learning and AI solutions, including Generative AI and NLP applications. Conversational AI: Build LLM-based chatbots and document intelligence tools using frameworks like LangChain, Azure OpenAI, and Hugging Face. MLOps Execution: Implement and manage the full ML lifecycle using tools such as MLflow, DVC, and Kubeflow to ensure reproducibility, scalability, and efficient CI/CD of ML models. Cross-functional Collaboration: Partner with business and engineering stakeholders to translate requirements into impactful AI solutions. Visualization & Insights: Develop interactive dashboards and data visualizations using Streamlit, Tableau, or Power BI for presenting model results and insights. Project Management: Own delivery of projects with clear milestones, timelines, and communication of progress and risks to stakeholders.
Required Skills & Qualifications Languages & Frameworks: Proficient in Python and frameworks like TensorFlow, PyTorch, Keras, FastAPI, Django. NLP & Generative AI: Hands-on experience with BERT, LLaMA, spaCy, LangChain, Hugging Face, and other LLM-based technologies. MLOps Tools: Experience with MLflow, Kubeflow, DVC, ClearML for managing ML pipelines and experiment tracking. Visualization: Strong in building visualizations and apps using Power BI, Tableau, Streamlit. Cloud & DevOps: Expertise with Azure ML, Azure OpenAI, Docker, Jenkins, GitHub Actions. Databases & Data Engineering: Proficient with SQL/NoSQL databases and handling large-scale datasets efficiently. Preferred Qualifications Master's or PhD in Computer Science, AI/ML, Data Science, or a related field. Experience working in agile product development environments. Strong communication and presentation skills with technical and non-technical stakeholders.
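The LLM chatbot and document intelligence work this role describes typically follows the retrieval-augmented generation pattern: retrieve relevant documents, then include them in the prompt. The sketch below fakes retrieval with keyword overlap purely for illustration (real stacks like LangChain use embedding similarity against a vector store; the documents and prompt wording are invented):

```python
def build_rag_prompt(question, documents, top_k=2):
    """Naive RAG prompt assembly: rank documents by keyword overlap with the
    question, then stuff the best ones into the prompt context."""
    q_terms = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,  # stable sort: ties keep their original order
    )
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Invoices are archived after 90 days.",
    "Shipments to the EU need a customs form.",
    "Passwords rotate every 60 days.",
]
prompt = build_rag_prompt("How long are invoices archived?", docs)
print(prompt.splitlines()[1])  # the best-matching document appears first
```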

Posted 3 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Gurugram

Hybrid

Exciting opportunity for an ML Platform Specialist to join a leading technology-driven firm. You will be designing, deploying, and maintaining scalable machine learning infrastructure with a strong focus on Databricks, the model lifecycle, and MLOps practices. Location: Gurugram (Hybrid) Your Future Employer Our client is a leading digital transformation partner driving innovation across industries. With a strong focus on data-driven solutions and cutting-edge technologies, they are committed to fostering a collaborative and growth-focused environment. Responsibilities Designing and implementing scalable ML infrastructure on the Databricks Lakehouse Building CI/CD pipelines and workflows for the machine learning lifecycle Managing model monitoring, versioning, and registry using MLflow and Databricks Collaborating with cross-functional teams to optimize machine learning workflows Driving continuous improvement in MLOps and automation strategies Requirements Bachelor's or Master's in Computer Science, ML, Data Engineering, or a related field 3-5 years of experience in MLOps, with strong expertise in Databricks and Azure ML Proficient in Python, PySpark, MLflow, Delta Lake, and Databricks Feature Store Hands-on experience with cloud platforms (Azure/AWS/GCP), CI/CD, Git Knowledge of Terraform, Kubernetes, Azure DevOps, and distributed computing is a plus What's in it for you Competitive compensation with performance-driven growth opportunities Work on cutting-edge MLOps infrastructure and enterprise-scale ML solutions Collaborative, diverse, and innovation-driven work culture Continuous learning, upskilling, and career development support
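The model versioning and CI/CD workflows this role covers usually end in a promotion gate: a candidate model replaces production only if it clearly wins on the tracked metrics. An illustrative check (the function name, metric names, and threshold are hypothetical, not any registry's API):

```python
def should_promote(candidate_metrics, production_metrics, min_gain=0.01):
    """CI gate: promote only if the candidate beats production on every
    tracked metric by at least min_gain - a simplified version of the
    automated checks an MLflow-based registry workflow would run."""
    return all(
        candidate_metrics[m] >= production_metrics[m] + min_gain
        for m in production_metrics
    )

prod = {"auc": 0.86, "recall": 0.71}
cand = {"auc": 0.88, "recall": 0.74}
print(should_promote(cand, prod))  # -> True
```

Requiring a minimum gain (rather than any improvement) guards against promoting models whose apparent edge is within evaluation noise.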

Posted 3 months ago

Apply
Page 1 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
