5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for an enthusiastic Machine Learning Engineer to join our growing team. The hire will collaborate with other data scientists and engineers across the organization to develop production-quality models for a variety of problems across Razorpay. Possible problems include: making recommendations to merchants from Razorpay’s suite of products; cost optimisation of transactions for merchants; automatic address disambiguation / correction to enable tracking customer purchases using advanced natural language processing techniques; computer vision techniques for auto-verifications; running large-scale bandit experiments to optimize Razorpay’s merchant-facing web pages at scale; and many more. In addition, we expect the MLE to be adept at productionising ML models using state-of-the-art systems. As part of the DS team @ Razorpay, you’ll work with some of the smartest engineers/architects/data scientists/product leaders in the industry and have the opportunity to solve complex and critical problems for Razorpay. As a Senior MLE, you will also have the opportunity to partner with and be mentored by senior engineers across the organization and lay the foundation for a world-class DS team here at Razorpay. Come with the right attitude, and fun and growth are guaranteed!
Required qualifications
5+ years of experience doing ML in a production environment and productionising ML models at scale
Bachelor's (required) or Master's in a quantitative field such as computer science, operations research, statistics, mathematics, or physics
Familiarity with basic machine learning techniques: regression, classification, clustering, and model metrics and performance (AUC, ROC, precision, recall, and their various flavours)
Basic knowledge of advanced machine learning techniques: regression, clustering, recommender systems, ranking systems, and neural networks
Expertise in coding in Python, good knowledge of at least one language from C, C++, and Java, and at least one scripting language (Perl, shell commands)
Experience with big data tools like Spark, and experience working with Databricks / DataRobot
Experience with AWS's suite of tools for production-quality ML work, or alternatively familiarity with Microsoft Azure / GCP
Experience deploying complex ML algorithms to production in collaboration with engineers, using Flask, MLflow, Seldon, etc.
Good to have: Excellent communication skills and the ability to keep stakeholders informed of progress / blockers
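The classification metrics this posting names (precision, recall, AUC / ROC) can be computed from first principles. A minimal dependency-free sketch; the labels, scores, and 0.5 threshold below are invented for illustration:

```python
# Toy sketch of the metrics named above; labels and scores are made up.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def roc_auc(y_true, scores):
    # ROC AUC equals the probability that a random positive example
    # outscores a random negative one (ties count as half a win).
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.4, 0.7, 0.3, 0.2, 0.6]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # hard labels at 0.5
print(precision_recall(y_true, y_pred), roc_auc(y_true, scores))
```

In practice these come from a library such as scikit-learn; the point here is what the numbers mean.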
Posted 2 weeks ago
5.0 years
10 Lacs
Calcutta
On-site
Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.
Responsibilities:
A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. It often involves mentoring junior team members and collaborating with cross-functional teams.
Key Responsibilities:
Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for the high-volume, high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Data Architecture & Design: Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes.
Data Pipeline Development: Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams.
MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions.
Data Quality & Governance: Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
Tooling & Infrastructure: Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship: Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers.
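The model-versioning idea behind registries such as MLflow (named in the responsibilities) can be shown with a toy in-memory sketch. The ToyModelRegistry class below is hypothetical and is not MLflow's API; it only illustrates the record-per-version concept:

```python
# Hypothetical in-memory registry illustrating model versioning:
# each register() call appends an immutable, numbered version record.
import hashlib
import json

class ToyModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name, params, metrics):
        record = {
            "version": len(self._versions.get(name, [])) + 1,
            "params": params,
            "metrics": metrics,
            # A content hash lets a deployment verify it loaded what was logged.
            "hash": hashlib.sha256(
                json.dumps(params, sort_keys=True).encode()
            ).hexdigest()[:12],
        }
        self._versions.setdefault(name, []).append(record)
        return record["version"]

    def latest(self, name):
        return self._versions[name][-1]

reg = ToyModelRegistry()
reg.register("churn", {"max_depth": 4}, {"auc": 0.81})
v = reg.register("churn", {"max_depth": 6}, {"auc": 0.84})
print(v, reg.latest("churn")["metrics"]["auc"])
```

A real registry adds artifact storage, stage transitions (staging/production), and lineage on top of this.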
Core Competencies
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering or a related field. Strong understanding of data modeling, ETL/ELT concepts, and distributed systems. Experience with big data tools and cloud platforms.
Soft Skills: Strong problem-solving and critical-thinking skills. Excellent communication and collaboration abilities. Leadership experience and the ability to guide technical decisions.
How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities
Build and deploy production-grade ML pipelines for varied use cases across operations, manufacturing, supply chain, and more
Work hands-on in designing, training, and fine-tuning models across traditional ML, deep learning, and GenAI (LLMs, diffusion models, etc.)
Collaborate with data scientists to transform exploratory notebooks into scalable, maintainable, and monitored deployments
Implement CI/CD pipelines, version control, and experiment tracking using tools like MLflow, DVC, or similar
Perform shadow deployment and A/B testing of production models
Partner with data engineers to build data pipelines that support real-time or batch model inference
Ensure high availability, performance, and observability of deployed ML solutions using MLOps best practices
Conduct code reviews and performance tuning, and contribute to ML infrastructure improvements
Support the end-to-end lifecycle of ML products
Contribute to knowledge sharing, reusable component development, and internal upskilling initiatives
Qualifications
Bachelor's in Computer Science, Engineering, Data Science, or a related field; Master's degree preferred
4–6 years of experience in developing and deploying machine learning models, with significant exposure to MLOps practices
Experience in implementing and productionizing Generative AI applications using LLMs (e.g., OpenAI, HuggingFace, LangChain, RAG architectures)
Strong programming skills in Python; familiarity with ML libraries such as scikit-learn, TensorFlow, PyTorch
Hands-on experience with tools like MLflow, Docker, Kubernetes, FastAPI/Flask, Airflow, Git, and cloud platforms (Azure/AWS)
Solid understanding of software engineering fundamentals and DevOps/MLOps workflows
Exposure to at least 2–3 industry domains (energy, manufacturing, finance, etc.) preferred
Excellent problem-solving skills, an ownership mindset, and the ability to work in agile cross-functional teams
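Shadow deployment, named in the responsibilities, means the candidate model scores live traffic silently while only the production model's answer is served, so the two can be compared offline before a cutover. A minimal sketch; both "models" are stand-in functions with invented thresholds:

```python
# Sketch of the shadow-deployment pattern: serve the production model,
# score the candidate silently, and log pairs for offline comparison.

def prod_model(x):
    return 1 if x > 0.5 else 0

def shadow_model(x):
    return 1 if x > 0.4 else 0

shadow_log = []

def predict(x):
    served = prod_model(x)       # only this result reaches the caller
    candidate = shadow_model(x)  # evaluated but never served
    shadow_log.append((x, served, candidate))
    return served

results = [predict(x) for x in (0.45, 0.7, 0.2)]
disagreements = sum(1 for _, a, b in shadow_log if a != b)
print(results, disagreements)  # callers see prod outputs; 1 disagreement logged
```

The disagreement rate from the log is what decides whether the shadow model graduates to an A/B test.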
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Biz2X
Biz2X is the leading digital lending platform, enabling financial providers to power growth with a modern omni-channel experience, best-in-class risk management tools, and a comprehensive yet flexible servicing engine. The company partners with financial institutions to support their digital transformation efforts with Biz2X’s digital lending platform. Biz2X solutions not only reduce operational expenses but also accelerate lending growth by significantly improving client experience, reducing total turnaround time, and equipping relationship managers with powerful monitoring insights and alerts. Read our latest press release: Press Release - Biz2X
About Biz2Credit
Biz2Credit is a digital-first provider of small business funding. Biz2Credit leverages data, cash flow insights, and the latest technology to give business owners an automated small business funding platform. Since its inception, Biz2Credit has been the best place for small businesses to get funding online. With over 750 employees globally, our team – made up of top-notch engineers, marketers, and data scientists – is building the next generation in business lending solutions. Read our latest press release: Biz2Credit in the News - Biz2Credit
Learn more: www.biz2x.com & www.biz2credit.com
Role – Lead Engineer – AI, Machine Learning
Job Overview: We are seeking a Lead Engineer to drive the development and deployment of sophisticated AI solutions in our fintech products. You will lead a team of engineers, oversee MLOps pipelines, and manage large language models (LLMs) to enhance our financial technology services.
Key Responsibilities:
AI/ML Development: Design and implement advanced ML models for applications including fraud detection, credit scoring, and algorithmic trading.
MLOps: Develop and manage MLOps pipelines using tools such as MLflow, Kubeflow, and Airflow for CI/CD, model monitoring, and automation.
LLMOps: Optimize and operationalize LLMs (e.g., GPT-4, BERT) for fintech applications like automated customer support and sentiment analysis.
Technical Leadership: Mentor and lead a team of ML engineers and data scientists, conducting code reviews and ensuring best practices.
Collaboration: Work with product managers, data engineers, and business analysts to align technical solutions with business objectives.
Experience in building RAG pipelines.
Qualifications:
Experience: 7-9 years in AI, ML, MLOps, and LLMOps with a focus on fintech.
Technical Skills: Expertise in TensorFlow, PyTorch, scikit-learn, and MLOps tools (MLflow, Kubeflow). Proficiency in large language models (LLMs) and cloud platforms (AWS, GCP, Azure). Strong programming skills in Python, Java, or Scala.
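The retrieval step of the RAG pipelines this role asks for ranks stored documents against a query, then hands the best match to the generator as context. A toy bag-of-words sketch with invented documents; production systems use learned embeddings and a vector database rather than word counts:

```python
# Toy RAG retrieval: rank documents by cosine similarity of
# bag-of-words vectors; the top document becomes the LLM's context.
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

docs = [
    "refund policy for declined card payments",
    "how to rotate api keys for the sandbox",
    "credit scoring model inputs and fairness checks",
]
vecs = [Counter(d.split()) for d in docs]

def retrieve(query, k=1):
    q = Counter(query.split())
    ranked = sorted(range(len(docs)), key=lambda i: cosine(q, vecs[i]),
                    reverse=True)
    return [docs[i] for i in ranked[:k]]

context = retrieve("inputs to the credit scoring model")[0]
print(context)
```

The generator call that follows (prompting an LLM with the retrieved context) is the other half of the pipeline and is omitted here.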
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India’s top 1% data scientists for a unique job opportunity to work with industry leaders.
Who can be a part of the community? We are looking for top-tier data scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, this is your chance to collaborate with industry leaders.
What’s in it for you?
Pay above market standards
The role is contract-based, with project timelines of 2–12 months, or freelancing
Be a part of an elite community of professionals who can solve complex AI challenges
Work location could be: remote (highly likely), onsite at a client location, or Deccan AI’s office in Hyderabad or Bangalore
Responsibilities:
Lead the design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
Architect advanced machine learning models (deep learning, RL, ensembles) and apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
Own the full lifecycle of data science projects—from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models.
Required Skills:
Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure).
Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.
Nice to Have: Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink). Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices. Contributions to open-source projects or published research.
What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.
Skip the noise. Focus on opportunities built for you!
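The A/B testing this posting emphasises is commonly validated with a two-proportion z-test on conversion counts. A small stdlib-only sketch; the traffic and conversion numbers are illustrative, not from any real experiment:

```python
# Two-proportion z-test for an A/B experiment: is variant B's
# conversion rate significantly different from A's?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 10% vs 13% conversion on 2000 users per arm.
z, p = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2), p < 0.05)
```

Real validation also needs a pre-registered sample size and guardrail metrics; the test itself is only the final arithmetic.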
Posted 2 weeks ago
15.0 years
0 Lacs
India
On-site
Job Location: Hyderabad / Bangalore / Pune. Immediate joiners / less than 30 days.
About the Role
We are looking for a seasoned AI/ML Solutions Architect with deep expertise in designing and deploying scalable AI/ML and GenAI solutions on cloud platforms. The ideal candidate will have a strong track record in BFSI, leading end-to-end projects—from use case discovery to productionization—while ensuring governance, compliance, and performance at scale.
Key Responsibilities
Lead the design and deployment of enterprise-scale AI/ML and GenAI architectures.
Drive end-to-end AI/ML project delivery: discovery, prototyping, productionization.
Architect solutions using leading cloud-native AI services (AWS, Azure, GCP).
Implement MLOps/LLMOps pipelines for model lifecycle and automation.
Guide teams in selecting and integrating GenAI/LLM frameworks (OpenAI, Cohere, Hugging Face, LangChain, etc.).
Ensure robust AI governance, model risk management, and compliance practices.
Collaborate with senior business stakeholders and cross-functional engineering teams.
Required Skills & Experience
15+ years in AI/ML, cloud architecture, and data engineering.
At least 10 end-to-end AI/ML project implementations.
Hands-on expertise in one or more of the following:
ML frameworks: scikit-learn, XGBoost, TensorFlow, PyTorch
GenAI/LLM tools: OpenAI, Cohere, LangChain, Hugging Face, FAISS, Pinecone
Cloud platforms: AWS, Azure, GCP (AI/ML services)
MLOps: MLflow, SageMaker Pipelines, Kubeflow, Vertex AI
Strong understanding of data privacy, model governance, and compliance frameworks in BFSI.
Proven leadership of cross-functional technical teams and stakeholder engagement.
Posted 2 weeks ago
0.0 - 5.0 years
0 Lacs
Thiruvananthapuram District, Kerala
On-site
Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.
Key Responsibilities:
Design and manage CI/CD pipelines for both software applications and machine learning workflows.
Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
Build robust monitoring, logging, and alerting systems for AI applications.
Manage containerized services with Docker and orchestration platforms like Kubernetes.
Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
Ensure system reliability, scalability, and security across all environments.
Requirements:
5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
Experience with ML pipelines, model versioning, and ML monitoring tools.
Scripting skills in Python, Bash, or similar for automation tasks.
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
Understanding of ML lifecycle management and reproducibility.
Preferred Qualifications:
Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
Exposure to data versioning, feature stores, and model registries.
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus.
What We Offer:
Work on cutting-edge AI platforms and infrastructure
Cross-functional collaboration with top ML, research, and product teams
Competitive compensation package – no constraints for the right candidate
Send your resume to: thasleema@qcentro.com
Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: reliably commute or plan to relocate before starting work (required)
Experience: DevOps and MLOps: 5 years (required)
Work Location: In person
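One concrete form of the ML monitoring this role covers is an input-drift check. Below is a sketch of the population stability index (PSI) between a training sample and live traffic; the bin count, the data, and the common 0.25 alert threshold are all assumptions for illustration:

```python
# PSI drift check: compare the binned distribution of a feature in
# training data vs live traffic; large PSI means the input has shifted.
import math

def psi(expected, actual, bins=4, lo=0.0, hi=1.0):
    width = (hi - lo) / bins
    def frac(xs, i):
        count = sum(
            1 for x in xs
            if lo + i * width <= x < lo + (i + 1) * width
            or (i == bins - 1 and x == hi)
        )
        return max(count / len(xs), 1e-6)  # floor avoids log(0)
    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live = [0.6, 0.7, 0.7, 0.8, 0.9, 0.9, 0.95, 0.99]
score = psi(train, live)
print(score > 0.25)  # rule of thumb: PSI > 0.25 signals material drift
```

In a production stack this check would feed the alerting systems (Prometheus, CloudWatch, etc.) named in the requirements.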
Posted 2 weeks ago
5.0 years
0 Lacs
Tamil Nadu, India
On-site
Role: Sr. AI/ML Engineer
Years of experience: 5+ years (with a minimum of 4 years of relevant experience)
Work mode: WFO – Chennai (mandatory)
Type: FTE
Notice period: Immediate to 15 days ONLY
Key skills: Python, TensorFlow, Generative AI, Machine Learning, AWS, Agentic AI, OpenAI, Claude, FastAPI
JD:
Experience in GenAI, CI/CD pipelines, and scripting languages, with a deep understanding of version control systems (e.g., Git), containerization (e.g., Docker), continuous integration/deployment tools (e.g., Jenkins), cloud computing platforms (e.g., AWS, GCP, Azure), Kubernetes, and Kafka; third-party integration is a plus.
Experience building production-grade ML pipelines.
Proficient in Python and frameworks like TensorFlow, Keras, or PyTorch.
Experience with cloud build, deployment, and orchestration tools.
Experience with MLOps tools such as MLflow, Kubeflow, Weights & Biases, AWS SageMaker, Vertex AI, DVC, Airflow, Prefect, etc.
Experience in statistical modeling, machine learning, data mining, and unstructured data analytics.
Understanding of the ML lifecycle and MLOps, with hands-on experience productionising ML models.
Detail-oriented, with the ability to work both independently and collaboratively.
Ability to work successfully with multi-functional teams, principals, and architects, across organizational boundaries and geographies.
Equal comfort driving low-level technical implementation and high-level architecture evolution.
Experience working with data engineering pipelines.
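The posting names FastAPI for serving productionised models. The handler below sketches the shape of such an endpoint without the framework itself: validate the JSON payload, run a stand-in model, return a structured response. The payload schema and model weights are invented; in a real service this logic would sit behind a FastAPI route with pydantic validation:

```python
# Framework-free sketch of an ML prediction endpoint handler.
import json
import math

def score(features):
    # Stand-in for a trained model: a logistic score with made-up weights.
    weights = {"amount": 0.002, "age_days": -0.001}
    s = sum(weights.get(k, 0.0) * v for k, v in features.items())
    return 1 / (1 + math.exp(-s))

def predict_handler(raw_body):
    # Validate the request body before touching the model.
    try:
        payload = json.loads(raw_body)
        features = payload["features"]
    except (ValueError, KeyError, TypeError):
        return {"status": 400,
                "error": "body must be JSON with a 'features' object"}
    return {"status": 200, "score": round(score(features), 4)}

resp = predict_handler('{"features": {"amount": 500, "age_days": 30}}')
print(resp)
```

Malformed input returns a 400-shaped response instead of raising, which is the behaviour a web framework would otherwise have to bolt on.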
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.
Responsibilities:
A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. It often involves mentoring junior team members and collaborating with cross-functional teams.
Key Responsibilities:
Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for the high-volume, high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Data Architecture & Design: Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes.
Data Pipeline Development: Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams.
MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions.
Data Quality & Governance: Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
Tooling & Infrastructure: Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship: Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers.
Core Competencies
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering or a related field. Strong understanding of data modeling, ETL/ELT concepts, and distributed systems. Experience with big data tools and cloud platforms.
Soft Skills: Strong problem-solving and critical-thinking skills. Excellent communication and collaboration abilities. Leadership experience and the ability to guide technical decisions.
How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
Posted 2 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.
Responsibilities:
A Senior Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. It often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams.
Key Responsibilities:
Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for the high-volume, high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Data Architecture & Design: Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes.
Data Pipeline Development: Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams.
MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions.
Data Quality & Governance: Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
Tooling & Infrastructure: Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship: Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers.
Core Competencies
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering or a related field. Strong understanding of data modeling, ETL/ELT concepts, and distributed systems. Experience with big data tools and cloud platforms.
Soft Skills: Strong problem-solving and critical-thinking skills. Excellent communication and collaboration abilities. Leadership experience and the ability to guide technical decisions.
How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Scientist with a focus on Predictive Analytics and expertise in Databricks, your primary responsibilities will involve designing and implementing predictive models for various applications such as forecasting, churn analysis, and fraud detection. You will utilize tools like Python, SQL, Spark MLlib, and Databricks ML to deploy these models effectively. Your role will also include building end-to-end machine learning pipelines on the Databricks Lakehouse platform, encompassing data ingestion, feature engineering, model training, and deployment. It will be essential to optimize model performance through techniques like hyperparameter tuning, AutoML, and leveraging MLflow for tracking. Collaboration with engineering teams will be a key aspect of your job to ensure the operationalization of models, both in batch and real-time scenarios, using Databricks Jobs or REST APIs. You will be responsible for implementing Delta Lake to support scalable and ACID-compliant data workflows, as well as enabling CI/CD for machine learning pipelines using Databricks Repos and GitHub Actions. In addition to your technical duties, troubleshooting Spark Jobs and resolving issues within the Databricks Environment will be part of your routine tasks. To excel in this role, you should possess 3 to 5 years of experience in predictive analytics, with a strong background in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark is crucial for success in this position. Familiarity with tools like MLflow, Feature Store, and Unity Catalog for governance purposes will be advantageous. Industry experience in Life Insurance or Property & Casualty (P&C) is preferred, and holding a certification as a Databricks Certified ML Practitioner would be considered a plus. Your technical skill set should include proficiency in Python, PySpark, MLflow, and Databricks AutoML. 
Expertise in predictive modeling techniques such as classification, clustering, regression, time-series analysis, and NLP is required. Familiarity with cloud platforms such as Azure or AWS, as well as Delta Lake and Unity Catalog, will also be beneficial for this role.
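The tune-and-track workflow this posting describes (hyperparameter tuning with MLflow-style run tracking) can be sketched in a few lines. This is a stdlib-only illustration: `track_run` and `evaluate` are hypothetical stand-ins for MLflow's `log_param`/`log_metric` calls and a real cross-validated training objective.

```python
# Minimal sketch of the tune-and-track loop described above.
# `runs`, `track_run`, and `evaluate` are illustrative stand-ins:
# on Databricks you would train a real model and log to MLflow instead.

runs = []  # stand-in for an MLflow experiment

def track_run(params, metric):
    """Record one trial, as mlflow.log_param/log_metric would."""
    runs.append({"params": params, "rmse": metric})

def evaluate(alpha, depth):
    """Toy objective standing in for cross-validated model error."""
    return abs(alpha - 0.1) + abs(depth - 4) * 0.05

def tune(alphas, depths):
    """Grid-search the toy objective, tracking every trial."""
    for alpha in alphas:
        for depth in depths:
            track_run({"alpha": alpha, "depth": depth}, evaluate(alpha, depth))
    return min(runs, key=lambda r: r["rmse"])

best = tune(alphas=[0.01, 0.1, 1.0], depths=[2, 4, 8])
print(best["params"])  # the lowest-error configuration
```

The same loop structure carries over directly when each trial is an `mlflow.start_run()` block.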
Posted 2 weeks ago
6.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position: AI/ML Engineer
Location: Noida
Experience: 6 to 8 Years
Joining: Immediate

Are you a problem solver with a deep understanding of machine learning systems and a passion for driving innovation through AI? We are looking for a skilled AI/ML Engineer to join our team in Noida and contribute to high-impact projects across various domains.

Key Responsibilities:
- Design, develop, and deploy end-to-end machine learning models
- Work on deep learning, natural language processing, and computer vision projects
- Collaborate with data scientists, product managers, and backend engineers to integrate models into scalable systems
- Optimize model performance using MLOps pipelines, cloud platforms (AWS/GCP), and containerized environments (Docker, Kubernetes)
- Stay updated with the latest in AI research and tools, applying innovative approaches to real-world problems

Must-Have Skills:
- 6-8 years of experience in AI/ML development
- Strong proficiency in Python, TensorFlow, PyTorch, Scikit-learn
- Sound knowledge of supervised/unsupervised learning, model evaluation, feature engineering, and data preprocessing
- Hands-on experience with cloud services (AWS/GCP/Azure) and CI/CD pipelines
- Solid understanding of algorithms, data structures, and system design principles

Good to Have:
- Experience with LangChain, LLMs, or AI agents
- Familiarity with streaming data, real-time inference, or Edge AI
- Exposure to MLflow, Kubeflow, or other MLOps tools

Apply Now: Send your resume to palak.sharma@mobcoder.com
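The model-evaluation skill listed above boils down to metrics like precision and recall over a held-out set. A stdlib-only sketch, with toy label vectors (real work would use a proper validation split and a library such as scikit-learn):

```python
# Precision and recall computed directly from binary labels and
# predictions. The toy vectors below are illustrative only.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")
```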
Posted 2 weeks ago
6.0 years
20 - 25 Lacs
Bengaluru, Karnataka, India
Remote
Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company: A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they're seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview: As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.
Key Responsibilities:
- Design and develop robust AI product features aligned with user and business needs
- Maintain and enhance existing ML/AI systems
- Build and manage ML pipelines for training, deployment, monitoring, and experimentation
- Deploy scalable inference APIs and conduct A/B testing
- Optimize GPU architectures and fine-tune transformer/LLM models
- Build and deploy LLM applications tailored to real-world use cases
- Implement DevOps/MLOps best practices with tools like Docker and Kubernetes

Tech Stack & Tools:
- Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers, PyTorch, TensorFlow, Scikit-learn
- LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse, MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
- Cloud & Infrastructure: AWS, Azure, Kubernetes, Docker
- Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
- Languages: Python, SQL, JavaScript

What You'll Do:
- Collaborate with product, research, and engineering teams to build scalable AI solutions
- Implement advanced NLP and Generative AI models (e.g., RAG, Transformers)
- Monitor and optimize model performance and deployment pipelines
- Build efficient, scalable data and feature pipelines
- Stay updated on industry trends and contribute to internal innovation
- Present key insights and ML solutions to technical and business stakeholders

Requirements (Must-Have):
- 3–6 years of experience in Machine Learning and software/data engineering
- Master's degree (or equivalent) in ML, AI, or related technical fields
- Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn
- Familiarity with MLOps, model deployment, and production pipelines
- Experience working with LLMs and modern NLP techniques
- Ability to work collaboratively in a fast-paced, product-driven environment
- Strong problem-solving and communication skills

Bonus certifications:
- AWS Machine Learning Specialty
- AWS Solution Architect – Professional
- Azure Solutions Architect Expert

Why Apply:
- Work directly with a high-caliber founding team
- Help shape the future of AI in the insurance space
- Gain ownership and visibility in a product-focused engineering role
- Opportunity to innovate with state-of-the-art AI/LLM tech
- Be part of a fast-moving team with real market traction

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.

Skills: computer vision, Azure, AWS, LLMs, modern NLP techniques, NLP, TensorFlow, PyTorch, Scikit-learn, Python, SQL, JavaScript, machine learning, MLOps, software/data engineering, Docker, Kubernetes, MongoDB, PostgreSQL
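The retrieval step in the RAG systems this role mentions can be sketched without any ML stack: rank documents by similarity to the query, then feed the top hits to the LLM. Below is a stdlib-only toy using bag-of-words cosine similarity; real systems would use learned embeddings and a vector store such as Pinecone or ChromaDB, and the corpus here is illustrative.

```python
# Toy retrieval for a RAG pipeline: bag-of-words cosine ranking.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts (stand-in for an embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "underwriting risk assessment for property policies",
    "claims processing and fraud detection workflow",
    "customer onboarding and billing support",
]
top = retrieve("assess underwriting risk", docs)
print(top[0])
```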
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Roles & Responsibilities

Qualifications and Skills:
- Master's or Ph.D. in Computer Science, Statistics, Mathematics, Data Science, or a related field.
- 5+ years of hands-on experience in data science or machine learning roles.
- Strong proficiency in Python or R, with deep knowledge of libraries like scikit-learn, pandas, NumPy, TensorFlow, or PyTorch.
- Proficient in SQL and working with relational databases.
- Solid experience with the Azure cloud platform and data pipeline tools.
- Strong grasp of statistical methods, machine learning algorithms, and model evaluation techniques.
- Excellent communication and storytelling skills, with the ability to influence stakeholders.
- Proven track record of delivering impactful data science solutions in a business setting.

Preferred Qualifications:
- Experience working in industries such as logistics, aerospace, marketing, etc.
- Familiarity with MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow).
- Knowledge of data visualization tools (e.g., Tableau, Power BI, Plotly).

Responsibilities and Duties:
- Model Development: Design, build, and deploy scalable machine learning models to solve key business challenges (e.g., customer churn, recommendation engines, pricing optimization).
- Data Analysis: Perform exploratory data analysis (EDA), statistical testing, and feature engineering to uncover trends and actionable insights.
- Project Leadership: Lead end-to-end data science projects, including problem definition, data acquisition, modeling, and presentation of results to stakeholders.
- Cross-functional Collaboration: Partner with engineering, product, marketing, and business teams to integrate models into products and processes.
- Mentorship: Guide and mentor junior data scientists and analysts, helping them grow technically and professionally.
- Innovation: Stay current with the latest data science techniques, tools, and best practices; evaluate and incorporate new technologies when appropriate.
- Communication: Translate complex analyses and findings into clear, compelling narratives for non-technical stakeholders.

Experience: 8-11 Years
Skills - Primary Skill: Data Science; Sub Skill(s): Data Science; Additional Skill(s): Data Science

About The Company: Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bengaluru, Karnataka, India
Job ID: R-231679
Date posted: 17/07/2025

Job Title: Senior MLOps Engineer

Introduction to role: Are you ready to lead the charge in transforming machine learning operations? As a Senior MLOps Engineer at Alexion, you'll report directly to the IT Director of Insights and Analytics, playing a pivotal role in our IT RDU organization. Your mission? To develop and implement brand-new machine learning solutions that propel our business forward. With your expertise, you'll design, build, and deploy production-ready models at scale, ensuring they meet the highest standards.

Accountabilities:
- Lead the development and implementation of MLOps infrastructure and tools for machine learning models.
- Collaborate with multi-functional teams to identify, prioritize, and solve business problems using machine learning techniques.
- Design, develop, and implement production-grade machine learning models that meet business requirements.
- Oversee the training, testing, and validation of machine learning models.
- Ensure that machine learning models meet high-quality standards, including scalability, maintainability, and performance.
- Design and implement efficient development environments and processes for ML applications.
- Coordinate with partners and senior management to communicate updates on the progress of machine learning projects.
- Develop assets, accelerators, and thought capital for your practice by providing best-in-class frameworks and reusable components.
- Develop and maintain MLOps pipelines to automate machine learning workflows and integrate them with existing IT systems.
- Integrate Generative AI model-based solutions within the broader machine learning ecosystem, ensuring they adhere to ethical guidelines and serve intended business purposes.
- Implement robust monitoring and governance mechanisms for Generative AI model-based solutions to ensure they evolve in alignment with business needs and regulatory standards.

Essential Skills/Experience:
- Bachelor's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field.
- 4+ years of experience in developing and deploying machine learning models in production environments.
- Hands-on experience building production models with a focus on data science operations, including serverless architectures, Kubernetes, Docker/containerization, and model upkeep and maintenance.
- Familiarity with API-based application architecture and API frameworks.
- Experience with CI/CD orchestration frameworks, such as GitHub Actions, Jenkins, or Bitbucket Pipelines.
- Deep understanding of the software development lifecycle and maintenance.
- Extensive experience with one or more orchestration tools (e.g., Airflow, Flyte, Kubeflow).
- Experience with MLOps tooling such as experiment tracking, model registries, and feature stores (e.g., MLflow, SageMaker, Azure).
- Strong programming skills in Python and experience with libraries such as TensorFlow, Keras, or PyTorch.
- Proficiency in MLOps standard methodologies, including model training, testing, deployment, and monitoring.
- Experience with cloud computing platforms such as AWS, Azure, or GCP.
- Proficient in standard software engineering processes and agile methodologies.
- Strong understanding of data structures, algorithms, and machine learning techniques.
- Excellent communication and collaboration skills, with the ability to work in a multi-functional team and partner well with business stakeholders.
- Ability to work independently, with strong problem-solving skills and a hard-working attitude.

Desirable Skills/Experience:
- Experience in the pharmaceutical industry or related fields.
- Advanced degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field.
- Strong understanding of parallelization and asynchronous computation.
- Strong knowledge of data science techniques and tools, including statistical analysis, data visualization, and SQL.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office with respect for individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca's Alexion division, you'll find yourself at the forefront of biomedical science. Our commitment to transparency and ethics drives us to push boundaries and translate complex biology into transformative medicines. With global reach and potent capabilities, we're shaping the future of rare disease treatment. Here, you'll grow in an energizing culture that values innovation and connection. Empowered by tailored development programs, you'll align your growth with our mission to make a difference for underserved patients worldwide. Ready to make an impact? Apply now to join our team!

Date Posted: 18-Jul-2025
Closing Date: 30-Jul-2025

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy, (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.
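The model-monitoring accountability this posting describes often starts with a simple drift check: flag a feature when its live mean shifts too far from the training distribution. A stdlib-only sketch with toy values (production systems use richer tests such as PSI or Kolmogorov-Smirnov, wired into MLOps tooling):

```python
# Flag feature drift when the live mean moves more than `threshold`
# training standard deviations away from the training mean.
import statistics

def drifted(train_values, live_values, threshold=2.0):
    mu = statistics.fmean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.fmean(live_values) - mu)
    return shift > threshold * sigma

train = [10.0, 11.0, 9.0, 10.5, 9.5]   # training-time feature values
stable = [10.2, 9.8, 10.1]             # live values, no drift
shifted = [15.0, 16.0, 15.5]           # live values, clear drift
print(drifted(train, stable), drifted(train, shifted))
```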
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are a skilled Architect specializing in AIOps & MLOps Operations, responsible for supporting and enhancing the automation, scalability, and reliability of AI/ML operations across the enterprise. Your role involves deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to enhance system performance, minimize downtime, and improve decision-making with real-time AI-driven insights. Supporting and maintaining AIOps and MLOps programs is a key responsibility, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. You will assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency. Your role will also involve contributing to the development of governance models and execution roadmaps, driving efficiency across data platforms such as Azure, AWS, GCP, and on-prem environments. It is essential to ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise. Collaboration with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms will be part of your responsibilities. Additionally, you will assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement. Moreover, you will support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate. Your role will involve implementing AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting. You will deploy Azure-based observability solutions to enhance real-time system performance monitoring and enable AI-driven anomaly detection and root cause analysis. 
Contributing to the development of self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate will be part of your responsibilities, as will supporting ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models. You will assist in deploying scalable ML models with Azure Kubernetes Service, Azure Machine Learning Compute, and Azure Container Instances, while automating feature engineering, model versioning, and drift detection. Collaboration with various teams to align AIOps/MLOps strategies with enterprise IT goals is an important aspect of the role. You will work closely with business stakeholders and IT leadership to implement AI-driven insights and automation that enhance operational decision-making. Tracking and reporting AI/ML operational KPIs and ensuring adherence to Azure Information Protection and data security policies will also be part of your responsibilities. In summary, your role as Architect - AIOps & MLOps Operations will involve supporting, enhancing, and automating AI/ML operations across the enterprise, ensuring operational excellence and continuous improvement.
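The self-healing pattern described above reduces to a loop: watch a health metric and fire a remediation action when it crosses a limit. A minimal sketch, where the function and callback names are illustrative; in the Azure setup this posting names, the check would be an Azure Monitor alert and the callback an Azure Function or Logic App.

```python
# Watch metric readings and trigger a remediation callback on breach.

def self_heal(metric_readings, limit, remediate):
    """Return the number of remediations triggered."""
    triggered = 0
    for reading in metric_readings:
        if reading > limit:
            remediate(reading)
            triggered += 1
    return triggered

actions = []
count = self_heal(
    metric_readings=[0.2, 0.9, 0.4, 0.95],  # e.g., sampled error rates
    limit=0.8,
    remediate=lambda r: actions.append(f"restart worker (error rate {r})"),
)
print(count, actions)
```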
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Kerala
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

As an ML Developer at EY GDS Assurance Digital, you will be responsible for leveraging advanced machine learning techniques to develop innovative, high-impact models and solutions that drive growth and deliver significant business value. You will help EY's sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with relevant business aspects to gain insight from data. This is a full-time Machine Learning Developer role, responsible for building and deploying robust machine learning models to solve real-world business problems. You will work on the entire ML lifecycle, including data analysis, feature engineering, model training, evaluation, and deployment.

Requirements:
- A bachelor's degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance, or a related field, with adequate industry experience.
- Technical skills: developing and implementing machine learning models; conducting exploratory data analysis (EDA); applying dimension reduction techniques; utilizing statistical models; evaluating and validating models; a solid background in Python; familiarity with time series forecasting; basic experience with cloud platforms such as AWS, Azure, or GCP; and exposure to MLOps tools and practices (e.g., MLflow, Airflow, Docker).

Additional skill requirements:
- Proficient at quickly understanding complex machine learning concepts and utilizing technology for tasks such as data modeling, analysis, visualization, and process automation.
- Skilled in selecting and applying the most suitable standards, methods, tools, and frameworks for specific ML tasks and use cases.
- Capable of collaborating effectively within cross-functional teams, while also being able to work independently on complex ML projects.
- Demonstrates a strong analytical mindset and a systematic approach to solving machine learning challenges.
- Excellent communication skills, able to present complex technical concepts clearly to both technical and non-technical audiences.

Join us in building a better working world at EY, where opportunities for personal development, skill advancement, and career progression await. Be part of a diverse team of professionals dedicated to creating long-term value for clients, people, and society while fostering trust in the capital markets. Apply now and be a part of our journey towards a better future.
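The time-series forecasting familiarity asked for above starts from simple baselines: a trailing moving average as a naive one-step forecast. A stdlib-only sketch with toy data (real forecasting work would reach for models such as ARIMA or exponential smoothing, and always compare against a baseline like this):

```python
# One-step forecast: the mean of the last `window` observations.

def moving_average_forecast(series, window):
    """Forecast the next value as the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

sales = [100, 104, 98, 102, 106, 110]  # toy monthly series
print(moving_average_forecast(sales, window=3))
```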
Posted 2 weeks ago
14.0 - 18.0 years
0 Lacs
Karnataka
On-site
As the AVP Databricks Squad Delivery Lead, you will play a crucial role in overseeing project delivery, team leadership, architecture reviews, and client engagement. Your primary responsibility will be to optimize Databricks implementations across cloud platforms such as AWS, Azure, and GCP, while leading cross-functional teams. You will lead and manage the end-to-end delivery of Databricks-based solutions, and your expertise as a subject matter expert (SME) in Databricks architecture, implementation, and optimization will be essential. Collaborating with architects and engineers, you will design scalable data pipelines and analytics platforms, and you will oversee Databricks workspace setup, performance tuning, and cost optimization. Acting as the primary point of contact for client stakeholders, you will ensure effective communication and alignment between business goals and technical solutions. By driving innovation within the team, you will implement best practices, tools, and technologies to enhance project delivery. The ideal candidate should possess a Bachelor's degree in Computer Science, Engineering, or equivalent (Master's or MBA preferred). Hands-on experience in delivering data engineering/analytics projects using Databricks and managing cloud-based data pipelines on AWS, Azure, or GCP is a must. Strong leadership skills and excellent client-facing communication are essential for this role. Preferred skills include proficiency with Spark, Delta Lake, MLflow, and distributed computing. Expertise in data engineering concepts such as ETL, data lakes, and data warehousing is highly desirable. Certifications in Databricks or cloud platforms (AWS/Azure/GCP), as well as Agile/Scrum or PMP certification, are considered advantageous.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As the MLOps Engineering Director at Horizontal Data Science Enablement Team within SSO Data Science, you will play a crucial role in solving MLOps challenges, overseeing the Databricks platform, building CI/CD pipelines, and leading best practices. Your responsibilities will include the administration, configuration, and maintenance of Databricks clusters and workspaces. It will be essential to continuously monitor clusters for high workloads or excessive costs and alert relevant stakeholders promptly to maintain overall cluster health. Implementing and managing security protocols, including access controls and data encryption, to adhere to Mastercard standards will be a key aspect of your role. You will be responsible for integrating various data sources into Databricks to ensure seamless data flow and consistency. Resolving issues related to Databricks infrastructure, providing support to users and stakeholders, maintaining documentation of configurations, processes, and best practices, and leading security and architecture reviews will also be part of your responsibilities. Bringing MLOps expertise to the table will involve activities such as model monitoring, feature catalog/store, model lineage maintenance, and CI/CD pipeline management. You will also own and maintain MLOps solutions, build LLMOps pipelines, and manage a small team of MLOps engineers. To excel in this role, a Master's degree in computer science or a related field, strong experience with Databricks, cloud technologies, MLOps solutions like MLFlow, and hands-on experience in CI/CD tools are required. Additionally, you should have experience in data analysis, data observability, data ingestion, data integration, and system engineering. Strong coding skills in Python or other languages like Java and C++, along with a systematic problem-solving approach and effective communication skills, will be essential. 
Experience with SQL tuning, automation, data observability, and supporting highly scalable systems can set you apart, as can comfort operating in a 24x7 environment and a self-motivated, creative approach to solving software problems. As part of your corporate security responsibility, you are expected to abide by Mastercard's security policies, ensure information confidentiality and integrity, report any security violations, and complete mandatory security training.
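The cluster cost-monitoring duty this posting describes can be sketched as a budget check over per-cluster spend. Cluster names and figures below are illustrative; a real implementation would pull usage from the Databricks APIs and page stakeholders rather than return strings.

```python
# Compare each cluster's daily spend to a budget and collect alerts.

def cost_alerts(daily_spend, budget):
    """Return alert messages for clusters over budget."""
    return [
        f"{cluster}: ${spend:.2f} exceeds ${budget:.2f} daily budget"
        for cluster, spend in sorted(daily_spend.items())
        if spend > budget
    ]

spend = {"etl-prod": 420.0, "ml-training": 95.0, "adhoc-dev": 130.0}
alerts = cost_alerts(spend, budget=120.0)
print(alerts)
```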
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a highly experienced Senior Python & AI Engineer who will lead the development of innovative AI/ML solutions. Your strong background in Python programming, deep learning, machine learning, and proven leadership skills will drive the team towards delivering high-quality AI systems. You will architect solutions, define technical strategies, mentor team members, and ensure timely project deliveries. In the technical realm, you will be responsible for designing and implementing scalable AI/ML systems and backend services using Python. Your role includes overseeing the development of machine learning pipelines, APIs, and model deployment workflows. Reviewing code, establishing best practices, and maintaining technical quality across the team will be key aspects of your responsibilities. As a team leader, you will guide data scientists, ML engineers, and Python developers, providing mentorship, coaching, and conducting performance evaluations. You will facilitate agile practices such as sprint planning, daily stand-ups, and retrospectives. Collaboration with cross-functional teams, including product, QA, DevOps, and UI/UX, will be essential for timely feature deliveries. In AI/ML development, you will develop and optimize models for NLP, computer vision, or structured data analysis based on project requirements. Implementing model monitoring, drift detection, and retraining strategies will contribute to the success of AI initiatives. You will work closely with product managers to translate business needs into technical solutions and ensure the end-to-end delivery of features meeting performance and reliability standards. Your technical skills should include over 5 years of Python experience, 2+ years in AI/ML projects, a deep understanding of ML/DL concepts, and proficiency in frameworks like PyTorch, TensorFlow, and scikit-learn. Experience with deployment tools, cloud platforms, and CI/CD pipelines is required. 
Leadership qualities such as planning, estimation, and effective communication are also crucial, along with at least 3 years of experience leading engineering or AI teams. Preferred qualifications include a Master's or PhD in a relevant field, exposure to MLOps practices, and familiarity with advanced AI technologies. Prior experience in fast-paced startup environments is a plus. In return, you will have the opportunity to spearhead cutting-edge AI initiatives, face diverse challenges, enjoy autonomy, and receive competitive compensation with performance-based incentives.
Posted 2 weeks ago
3.0 - 10.0 years
0 Lacs
Karnataka
On-site
As an AI Engineer, you will have the exciting opportunity to work with companies seeking individuals who are passionate and hands-on in developing and scaling intelligent features directly into their products. In this fast-paced and high-impact role, you will be instrumental in designing machine learning pipelines, fine-tuning models, and seamlessly deploying them. Your responsibilities will include designing, training, and deploying machine learning and deep learning models, such as NLP, vision, or tabular models. You will collaborate closely with product and engineering teams to build end-to-end AI-driven features. Additionally, you will be responsible for building and maintaining data pipelines, monitoring model performance in production environments, researching and implementing cutting-edge techniques to enhance model outcomes, optimizing models for performance and scalability, and ensuring reproducibility and version control of model experiments. To excel in this role, you should have a strong foundation in machine learning and deep learning algorithms. Proficiency in Python and ML libraries like PyTorch, TensorFlow, and Scikit-learn is essential. Experience with model deployment and serving, including REST APIs, ONNX, and TorchScript, will be beneficial. Familiarity with data handling tools such as Pandas and NumPy, as well as workflow tools like MLflow and Airflow, is also required. Strong problem-solving skills and an iterative, experiment-driven mindset are key attributes for success in this position. In terms of qualifications, a minimum of 3-10 years of relevant experience is required. Exposure to additional areas such as LLMs, embeddings, vector databases (e.g., FAISS, Pinecone), MLOps or DevOps workflows, GPT-like models, retrieval-augmented generation (RAG), multimodal systems, cloud platforms (AWS, GCP, or Azure), and streaming data or real-time systems will be considered a bonus. 
Joining this role offers you high ownership and the opportunity to shape AI-first product experiences. You can expect a fast-paced learning environment with exposure to the entire product lifecycle. The collaborative team setting provides room for growth and leadership opportunities, allowing you to work on cutting-edge ML applications with significant real-world user impact.
Posted 2 weeks ago
14.0 - 18.0 years
0 Lacs
Karnataka
On-site
We are hiring for the role of AVP - Databricks, with a requirement of at least 14 years of experience. The job location can be Bangalore, Hyderabad, NCR, Kolkata, Mumbai, or Pune. As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure that all solutions meet client requirements, best practices, and industry standards. You will serve as a subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization. Collaboration with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads will also be part of your role. Additionally, you will act as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery. We are looking for a candidate with a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred) and relevant experience in IT services, specifically in Databricks and cloud-based data engineering. Proven experience in leading end-to-end delivery and solution architecting of data engineering or analytics solutions on Databricks is a plus. Strong expertise in cloud technologies such as AWS, Azure, and GCP, data pipelines, and big data tools is desired. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies is a requirement. An in-depth understanding of data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing, will be beneficial in this role.
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are seeking a dynamic professional with strong experience in Databricks and Machine Learning to design and implement scalable data pipelines and ML solutions. The ideal candidate will work closely with data scientists, analysts, and business teams to deliver high-performance data products and predictive models.

Key Responsibilities:
- Design, develop, and optimize data pipelines using Databricks, PySpark, and Delta Lake
- Build and deploy Machine Learning models at scale
- Perform data wrangling, feature engineering, and model tuning
- Collaborate with cross-functional teams on ML model integration and monitoring
- Implement MLflow for model versioning and tracking
- Ensure best practices in MLOps, code management, and automation

Must-Have Skills:
- Hands-on experience with Databricks, Spark, and SQL
- Strong knowledge of ML algorithms, Python (Pandas, Scikit-learn), and model deployment
- Familiarity with cloud platforms (Azure / AWS / GCP)
- Experience with CI/CD pipelines and ML lifecycle management tools

Good To Have:
- Exposure to data governance, monitoring tools, and performance optimization
- Knowledge of Docker/Kubernetes and REST API integration
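The model-versioning responsibility named above follows a registry pattern: each time a model is registered under a name, it gets the next version number, and consumers ask for the latest. A toy in-memory sketch, standing in for MLflow's Model Registry; all names here are illustrative.

```python
# In-memory model registry assigning incrementing versions per name.

class ModelRegistry:
    def __init__(self):
        self._versions = {}

    def register(self, name, artifact):
        """Store an artifact under the next version number for `name`."""
        version = len(self._versions.setdefault(name, [])) + 1
        self._versions[name].append({"version": version, "artifact": artifact})
        return version

    def latest(self, name):
        """Return the most recently registered entry for `name`."""
        return self._versions[name][-1]

registry = ModelRegistry()
registry.register("churn-model", artifact="weights-v1.bin")
v = registry.register("churn-model", artifact="weights-v2.bin")
print(v, registry.latest("churn-model")["artifact"])
```

With MLflow the same flow is `mlflow.register_model` plus a stage or alias lookup, but the version-per-name bookkeeping is the same idea.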
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a versatile AI/ML Engineer to join our team, contributing to the design and deployment of scalable AI solutions across the full stack. This role blends machine learning engineering with frontend/backend development and cloud-native microservices. You'll work closely with data scientists, MLOps engineers, and product teams to bring generative AI capabilities like RAG and LLM-based systems into production.

Primary Responsibility
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or a related field
- 5+ years of experience in AI/ML engineering, full stack development, or MLOps
- Proven experience deploying AI models in production environments
- Solid understanding of microservices architecture and cloud-native development
- Familiarity with Agile/Scrum methodologies

Technical Skills:
- Languages & Frameworks: Python, JavaScript/TypeScript, SQL, Scala
- ML Tools: MLflow, TensorFlow, PyTorch, Scikit-learn
- Frontend: React.js, Angular (preferred), HTML/CSS
- Backend: Node.js, Spring Boot, REST APIs
- Cloud: Azure (preferred), UAIS, AWS
- DevOps & MLOps: Git, Jenkins, Docker, Kubernetes, Azure DevOps
- Data Engineering: Apache Spark/Databricks, Kafka, ETL pipelines
- Monitoring: Prometheus, Grafana
- RAG/LLM: LangChain, LlamaIndex, embedding pipelines, prompt engineering

Preferred Qualifications
- Experience with Spark, Hadoop
- Familiarity with Maven, Spring, XML, Tomcat
- Proficiency in Unix shell scripting and SQL Server

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
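The retrieval step at the heart of the RAG and embedding-pipeline work mentioned above can be sketched in a few lines of NumPy: rank documents by cosine similarity between a query embedding and document embeddings. The vectors below are toy stand-ins for real model embeddings; frameworks like LangChain or LlamaIndex wrap this same pattern with vector stores and LLM calls.

```python
# Sketch of embedding-based retrieval, the core of a RAG pipeline.
# Embeddings here are invented 3-dimensional toys, not real model output.
import numpy as np

doc_embeddings = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.0, 0.8, 0.2],   # doc 1
    [0.1, 0.1, 0.9],   # doc 2
])
query = np.array([0.85, 0.15, 0.05])   # query vector closest to doc 0

def top_k(query, docs, k=2):
    # Cosine similarity = dot product of L2-normalised vectors
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    sims = d @ q
    # Indices of the k most similar documents, best first
    return np.argsort(sims)[::-1][:k]

ranked = top_k(query, doc_embeddings)
```

The retrieved documents would then be passed to an LLM as context for generation; at scale the brute-force dot product is replaced by an approximate index such as FAISS.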
Posted 2 weeks ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for an enthusiastic and curious Junior Data Scientist to join the Cloud Nova team. This is an excellent opportunity for someone with 2-3 years of experience to work on exciting projects involving Generative AI (GenAI), Retrieval-Augmented Generation (RAG), and deep learning. You will support senior data scientists and engineers in building and deploying AI models that solve real-world problems.

Primary Responsibilities
- Assist in developing and testing GenAI models using tools like LangChain and Hugging Face Transformers
- Support the creation of RAG pipelines and embedding-based search systems
- Help prepare datasets and perform exploratory data analysis
- Contribute to model evaluation and performance tracking
- Collaborate with team members to integrate models into applications
- Stay updated on the latest trends in AI and deep learning
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field
- 1+ years of experience in data science, machine learning, or AI projects (internships count)
- Basic understanding of NLP and deep learning concepts
- Willingness to learn and grow in a collaborative environment

Technical Skills:
- Programming: Python, SQL
- AI/ML: PyTorch or TensorFlow, Scikit-learn, Hugging Face Transformers
- GenAI Tools: LangChain, LlamaIndex (basic familiarity preferred)
- Data Tools: Pandas, NumPy, Jupyter Notebooks
- Version Control: Git

Preferred Qualifications
- Experience with vector databases (e.g., FAISS)
- Familiarity with MLOps tools like MLflow or Docker
- Exposure to cloud-based model deployment
- Cloud: Exposure to Azure or AWS
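The dataset-preparation and exploratory-analysis responsibilities above can be sketched with Pandas; the columns and values below are invented purely for illustration.

```python
# Minimal sketch of exploratory data analysis with Pandas on an invented table.
import pandas as pd

df = pd.DataFrame({
    "age":     [34, 45, 29, 52, 41],
    "visits":  [2, 5, 1, 7, 3],
    "chronic": [0, 1, 0, 1, 1],
})

# Basic profiling: shape, missing values, and a grouped summary statistic
n_rows, n_cols = df.shape
missing = int(df.isna().sum().sum())
mean_visits_by_chronic = df.groupby("chronic")["visits"].mean()
```

Checks like these (row counts, missingness, group-level summaries) typically precede any feature engineering or model training.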
Posted 2 weeks ago