0.0 - 8.0 years
0 Lacs
Gurugram, Haryana
On-site
Senior Data Engineer Gurgaon, India; Ahmedabad, India; Hyderabad, India; Virtual, Gurgaon, India Information Technology 317427 Job Description
About the Role: Grade Level (for internal use): 10
The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.
What’s in it for you:
Drive solutions at enterprise scale within a global organization
Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
Solve high-complexity, high-impact problems from end to end
Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines
Responsibilities:
Architect, develop, and operate robust data extraction and automation pipelines in production
Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
Lead full lifecycle delivery of complex data projects, including: designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB); implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC; establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates; and writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
Define and continuously improve platform standards, coding guidelines, and operational runbooks
Conduct code reviews, pair programming sessions, and provide technical mentorship
Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs
Technical Requirements:
4-8 years' hands-on experience in data engineering, with a proven track record on critical projects
Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
Advanced testing practices: unit, integration, and load testing; high coverage enforcement
Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
Excellent debugging, performance-tuning, and automation capabilities
Openness to evaluate and adopt emerging tools, languages, and frameworks
Good to have:
Master's or Bachelor's degree in Computer Science, Engineering, or a related field
Prior contributions to open-source projects, GitHub repos, or technical publications
Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
Familiarity with GenAI model integration (calling LLM or embedding APIs)
(A minimal, hypothetical Airflow orchestration sketch follows this posting.)
What’s In It
For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
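The responsibilities in this Senior Data Engineer posting pair Airflow orchestration with data extraction and batch ML scoring. Purely as a rough illustration (not S&P Global's actual pipeline), the sketch below shows what a minimal Airflow DAG wiring an extraction step to a batch-scoring step could look like; the DAG name, task bodies, and schedule are hypothetical.

```python
# Hypothetical extraction -> batch-scoring DAG of the kind the posting describes;
# names, schedule, and helper logic are invented for the example.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_documents(**context):
    # Placeholder: pull raw documents from a source system and stage them
    # (e.g., to S3) for downstream scoring.
    print("extracting documents to staging area")


def batch_score(**context):
    # Placeholder: load the staged documents and run batch inference against
    # a deployed model endpoint or an in-process model.
    print("scoring staged documents")


with DAG(
    dag_id="extraction_and_batch_scoring",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_documents", python_callable=extract_documents)
    score = PythonOperator(task_id="batch_score", python_callable=batch_score)

    extract >> score  # scoring runs only after extraction succeeds
```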
Posted 1 month ago
0.0 - 6.0 years
0 Lacs
Gurugram, Haryana
On-site
Data Engineer Gurgaon, India; Ahmedabad, India; Hyderabad, India; Virtual, Gurgaon, India Information Technology 317425 Job Description
About the Role: Grade Level (for internal use): 09
The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.
What’s in it for you:
Be part of a global company and deliver solutions at enterprise scale
Collaborate with a hands-on, technically strong team (including leadership)
Solve high-complexity, high-impact problems end-to-end
Build, test, deploy, and maintain production-ready pipelines from ideation through deployment
Responsibilities:
Develop, deploy, and operate data extraction and automation pipelines in production
Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
Lead critical stages of the data engineering lifecycle, including: end-to-end delivery of complex extraction, transformation, and ML deployment projects; scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS); designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration; implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback); and writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
Define and evolve platform standards and best practices for code, testing, and deployment
Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines
Technical Requirements:
Expert proficiency in Python, including building extraction libraries and RESTful APIs
Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
Proficient in writing tests (unit, integration, load) and enforcing high coverage
Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
Strong debugging, performance tuning, and automation skills
Openness to evaluate and adopt emerging tools and languages as needed
Good to have:
Master's or Bachelor's degree in Computer Science, Engineering, or related field
2-6 years of relevant experience in data engineering, automation, or ML deployment
Prior contributions on GitHub, technical blogs, or open-source projects
Basic familiarity with GenAI model integration (calling LLM or embedding APIs)
(A minimal, hypothetical Celery task-queue sketch follows this posting.)
What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. 
Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
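This Data Engineer posting leans on Celery with a Redis broker for extraction task queues. The fragment below is a generic, minimal sketch of that pattern, not code from the team; the broker URL, queue name, and task logic are assumptions.

```python
# Minimal Celery task-queue sketch for an extraction job; the Redis URLs and
# task body are placeholders, not the team's real configuration.
from celery import Celery

app = Celery(
    "extraction",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)


@app.task(bind=True, max_retries=3)
def extract_page(self, url: str) -> dict:
    """Fetch one source URL and return the extracted payload."""
    try:
        # Placeholder for the real extraction library call.
        return {"url": url, "status": "extracted"}
    except Exception as exc:
        # Retry transient failures with a short backoff.
        raise self.retry(exc=exc, countdown=30)


# A producer process enqueues work; a worker started with
# `celery -A tasks worker` would pick it up:
# extract_page.delay("https://example.com/filing/123")
```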
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information: Date Opened: 07/01/2025; Job Type: Full time; Industry: Technology; Work Experience: 5+ years; City: Chennai; State/Province: Tamil Nadu; Country: India; Zip/Postal Code: 600096
Job Description
Job Summary: We are looking for a skilled and driven Machine Learning Engineer with 4+ years of experience to design, develop, and deploy ML solutions across business-critical applications. You’ll work alongside data scientists, software engineers, and product teams to create scalable, high-impact machine learning systems in production environments.
Key Responsibilities:
Build, train, and deploy ML models for tasks such as classification, regression, NLP, recommendation, or computer vision.
Convert data science prototypes into production-ready applications.
Design and implement ETL/data pipelines to support model training and inference.
Optimize models and pipelines for performance, scalability, and efficiency.
Collaborate with DevOps/MLOps teams to deploy and monitor models in production.
Continuously evaluate and retrain models based on data and performance feedback.
Maintain thorough documentation of models, features, and processes.
Required Qualifications:
4+ years of hands-on experience in machine learning, data science, or AI engineering roles.
Proficiency in Python and ML frameworks such as scikit-learn, TensorFlow, PyTorch, or similar.
Solid understanding of ML algorithms, data preprocessing, model evaluation, and tuning.
Experience working with cloud platforms (AWS/GCP) and containerization tools (e.g., Docker).
Familiarity with SQL and NoSQL databases, and working with large-scale datasets.
Strong problem-solving skills and a collaborative mindset.
Good to Have:
Experience with MLOps tools (e.g., MLflow, SageMaker, Kubeflow).
Knowledge of deep learning, transformers, or LLMs.
Familiarity with streaming data (Kafka, Spark Streaming) and real-time inference.
Exposure to ElasticSearch for data indexing or log analytics.
Interest in applied AI/ML innovation within domains like fin-tech, healthcare, or ecommerce.
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
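The role above centers on training, evaluating, and productionising ML models. As a generic, minimal sketch (not this employer's stack), here is the scikit-learn train/evaluate/persist loop such a pipeline typically starts from; the dataset, model choice, and output path are placeholders.

```python
# Generic train/evaluate/persist sketch with scikit-learn; dataset, model
# choice, and output path are illustrative placeholders only.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate before promoting the artifact to any registry or endpoint.
print(classification_report(y_test, model.predict(X_test)))

# Persist the trained model so a serving layer (e.g., a container) can load it.
joblib.dump(model, "model.joblib")
```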
Posted 1 month ago
3.0 - 6.0 years
10 - 22 Lacs
Chennai, Tamil Nadu, India
On-site
Generative AI Engineer (Hybrid, India)
A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance.
Role & Responsibilities:
Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference.
Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs.
Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses.
Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost.
Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS.
Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability.
Skills & Qualifications
Must-Have:
3-6 years of Python backend or data engineering experience with strong OO & async patterns.
Hands-on experience building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus.
Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity).
Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling.
Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance.
Preferred:
Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis).
Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation).
Understanding of data privacy standards (SOC 2, GDPR) in AI workloads.
Benefits & Culture Highlights:
Hybrid work model with flexible hours and quarterly in-person sprint planning.
Annual upskilling stipend covering cloud certifications and research conferences.
Collaborative, experimentation-driven culture where engineers influence product strategy.
Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software.
Skills: git, monitoring, oo patterns, ci/cd, llamaindex, python, feature governance, evaluation metrics (bleu, rouge, perplexity), faiss, prompt engineering, cloud, prompt engg, azure ml, agent framework, async patterns, gen ai, langchain, tokenisation, docker, sagemaker, vectordb, milvus, pinecone
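The posting asks for RAG pipelines backed by vector stores such as FAISS. The following is a minimal, generic sketch of the retrieval half of such a pipeline using FAISS directly; the embed() function stands in for whatever embedding model would actually be called and is purely an assumption, as are the documents and dimensionality.

```python
# Minimal vector-search sketch for the retrieval step of a RAG pipeline.
# embed() is a stand-in for a real embedding model or API; documents and the
# dimensionality are illustrative only.
import faiss
import numpy as np

DIM = 384  # hypothetical embedding dimensionality


def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding function; replace with a real model/API call."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.random((len(texts), DIM), dtype=np.float32)


documents = [
    "Refund policy for enterprise contracts",
    "Steps to rotate API credentials",
    "Quarterly revenue summary for FY24",
]

index = faiss.IndexFlatL2(DIM)   # exact L2 search over document vectors
index.add(embed(documents))      # index the corpus embeddings

query_vec = embed(["How do I rotate my API keys?"])
_, neighbors = index.search(query_vec, 2)

# The retrieved chunks would then be placed into the LLM prompt as context.
print([documents[i] for i in neighbors[0]])
```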
Posted 1 month ago
3.0 - 6.0 years
10 - 22 Lacs
Bengaluru, Karnataka, India
On-site
Generative AI Engineer (Hybrid, India) A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance. Role & Responsibilities Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference. Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs. Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses. Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost. Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS. Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability. Skills & Qualifications Must-Have 3-6 years Python backend or data engineering experience with strong OO & async patterns. Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus. Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity). Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling. Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance. Preferred Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis). Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation). Understanding of data privacy standards (SOC 2, GDPR) in AI workloads. Benefits & Culture Highlights Hybrid work model with flexible hours and quarterly in-person sprint planning. Annual upskilling stipend covering cloud certifications and research conferences. Collaborative, experimentation-driven culture where engineers influence product strategy. Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software. Skills: git,monitoring,oo patterns,ci/cd,llamaindex,python,feature governance,evaluation metrics (bleu, rouge, perplexity),faiss,prompt engineering,cloud,prompt engg,azure ml,agent framework,async patterns,gen ai,langchain,tokenisation,docker,sagemaker,vectordb,milvus,pinecone
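Among the must-haves above are evaluation metrics such as perplexity. Purely as an illustration of the arithmetic (not any particular framework's implementation), the snippet below computes perplexity from per-token log-probabilities, which an evaluation harness would obtain from the model being scored; the numbers are made up.

```python
# Toy perplexity calculation from per-token log-probabilities; the values are
# made up and would come from a real model's scoring pass in practice.
import math


def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(-(1/N) * sum(log p(token_i)))."""
    n = len(token_logprobs)
    avg_neg_logprob = -sum(token_logprobs) / n
    return math.exp(avg_neg_logprob)


# Hypothetical natural-log probabilities for a 5-token completion.
logprobs = [-0.21, -1.35, -0.08, -2.10, -0.47]
print(round(perplexity(logprobs), 2))  # lower is better; ~1.0 means near-certain tokens
```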
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Primary Skill: SageMaker, Python, LLM
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
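The primary skills listed are SageMaker, Python, and LLMs. As a hedged, minimal illustration of how a deployed SageMaker inference endpoint is typically called from Python: the endpoint name and JSON payload below are assumptions, and the real request and response formats depend on the model container behind the endpoint.

```python
# Minimal sketch of invoking an already-deployed SageMaker inference endpoint.
# The endpoint name and JSON payload are hypothetical; real payloads depend on
# the model container that backs the endpoint.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-llm-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize this contract clause ..."}),
)

print(json.loads(response["Body"].read()))  # model-specific response shape
```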
Posted 1 month ago
3.0 - 6.0 years
10 - 22 Lacs
Hyderabad, Telangana, India
On-site
Generative AI Engineer (Hybrid, India) A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance. Role & Responsibilities Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference. Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs. Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses. Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost. Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS. Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability. Skills & Qualifications Must-Have 3-6 years Python backend or data engineering experience with strong OO & async patterns. Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus. Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity). Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling. Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance. Preferred Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis). Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation). Understanding of data privacy standards (SOC 2, GDPR) in AI workloads. Benefits & Culture Highlights Hybrid work model with flexible hours and quarterly in-person sprint planning. Annual upskilling stipend covering cloud certifications and research conferences. Collaborative, experimentation-driven culture where engineers influence product strategy. Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software. Skills: git,monitoring,oo patterns,ci/cd,llamaindex,python,feature governance,evaluation metrics (bleu, rouge, perplexity),faiss,prompt engineering,cloud,prompt engg,azure ml,agent framework,async patterns,gen ai,langchain,tokenisation,docker,sagemaker,vectordb,milvus,pinecone
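This posting also calls for Python micro-services that expose LLM workflows over secure REST APIs. A bare-bones FastAPI sketch of such a service is shown below; the route, request model, and the generate() stub are assumptions rather than the employer's actual API.

```python
# Bare-bones FastAPI service wrapping a (stubbed) LLM call; the route name,
# request schema, and generate() are illustrative placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="llm-service")


class CompletionRequest(BaseModel):
    prompt: str
    max_tokens: int = 256


def generate(prompt: str, max_tokens: int) -> str:
    """Placeholder for the real LLM / RAG chain invocation."""
    return f"(stub) answer to: {prompt[:40]}"


@app.post("/v1/completions")
def complete(req: CompletionRequest) -> dict:
    # A production service would add auth, tracing, and guardrails here.
    return {"completion": generate(req.prompt, req.max_tokens)}

# Run locally with: uvicorn service:app --reload  (assuming this file is service.py)
```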
Posted 1 month ago
0 years
0 Lacs
Chandigarh, India
On-site
SageMaker, Python, LLM
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional skills:
Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase their business profitability
Good knowledge of software configuration management systems
Awareness of the latest technologies and industry trends
Logical thinking and problem-solving skills, along with an ability to collaborate
Understanding of financial processes for various types of projects and the various pricing models available
Ability to assess current processes, identify improvement areas, and suggest technology solutions
Knowledge of one or two industry domains
Client interfacing skills
Project and team management
Posted 1 month ago
3.0 - 6.0 years
10 - 22 Lacs
Gurugram, Haryana, India
On-site
Generative AI Engineer (Hybrid, India) A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance. Role & Responsibilities Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference. Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs. Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses. Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost. Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS. Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability. Skills & Qualifications Must-Have 3-6 years Python backend or data engineering experience with strong OO & async patterns. Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus. Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity). Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling. Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance. Preferred Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis). Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation). Understanding of data privacy standards (SOC 2, GDPR) in AI workloads. Benefits & Culture Highlights Hybrid work model with flexible hours and quarterly in-person sprint planning. Annual upskilling stipend covering cloud certifications and research conferences. Collaborative, experimentation-driven culture where engineers influence product strategy. Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software. Skills: git,monitoring,oo patterns,ci/cd,llamaindex,python,feature governance,evaluation metrics (bleu, rouge, perplexity),faiss,prompt engineering,cloud,prompt engg,azure ml,agent framework,async patterns,gen ai,langchain,tokenisation,docker,sagemaker,vectordb,milvus,pinecone
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Principal Consultant, Senior Data Scientist!
In this role, you will have a strong background in Gen AI implementations, data engineering, developing ETL processes, and utilizing machine learning tools to extract insights and drive business decisions. The Data Scientist will be responsible for analysing large datasets, developing predictive models, and communicating findings to various stakeholders.
Responsibilities:
Develop and maintain machine learning models to identify patterns and trends in large datasets.
Utilize Gen AI and various LLMs to design and develop production-ready use cases.
Collaborate with cross-functional teams to identify business problems and develop data-driven solutions.
Communicate complex data findings and insights to non-technical stakeholders in a clear and concise manner.
Continuously monitor and improve the performance of existing models and processes.
Stay up to date with industry trends and advancements in data science and machine learning.
Design and implement data models and ETL processes to extract, transform, and load data from various sources.
Good hands-on experience with AWS Bedrock models, SageMaker, Lambda, etc.
Data Exploration & Preparation - Conduct exploratory data analysis and clean large datasets for modeling.
Business Strategy & Decision Making - Translate data insights into actionable business strategies.
Mentor Junior Data Scientists - Provide guidance and expertise to junior team members.
Collaborate with Cross-Functional Teams - Work with engineers, product managers, and stakeholders to align data solutions with business goals.
Qualifications we seek in you!
Minimum Qualifications:
Bachelor's/Master's degree in Computer Science, Statistics, Mathematics, or a related field.
Relevant years of experience in a data science or analytics role.
Strong proficiency in SQL and experience with data warehousing and ETL processes.
Experience with programming languages such as Python or R is a must (either one).
Familiarity with machine learning tools and libraries such as Pandas, scikit-learn, and AI libraries.
Excellent knowledge of Gen AI, RAG, and LLM models, and a strong understanding of prompt engineering.
Proficiency in Azure OpenAI and AWS SageMaker implementation.
Good understanding of statistical techniques and advanced machine learning.
Proficiency in SQL and database management.
Familiarity with cloud-based data platforms such as AWS, Azure, or Google Cloud.
Experience with Azure ML Studio is desirable.
Knowledge of different machine learning algorithms and their applications.
Familiarity with data preprocessing and feature engineering techniques.
Preferred Qualifications/Skills:
Experience with model evaluation and performance metrics.
Understanding of deep learning and neural networks is a plus.
Certification in AWS Machine Learning or as an AWS Infrastructure Engineer is a plus.
Why join Genpact?
Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
Make an impact - Drive change for global enterprises and solve business challenges that matter
Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
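The responsibilities above mention hands-on work with AWS Bedrock models. As a rough sketch only: invoking a Bedrock-hosted text model from Python generally looks like the call below, though the model ID and the request/response JSON are model-specific and the ones shown are assumptions.

```python
# Hedged sketch of calling a Bedrock-hosted text model via boto3; the model ID
# and body schema vary by provider and are placeholders here.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    # Anthropic-style messages payload; other model families expect different fields.
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [{"role": "user", "content": "Draft a one-line summary of Q2 churn drivers."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID; verify availability
    body=body,
)

print(json.loads(response["body"].read()))  # provider-specific response structure
```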
Posted 1 month ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a Sr Manager Data Sciences —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Develop and execute a multi-year data-science strategy and roadmap that directly supports corporate objectives, translating it into measurable quarterly OKRs for the team. Lead, mentor and grow a high-performing staff of data scientists and ML engineers, providing technical direction, career development, and continuous‐learning opportunities. Own the end-to-end delivery of advanced analytics and machine-learning solutions—from problem framing and data acquisition through model deployment, monitoring and iterative improvement—ensuring each project delivers clear business value. Prioritise and manage a balanced portfolio of initiatives, applying ROI, risk and resource-capacity criteria to allocate effort effectively across research, clinical, manufacturing and commercial domains. Provide hands-on guidance on algorithm selection and experimentation (regression, classification, clustering, time-series, deep learning, generative-AI, causal inference), ensuring methodological rigour and reproducibility. Establish and enforce best practices for code quality, version control, MLOps pipelines, model governance and responsible-AI safeguards (privacy, fairness, explainability). Partner with Data Engineering, Product, IT Security and Business stakeholders to integrate models into production systems via robust APIs, dashboards or workflow automations with well-defined SLAs. Manage cloud and on-prem analytics environments, optimising performance, reliability and cost; negotiate vendor contracts and influence platform roadmaps where appropriate. Champion a data-driven culture by communicating insights and model performance to VP/SVP-level leaders through clear storytelling, visualisations and actionable recommendations. Track emerging techniques, regulatory trends and tooling in AI/ML; pilot innovations that keep the organisation at the forefront of data-science practice and compliance requirements. Must-Have Skills: Leadership & Delivery: 10+ years in advanced analytics with 4+ years managing high-performing data-science or ML teams, steering projects from problem framing through production. Algorithmic Expertise: Deep command of classical ML, time-series, deep-learning (CNNs, transformers) and causal-inference techniques, with sound judgement on when and how to apply each. 
Production Engineering: Expert Python and strong SQL, plus hands-on experience deploying models via modern MLOps stacks (MLflow, Kubeflow, SageMaker, Vertex AI or Azure ML) with automated monitoring and retraining. Business Influence: Proven ability to translate complex analytics into concise, outcome-oriented narratives that inform VP/SVP-level decisions and secure investment. Cloud & Cost Governance: Working knowledge of AWS, Azure or GCP, including performance tuning and cost-optimisation for large-scale data and GPU/CPU workloads. Responsible AI & Compliance: Familiarity with privacy, security and AI-governance frameworks (GDPR, HIPAA, GxP, EU AI Act) and a track record embedding fairness, explainability and audit controls throughout the model lifecycle. Good-to-Have Skills: Experience in the biotechnology or pharma industry is a big plus. Published thought-leadership or conference talks on enterprise GenAI adoption. Master's degree in Computer Science and/or Data Science. Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master's degree with 10-14+ years of experience in Computer Science, IT or related field OR Bachelor's degree with 12-17+ years of experience in Computer Science, IT or related field. Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 month ago
5.0 years
7 - 10 Lacs
Hyderābād
On-site
The Senior Software Engineer, AI Platform will facilitate the build and operation of the centralized AI platform for AI/ML development and deployment across Thomson Reuters business operations. The Senior Software Engineer, AI Platform is expected to be an expert in AI tooling and setting up and streamlining AI workflows and building applications to enable AI/ML workflow development, testing and deployment About the role: As a Senior Software Engineer, AI Platform , you will: Build and Maintain software that tracks the full lifecycle of ML from ideation to post deployment monitoring Assist in the deployment of machine learning models into production and support these models throughout their lifecycle, including GenAI models Build out features that help data scientists and AI novices to iterate and re-train models at speed and ease. Build out features that facilitate the data collection and annotation for non-structured data and NLP use cases Utilize a variety of software and tools both commercial and open source Enable self-service tooling for teams to create and maintain models Create and deploy tooling for model monitoring and model governance Be part of a model ops framework Continuously challenge and evolve the existing platform capabilities and keep up to date with new offerings About You You’re a fit for the role of Senior Software Engineer, AI Platform if you meet all or most of these criteria: 5 years in Software Engineering Hands-on experience working with public cloud technology (AWS, Azure, GCP) Ability to collaborate with scientists, product management and work with an engineering-focused, iterative team to build and establish product requirements. Comfortable building prototypes from scratch. Familiarity with AI concepts and hands on experience with AI solutions Experience with AWS sagemaker, Azure Studio or similar cloud AI capabilities Proficiency in modern programming languages and in particular Python Experience with relational and/or non-relational databases Experience with Agile development and delivery – Scrum, Lean, XP, Kanban methodologies. Deep understanding of computer science concepts, such as time and space complexity, data structures and basic algorithms. Experience building ETL data pipelines. Hands on DevOps experience – CI/CD in AWS, Git, Monitoring, Log Analytics #LI-HG1 What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. 
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 month ago
5.0 years
1 - 1 Lacs
Hyderābād
On-site
We are seeking individuals with advanced expertise in Machine Learning (ML) to join our dynamic team. As an Applied AI ML Lead within our Corporate Sector, you will play a pivotal role in developing machine learning and deep learning solutions, and experimenting with state-of-the-art models. You will contribute to our innovative projects and drive the future of machine learning at AI Technologies. You will use your knowledge of ML tools and algorithms to deliver the right solution. You will be a part of an innovative team, working closely with our product owners, data engineers, and software engineers to build new AI/ML solutions and productionize them. You will also mentor other AI engineers and scientists while fostering a culture of continuous learning and technical excellence. We are looking for someone with a passion for data, ML, and programming, who can build ML solutions at scale with a hands-on approach and detailed technical acumen. Job responsibilities Serve as a subject matter expert on a wide range of machine learning techniques and optimizations. Provide in-depth knowledge of machine learning algorithms, frameworks, and techniques. Enhance machine learning workflows through advanced proficiency in large language models (LLMs) and related techniques. Conduct experiments using the latest machine learning technologies, analyze results, and tune models. Engage in hands-on coding to transition experimental results into production solutions by collaborating with the engineering team, owning end-to-end code development in Python for both proof of concept/experimentation and production-ready solutions. Optimize system accuracy and performance by identifying and resolving inefficiencies and bottlenecks, collaborating with product and engineering teams to deliver tailored, science and technology-driven solutions. Integrate Generative AI within the machine learning platform using state-of-the-art techniques, driving decisions that influence product design, application functionality, and technical operations and processes. Required qualifications, capabilities, and skills Formal training or certification on AI/ML concepts and 5+ years applied experience Hands-on experience in programming languages, particularly Python. Ability to apply data science and machine learning techniques to address business challenges. Strong background in Natural Language Processing (NLP) and Large Language Models (LLMs). Expertise in deep learning frameworks such as PyTorch or TensorFlow, and advanced applied ML areas like GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (Similarity Search). Ability to complete tasks and projects independently with minimal supervision, with a passion for detail and follow-through. Excellent communication skills, team player, and demonstrated leadership in collaborating effectively with engineers, product managers, and other ML practitioners. Preferred qualifications, capabilities, and skills Exposure to Ray, MLflow, and/or other distributed training frameworks. MS and/or PhD in Computer Science, Machine Learning, or a related field. Understanding of Search/Ranking, Recommender systems, Graph techniques, and other advanced methodologies. Familiarity with Reinforcement Learning or Meta Learning. Understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and other related methods. Exposure to building and deploying ML models on cloud platforms such as AWS and AWS tools like SageMaker, EKS, etc.
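For candidates unfamiliar with the RAG (Similarity Search) pattern this listing mentions, a minimal sketch of embedding-based retrieval in Python follows. The random vectors stand in for the output of a real embedding model; the document texts and top-k value are illustrative assumptions, not part of the listing.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    # query: (d,) embedding of the question; docs: (n, d) embeddings of candidate passages.
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return d @ q

def retrieve_top_k(query_emb, doc_embs, texts, k=3):
    scores = cosine_similarity(query_emb, doc_embs)
    top_idx = np.argsort(scores)[::-1][:k]
    return [(texts[i], float(scores[i])) for i in top_idx]

# Toy example: random embeddings stand in for a real embedding model.
rng = np.random.default_rng(0)
texts = ["policy summary", "rate table", "fee schedule", "escalation guide"]
doc_embs = rng.normal(size=(len(texts), 8))
query_emb = rng.normal(size=8)

for text, score in retrieve_top_k(query_emb, doc_embs, texts, k=2):
    print(f"{score:.3f}  {text}")
```

In a full RAG workflow, the retrieved passages would be inserted into the LLM prompt before generation; this sketch only covers the similarity-search step.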
Posted 1 month ago
0 years
0 Lacs
Hyderābād
Remote
We are united in our mission to make a positive impact on healthcare. Join Us! South Florida Business Journal, Best Places to Work 2024 Inc. 5000 Fastest-Growing Private Companies in America 2024 2024 Black Book Awards, ranked #1 EHR in 11 Specialties 2024 Spring Digital Health Awards, "Web-based Digital Health" category for EMA Health Records (Gold) 2024 Stevie American Business Award (Silver), New Product and Service: Health Technology Solution (Klara) Who we are: We Are Modernizing Medicine (WAMM)! We're a team of bright, passionate, and positive problem-solvers on a mission to place doctors and patients at the center of care through an intelligent, specialty-specific cloud platform. Our vision is a world where the software we build increases medical practice success and improves patient outcomes. Founded in 2010 by Daniel Cane and Dr. Michael Sherling, we have grown to over 3400 combined direct and contingent team members serving eleven specialties, and we are just getting started! ModMed's global headquarters is based in Boca Raton, FL, with a growing office in Hyderabad, India, and a robust remote workforce across the US, Chile, and Germany. ModMed is hiring a driven ML Ops Engineer 2 to join our positive, passionate, and high-performing team focused on scalable ML Systems. This is an exciting opportunity for you, as you will collaborate with data scientists, engineers, and other cross-functional teams to ensure seamless model deployment, monitoring, and automation. If you're passionate about cloud infrastructure, automation, and optimizing ML pipelines, this is the role for you within a fast-paced Healthcare IT company that is truly Modernizing Medicine! Key Responsibilities: Model Deployment & Automation: Develop, deploy, and manage ML models on Databricks using MLflow for tracking experiments, managing models, and registering them in a centralized repository. Infrastructure & Environment Management: Set up scalable and fault-tolerant infrastructure to support model training and inference in cloud environments such as AWS, GCP, or Azure. Monitoring & Performance Optimization: Implement monitoring systems to track model performance, accuracy, and drift over time. Create automated systems for re-training and continuous learning to maintain optimal performance. Data Pipeline Integration: Collaborate with the data engineering team to integrate model pipelines with real-time and batch data processing frameworks, ensuring seamless data flow for training and inference. Skillset & Qualifications Model Deployment: Experience with deploying models in production using cloud platforms like AWS SageMaker, GCP AI Platform, or Azure ML Studio. Version Control & Automation: Experience with MLOps tools such as MLflow, Kubeflow, or Airflow to automate and monitor the lifecycle of machine learning models. Cloud Expertise: Experience with cloud-based machine learning services on AWS, Google Cloud, or Azure, ensuring that models are scalable and efficient. Engineers must be skilled in measuring and optimizing model performance through metrics like AUC, precision, recall, and F1-score, ensuring that models are robust and reliable in production settings. Education: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, or a related technical field.
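As an illustration of the MLflow experiment-tracking and model-registration workflow this listing references, here is a minimal sketch in Python. The experiment name, registered model name, and toy dataset are assumptions made for illustration; registering a model this way assumes a tracking server backed by a model registry is already configured.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-classifier")  # experiment name is illustrative

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("max_iter", 200)   # track the hyperparameter used
    mlflow.log_metric("auc", auc)       # track one of the metrics named in the listing
    # Logs the fitted model and registers it in the MLflow Model Registry
    # (requires a registry-backed tracking server; names are placeholders).
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="demo-classifier")
```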
ModMed in India Benefit Highlights: High growth, collaborative, transparent, fun, and award-winning culture Comprehensive benefits package including medical for you, your family, and your dependent parents The company supported community engagement opportunities along with a paid Voluntary Time Off day to use for volunteering in your community of interest Global presence, and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability Company-sponsored Employee Resource Groups that provide engaged and supportive communities within ModMed ModMed Benefits Highlight: At ModMed, we believe it's important to offer a competitive benefits package designed to meet the diverse needs of our growing workforce. Eligible Modernizers can enroll in a wide range of benefits: India Meals & Snacks: Enjoy complimentary office lunches & dinners on select days and healthy snacks delivered to your desk, Insurance Coverage: Comprehensive health, accidental, and life insurance plans, including coverage for family members, all at no cost to employees, Allowances: Annual wellness allowance to support your well-being and productivity, Earned, casual, and sick leaves to maintain a healthy work-life balance, Bereavement leave for difficult times and extended medical leave options, Paid parental leaves, including maternity, paternity, adoption, surrogacy, and abortion leave, Celebration leave to make your special day even more memorable, and company-paid holidays to recharge and unwind. United States Comprehensive medical, dental, and vision benefits, including a company Health Savings Account contribution, 401(k): ModMed provides a matching contribution each payday of 50% of your contribution deferred on up to 6% of your compensation. After one year of employment with ModMed, 100% of any matching contribution you receive is yours to keep. Generous Paid Time Off and Paid Parental Leave programs, Company paid Life and Disability benefits, Flexible Spending Account, and Employee Assistance Programs, Company-sponsored Business Resource & Special Interest Groups that provide engaged and supportive communities within ModMed, Professional development opportunities, including tuition reimbursement programs and unlimited access to LinkedIn Learning, Global presence and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability for some roles, Weekly catered breakfast and lunch, treadmill workstations, Zen, and wellness rooms within our BRIC headquarters. PHISHING SCAM WARNING: ModMed is among several companies recently made aware of a phishing scam involving imposters posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from ModMed without a formal interview process, and valid communications from our hiring team will come from our employees with a ModMed email address (first.lastname@modmed.com). Please check senders' email addresses carefully. Additionally, ModMed will not ask you to purchase equipment or supplies as part of your onboarding process. If you are receiving communications as described above, please report them to the FTC website.
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At Prodigal, we’re reshaping the future of consumer finance. Founded in 2018 by IITB alumni, our journey began with one bold mission: to eradicate the inefficiencies and confusion that have plagued the lending and collections industry for decades. Today, we stand at the forefront of a seismic shift in the industry, pioneering the concept of consumer finance intelligence. Powered by our cutting-edge platform, Prodigal’s Intelligence Engine, we’re creating the next-generation agentic workforce for consumer finance—one that empowers companies to achieve unprecedented levels of operational excellence. With over half a billion consumer finance interactions processed and a growing impact on more than 100 leading companies across North America, we’ve established ourselves as the go-to partner for organizations that demand more from their AI solutions. Our unparalleled experience, coupled with our trusted customer relationships, uniquely positions us to build generative AI solutions that will revolutionize the future of consumer finance. At Prodigal, we are driven by a singular, unrelenting purpose: to transform how consumer finance companies engage with their customers and, in turn, drive successful outcomes for all. About The Role We're seeking an exceptional Agent Engineer specializing in Applied AI and Prompt Engineering to join our team building next-generation industry centric vertical Voice AI Agents. In this role, you'll be at the intersection of AI engineering and customer experience, crafting and optimizing prompts that power natural, effective voice conversations in consumer finance. You'll work directly with our proprietary voice AI technology, analyzing conversation data, iterating on prompt strategies, and implementing systematic approaches to improve agent performance. This is a hands-on technical role that requires both engineering excellence and a deep understanding of conversational AI design. 
🏆 Key Responsibilities Prompt Engineering & Optimization Design, test, and refine prompts for voice AI agents handling complex financial conversations Develop systematic prompt engineering methodologies and best practices for voice interactions Ability to meta-prompt and run experiments with different LLMs Bring in new agent architectures to achieve the lowest latencies possible while maintaining high accuracy Create prompt templates and frameworks that scale across different use cases and customer segments Implement A/B testing strategies to measure and improve prompt effectiveness Data-Driven Optimization Analyze conversation transcripts and performance metrics to identify improvement opportunities Use our Simulation Platform and existing call corpus of our customers to tune and improve AI Agent performance Develop automated evaluation frameworks for AI agent quality assessment Create feedback loops (semi or fully automated) between production data and prompt refinements Optimize prompts for latency, accuracy, and natural conversation flow Voice AI Development Collaborate with ML engineers to improve voice recognition and synthesis quality Design conversation flows that handle edge cases and error recovery gracefully Implement context management strategies for multi-turn conversations Develop domain-specific language models for financial services Customer Success & Travel Travel to customer sites to learn from human agents and gather requirements Conduct on-site prompt optimization based on customer-specific needs to deliver rapid iterations of prototype versions ✅ Requirements Must-Have Qualifications B.E/B.Tech/M.Tech in Computer Science, AI/ML, Linguistics, or equivalent Strong Python programming skills and experience with ML frameworks Ability to work with large-scale conversation data Excellent analytical and problem-solving skills Strong verbal and written communication skills and ability to explain technical concepts clearly Willingness to travel in the US for customer engagements Technical Skills Proficiency in prompt engineering techniques (few-shot learning, chain-of-thought, meta-prompting, etc.; an illustrative sketch follows this listing)
Knowledge of SQL and data analysis tools (Pandas, NumPy) Experience with experiment tracking and MLOps tools Understanding of real-time system constraints and latency optimization Bonus Qualifications Familiarity with voice biometrics and authentication Experience with real-time streaming architectures Published research or blog posts on prompt engineering or conversational AI Experience with multilingual voice AI systems 🎁 What We Offer Job Benefits GenAI Experience – Work at the cutting edge of voice AI and prompt engineering, shaping the future of conversational AI in consumer finance World-class Team – Learn from and collaborate with experts from BCG, Deloitte, Meta, Amazon, and top institutes like IIT and IIM Continuous Education – Full sponsorship for courses, certifications, conferences, and learning materials related to AI and prompt engineering Travel Opportunities – Gain exposure to diverse customer environments and real-world AI implementations Health Insurance – Comprehensive coverage for you and your family Flexible Schedule – Work when you're most productive, with core collaboration hours Generous Leave Policy – Unlimited PTO to ensure you stay refreshed and creative Food at Office – All meals provided when working from the office Recreation & Team Activities – Regular team bonding through sports, games, and social events Our Tech Stack AI/ML: GPT, Claude, Gemini, Custom LLMs Languages: Python Infrastructure: AWS (Lambda, SageMaker, EKS), LiveKit for real-time streaming Data: MongoDB, PostgreSQL, Databricks, Redis Tools: MLflow, Custom prompt management platforms Why This Role Matters As a founding Agent Engineer focused on voice AI and prompt engineering, you'll directly impact how millions of consumers interact with financial services companies, and also be able to lay out how agent engineering grows at Prodigal. Your work will have immense implications for consumer finance. How Prodigal Operates: Setting the industry standard in Voice AI. From day 1, Prodigal has been defined by talented, humble, and hungry leaders and we want this mindset and culture to continue to blossom from top to bottom in the company. If you have an entrepreneurial spirit and want to work in a fast-paced, intellectually stimulating environment where you will be pushed to grow, then please reach out because we are looking to build a transformational company that reinvents one of the biggest industries in the US. To learn more about us, please visit the following: Our Story - https://www.prodigaltech.com/our-story What shapes our thinking - https://link.prodigaltech.com/our-thesis Our website - https://www.prodigaltech.com/
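Below is a minimal, framework-agnostic sketch of the few-shot prompting and prompt A/B comparison ideas referenced in this listing. The example dialogue turns, the call_llm callable, and the score function are hypothetical placeholders to be supplied by the reader; nothing here reflects Prodigal's actual prompts or evaluation harness.

```python
from typing import Callable

# Hypothetical worked examples shown to the model before the live turn (few-shot prompting).
FEW_SHOT_EXAMPLES = [
    ("I can't pay the full amount this month.",
     "Offer a payment-plan option and confirm the amount the caller can afford."),
    ("What is my current balance?",
     "State the balance clearly and ask if the caller wants a breakdown."),
]

def build_prompt(user_turn: str) -> str:
    # Assemble a few-shot prompt: system framing, worked examples, then the live caller turn.
    lines = ["You are a compliant, empathetic collections voice agent."]
    for utterance, ideal in FEW_SHOT_EXAMPLES:
        lines.append(f"Caller: {utterance}\nAgent: {ideal}")
    lines.append(f"Caller: {user_turn}\nAgent:")
    return "\n\n".join(lines)

def ab_compare(prompts: dict[str, str], transcripts: list[str],
               call_llm: Callable[[str], str],
               score: Callable[[str], float]) -> dict[str, float]:
    # Average a quality score per prompt variant over the same set of transcripts,
    # so variants can be compared on identical inputs.
    return {
        name: sum(score(call_llm(template + "\n\n" + t)) for t in transcripts) / len(transcripts)
        for name, template in prompts.items()
    }
```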
Posted 1 month ago
6.0 years
0 Lacs
Kochi, Kerala, India
Remote
Job Title: DevOps Engineer with MLOps & LLM (Remote) Job Type: Contract Contract Duration: 2 Months (as-needed basis) Experience: 6+ Years Location: Remote Job Description & Responsibilities We are seeking a skilled and proactive DevOps Engineer to contribute to the design and development of a scalable, user-friendly front-end application for our AI-driven platform enabling search, recommendations, and GIS-based interactions. The ideal candidate will have extensive experience in managing both cloud and on-premises infrastructure, implementing CI/CD pipelines, and optimizing system performance to ensure seamless integration and deployment of features. They should possess deep expertise in observability tooling, container orchestration, and MLOps for large language models (LLMs). This role involves managing the full DevOps lifecycle while closely collaborating with development, QA, and operations teams to drive reliability, scalability, and continuous delivery across secure, distributed environments. Manage infrastructure across AWS/Azure for development and lower environments. Set up, configure, and manage Kubernetes (K8s) clusters in hybrid environments. Integrate OpenTelemetry and Application Performance Monitoring (APM) tools for observability and performance tracking. Collaborate with ML teams to implement and scale MLOps pipelines for training, tuning, and deploying LLMs. Build and maintain continuous integration/continuous deployment (CI/CD) pipelines for efficient software delivery. Automate build, test, and deployment processes to streamline development workflows. Manage and configure development, staging, and production environments. Ensure consistency and stability across all environments through automation and configuration management tools. Implement security best practices, including access controls, vulnerability scanning, and compliance monitoring. Regularly update and patch infrastructure components to mitigate security risks. Set up and maintain monitoring tools to track application performance and system health. Configure logging and alerting mechanisms to proactively identify and address issues. Work closely with development, QA, and operations teams to troubleshoot issues and ensure smooth deployments. Provide guidance and support for infrastructure-related tasks and challenges. Identify and implement improvements to DevOps processes and workflows. Stay updated on industry trends and tools to enhance DevOps practices. Primary Skills Bachelor's degree in Computer Science, Information Technology, or a related field. 3–5 years of experience as a DevOps Engineer or in a similar role. Proficiency in designing and managing CI/CD pipelines using Azure DevOps, Jenkins, or similar tools. Experience with tools like Kubeflow, MLflow, or SageMaker for MLOps. Secondary Skills (if any) Proficiency in containerization tools such as Docker and orchestration platforms like Kubernetes. Strong knowledge of version control systems like Git and branching strategies. Understanding of network configurations, VPNs, firewalls, and load balancers. Familiarity with Agile methodologies and DevOps principles. Certifications Required (if any) Certification in relevant skills will be considered an added advantage
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Senior Software Engineer, AI Platform will facilitate the build and operation of the centralized AI platform for AI/ML development and deployment across Thomson Reuters business operations. The Senior Software Engineer, AI Platform is expected to be an expert in AI tooling and setting up and streamlining AI workflows and building applications to enable AI/ML workflow development, testing and deployment About the role: As a Senior Software Engineer, AI Platform , you will: Build and Maintain software that tracks the full lifecycle of ML from ideation to post deployment monitoring Assist in the deployment of machine learning models into production and support these models throughout their lifecycle, including GenAI models Build out features that help data scientists and AI novices to iterate and re-train models at speed and ease. Build out features that facilitate the data collection and annotation for non-structured data and NLP use cases Utilize a variety of software and tools both commercial and open source Enable self-service tooling for teams to create and maintain models Create and deploy tooling for model monitoring and model governance Be part of a model ops framework Continuously challenge and evolve the existing platform capabilities and keep up to date with new offerings About You You’re a fit for the role of Senior Software Engineer, AI Platform if you meet all or most of these criteria: 5 years in Software Engineering Hands-on experience working with public cloud technology (AWS, Azure, GCP) Ability to collaborate with scientists, product management and work with an engineering-focused, iterative team to build and establish product requirements. Comfortable building prototypes from scratch. Familiarity with AI concepts and hands on experience with AI solutions Experience with AWS sagemaker, Azure Studio or similar cloud AI capabilities Proficiency in modern programming languages and in particular Python Experience with relational and/or non-relational databases Experience with Agile development and delivery – Scrum, Lean, XP, Kanban methodologies. Deep understanding of computer science concepts, such as time and space complexity, data structures and basic algorithms. Experience building ETL data pipelines. Hands on DevOps experience – CI/CD in AWS, Git, Monitoring, Log Analytics What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. 
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 month ago
7.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad, Experience: 7-9 Years Employment Type: Full-time Company Description At Antz AI, we lead the AI revolution with a focus on AI Agentic Solutions. Our mission is to integrate intelligent AI Agents and Human-Centric AI solutions into core business processes, driving innovation, efficiency, and growth. Our consulting is grounded in robust data centralization, resulting in significant boosts in decision-making speed and reductions in operational costs. Through strategic AI initiatives, we empower people to achieve more meaningful and productive work. Role Summary: We are seeking a highly experienced and dynamic Senior AI Consultant with 7-9 years of overall experience to join our rapidly growing team. The ideal candidate will be a hands-on technologist with a proven track record of designing, developing, and deploying robust AI solutions, particularly leveraging agentic frameworks. This role demands a blend of deep technical expertise, strong system design capabilities, and excellent customer-facing skills to deliver impactful real-world products. We are looking for an immediate joiner who can hit the ground running. Key Responsibilities: Solution Design & Architecture: Lead the design and architecture of scalable, high-performance AI solutions, emphasizing agentic frameworks (e.g., Agno, Langgraph) and microservices architectures. Hands-on Development: Develop, implement, and optimize AI models, agents, and supporting infrastructure. Write clean, efficient, and well-documented code, adhering to software engineering best practices. Deployment & Operations: Oversee the deployment of AI solutions into production environments, primarily utilizing AWS services. Implement and maintain CI/CD pipelines to ensure seamless and reliable deployments. System Integration: Integrate AI solutions with existing enterprise systems and data sources, ensuring robust data flow and interoperability. Customer Engagement: Act as a key technical liaison with clients, understanding their business challenges, proposing AI-driven solutions, and presenting technical concepts clearly and concisely. Best Practices & Quality: Champion and enforce best practices in coding, testing, security, and MLOps to ensure the delivery of high-quality, maintainable, and scalable solutions. Problem Solving: Diagnose and resolve complex technical issues related to AI model performance, infrastructure, and integration. Mentorship: Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and excellence. Required Qualifications: Experience: 7-9 years of overall experience in software development, AI engineering, or machine learning, with a strong focus on deploying production-grade solutions. Agentic Frameworks: Demonstrated hands-on experience with agentic frameworks such as Langchain, Langgraph, Agno, AutoGen , or similar, for building complex AI workflows and autonomous agents. Microservices Architecture: Extensive experience in designing, developing, and deploying solutions based on microservices architectures. Cloud Platforms: Proven expertise in AWS services relevant to AI/ML and microservices (e.g., EC2, S3, Lambda, ECS/EKS, SageMaker, DynamoDB, API Gateway, SQS/SNS). Programming & MLOps: Strong proficiency in Python. Experience with MLOps practices, including model versioning, monitoring, and pipeline automation. System Design: Excellent understanding and practical experience in system design principles, scalability, reliability, and security. 
Real-World Deployment: A strong portfolio demonstrating successful deployment of AI products or solutions in real-world, production environments. Customer-Facing: Prior experience in customer-facing roles, with the ability to articulate complex technical concepts to non-technical stakeholders and gather requirements effectively. Immediate Availability: Ability to join immediately. Preferred Qualifications / Bonus Points: Experience with other cloud platforms (Azure, GCP). Knowledge of containerization technologies (Docker, Kubernetes). Familiarity with various machine learning domains (NLP, Computer Vision, Generative AI). Contributions to open-source AI projects.
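For context on the agentic workflows referenced in this listing, here is a deliberately simplified, plain-Python sketch of a tool-calling agent loop. It does not use Langchain, Langgraph, Agno, or AutoGen; the tool names and the keyword-based planner are hypothetical stand-ins for an LLM-driven planner that those frameworks would provide.

```python
from typing import Callable

# Hypothetical tool registry; real agentic frameworks provide richer tool abstractions.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda arg: f"order {arg}: shipped",
    "get_weather": lambda arg: f"weather in {arg}: 31 C, clear",
}

def plan(question: str) -> tuple[str, str]:
    # Stand-in planner: a real agent would ask an LLM which tool to call and with what argument.
    if "order" in question.lower():
        return "lookup_order", question.split()[-1]
    return "get_weather", question.split()[-1]

def run_agent(question: str, max_steps: int = 3) -> str:
    observations = []
    for _ in range(max_steps):
        tool, arg = plan(question)
        observations.append(TOOLS[tool](arg))
        # A real agent would loop until the LLM decides it can answer; one tool call suffices here.
        break
    return f"Question: {question}\nObservations: {observations}"

print(run_agent("What is the status of order 4821"))
```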
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Machine Learning Lead Engineer Job Summary: We are looking for a highly skilled Machine Learning Lead Engineer to lead the development and deployment of production-grade ML solutions. You will be responsible for overseeing the engineering efforts of the ML team, ensuring scalable model development, robust data pipelines, and seamless integration with production systems. This is a key technical leadership role for someone passionate about driving machine learning initiatives from design to deployment, while mentoring a team of engineers and collaborating with cross-functional stakeholders. Roles and Responsibilities: Technical Leadership & Delivery Lead the end-to-end development of ML models and pipelines—from data preparation and feature engineering to model training, validation, and deployment. Translate business requirements into scalable ML solutions, ensuring performance, maintainability, and production readiness. Supervise and support team members in their project work; ensure adherence to coding and MLOps best practices. Model Development & Optimization Design and implement machine learning models (e.g., classification, regression, NLP, recommendation) using TensorFlow, PyTorch, or Scikit-learn. Optimize models for accuracy, latency, and efficiency through feature selection, hyperparameter tuning, and evaluation metrics. Conduct model performance analysis and guide the team in continuous improvement strategies. MLOps & Productionization Implement robust ML pipelines using MLflow, Kubeflow, or similar tools for CI/CD, monitoring, and lifecycle management. Work closely with DevOps and platform teams to containerize models and deploy them on cloud platforms like AWS SageMaker, GCP Vertex AI, or Azure ML. Ensure monitoring, alerting, and retraining strategies are in place for models in production. Team Collaboration & Mentorship Guide and mentor a team of ML engineers and junior data scientists. Collaborate closely with architects, data engineers, and product teams to ensure seamless integration of ML components. Contribute to code reviews, design sessions, and knowledge-sharing initiatives. Skills and Qualifications: Must-Have Skills 5+ years of experience in ML engineering or applied data science, with a strong track record of delivering production-grade ML systems. Deep expertise in Python, data structures, algorithms, and ML frameworks (TensorFlow, PyTorch, Scikit-learn). Hands-on experience with data pipeline tools (Airflow, Spark, Kafka, Pyspark) and MLOps platforms (MLflow, Kubeflow). Strong knowledge of cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes). Solid understanding of model deployment, monitoring, and lifecycle management. Good-to-Have Skills Prior experience leading small to mid-sized technical teams. Exposure to business intelligence or data analytics. Cloud certifications (e.g., AWS Certified ML Specialty). Familiarity with agile methodologies and project tracking tools like Azure DevOps.
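As a small illustration of the hyperparameter tuning and evaluation workflow this listing describes, here is a minimal scikit-learn sketch. The dataset, parameter grid, and scoring metric are illustrative choices, not requirements of the role.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small, illustrative grid; a real search would cover more hyperparameters.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 8]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3, scoring="f1")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
# Held-out evaluation with the metrics a lead would review before promoting a model.
print(classification_report(y_test, search.best_estimator_.predict(X_test)))
```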
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Job Title: Machine Learning Engineer Location: 100% Remote Job Type: Full-Time About the Role: We are seeking a highly skilled and motivated Machine Learning Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively. Key Responsibilities: Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks. Collaborate with data scientists to transform prototypes into scalable, production-ready models. Deploy, monitor, and maintain ML pipelines in production environments. Perform data preprocessing, feature engineering, and selection from structured and unstructured data. Implement model performance evaluation metrics and improve accuracy through iterative tuning. Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage model lifecycle. Maintain clear documentation and collaborate cross-functionally across teams. Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 2–5 years of experience in ML model development and deployment. Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc. Strong understanding of machine learning algorithms, statistical modeling, and data analysis. Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow. Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models. Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
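To illustrate the data preprocessing and feature engineering work mentioned in this listing, a minimal scikit-learn pipeline sketch follows. The toy churn table and column names are assumptions made only for illustration.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy structured data standing in for a real feature table.
df = pd.DataFrame({
    "age": [25, 40, 31, 58],
    "plan": ["basic", "pro", "basic", "enterprise"],
    "churned": [0, 1, 0, 1],
})

# Scale numeric columns, one-hot encode categoricals; unseen categories are ignored at inference.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
])

pipeline = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
pipeline.fit(df[["age", "plan"]], df["churned"])
print(pipeline.predict(df[["age", "plan"]]))
```

Bundling preprocessing and the model in one Pipeline keeps training and serving transformations identical, which is the usual guard against training/serving skew.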
Posted 1 month ago
2.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description - AI Data Scientist Location: Remote Department: Data & AI Engineering Employment Type: Full-time Experience Level: Mid-level About the Role: We are seeking an experienced AI Data Engineer to design, build, and deploy data pipelines and ML infrastructure to power scalable AI/ML solutions. This role involves working at the intersection of data engineering, MLOps, and model deployment—supporting the end-to-end lifecycle from data ingestion to model production. Key Responsibilities: Data Engineering & Development Design, develop, and train AI models to solve complex business problems and enable intelligent automation. Design, develop, and maintain scalable data pipelines and workflows for AI/ML applications. Ingest, clean, and transform large volumes of structured and unstructured data from diverse sources (APIs, streaming, databases, flat files). Build and manage data lakes, data warehouses, and feature stores. Prepare training datasets and implement data preprocessing logic. Perform data quality checks, validation, lineage tracking, and schema versioning. Model Deployment & MLOps Package and deploy AI/ML models to production using CI/CD workflows. Implement model inference pipelines (batch or real-time) using containerized environments (Docker, Kubernetes). Use MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI) for model tracking, versioning, and deployment. Monitor deployed models for performance, drift, and reliability. Integrate deployed models into applications and APIs (e.g., REST endpoints). Platform & Cloud Engineering Manage cloud-based infrastructure (AWS, GCP, or Azure) for data storage, compute, and ML services. Automate infrastructure provisioning using tools like Terraform or CloudFormation. Optimize pipeline performance and resource utilization for cost-effectiveness. Requirements: Must-Have Skills Bachelor's/Master’s in Computer Science, Engineering, or related field. 2+ years of experience in data engineering, ML engineering, or backend infrastructure. Proficient in Python, SQL, and data processing frameworks (e.g., Spark, Pandas). Experience with cloud platforms (AWS/GCP/Azure) and services like S3, BigQuery, Lambda, or Databricks. Hands-on experience with CI/CD, Docker, and container orchestration (Kubernetes, ECS, EKS). Preferred Skills Experience deploying ML models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Familiarity with API development (Flask/FastAPI) for serving models. Experience with Airflow, Prefect, or Dagster for orchestrating pipelines. Understanding of DevOps and MLOps best practices. Soft Skills: Strong communication and collaboration with cross-functional teams. Proactive problem-solving attitude and ownership mindset. Ability to document and communicate technical concepts clearly.
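As an example of serving a deployed model behind a REST endpoint, as this listing describes (FastAPI is named under preferred skills), here is a minimal sketch. The model path, feature schema, and route name are illustrative assumptions.

```python
# Minimal REST inference endpoint; assumes a scikit-learn model serialized at model.pkl.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")  # illustrative path to a previously trained model

class Features(BaseModel):
    values: list[float]  # flat feature vector; a real schema would name each field

@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}

# Run with: uvicorn app:app --port 8000  (assuming this file is saved as app.py)
```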
Posted 1 month ago
3.0 years
2 - 10 Lacs
Bengaluru
On-site
- 3+ years of non-internship professional software development experience - 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience - Experience programming with at least one software programming language AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. At AWS AI, we want to make it easy for our customers to train their deep learning workloads in the cloud. With Amazon SageMaker Training, we are building customer-facing services to empower data scientists and software engineers in their deep learning endeavors. As our customers rapidly adopt LLMs and Generative AI for their business, we're building the next-generation AI platform to accelerate their development. We're seeking a dedicated software engineer to drive building our next-generation AI compute platform that's optimized for LLMs and distributed training. Key job responsibilities As an SDE, you will be responsible for designing, developing, testing, and deploying distributed machine learning systems and large-scale solutions for our worldwide customer base. In this, you will collaborate closely with a team of ML scientists and customers to influence our overall strategy and define the team's roadmap. You'll assist in gathering and analyzing business and functional requirements, and translate requirements into technical specifications for robust, scalable, supportable solutions that work well within the overall system architecture. You will also drive the system architecture, spearhead best practices that enable a quality product, and help coach and develop junior engineers. A successful candidate will have an established background in engineering large-scale software systems, a strong technical ability, great communication skills, and a motivation to achieve results in a fast-paced environment. About You: You are passionate about building platforms and products for large-scale deep learning model training (100+ billion parameter GPT, 1000s of GPU devices). You have a proven track record of bringing innovative research to customers. You are able to thrive and succeed in an entrepreneurial environment and not be hindered by ambiguity or competing priorities. Ownership, delivering results, thinking big and analytical leadership are essential to success in this role. You have solid experience in multi-threaded asynchronous C++ or Go development.
You have prior experience in one of: resource orchestrators like slurm/kubernetes, high performance computing, building scalable systems, experience in large language model training. This is a great team to come to have a huge impact on AWS and the world's customers we serve! A day in the life Every day will bring new and exciting challenges on the job while you: * Build and improve next-generation AI platform * Collaborate with internal engineering teams, leading technology companies around the world and open source community - PyTorch, NVIDIA/GPU * Create innovative products to run at scale on the AI platform, and see them launched in high volume production About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS ? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
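For context on the distributed training work this listing centers on, here is a minimal PyTorch DistributedDataParallel sketch. The toy linear model, synthetic data, and gloo backend are illustrative assumptions; real LLM training would use NCCL on GPUs along with sharding strategies such as FSDP, which are beyond this sketch.

```python
# Minimal single-node DistributedDataParallel sketch; launch with:
#   torchrun --nproc_per_node=2 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK/WORLD_SIZE/MASTER_ADDR env vars that init_process_group reads.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU hosts
    rank = dist.get_rank()

    model = torch.nn.Linear(16, 1)          # toy model standing in for a real network
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for step in range(5):
        x = torch.randn(32, 16)             # synthetic batch; real jobs use a DistributedSampler
        y = torch.randn(32, 1)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                      # gradients are all-reduced across workers here
        optimizer.step()
        if rank == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```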
Posted 1 month ago
5.0 years
1 - 2 Lacs
Bengaluru
On-site
JOB DESCRIPTION We have an exciting and rewarding opportunity for you to take your Predictive Science career to the next level. As an Applied AI ML Lead - Data Scientist- Vice President at JPMorgan Chase within the Commercial & Investment Bank's Global Banking team, you’ll leverage your technical expertise and leadership abilities to support AI innovation. You should have deep knowledge of AI/ML and effective leadership to inspire the team, align cross-functional stakeholders, engage senior leadership, and drive business results. Job Responsibilities: Lead a local AI/ML team with accountability and engagement into a global organization. Mentor and guide team members, fostering an inclusive culture with a growth mindset. Collaborate on setting the technical vision and executing strategic roadmaps to drive AI innovation. Deliver AI/ML projects through our ML development life cycle using Agile methodology, help transform business requirements into AI/ML specifications, define milestones and ensure timely delivery. Work with product and business teams to define goals and roadmaps, maintain alignment with cross-functional stakeholders. Exercise sound technical judgment, anticipate bottlenecks, escalate effectively, and balance business needs versus technical constraints. Design experiments, establish mathematical intuitions, implement algorithms, execute test cases, validate results and productionize highly performant, scalable, trustworthy and often explainable solution. Mentor junior team members in delivering successful projects and building successful career in the firm. Participate and contribute back to firmwide Machine Learning communities through patenting, publications and speaking engagements. Evaluate and design effective processes and systems to facilitate communication, improve execution and ensure accountability. Required qualifications, capabilities, and skills Formal training or certification on Predictive Science concepts and 5+ years applied experience Track record of managing AI/ML or software development teams and experience as a hands-on practitioner developing production AI/ML solutions. Deep knowledge and experience in machine learning, artificial intelligence and ability to set teams up for success in speed and quality, design effective metrics and hypotheses. Expert in at least one of the following areas: Large Language Models, Natural Language Processing, Knowledge Graph, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis. Good understanding of Data structures, Algorithms, Machine Learning, Data Mining, Information Retrieval, Statistics. Demonstrated expertise in machine learning frameworks: Tensorflow, Pytorch, pyG, Keras, MXNet, Scikit-Learn. Strong programming knowledge of python, spark, strong grasp on vector operations using numpy, scipy and strong grasp on distributed computation using Multithreading, Multi GPUs, Dask, Ray, Polars etc. Familiarity with agentic workflows and relevant frameworks, such as LangChain, LangGraph, Auto-GPT etc. Familiarity in AWS Cloud services such as EMR, Sagemaker etc., Strong people management, team-building skills and ability to coach and grow talent, foster a healthy engineering culture and attract/retain talent, build a diverse, inclusive, and high-performing team. Ability to inspire collaboration among teams composed of both technical and non-technical members, effective communication, solid negotiation skills, and strong leadership. 
Preferred qualifications, capabilities, and skills 14+ years (BE/BTech/BS) or 8+ (ME/MTech/MS) or 5+ (PhD) years of relevant experience in Computer Science, Information Systems, Statistics, Mathematics, or equivalent experience. Experience working at code level ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM J.P. Morgan’s Commercial & Investment Bank is a global leader across banking, markets, securities services and payments. Corporations, governments and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk and extends liquidity in markets around the world.
Posted 1 month ago