
1558 Sagemaker Jobs - Page 27

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 years

1 - 1 Lacs

Hyderābād

On-site

JOB DESCRIPTION
We are seeking individuals with advanced expertise in Machine Learning (ML) to join our dynamic team. As an Applied AI ML Lead within our Corporate Sector, you will play a pivotal role in developing machine learning and deep learning solutions and experimenting with state-of-the-art models. You will contribute to our innovative projects and drive the future of machine learning at AI Technologies. You will use your knowledge of ML tools and algorithms to deliver the right solution. You will be part of an innovative team, working closely with our product owners, data engineers, and software engineers to build new AI/ML solutions and productionize them. You will also mentor other AI engineers and scientists while fostering a culture of continuous learning and technical excellence. We are looking for someone with a passion for data, ML, and programming who can build ML solutions at scale with a hands-on approach and detailed technical acumen.

Job responsibilities
- Serve as a subject matter expert on a wide range of machine learning techniques and optimizations.
- Provide in-depth knowledge of machine learning algorithms, frameworks, and techniques.
- Enhance machine learning workflows through advanced proficiency in large language models (LLMs) and related techniques.
- Conduct experiments using the latest machine learning technologies, analyze results, and tune models.
- Engage in hands-on coding to transition experimental results into production solutions by collaborating with the engineering team, owning end-to-end code development in Python for both proof-of-concept/experimentation and production-ready solutions.
- Optimize system accuracy and performance by identifying and resolving inefficiencies and bottlenecks, collaborating with product and engineering teams to deliver tailored, science- and technology-driven solutions.
- Integrate Generative AI within the machine learning platform using state-of-the-art techniques, driving decisions that influence product design, application functionality, and technical operations and processes.

Required qualifications, capabilities, and skills
- Formal training or certification on AI/ML concepts and 5+ years of applied experience.
- Hands-on experience in programming languages, particularly Python.
- Ability to apply data science and machine learning techniques to address business challenges.
- Strong background in Natural Language Processing (NLP) and Large Language Models (LLMs).
- Expertise in deep learning frameworks such as PyTorch or TensorFlow, and advanced applied ML areas like GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (similarity search).
- Ability to complete tasks and projects independently with minimal supervision, with a passion for detail and follow-through.
- Excellent communication skills, team player, and demonstrated leadership in collaborating effectively with engineers, product managers, and other ML practitioners.

Preferred qualifications, capabilities, and skills
- Experience with Ray, MLflow, and/or other distributed training frameworks.
- MS and/or PhD in Computer Science, Machine Learning, or a related field.
- Understanding of Search/Ranking, Recommender systems, Graph techniques, and other advanced methodologies.
- Familiarity with Reinforcement Learning or Meta Learning.
- Understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and other related methods.
- Experience building and deploying ML models on cloud platforms such as AWS, and AWS tools like SageMaker, EKS, etc.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands.
Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans ABOUT THE TEAM Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. 
Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
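The RAG (similarity search) skill this posting calls for reduces to ranking documents by embedding similarity. A minimal sketch of that ranking step, using toy 3-dimensional vectors in place of a real embedding model (all names and values here are illustrative, not any particular library's API):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, doc_vecs, k=2):
    # Return the indices of the k document embeddings most similar to the query.
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy embeddings; a production system would compute these with an embedding model.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(retrieve(query, docs))  # → [0, 1]: most similar documents first
```

The retrieved documents are then passed to the LLM as grounding context, which is the "G" step of RAG.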

Posted 1 month ago

Apply

4.0 - 8.0 years

1 - 3 Lacs

Hyderābād

On-site

About the Role:
Grade Level (for internal use): 10

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What’s in it for you:
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities:
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full lifecycle delivery of complex data projects, including:
  - Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB)
  - Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC
  - Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates
  - Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define and continuously improve platform standards, coding guidelines, and operational runbooks
- Conduct code reviews, pair programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements:
- 4-8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing; high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluate and adopt emerging tools, languages, and frameworks

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)

What’s In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. 
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law.
Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
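The Celery/Redis task-queue pattern in this posting's responsibilities is, at its core, a producer pushing extraction tasks onto a broker while workers consume them. A broker-free, in-process sketch of the same pattern using only the standard library (Celery would replace the queue with Redis and the loop with its own worker processes; the task logic and names here are illustrative):

```python
import queue
import threading

task_queue = queue.Queue()
results = {}

def extract(doc_id):
    # Stand-in for a real extraction task (hypothetical logic).
    return f"extracted:{doc_id}"

def worker():
    # Consume tasks until the producer signals shutdown with a None sentinel.
    while True:
        doc_id = task_queue.get()
        if doc_id is None:
            task_queue.task_done()
            break
        results[doc_id] = extract(doc_id)
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for doc_id in ["a", "b", "c"]:   # producer enqueues work
    task_queue.put(doc_id)
task_queue.put(None)             # sentinel: no more work
task_queue.join()                # block until every task is processed
t.join()
print(results)                   # → {'a': 'extracted:a', 'b': 'extracted:b', 'c': 'extracted:c'}
```

The same shape scales out once the in-memory queue is swapped for a shared broker: producers and workers no longer need to share a process.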

Posted 1 month ago

Apply

10.0 years

1 - 6 Lacs

Thiruvananthapuram

On-site

We are seeking a visionary and highly skilled AI Architect to join our leadership team. This pivotal role will be responsible for defining and implementing the end-to-end architecture for deploying our machine learning models, including advanced Generative AI and LLM solutions, into production. You will lead and mentor a talented cross-functional team of Data Scientists, Backend Developers, and DevOps Engineers, fostering a culture of innovation, technical excellence, and operational efficiency.

Key responsibilities:

Architectural Leadership:
- Design, develop, and own the scalable, secure, and reliable end-to-end architecture for deploying and serving ML models, with a strong focus on real-time inference and high availability.
- Lead the strategy and implementation of the in-house API wrapper infrastructure for exposing ML models to internal and external customers.
- Define architectural patterns, best practices, and governance for MLOps, ensuring robust CI/CD pipelines, model versioning, monitoring, and automated retraining.
- Evaluate and select the optimal technology stack (cloud services, open-source frameworks, tools) for our ML serving infrastructure, balancing performance, cost, and maintainability.

Team Leadership & Mentorship:
- Lead, mentor, and inspire a diverse team of Data Scientists, Backend Developers, and DevOps Engineers, guiding them through complex architectural decisions and technical challenges.
- Foster a collaborative environment that encourages knowledge sharing, continuous learning, and innovation across teams.
- Drive technical excellence, code quality, and adherence to engineering best practices within the teams.

Generative AI & LLM Expertise:
- Architect and implement solutions for deploying Large Language Models (LLMs), including strategies for efficient inference, prompt engineering, and context management.
- Drive the adoption and integration of techniques like Retrieval Augmented Generation (RAG) to enhance LLM capabilities with proprietary and up-to-date information.
- Develop strategies for fine-tuning LLMs for specific downstream tasks and domain adaptation, ensuring efficient data pipelines and experimentation frameworks.
- Stay abreast of the latest advancements in AI, particularly in Generative AI, foundation models, and emerging MLOps tools, and evaluate their applicability to our business needs.

Collaboration & Cross-Functional Impact:
- Collaborate closely with Data Scientists to understand model requirements, optimize models for production, and integrate them seamlessly into the serving infrastructure.
- Partner with Backend Developers to build robust, secure, and performant APIs that consume and serve ML predictions.
- Work hand-in-hand with DevOps Engineers to automate deployment, monitoring, scaling, and operational excellence of the AI infrastructure.
- Communicate complex technical concepts and architectural decisions effectively to both technical and non-technical stakeholders.

Requirements (Qualifications/Experience/Competencies)

Education:
- Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related quantitative field.

Experience:
- 10+ years of progressive experience in software engineering, with at least 5+ years in an Architect or Lead role.
- Proven experience leading and mentoring cross-functional engineering teams (Data Scientists, Backend Developers, DevOps).
- Demonstrated experience in designing, building, and deploying scalable, production-grade ML model serving infrastructure from the ground up.

Technical Skills:
- Deep expertise in MLOps principles and practices, including model versioning, serving, monitoring, and CI/CD for ML.
- Strong proficiency in Python and experience with relevant web frameworks (e.g., FastAPI, Flask) for API development.
- Expertise in containerization technologies (Docker) and container orchestration (Kubernetes) for large-scale deployments.
- Hands-on experience with at least one major cloud platform (AWS, Google Cloud, Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, Azure ML).
- Demonstrable experience with Large Language Models (LLMs), including deployment patterns, prompt engineering, and fine-tuning methodologies.
- Practical experience implementing and optimizing Retrieval Augmented Generation (RAG) systems.
- Familiarity with distributed systems, microservices architecture, and API design best practices.
- Experience with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack, Datadog).
- Knowledge of infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation).

Leadership & Soft Skills:
- Exceptional leadership, mentorship, and team-building abilities.
- Strong analytical and problem-solving skills, with a track record of driving complex technical initiatives to completion.
- Excellent communication (verbal and written) and interpersonal skills, with the ability to articulate technical concepts to diverse audiences.
- Strategic thinker with the ability to align technical solutions with business objectives.
- Proactive, self-driven, and continuously learning mindset.

Bonus Points:
- Experience with specific ML serving frameworks like BentoML, KServe/KFServing, TensorFlow Serving, TorchServe, or NVIDIA Triton Inference Server.
- Contributions to open-source MLOps or AI projects.
- Experience with data governance, data security, and compliance in an AI context.
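The model-versioning requirement this posting lists can be illustrated with a tiny in-memory registry that serves the latest registered version unless a version is pinned. This is only a sketch of the pattern, with hypothetical names; a production system would use something like MLflow's Model Registry or SageMaker Model Registry instead:

```python
class ModelRegistry:
    """Minimal in-memory model registry keyed by (name, version)."""

    def __init__(self):
        self._models = {}

    def register(self, name, version, model):
        # Store the model under its name and integer version.
        self._models.setdefault(name, {})[version] = model

    def get(self, name, version=None):
        # Serve a pinned version if requested, otherwise the highest version.
        versions = self._models[name]
        if version is None:
            version = max(versions)
        return versions[version]

registry = ModelRegistry()
registry.register("scorer", 1, lambda x: x * 2)   # v1: toy "model"
registry.register("scorer", 2, lambda x: x * 3)   # v2 supersedes v1
print(registry.get("scorer")(10))      # latest (v2) → 30
print(registry.get("scorer", 1)(10))   # pinned rollback to v1 → 20
```

The pinned lookup is what makes rollback cheap: a serving endpoint only needs to change the version it requests, not redeploy artifacts.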

Posted 1 month ago

Apply

0 years

7 - 9 Lacs

Gurgaon

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - AI Senior Engineer! In this role you’ll be leveraging Azure’s or AWS’s advanced AI capabilities, including Azure Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS SageMaker, and AWS Bedrock, to deliver scalable and efficient solutions. You will also ensure seamless integration into enterprise workflows and operationalize models with robust monitoring and optimization.

Responsibilities
- AI Orchestration: Design and manage AI orchestration flows using tools such as Prompt Flow or LangChain; continuously evaluate and refine models to ensure optimal accuracy, latency, and robustness in production.
- Document AI and Data Extraction: Build AI-driven workflows for extracting structured and unstructured data from receipts, reports, and other documents using Azure AI Document Intelligence and Azure Cognitive Services.
- RAG Systems: Design and implement retrieval-augmented generation (RAG) systems using vector embeddings and LLMs for intelligent and efficient document retrieval; optimize RAG workflows for large datasets and low-latency operations.
- Monitoring and Optimization: Implement advanced monitoring systems using Azure Monitor, Application Insights, and Log Analytics to track model performance and system health; continuously evaluate and refine models and workflows to meet enterprise-grade SLAs for performance and reliability.
- Collaboration and Documentation: Collaborate with data engineers, software developers, and DevOps teams to deliver robust and scalable AI-driven solutions; document best practices, workflows, and troubleshooting guides for knowledge sharing and scalability.

Qualifications we seek in you

Minimum Qualifications
- Proven experience with Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS Bedrock, and SageMaker; proficiency in building and optimizing RAG systems for document retrieval and comparison.
- Strong understanding of AI/ML concepts, including natural language processing (NLP), embeddings, model fine-tuning, and evaluation.
- Experience in applying machine learning algorithms and techniques to solve complex problems in real-world applications.
- Familiarity with state-of-the-art LLM architectures and their practical implementation in production environments.
- Expertise in designing and managing Prompt Flow pipelines for task-specific customization of LLM outputs.
- Hands-on experience in training LLMs and evaluating their performance using appropriate metrics for accuracy, latency, and robustness; proven ability to iteratively refine models to meet specific business needs and optimize them for production environments.
- Knowledge of ethical AI practices and responsible AI frameworks.
- Experience with CI/CD pipelines using Azure DevOps or equivalent tools; familiarity with containerized environments managed through Docker and Kubernetes.
- Knowledge of Azure Key Vault, Managed Identities, and Azure Active Directory (AAD) for secure authentication.
- Experience with PyTorch or TensorFlow.
- Proven track record of developing and deploying Azure-based AI solutions for large-scale, enterprise-grade environments.
- Strong analytical and problem-solving skills, with a results-driven approach to building scalable and secure systems.

Why join Genpact?
- Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
- Make an impact – Drive change for global enterprises and solve business challenges that matter
- Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture – Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws.
Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Senior Principal Consultant Primary Location India-Gurugram Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 30, 2025, 6:08:04 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
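The RAG responsibilities in the posting above ultimately reduce to placing retrieved chunks into the model prompt as grounding context. A minimal, framework-free sketch of that assembly step (the template and function name are illustrative, not a Prompt Flow or LangChain API):

```python
def build_rag_prompt(question, retrieved_chunks):
    # Number the retrieved context chunks, then ask the model to answer
    # from that context only.
    context = "\n\n".join(f"[{i + 1}] {chunk}"
                          for i, chunk in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

chunks = ["Invoices are due in 30 days.", "Late invoices accrue 2% interest."]
prompt = build_rag_prompt("When are invoices due?", chunks)
print(prompt)
```

In a full pipeline this prompt would be sent to an LLM endpoint; the numbered chunk markers make it easy to ask the model for citations back into the retrieved documents.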

Posted 1 month ago

Apply


Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams. Years of Experience: Candidates with 2+ years of hands-on experience. Must Have: Experience in building ML models in cloud environments (at least one of: Azure ML, AWS SageMaker, or Databricks). Knowledge of predictive/prescriptive analytics, especially the use of Log-Log, Log-Linear, and Bayesian Regression techniques, as well as Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and Artificial Neural Networks. Good knowledge of statistics, e.g., statistical tests and distributions. Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases. Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib). Advanced-level programming in SQL or Python/PySpark. Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight. Nice To Have: Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow). Good communication and presentation skills. Roles And Responsibilities: Develop and execute project and analysis plans under the guidance of the Project Manager. Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved. Drive and conduct analysis using advanced analytics tools and coach junior team members. Implement the necessary quality control measures to ensure deliverable integrity. Validate analysis outcomes and recommendations with all stakeholders, including the client team. Build storylines and make presentations to the client team and/or PwC project leadership team. Contribute to knowledge- and firm-building activities. Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
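As an illustration of the data-preparation skill the listing names (cleansing, standardization), the core transformation is a z-score standardization. The sketch below uses only the Python standard library; the sample values and the 1.5-sigma outlier threshold are invented purely for demonstration.

```python
from statistics import mean, stdev

def standardize(values):
    """Z-score standardization: subtract the mean, divide by the sample stdev."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        raise ValueError("cannot standardize a constant column")
    return [(v - mu) / sigma for v in values]

raw = [12.0, 15.0, 14.0, 10.0, 49.0]   # 49.0 plays the role of an outlier
z = standardize(raw)

# Values more than 1.5 sample standard deviations from the mean get flagged
# for review during cleansing.
outliers = [v for v, s in zip(raw, z) if abs(s) > 1.5]
```

After standardization the column has mean 0 and sample standard deviation 1, which puts differently scaled features on a common footing before model fitting.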

Posted 1 month ago

Apply


2.0 - 6.0 years

1 - 3 Lacs

Ahmedabad

On-site

About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical 
Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. 
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. 
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
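The Celery/Redis task queues mentioned in the listing above distribute extraction work to background workers. Real Celery brokers tasks through Redis or RabbitMQ across processes; as a rough in-process stand-in only, the same producer/worker shape can be sketched with the standard library (the uppercase transform is a placeholder for an extraction step):

```python
import queue
import threading

task_q = queue.Queue()
results = {}

def worker():
    # Pull tasks until the shutdown sentinel (None) arrives; a Celery worker
    # does the same across processes, consuming from a Redis/RabbitMQ broker.
    while True:
        item = task_q.get()
        if item is None:
            break
        doc_id, payload = item
        results[doc_id] = payload.upper()  # placeholder "extraction" step
        task_q.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for i, text in enumerate(["alpha", "beta", "gamma"]):
    task_q.put((i, text))

task_q.join()              # block until every queued task is marked done
for _ in workers:
    task_q.put(None)       # one sentinel per worker to shut them down
for w in workers:
    w.join()
```

The `Queue.join()` / `task_done()` pairing is what lets the producer wait for completion without polling; Celery provides the same guarantee via result backends.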

Posted 1 month ago

Apply

4.0 - 8.0 years

1 - 3 Lacs

Ahmedabad

On-site

About the Role: Grade Level (for internal use): 10 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative. What’s in it for you: Drive solutions at enterprise scale within a global organization Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers) Solve high-complexity, high-impact problems from end to end Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines Responsibilities: Architect, develop, and operate robust data extraction and automation pipelines in production Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring) Lead full lifecycle delivery of complex data projects, including: Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB) Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation Define and continuously improve platform standards, coding guidelines, and operational runbooks Conduct code reviews, pair programming sessions, and provide technical mentorship Partner with 
data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs Technical Requirements: 4-8 years' hands-on experience in data engineering, with proven track record on critical projects Expert in Python for building extraction libraries, RESTful APIs, and automation scripts Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced) Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints) Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies Advanced testing practices: unit, integration, and load testing; high coverage enforcement Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack) Excellent debugging, performance-tuning, and automation capabilities Openness to evaluate and adopt emerging tools, languages, and frameworks Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field Prior contributions to open-source projects, GitHub repos, or technical publications Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi) Familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. 
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. 
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. 
Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
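Deploying a model behind an ECS or Lambda endpoint, as this listing requires, typically means wrapping inference in a small JSON-in/JSON-out handler. The sketch below follows the AWS Lambda handler convention, but the event shape, feature names, and the trivial linear scorer are all invented for illustration:

```python
import json

# Placeholder "model": in a real service this artifact would be loaded once
# at cold start (e.g. a pickled scikit-learn model or a SageMaker endpoint).
WEIGHTS = {"pages": 0.4, "links": 0.6}

def score(features):
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

def handler(event, context=None):
    """Lambda-style entry point: parse the request, score it, respond."""
    try:
        body = json.loads(event["body"])
        prediction = score(body["features"])
    except (KeyError, TypeError, json.JSONDecodeError) as exc:
        return {"statusCode": 400, "body": json.dumps({"error": str(exc)})}
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}

resp = handler({"body": json.dumps({"features": {"pages": 5, "links": 10}})})
```

Returning a 400 with an error body, rather than letting the exception propagate, keeps malformed requests from surfacing as opaque 5xx errors at the API gateway.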

Posted 1 month ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking candidates with a strong analytical background to work in our Analytics Consulting practice. Senior Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams. Years of Experience: Candidates with 4+ years of hands-on experience. Must Have: Experience in building ML models in cloud environments (at least one of: Azure ML, GCP’s Vertex AI platform, or AWS SageMaker). Knowledge of predictive/prescriptive analytics, especially the use of Log-Log, Log-Linear, and Bayesian Regression techniques, as well as Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and Artificial Neural Networks. Good knowledge of statistics, e.g., statistical tests and distributions. Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases. Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib). Advanced-level programming in SQL or Python/PySpark. Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight. Nice To Have: Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow). Good communication and presentation skills. Roles And Responsibilities: Develop and execute project and analysis plans under the guidance of the Project Manager. Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved. Drive and conduct analysis using advanced analytics tools and coach junior team members. Implement the necessary quality control measures to ensure deliverable integrity. Validate analysis outcomes and recommendations with all stakeholders, including the client team. Build storylines and make presentations to the client team and/or PwC project leadership team. Contribute to knowledge- and firm-building activities. Professional And Educational Background: Any graduate /BE / B.Tech / MCA / M.Sc / M.E / M.Tech /Master’s Degree /MBA

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking highly skilled candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams. Years of Experience: Candidates with 2+ years of hands-on experience. Must Have: Experience in building ML models in cloud environments (at least one of: Azure ML, AWS SageMaker, or Databricks). Knowledge of predictive/prescriptive analytics, especially the use of Log-Log, Log-Linear, and Bayesian Regression techniques, as well as Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and Artificial Neural Networks. Good knowledge of statistics, e.g., statistical tests and distributions. Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases. Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib). Advanced-level programming in SQL or Python/PySpark. Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight. Nice To Have: Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow). Good communication and presentation skills. Roles And Responsibilities: Develop and execute project and analysis plans under the guidance of the Project Manager. Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved. Drive and conduct analysis using advanced analytics tools and coach junior team members. Implement the necessary quality control measures to ensure deliverable integrity. Validate analysis outcomes and recommendations with all stakeholders, including the client team. Build storylines and make presentations to the client team and/or PwC project leadership team. Contribute to knowledge- and firm-building activities. Professional And Educational Background: Any graduate /BE / B.Tech / MCA / M.Sc / M.E / M.Tech /Master’s Degree /MBA
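The supervised-learning workflow this listing asks for would normally run through scikit-learn or a similar framework; to keep this sketch dependency-free, the same train-then-predict shape is shown with a hand-rolled nearest-centroid classifier on made-up 2-D data (the labels and points are purely illustrative):

```python
from statistics import mean

# Toy labelled training data: two well-separated classes in 2-D.
train = {
    "low_risk":  [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "high_risk": [(5.0, 5.5), (4.8, 5.1), (5.2, 4.9)],
}

# "Training" a nearest-centroid model is just averaging each class.
centroids = {
    label: (mean(p[0] for p in pts), mean(p[1] for p in pts))
    for label, pts in train.items()
}

def predict(point):
    # Assign the label of the closest class centroid (squared distance).
    def sq_dist(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

pred = predict((1.0, 1.0))
```

In scikit-learn the identical shape is `fit(X, y)` followed by `predict(X_new)`; only the estimator changes.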

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Company Description Quantanite is a customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We are an ambitious team of professionals spread across four continents, looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams, and are constantly looking for new colleagues to join us who share our values, passion and appreciation for diversity. Job Description About the Role: We are seeking a highly skilled Senior AI Engineer with deep expertise in Agentic frameworks, Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, MLOps/LLMOps, and end-to-end GenAI application development. In this role, you will design, develop, fine-tune, deploy, and optimize state-of-the-art AI solutions across diverse enterprise use cases including AI Copilots, Summarization, Enterprise Search, and Intelligent Tool Orchestration. Key Responsibilities: Develop and Fine-Tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques. Build RAG Pipelines: Implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases like FAISS, Pinecone, Weaviate, and Qdrant. Implement and Orchestrate Agents: Utilize frameworks like MCP, OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications. Deploy Models at Scale: Manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and quantization methods. 
Integrate APIs: Seamlessly integrate with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers. Ensure Security and Compliance: Implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using advanced observability tools. Optimize and Monitor: Lead LLMOps practices focusing on performance monitoring, cost optimization, and model evaluation. Work with AWS Services: Hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions. Contribute to Use Cases: Develop AI-driven solutions like AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems. Cross-functional Collaboration: Work closely with product, data, and DevOps teams to deliver scalable and secure AI products. Qualifications Required Skills and Experience: 3-5 years of experience in AI/ML roles, focusing on LLM agent development, data science workflows, and system deployment. Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models. Proficiency in designing scalable solutions using LangChain and vector databases. Deep knowledge of LLMs and foundational models (GPT-4, Claude, Mistral, LLaMA, Gemini). Strong expertise in Prompt Engineering, Chain-of-Thought reasoning, and Fine-Tuning methods. Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant). Hands-on proficiency in LangChain, LlamaIndex, Haystack, and DSPy frameworks. Model deployment skills using HuggingFace, vLLM, Ollama, and handling LoRA/QLoRA, PEFT, GGUF models. Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch. Strong coding ability in Python or similar programming languages. Experience with MLOps/LLMOps for monitoring, evaluation, and cost management. 
Familiarity with security standards: guardrails, PII protection, secure API interactions. Use Case Delivery Experience: Proven record of delivering AI Copilots, Summarization engines, or Enterprise GenAI applications. Additional Information Preferred Skills: Experience in BPO or IT Outsourcing environments. Knowledge of workforce management tools and CRM integrations. Hands-on experience with AI technologies and their applications in data analytics. Familiarity with Agile/Scrum methodologies. Soft Skills: Strong analytical and problem-solving capabilities. Excellent communication and stakeholder management skills. Ability to thrive in a fast-paced, dynamic environment.
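The RAG retrieval step this posting describes (embed chunks, store vectors, find the nearest chunks to a query) can be sketched minimally. This illustrative snippet uses plain NumPy cosine similarity in place of a real embedding model and vector store such as FAISS or Pinecone; the chunk texts and random vectors are stand-ins, not a real pipeline:

```python
import numpy as np

# Stand-in "embeddings": random unit vectors per document chunk.
# In production these come from an embedding model and live in a
# vector database (FAISS, Pinecone, Weaviate, Qdrant, ...).
rng = np.random.default_rng(0)
dim = 64
chunks = ["refund policy text", "shipping times text", "warranty terms text"]
vecs = rng.standard_normal((len(chunks), dim))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalise rows

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query vector (cosine)."""
    q_unit = query_vec / np.linalg.norm(query_vec)
    scores = vecs @ q_unit               # cosine similarity per chunk
    top = np.argsort(-scores)[:k]        # indices of best matches first
    return [chunks[i] for i in top]

# Simulate a query about shipping: its vector sits near chunk 1.
q = vecs[1] + 0.05 * rng.standard_normal(dim)
top_chunks = retrieve(q)                 # chunk 1 should rank first
```

In a full RAG pipeline the retrieved chunks would then be stuffed into the LLM prompt as context; the retrieval core shown here is the same regardless of which vector store backs it.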

Posted 1 month ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking highly skilled professionals with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams. Years of Experience: Candidates with 2+ years of hands-on experience Must Have Experience in building ML models in cloud environments (at least one of: Azure ML, AWS SageMaker or Databricks) Knowledge of predictive/prescriptive analytics, especially the usage of Log-Log, Log-Linear and Bayesian Regression techniques, including Machine Learning algorithms (supervised and unsupervised), deep learning algorithms and Artificial Neural Networks Good knowledge of statistics, e.g. statistical tests & distributions Experience in data analysis, e.g. data cleansing, standardization and data preparation for machine learning use cases Experience in machine learning frameworks and tools (e.g. scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib) Advanced-level programming in SQL or Python/PySpark Expertise with visualization tools, e.g. Tableau, Power BI, AWS QuickSight etc. Nice To Have Working knowledge of containerization (e.g. AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g. 
Airflow) Good communication and presentation skills Roles And Responsibilities Develop and execute project & analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and the data & use case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm-building activities Professional And Educational Background Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA

Posted 1 month ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Data Engineer Primary Skills: Python and PySpark (mandatory), AWS services and pipelines. Location: Hyderabad/ Pune/ Coimbatore Experience: 2 - 4 years of experience Job Summary: We are looking for a Lead Data Engineer who will be responsible for building AWS data pipelines as per requirements. Should have strong analytical skills, design capabilities and problem-solving skills. Based on stakeholders’ requirements, should be able to propose solutions to the customer for review, and discuss the pros/cons of different solution designs and optimization strategies. Responsibilities: Provide technical and development support to clients to build and maintain data pipelines. Develop data mapping documents listing business and transformational rules. Develop, unit test, deploy and maintain data pipelines. Design a storage layer for storing tabular/semi-structured/unstructured data. Design pipelines for batch/real-time processing of large data volumes. Analyze source specifications and build data mapping documents. Identify and document applicable non-functional code sets and reference data across insurance domains. Understand profiling results and validate data quality rules. Utilize data analysis tools to construct and manipulate datasets to support analyses. Collaborate with and support Quality Assurance (QA) in building functional scenarios and validating results. Requirements: 2+ years’ experience developing and maintaining modern ingestion pipelines using technologies like AWS pipelines, Lambda, Spark, Apache NiFi, etc. Basic understanding of the MLOps lifecycle (data prep -> model training -> model deployment -> model inference -> model re-training). Should be able to design data pipelines for batch/real-time processing using Lambda, Step Functions, API Gateway, SNS and S3. Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks. 
Requirements Gathering - Active involvement during requirements discussions with project sponsors, defining the project scope and delivery timelines, Design & Development. Strong in Spark Scala & Python pipelines (ETL & streaming). Strong experience in metadata management tools like AWS Glue. Strong experience in coding with languages like Java and Python. Good to have: AWS Developer certification. Good to have: Postman API and Apache Airflow or similar scheduler experience. Working with cross-functional teams to meet strategic goals. Experience in high-volume data environments. Critical thinking and excellent verbal and written communication skills. Strong problem-solving and analytical abilities; should be able to work and deliver individually. Good knowledge of data warehousing concepts. Desired Skill Set: Lambda, Step Functions, API Gateway, SNS, S3 (unstructured data), DynamoDB (semi-structured data), Aurora PostgreSQL (tabular data), AWS SageMaker, AWS CodeCommit/GitLab, AWS CodeBuild, AWS CodePipeline, AWS ECR. About the Company: ValueMomentum is amongst the fastest-growing insurance-focused IT services providers in North America. Leading insurers trust ValueMomentum with their core, digital and data transformation initiatives. Having grown consistently every year by 24%, we have now grown to over 4000 employees. ValueMomentum is committed to integrity and to ensuring that each team and employee is successful. We foster an open work culture where employees' opinions are valued. We believe in teamwork and cultivate a sense of fun, fellowship, and pride among our employees. Benefits: We at ValueMomentum offer you the opportunity to grow by working alongside the experts. Some of the benefits you can avail are: Competitive compensation package comparable to the best in the industry. Career Advancement: Individual career development, coaching and mentoring programs for professional and leadership skill development. Comprehensive training and certification programs. 
Performance Management : Goal Setting, continuous feedback and year-end appraisal. Reward & recognition for the extraordinary performers. Benefits : Comprehensive health benefits, wellness and fitness programs. Paid time off and holidays. Culture : A highly transparent organization with an open-door policy and a vibrant culture
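The MLOps lifecycle this posting names (data prep -> model training -> model deployment -> model inference -> model re-training) can be sketched as a chain of plain functions. This is an illustrative toy using a NumPy least-squares fit as a stand-in model; the function names are made up for the example, not any particular framework's API:

```python
import numpy as np

def prepare(raw):
    """Data prep: standardise raw features to zero mean, unit variance."""
    x = np.array(raw, dtype=float)
    return (x - x.mean()) / x.std()

def train(x, y):
    """Model training: fit y = a*x + b by least squares."""
    a, b = np.polyfit(x, y, deg=1)       # highest-degree coefficient first
    return {"a": a, "b": b}

def infer(model, x):
    """Model inference: apply the fitted linear model."""
    return model["a"] * np.asarray(x, dtype=float) + model["b"]

raw_x = [1, 2, 3, 4, 5]
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # exactly linear in raw_x

x = prepare(raw_x)          # data prep
model = train(x, y)         # training ("deployment" here is just holding the dict)
preds = infer(model, x)     # inference
# Re-training is simply calling train() again on fresh data.
```

In a real AWS pipeline each stage would map to a managed service (e.g. Glue/Lambda for prep, SageMaker for training and hosting), but the stage boundaries are the same.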

Posted 1 month ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Requisition ID # 25WD88160 Position Overview Autodesk is seeking a passionate and experienced Software Engineering Manager to join our Analytics Data Group. Our mission is to build a world-class, secure data platform that empowers thousands across Autodesk to make data-driven decisions—fueling product design, machine learning, experimentation, customer insights, and more. As a hands-on Software Engineering Manager, you will lead a talented team of software and ML engineers in designing, developing, and maintaining the next-generation Autodesk machine learning platform. In this role, you will drive the technical strategy, execution, and delivery of scalable AI/ML solutions. You’ll collaborate closely with Product Managers to shape and implement the roadmap, ensure alignment with organizational goals, foster cross-functional collaboration, and champion platform adoption across teams. We’re looking for a leader with strong technical expertise in AI/ML technologies and a proven track record of delivering complex, large-scale systems on time and within budget. You’ve built and led high-performing engineering teams, delivered production-grade platforms for large-scale AI/ML workloads, and thrive in an agile, fast-paced environment. Above all, you have the ability to inspire teams to achieve excellence. 
Responsibilities Lead and mentor a team of engineers in the development and deployment of AI/ML solutions Collaborate with teams including product management, data science, data and cloud infrastructure to define and execute the AI/ML platform roadmap Provide sound technical guidance and drive crucial technology decisions Stay updated with the latest advancements in AI/ML technologies Provide guidance on the design and architecture of scalable, reliable, and efficient AI/ML systems Ensure adherence to best practices in software development, code quality, and security standards Manage project timelines and resource allocation to drive deliverables Foster a culture of innovation, agility, collaboration, and continuous improvement within the engineering team Actively participate in the hiring process to attract and onboard top-tier engineering talent, ensuring the team possesses the necessary skills and expertise to execute on the AI/ML platform vision Minimum Qualifications BS/MS in Computer Science, Engineering, or a related field. 
(MS preferred) 12+ years of experience in software engineering, with at least 3 years in a management role Experience leading and mentoring software and ML engineering teams Experience with identifying potential project risks and developing mitigation strategies Background in AI/ML with experience in deep learning, statistical modelling, and neural networks Ability to design and build scalable, high-performance systems, with an understanding of cloud services (AWS, Azure) and containerization technologies (Docker, Kubernetes) Experience with agile software development methodologies and management practices Expertise in developing/managing CI/CD pipelines, automation tools, and practices for machine learning lifecycle management Excellent verbal and written communication skills Preferred Qualifications Familiarity with cloud platforms such as AWS, specifically AWS SageMaker or Azure Machine Learning, or Google Cloud Platform Prior experience in building AI/ML platforms Understanding of MLOps principles and practices for effectively managing and automating machine learning workflows, including model versioning, monitoring, and deployment Learn More About Autodesk Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you’re an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us! Salary transparency Salary is one part of Autodesk’s competitive compensation package. 
Offers are based on the candidate’s experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package. Diversity & Belonging We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML/AI pipelines Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL database administration Data modelling RDS & DMS Serverless architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift and Athena experience. Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. 3+ years of data warehouse experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing. Extensive experience working with cloud services (AWS, MS Azure, GCS, etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3), etc. Experience/exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML/AI pipelines Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL database administration Data modelling RDS & DMS Serverless architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift and Athena experience. Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. 3+ years of data warehouse experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing. Extensive experience working with cloud services (AWS, MS Azure, GCS, etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3), etc. Experience/exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML/AI pipelines Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL database administration Data modelling RDS & DMS Serverless architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift and Athena experience. Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. 3+ years of data warehouse experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing. Extensive experience working with cloud services (AWS, MS Azure, GCS, etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3), etc. Experience/exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML/AI pipelines Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL database administration Data modelling RDS & DMS Serverless architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift and Athena experience. Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. 3+ years of data warehouse experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing. Extensive experience working with cloud services (AWS, MS Azure, GCS, etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3), etc. Experience/exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical 
Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. 
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. 
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

Job Title: AI/ML Engineer Location: 100% Remote Job Type: Full-Time About the Role: We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively. Key Responsibilities: Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks. Collaborate with data scientists to transform prototypes into scalable, production-ready models. Deploy, monitor, and maintain ML pipelines in production environments. Perform data preprocessing, feature engineering, and selection from structured and unstructured data. Implement model performance evaluation metrics and improve accuracy through iterative tuning. Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage model lifecycle. Maintain clear documentation and collaborate cross-functionally across teams. Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 2–5 years of experience in ML model development and deployment. Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc. Strong understanding of machine learning algorithms, statistical modeling, and data analysis. Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow. Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models. Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
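The evaluate-and-tune loop this posting describes (train a model, score it with a held-out split, iterate) can be sketched with scikit-learn, one of the libraries the role lists. The toy data and names here are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy binary classification data: two well-separated 2-D clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)),   # class 0
               rng.normal(2, 0.5, (50, 2))])   # class 1
y = np.array([0] * 50 + [1] * 50)

# Held-out evaluation split, then fit and score.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
# The clusters are well separated, so accuracy should be near 1.0;
# "iterative tuning" means adjusting hyperparameters and re-running this loop.
```

On real data the same loop would use cross-validation and a richer metric set (precision/recall, AUC) rather than a single accuracy number.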

Posted 1 month ago


4.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role
Grade Level (for internal use): 10

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You'll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What's In It For You
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full lifecycle delivery of complex data projects, including:
  - Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB)
  - Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC
  - Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates
  - Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define and continuously improve platform standards, coding guidelines, and operational runbooks
- Conduct code reviews, pair programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements
- 4–8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing; high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluate and adopt emerging tools, languages, and frameworks

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317427
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
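The orchestration work this listing describes (Airflow DAGs, pipeline stages with dependencies) reduces to executing tasks in dependency order. A framework-free sketch of that core idea using the standard library's topological sorter; Airflow itself adds scheduling, retries, and state tracking, and all task names here are illustrative:

```python
# Dependency-ordered pipeline execution, the idea behind Airflow-style
# DAG orchestration. Task names are hypothetical.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each key depends on the tasks in its value set:
# extract -> transform -> (score, load)
dag = {
    "transform": {"extract"},
    "score": {"transform"},
    "load": {"transform"},
}

def run_pipeline(dag, tasks):
    """Run task callables in an order that respects the dependency graph."""
    order = list(TopologicalSorter(dag).static_order())
    for name in order:
        tasks[name]()
    return order

log = []
tasks = {n: (lambda n=n: log.append(n)) for n in ("extract", "transform", "score", "load")}
order = run_pipeline(dag, tasks)
```

`static_order()` raises `CycleError` on circular dependencies, which is exactly the validation an orchestrator performs before scheduling a DAG.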

Posted 1 month ago


2.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role
Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's In It For You
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end-to-end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance tuning, and automation skills
- Openness to evaluate and adopt emerging tools and languages as needed

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2–6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317425
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
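This listing pairs extraction work with comprehensive testing (pytest, coverage enforcement). As an illustrative sketch only, here is a small parsing helper of the kind an extraction library might contain, together with the unit tests that would accompany it; the function, field names, and sample lines are hypothetical. Under pytest, functions named `test_*` with bare `assert` statements are collected and run automatically:

```python
# Hypothetical extraction helper plus pytest-style unit tests.
import re

def extract_record(line):
    """Parse 'KEY=value' pairs from one raw line into a dict.

    Returns an empty dict for lines with no parseable pairs, so
    downstream pipeline stages never have to handle None.
    """
    pairs = re.findall(r"(\w+)=([^;]+)", line)
    return {key.lower(): value.strip() for key, value in pairs}

def test_extracts_pairs():
    rec = extract_record("TICKER=SPGI; PRICE=512.30")
    assert rec == {"ticker": "SPGI", "price": "512.30"}

def test_malformed_line_yields_empty_dict():
    assert extract_record("not a record") == {}
```

Coverage tools (e.g., `pytest --cov`) then report which branches of `extract_record` the suite actually exercised.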

Posted 1 month ago


4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About The Role
Grade Level (for internal use): 10

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You'll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What's In It For You
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full lifecycle delivery of complex data projects, including:
  - Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB)
  - Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC
  - Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates
  - Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define and continuously improve platform standards, coding guidelines, and operational runbooks
- Conduct code reviews, pair programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements
- 4–8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing; high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluate and adopt emerging tools, languages, and frameworks

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317427
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
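The Grade 10 listings emphasize monitoring and alerting (CloudWatch, Prometheus/Grafana) with automated remediation. Independent of the tooling, the core pattern is evaluating current metric values against threshold rules; a minimal, tool-agnostic sketch in which the metric names and limits are entirely hypothetical:

```python
# Threshold-based alert evaluation, the pattern behind CloudWatch
# alarms and Prometheus alerting rules. All names/limits are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    metric: str
    threshold: float
    comparison: str  # "gt": alert when value > threshold; "lt": when value < threshold

def evaluate(rules, metrics):
    """Return the names of metrics whose current value breaches a rule."""
    breached = []
    for rule in rules:
        value = metrics.get(rule.metric)
        if value is None:
            continue  # missing metric; a real system might alert on staleness instead
        if (rule.comparison == "gt" and value > rule.threshold) or \
           (rule.comparison == "lt" and value < rule.threshold):
            breached.append(rule.metric)
    return breached

rules = [
    Rule("error_rate", 0.05, "gt"),
    Rule("p95_latency_ms", 800, "gt"),
    Rule("throughput_rps", 10, "lt"),
]
alerts = evaluate(rules, {"error_rate": 0.12, "p95_latency_ms": 240, "throughput_rps": 42})
```

In a production setup, each breached rule would trigger a notification or an automated remediation action rather than just being collected in a list.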

Posted 1 month ago


2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About The Role
Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's In It For You
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end-to-end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance tuning, and automation skills
- Openness to evaluate and adopt emerging tools and languages as needed

Good To Have
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2–6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317425
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
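Several of these listings mention deploying ML models behind endpoints (SageMaker, ECS, Lambda). Production calls to such endpoints are usually wrapped in retry logic with exponential backoff, since transient network failures are routine. A stdlib-only sketch of that wrapper; the flaky "endpoint" below is a stand-in for a real network call, not an actual AWS API:

```python
# Generic retry-with-exponential-backoff wrapper, the usual pattern
# around remote inference calls. The fake endpoint is illustrative.
import time

def call_with_retries(fn, attempts=3, base_delay=0.01, sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2**i, then retry.

    Re-raises the last exception once all attempts are exhausted.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))

# Stand-in endpoint that fails twice before succeeding.
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"prediction": 0.87}

# sleep is injected so the retry policy is testable without real delays.
result = call_with_retries(flaky_endpoint, attempts=4, sleep=lambda _: None)
```

Real deployments typically add jitter to the delay and retry only on error types known to be transient.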

Posted 1 month ago
