
1558 Sagemaker Jobs - Page 24

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

4 Lacs

Indore

On-site

About the Role:
We are looking for a highly skilled and forward-thinking AI/ML Engineer with 3–4 years of practical experience in building and deploying AI-powered solutions for industrial automation, computer vision, and LLM-based applications. The ideal candidate should have experience with the latest AI tools and frameworks, including LangChain, LangGraph, Vision Transformers, and MLOps on AWS (SageMaker), as well as expertise in building multi-agent chat applications with ReAct agents and vector-based RAG (Retrieval-Augmented Generation) architectures.

Responsibilities:
· Design, train, and deploy AI/ML models for industrial automation, including computer vision systems using OpenCV and deep learning frameworks.
· Develop multi-agent chat applications integrating LLMs, ReAct agents, and contextual memory.
· Implement Vision Transformers (ViTs) for advanced visual understanding tasks.
· Use LangChain, LangGraph, and RAG techniques to create intelligent conversational systems with vector embeddings and document retrieval.
· Fine-tune pre-trained LLMs for custom enterprise use cases.
· Collaborate with frontend teams to build responsive, intelligent UIs using React + AI backends.
· Deploy AI solutions on AWS, leveraging SageMaker, Lambda, S3, and related MLOps tools for model lifecycle management.
· Ensure high performance, reliability, and scalability of deployed AI systems.

Required Skills:
· 3–4 years of hands-on experience in AI/ML engineering, preferably on industrial or automation-focused projects.
· Proficiency in Python and frameworks such as PyTorch, TensorFlow, and Scikit-learn.
· Strong understanding of LLMs (GPT, Claude, LLaMA, etc.), prompt engineering, and fine-tuning techniques.
· Experience with LangChain, LangGraph, and RAG-based architectures using vector databases such as FAISS, Pinecone, or Weaviate.
· Expertise in Vision Transformers, YOLO, Detectron2, and computer vision techniques.
· Familiarity with multi-agent architectures, ReAct agents, and building intelligent UIs with frontend-backend synergy.
· Working knowledge of AWS services (SageMaker, Lambda, EC2, S3) and MLOps workflows (CI/CD for ML).
· Experience deploying and maintaining models in production environments.

Qualifications:
· Experience with edge AI, NVIDIA Jetson, or industrial IoT integration.
· Prior involvement in developing AI-powered chatbots or assistants with memory and tool integration.
· Exposure to containerization (Docker) and model versioning tools such as MLflow or DVC.
· Contributions to open-source AI projects or published research in AI/ML.

Job Type: Full-time
Pay: From ₹412,334.30 per year
Benefits: Health insurance, paid sick time, Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Indore, Madhya Pradesh: reliably commute or plan to relocate before starting work (Required)
Work Location: In person
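The vector-based retrieval step at the heart of the RAG architecture this role describes can be sketched in miniature with plain NumPy. The toy 4-dimensional vectors below stand in for real embeddings (which FAISS, Pinecone, or Weaviate would index at scale), and the `retrieve` function name is illustrative, not from any specific library:

```python
import numpy as np

# Toy document store: in a real RAG system these would be embeddings
# produced by a model and indexed in FAISS, Pinecone, or Weaviate.
docs = ["reset the PLC", "calibrate the camera", "restart the conveyor"]
doc_vecs = np.array([
    [1.0, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.1, 0.0],
    [0.0, 0.0, 1.0, 0.1],
])

def retrieve(query_vec, doc_vecs, docs, k=1):
    """Return the k documents whose embeddings are most cosine-similar
    to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    top = np.argsort(scores)[::-1][:k]  # indices of the best matches
    return [docs[i] for i in top]

# A query embedding pointing mostly along the second document's direction.
query = np.array([0.1, 0.9, 0.1, 0.0])
print(retrieve(query, doc_vecs, docs))  # ['calibrate the camera']
```

The retrieved snippets would then be stuffed into the LLM prompt, which is the "augmented generation" half of RAG.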

Posted 3 weeks ago

Apply

3.0 - 5.0 years

8 - 9 Lacs

Calcutta

On-site

3 - 5 Years | 4 Openings | Kolkata, Pune

Role Description
Role Proficiency: Independently interprets data and analyses results using statistical techniques.

Outcomes:
· Independently mine and acquire data from primary and secondary sources and reorganize it into a format that can be easily read by either a machine or a person, generating insights and helping clients make better decisions.
· Develop reports and analyses that effectively communicate trends, patterns, and predictions using relevant data.
· Use historical data sets and planned changes to business models to forecast business trends.
· Work alongside teams within the business or the management team to establish business needs.
· Create visualizations, including dashboards, flowcharts, and graphs, to relay business concepts visually to colleagues and other relevant stakeholders.
· Set FAST goals.

Measures of Outcomes:
· Schedule adherence to tasks
· Quality: errors in data interpretation and modelling
· Number of business processes changed due to vital analysis
· Number of insights generated for business decisions
· Number of stakeholder appreciations/escalations
· Number of customer appreciations
· Number of mandatory trainings completed

Outputs Expected:
· Data mining: acquire data from various sources.
· Reorganizing/filtering data: consider only relevant data from the mined data and convert it into a consistent, analysable format.
· Analysis: use statistical methods to analyse data and generate useful results.
· Create data models: use data to create models that depict trends in the customer base and the consumer population as a whole.
· Create reports: create reports depicting the trends and behaviours from the analysed data.
· Document: create documentation for your own work and peer-review the documentation of others.
· Manage knowledge: consume and contribute to project-related documents, SharePoint libraries, and client universities.
· Status reporting: report the status of assigned tasks and comply with project-related reporting standards and processes.
· Code: create efficient and reusable code; follow coding best practices.
· Code versioning: organize and manage changes and revisions to code using a version control tool such as Git or Bitbucket.
· Quality: provide quality assurance of imported data, working with a quality assurance analyst if necessary.
· Performance management: set FAST goals and seek feedback from your supervisor.

Skill Examples:
· Analytical skills: ability to work with large amounts of data (facts, figures, and number crunching)
· Communication skills: ability to present findings or translate the data into an understandable document
· Critical thinking: ability to look at the numbers, trends, and data and come up with new conclusions based on the findings
· Attention to detail: vigilance in analysis to reach accurate conclusions
· Quantitative skills: knowledge of statistical methods and data analysis software
· Presentation skills: reports and oral presentations to senior colleagues
· Mathematical skills to estimate numerical data
· Ability to work in a team environment; proactively ask for and offer help

Knowledge Examples:
· Proficiency in mathematics and calculations
· Spreadsheet tools such as Microsoft Excel or Google Sheets
· Advanced knowledge of Tableau or Power BI
· SQL, Python, DBMS
· Operating systems and software platforms
· Knowledge of the customer domain and the sub-domain where the problem is solved
· Code version control, e.g. Git or Bitbucket

Additional Comments: Statistical concepts, SQL, machine learning (regression and classification), deep learning (ANN, RNN, CNN), advanced NLP, computer vision, Gen AI/LLM (prompt engineering, RAG, fine-tuning), AWS SageMaker/Azure ML/Google Vertex AI, basic implementation experience with Docker, Kubernetes, Kubeflow, MLOps, Python (NumPy, pandas, scikit-learn, Streamlit, Matplotlib, Seaborn)

Skills: Data Science, Python, Deep Learning

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
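Of the modelling techniques this role lists, regression is the simplest to show end to end. A minimal ordinary-least-squares fit with NumPy, on synthetic data invented for the example, looks like:

```python
import numpy as np

# Synthetic data lying exactly on y = 2x + 1, purely illustrative.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Ordinary least squares: build the design matrix [x, 1] and solve
# for slope and intercept with np.linalg.lstsq.
X = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

print(round(slope, 6), round(intercept, 6))  # 2.0 1.0
```

Libraries like scikit-learn wrap exactly this kind of solve behind `LinearRegression().fit(X, y)`, with the same design-matrix idea underneath.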

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Role Description
Role Proficiency: Independently develop data-driven solutions to difficult business challenges, using analytical, statistical, and programming skills to collect, analyze, and interpret large data sets under supervision.

Outcomes:
· Work with stakeholders throughout the organization to identify opportunities for leveraging customer data to build models that generate business insights.
· Create new experimental frameworks or build automated tools to collect data.
· Correlate similar data sets to find actionable results.
· Build predictive models and machine learning algorithms to analyse large amounts of information and discover trends and patterns.
· Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, business strategies, etc.
· Develop processes and tools to monitor and analyse model performance and data accuracy.
· Develop data visualizations and illustrations for a given business problem.
· Use predictive modelling to increase and optimize customer experiences and other business outcomes.
· Coordinate with different functional teams to implement models and monitor outcomes.
· Set FAST goals and provide feedback on the FAST goals of reportees.

Measures of Outcomes:
· Number of business processes changed due to vital analysis
· Number of business intelligence dashboards developed
· Number of productivity standards defined for the project
· Number of prediction and modelling models used
· Number of new approaches applied to understand business trends
· Quality of data visualization, helping non-technical stakeholders comprehend easily
· Number of mandatory trainings completed

Outputs Expected:
· Statistical techniques: apply statistical techniques such as regression, properties of distributions, and statistical tests to analyse data.
· Machine learning techniques: apply machine learning techniques such as clustering, decision tree learning, and artificial neural networks to streamline data analysis.
· Creating advanced algorithms: create advanced algorithms and statistics using regression, simulation, scenario analysis, modelling, etc.
· Data visualization: visualize and present data for stakeholders using Periscope, Business Objects, D3, ggplot, etc.
· Management and strategy: oversee the activities of analyst personnel and ensure the efficient execution of their duties.
· Critical business insights: mine the business’s databases in search of critical business insights and communicate findings to the relevant departments.
· Code: create efficient and reusable code for the improvement, manipulation, and analysis of data.
· Version control: manage the project codebase through version control tools, e.g. Git or Bitbucket.
· Predictive analytics: determine likely outcomes by detecting tendencies in descriptive and diagnostic analysis.
· Prescriptive analytics: identify what business action to take.
· Create reports: create reports depicting the trends and behaviours from the analysed data; train end users on new reports and dashboards.
· Document: create documentation for your own work and peer-review the documentation of others.
· Manage knowledge: consume and contribute to project-related documents, SharePoint libraries, and client universities.
· Status reporting: report the status of assigned tasks and comply with project-related reporting standards and processes.

Skill Examples:
· Excellent pattern recognition and predictive modelling skills
· Extensive background in data mining and statistical analysis
· Expertise in machine learning techniques and creating algorithms
· Analytical skills: ability to work with large amounts of data (facts, figures, and number crunching)
· Communication skills: communicate effectively with a diverse population at various organizational levels with the right level of detail
· Critical thinking: look at numbers, trends, and data and come to new conclusions based on the findings
· Strong meeting facilitation and presentation skills
· Attention to detail: vigilance in analysis to reach correct conclusions
· Mathematical skills to estimate numerical data
· Ability to work in a team environment with strong interpersonal skills for collaboration; proactively ask for and offer help

Knowledge Examples:
· Programming languages: Java, Python, R
· Web services: Redshift, S3, Spark, DigitalOcean, etc.
· Statistical and data mining techniques: GLM/regression, random forest, boosting trees, text mining, social network analysis, etc.
· Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
· Computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
· Database languages such as SQL and NoSQL
· Analytical tools and languages such as SAS and Mahout
· Practical experience with ETL, data processing, etc.
· Proficiency in MATLAB
· Data visualization software such as Tableau or Qlik
· Proficiency in mathematics and calculations
· Spreadsheet tools such as Microsoft Excel or Google Sheets
· DBMS, operating systems, and software platforms
· Knowledge of the customer domain and the sub-domain where the problem is solved
· Proficiency in at least one version control tool, such as Git or Bitbucket
· Experience working with a project management tool such as Jira

Additional Comments: Must have: statistical concepts, SQL, machine learning (regression and classification), deep learning (ANN, RNN, CNN), advanced NLP, computer vision, Gen AI/LLM (prompt engineering, RAG, fine-tuning), AWS SageMaker/Azure ML/Google Vertex AI, basic implementation experience with Docker, Kubernetes, Kubeflow, MLOps, Python (NumPy, pandas, scikit-learn, Streamlit, Matplotlib, Seaborn)

Skills: Data Management, Data Science, Python
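The "create reports depicting trends" output above boils down to aggregating a time-keyed table. A minimal pandas sketch, on sample data invented for the example, might look like:

```python
import pandas as pd

# Invented sample data: monthly sales by region, purely illustrative.
df = pd.DataFrame({
    "month":  ["Jan", "Jan", "Feb", "Feb", "Mar", "Mar"],
    "region": ["East", "West", "East", "West", "East", "West"],
    "sales":  [100, 80, 120, 90, 150, 85],
})

# Pivot into a region-by-month table, the shape a trend report or
# dashboard widget would consume.
report = df.pivot_table(index="region", columns="month", values="sales")

# Total sales per region, a typical headline figure in such a report.
totals = df.groupby("region")["sales"].sum()
print(totals["East"])  # 370
```

From `report`, plotting a line per region in Matplotlib or Tableau gives the trend visual the role describes.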

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

AWS Data Engineer - Senior
We are seeking a highly skilled and motivated hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems.

Technical Skills (must have):
· Strong experience with AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark
· Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL
· Expertise in data design, STTM (source-to-target mapping), understanding of data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live
· Experience with version control systems such as SVN and Git
· Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion across various structured and unstructured data sources
· Strong experience building ETL pipelines with AWS Glue, managing crawlers, and working with the Glue Data Catalog
· Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance
· Enable data consumption from reporting and analytics business applications using AWS services (e.g. QuickSight, SageMaker, JDBC/ODBC connectivity)

Behavioural skills:
· Willing to work 5 days a week from the ODC/client location (depending on the project, this can be hybrid at 3 days a week)
· Ability to lead developers and engage with client stakeholders to drive technical decisions
· Ability to produce technical designs and POCs: help build and analyse the logical data model, required entities, relationships, data constraints, and dependencies, focused on enabling reporting and analytics business use cases
· Able to work in an Agile environment
· Strong communication skills

Good to have:
· Exposure to financial services, wealth and asset management
· Exposure to data science and full-stack technologies
· GenAI experience will be an added advantage

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
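The extract-transform-load pattern that Glue jobs automate at scale can be shown in miniature with only the Python standard library. The column names and the cleaning rule below are invented for the example; a real pipeline would read from S3 and write to Redshift:

```python
import csv
import io

# "Extract": a raw CSV source, inlined here instead of read from S3.
raw = """id,amount,currency
1, 100 ,INR
2,  250,INR
3,,INR
"""

def transform(rows):
    """Drop rows with a missing amount and normalize whitespace --
    the kind of cleaning an ETL job applies before loading."""
    for row in rows:
        amount = row["amount"].strip()
        if amount:
            yield {"id": int(row["id"]), "amount": int(amount),
                   "currency": row["currency"].strip()}

# "Load": here we just materialize to a list; a Glue job would write
# the cleaned records to Redshift or back to S3.
loaded = list(transform(csv.DictReader(io.StringIO(raw))))
print(len(loaded))  # 2
```

Glue expresses the same three stages with DynamicFrames and job bookmarks, but the shape of the work is identical.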

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for a Staff Engineer to lead the design, development, and optimization of AI-powered platforms with a strong focus on Python, API development, and AWS AI services. You will be instrumental in shaping system architecture, mentoring engineers, and driving end-to-end solutions that leverage NLP, cloud services, and modern frontend frameworks. As a Staff Engineer, you’ll be a key technical leader, partnering closely with product, design, and engineering teams to build scalable and intelligent systems.

Key Responsibilities:
· Architect and build scalable, high-performance backend systems using Python
· Design robust RESTful APIs and guide the engineering team on best practices for performance and security
· Leverage AWS AI/ML services (e.g., Comprehend, Lex, SageMaker) to build intelligent features and capabilities
· Provide technical leadership on NLP solutions using libraries such as spaCy, transformers, or NLTK
· Ensure comprehensive unit testing across APIs and databases; advocate for clean, testable code
· Guide the development of full-stack features involving JavaScript, React, and Next.js
· Own and evolve system architecture, ensuring modularity, scalability, and resilience
· Promote strong engineering practices with Git, Bitbucket, and CI/CD tooling
· Collaborate cross-functionally to drive technical decisions aligned with product goals
· Mentor engineers across levels and foster a culture of technical excellence

Technical Requirements:
· 8+ years of hands-on software development experience, primarily in Python
· Proven expertise in API development, system design, and performance tuning
· Strong background in AWS, particularly AI/ML and NLP services
· Experience building intelligent features using NLP frameworks
· Proficiency in front-end technologies: JavaScript, React, Next.js (preferred)
· Solid understanding of RDBMS (PostgreSQL, MySQL, or similar)
· Expertise in version control systems and collaborative workflows
· Track record of technical leadership, mentoring, and architectural ownership

Preferred Qualifications:
· Experience with microservices, event-driven architectures, or serverless systems
· Familiarity with Docker, Kubernetes, and infrastructure-as-code tools
· Prior experience leading cross-functional engineering initiatives
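The "intelligent features using NLP frameworks" this role mentions often start as simple information extraction. A toy sketch using only regexes, standing in for what spaCy or a transformer model would do properly (the ticket-ID format and function name are invented for the example):

```python
import re

# Toy "NLP feature": pull email addresses and ticket IDs out of free
# text. Real systems would use spaCy's entity recognizer or a
# fine-tuned transformer; these patterns are illustrative only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
TICKET = re.compile(r"\bTKT-\d+\b")

def extract(text):
    """Return the emails and ticket IDs found in `text`."""
    return {"emails": EMAIL.findall(text),
            "tickets": TICKET.findall(text)}

out = extract("Escalate TKT-4521 to ops@example.com today.")
print(out)  # {'emails': ['ops@example.com'], 'tickets': ['TKT-4521']}
```

In a production API this function would sit behind one of the REST endpoints the role describes, with the regex layer swapped for a model once accuracy demands it.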

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: As an IoT Engineer with Python expertise, you will develop data-driven applications on AWS IoT for the client, responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Collaborate with data scientists and analysts to implement data processing pipelines
4. Participate in architecture discussions and contribute to technical decision-making
5. Ensure the scalability, reliability, and performance of Python applications on AWS
6. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. At least 3 years of experience in Python programming with AWS IoT Core integration
2. Exposure to database technologies (SQL and NoSQL) and API development
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
4. Exposure to Test-Driven Development (TDD)
5. Practices DevOps in software solutions and is well-versed in Agile methodologies
6. AWS certification is a plus
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale

Additional Information:
1. The candidate should have a minimum of 7 years of experience in Python programming
2. This position is based at our Hyderabad office
3. 15 years of full-time education is required (bachelor’s degree in Computer Science, Software Engineering, or a related field)

Posted 3 weeks ago

Apply

0 years

0 Lacs

Madurai, Tamil Nadu, India

On-site

Role: AI/ML Engineer
Location: Madurai/Chennai
Language: Python
Databases: SQL

Core Libraries:
· Time series & forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
· SOTA ML: ML models, boosting and ensemble models, etc.
· Explainability: SHAP/LIME

Required skills:
· Deep learning: PyTorch, PyTorch Forecasting
· Data processing: pandas, NumPy, Polars (optional), PySpark
· Hyperparameter tuning: Optuna, Amazon SageMaker Automatic Model Tuning
· Deployment & MLOps: batch and real-time with API endpoints, MLflow
· Serving: TorchServe, SageMaker endpoints/batch
· Containerization: Docker
· Orchestration & pipelines: AWS Step Functions, AWS SageMaker Pipelines
· AWS services: SageMaker (training, inference, tuning), S3 (data storage), CloudWatch (monitoring), Lambda (trigger-based inference), ECR, ECS or Fargate (container hosting)
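The baseline every forecasting model in the stack above (pmdarima, Prophet, NeuralProphet) must beat is a simple moving average. A NumPy sketch with invented demand figures, purely for illustration:

```python
import numpy as np

# Invented monthly demand history, purely illustrative.
history = np.array([100.0, 104.0, 98.0, 102.0, 110.0, 106.0])

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window`
    observations -- the naive baseline ARIMA/Prophet must outperform."""
    return float(series[-window:].mean())

print(moving_average_forecast(history))  # 106.0
```

In practice you would backtest this baseline alongside the candidate models and only ship the heavier model if it wins on held-out error, which is what the tuning and MLOps tooling listed above exists to automate.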

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS, responsible for creating scalable data pipelines and algorithms to process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming
2. This position is based at our Hyderabad office
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master’s degree preferred)

Posted 3 weeks ago

Apply

50.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About The Opportunity
Job Type: Permanent
Application Deadline: 31 July 2025
Title: Senior Analyst - Data Science
Department: Enterprise Data & Analytics
Location: Gurgaon
Reports To: Gaurav Shekhar
Level: Data Scientist 4

We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our team and feel like you’re part of something bigger.

About Your Team
Join the Enterprise Data & Analytics team, collaborating across Fidelity’s global functions to empower the business with data-driven insights that unlock business opportunities, enhance client experiences, and drive strategic decision-making.

About Your Role
As a key contributor within the Enterprise Data & Analytics team, you will lead the development of machine learning and data science solutions for Fidelity Canada. This role is designed to turn advanced analytics into real-world impact: driving growth, enhancing client experiences, and informing high-stakes decisions. You’ll design, build, and deploy ML models on cloud and on-prem platforms, leveraging tools such as AWS SageMaker, Snowflake, Adobe, and Salesforce. Collaborating closely with business stakeholders, data engineers, and technology teams, you’ll translate complex challenges into scalable AI solutions. You’ll also champion the adoption of cloud-based analytics, contribute to MLOps best practices, and support the team through mentorship and knowledge sharing. This is a high-impact role for a hands-on problem solver who thrives on ownership, innovation, and seeing their work directly influence strategic outcomes.

About You
You have 4-7 years of experience in the data science domain, with a strong track record of delivering advanced machine learning solutions for business. You’re skilled in developing models for classification, forecasting, and recommender systems, and hands-on with frameworks like Scikit-learn, TensorFlow, or PyTorch. You bring deep expertise in developing and deploying models on AWS SageMaker, strong business problem-solving abilities, and familiarity with emerging GenAI trends. A background in engineering, mathematics, or economics from a Tier 1 institution is preferred.

Feel rewarded
For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work, finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we’re committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions. Job ID R0151765 Date posted 07/07/2025 Location Bengaluru, Karnataka I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’sPrivacy Noticeand Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge. Job Description The Future Begins Here At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the need of patients, our people, and the planet. Bengaluru, the city, which is India’s epicenter of Innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement. At Takeda’s ICC we Unite in Diversity Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and abilities they bring to our company. We are continuously improving our collaborators journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team. 
The Opportunity

As a Principal Data Engineer, you will provide leadership and technical expertise for the analysis, definition, design and delivery of large structured and unstructured data across different domains. You will prototype, maintain and create datasets and data architectures.

Responsibilities

· Manages and influences the technical analysis, design, development, maintenance and configuration of complex, non-routine data, and leads the analysis, definition and design of data and data architectures, including within R&D.
· Uses specialized, in-depth knowledge of advanced data stores and analysis technologies; consults with data specialists and researchers to define datasets and analyze them within team priorities to support research and the discovery of signals for targets.
· Contributes to and takes responsibility for creating future-state roadmaps for complex data architectures, data exploration, data analysis and data modelling within Biology, Omics, Chemistry, Competitive Intelligence, Statistics and other relevant domains.
· Makes recommendations for data architectures, data-analysis methodologies and technology, drawing on a deep understanding of data industry trends, analysis possibilities, data roadmaps and strategic data plans.
· Guides decisions within projects and with other IT groups, using persuasion and negotiation skills to reach agreement on approach and implementation.
· Oversees the impact of medical and biological data/dataset requests on research support and leads data investigations.
· Leads a team of data engineers supporting Life Science Research data initiatives, choosing the appropriate technologies and developing advanced architectures for the largest data problems, including in R&D.
· Demonstrates advanced tooling and techniques to other engineers and to traditional analytics organizations throughout the company.
· Represents the team on projects across domains, including commercial.
· Is the internal and external go-to expert for driving advanced computer science and engineering skills and techniques.
· Provides expertise to data engineers, peers and specialists to support research.

Skills and Qualifications Required

· More than 10 years of experience in the Data and Analytics domain.
· AWS: solid knowledge of the AWS ecosystem; experience with Lambda, S3, AWS notification systems, AWS SDKs, Athena, Redshift, AWS Secrets.
· Expert data engineering experience: building scalable, performant ETL data pipelines using Spark; data extraction/ingestion from relational databases, flat files and APIs; data transformation and cleaning.
· Spark job orchestration through Airflow.
· Experience with pub/sub streaming and messaging queue systems, e.g. AWS SQS.
· Strong PySpark and Python programming skills.
· Solid bash scripting skills.
· Adequate knowledge of Agile processes and CI/CD tools and setup, including automated unit testing, code linting and quality tools.

Preferred skills:

· Experience leading a technical team as a senior engineer: guidance, design and code reviews, and communication with other technical teams.
· Experience with Databricks and its functionalities (Autoloader, APIs, scheduler) and Glue.
· Broader AWS knowledge, mostly in the data and ML/DL area, e.g. Amazon SageMaker, AWS RDS, AWS Step Functions.
· Python skills.
· Knowledge of GxP processes and documentation.

Benefits

It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:

· Competitive salary + annual performance bonus
· Flexible work environment, including hybrid working
· Comprehensive healthcare insurance plans for self, spouse, and children
· Group term life insurance and group accident insurance programs
· Health & wellness programs, including annual health screening and weekly health sessions for employees
· Employee Assistance Program
· 3 days of leave every year for voluntary service, in addition to humanitarian leaves
· Broad variety of learning platforms
· Diversity, equity, and inclusion programs
· Reimbursements: home internet & mobile phone
· Employee referral program
· Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)

About the ICC in Takeda

Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

#Li-Hybrid
Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
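The transformation-and-cleaning step named in the requirements above (extract from sources, then clean before loading) can be sketched in miniature with pure Python. This is a hedged illustration only: a real pipeline for this role would express the same logic as a PySpark job orchestrated by Airflow, and the field names (`sample_id`, `measurement`) are hypothetical.

```python
def clean_records(records):
    """Normalize raw ingested rows before loading: trim whitespace from
    string fields, drop rows missing the primary key, and cast the
    numeric field, mapping unparseable values to None."""
    cleaned = []
    for row in records:
        # Drop rows without a primary key (hypothetical field name).
        if not row.get("sample_id"):
            continue
        out = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        # Cast the measurement to float; tolerate bad input.
        try:
            out["measurement"] = float(out["measurement"])
        except (KeyError, TypeError, ValueError):
            out["measurement"] = None
        cleaned.append(out)
    return cleaned

raw = [
    {"sample_id": "S1 ", "measurement": "3.5"},
    {"sample_id": "", "measurement": "1.0"},    # dropped: no key
    {"sample_id": "S2", "measurement": "bad"},  # measurement -> None
]
print(clean_records(raw))
```

In Spark the same shape appears as a chain of `filter` and `withColumn` calls over a DataFrame; the point of the sketch is the cleaning contract, not the engine.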

Posted 3 weeks ago

Apply

2.0 - 6.0 years

1 - 3 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What’s in it for you:
· Be part of a global company and deliver solutions at enterprise scale
· Collaborate with a hands-on, technically strong team (including leadership)
· Solve high-complexity, high-impact problems end-to-end
· Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities:
· Develop, deploy, and operate data extraction and automation pipelines in production
· Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
· Lead critical stages of the data engineering lifecycle, including: end-to-end delivery of complex extraction, transformation, and ML deployment projects; scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS); designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration; implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback); and writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
· Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
· Define and evolve platform standards and best practices for code, testing, and deployment
· Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
· Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements:
· Expert proficiency in Python, including building extraction libraries and RESTful APIs
· Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
· Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
· Containerization and orchestration
· Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
· Proficiency in writing tests (unit, integration, load) and enforcing high coverage
· Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
· Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
· Strong debugging, performance tuning, and automation skills
· Openness to evaluating and adopting emerging tools and languages as needed

Good to have:
· Master's or Bachelor's degree in Computer Science, Engineering, or a related field
· 2–6 years of relevant experience in data engineering, automation, or ML deployment
· Prior contributions on GitHub, technical blogs, or open-source projects
· Basic familiarity with GenAI model integration (calling LLM or embedding APIs)

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:

· Health & Wellness: health care coverage designed for the mind and body
· Flexible Downtime: generous time off helps keep you energized for your time on
· Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills
· Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs
· Family Friendly Perks: it’s not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families
· Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317426 | Posted On: 2025-07-06 | Location: Gurgaon, Haryana, India
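The Celery/Redis task-queue pattern named in the requirements above can be illustrated in miniature with the standard library: a queue of extraction tasks drained by a worker that re-enqueues failures up to a retry limit. This is a hedged sketch under stated assumptions, not the team's implementation; in production the same shape would be Celery tasks with a retry policy, backed by a Redis broker, and the handler and document IDs here are hypothetical.

```python
import queue

def run_worker(tasks, handler, max_retries=2):
    """Drain a task queue, retrying each failed task up to max_retries
    times. Mirrors, in miniature, a Celery worker with a retry policy."""
    results, failures = [], []
    q = queue.Queue()
    for t in tasks:
        q.put((t, 0))  # (task payload, attempt count)
    while not q.empty():
        task, attempt = q.get()
        try:
            results.append(handler(task))
        except Exception:
            if attempt < max_retries:
                q.put((task, attempt + 1))  # re-enqueue for another try
            else:
                failures.append(task)  # exhausted retries: dead-letter it
    return results, failures

# Hypothetical extraction handler that fails once on one document,
# simulating a transient fetch error.
seen = set()
def extract(doc_id):
    if doc_id == "doc-2" and doc_id not in seen:
        seen.add(doc_id)
        raise RuntimeError("transient fetch error")
    return f"extracted:{doc_id}"

ok, failed = run_worker(["doc-1", "doc-2"], extract)
print(ok, failed)
```

The design point carried over from real Celery deployments is the separation of retryable transient errors from a dead-letter path once retries are exhausted.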

Posted 4 weeks ago

Apply


0 years

2 - 7 Lacs

Pune

On-site

Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Join us at Syensqo, where our IT team is gearing up to enhance its capabilities. We play a crucial role in the group's transformation: accelerating growth, reshaping progress, and creating sustainable shared value. The IT team is making operational adjustments to supercharge value across the entire organization. Here at Syensqo, we're one strong team! Our commitment to accountability drives us as we work hard to deliver value for our customers and stakeholders. In our dynamic and collaborative work environment, we add a touch of enjoyment while staying true to our motto: reinvent progress. Come be part of our transformation journey and contribute to the change as a future team member.

We are looking for: As a Data/ML Engineer, you will play a central role in defining, implementing, and maintaining cloud governance frameworks across the organization. You will collaborate with cross-functional teams to ensure secure, compliant, and efficient use of cloud resources for data and machine learning workloads. Your expertise in full-stack automation, DevOps practices, and Infrastructure as Code (IaC) will drive the standardization and scalability of our cloud-based data and ML platforms.
Key requirements are:

Ensuring cloud data governance
· Define and maintain central cloud governance policies, standards, and best practices for data, AI and ML workloads
· Ensure compliance with security, privacy, and regulatory requirements across all cloud environments
· Monitor and optimize cloud resource usage, cost, and performance for data, AI and ML workloads

Design and implement data pipelines
· Co-develop, co-construct, test, and maintain highly scalable and reliable data architectures, including ETL processes, data warehouses, and data lakes, with the Data Platform team

Build and deploy ML systems
· Co-design, co-develop, and deploy machine learning models and associated services into production environments, ensuring performance, reliability, and scalability

Infrastructure management
· Manage and optimize cloud-based infrastructure (e.g., AWS, Azure, GCP) for data storage, processing, and ML model serving

Collaboration
· Work collaboratively with data scientists, ML engineers, security and business stakeholders to align cloud governance with organizational needs
· Provide guidance and support to teams on cloud architecture, data management, and ML operations
· Work collaboratively with other teams to transition prototypes and experimental models into robust, production-ready solutions

Data governance and quality
· Implement best practices for data governance, data quality, and data security to ensure the integrity and reliability of our data assets

Performance and optimisation
· Identify and implement performance improvements for data pipelines and ML models, optimizing for speed, cost-efficiency, and resource utilization
Monitoring and alerting
· Establish and maintain monitoring, logging, and alerting systems for data pipelines and ML models to proactively identify and resolve issues

Tooling and automation
· Design and implement full-stack automation for data pipelines, ML workflows, and cloud infrastructure
· Build and manage cloud infrastructure using IaC tools (e.g., Terraform, CloudFormation)
· Develop and maintain CI/CD pipelines for data and ML projects
· Promote DevOps culture and best practices within the organization
· Develop and maintain tools and automation scripts to streamline data operations, model training, and deployment processes

Stay current on new ML/AI trends
· Keep abreast of the latest advancements in data engineering, machine learning, and cloud technologies, evaluating and recommending new tools and approaches
· Document processes, architectures, and standards for knowledge sharing and onboarding

Education and experience
· Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field (relevant work experience may be considered in lieu of a degree)
· Programming: strong proficiency in Python (essential) and experience with other relevant languages such as Java, Scala, or Go
· Data warehousing/databases: solid understanding of and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra); experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) is highly desirable
· Big data technologies: hands-on experience with big data processing frameworks (e.g., Spark, Flink, Hadoop)
· Cloud platforms: experience with at least one major cloud provider (AWS, Azure, or GCP) and their relevant data and ML services (e.g., S3, EC2, Lambda, EMR, SageMaker, Dataflow, BigQuery, Azure Data Factory, Azure ML)
· ML concepts: fundamental understanding of machine learning concepts, algorithms, and workflows
· MLOps principles: familiarity with MLOps principles and practices for deploying, monitoring, and managing ML models in production
· Version control: proficiency with Git and collaborative development workflows
· Problem-solving: excellent analytical and problem-solving skills with strong attention to detail
· Communication: strong communication skills, able to articulate complex technical concepts to both technical and non-technical stakeholders

Bonus points (highly desirable skills and experience):
· Experience with containerisation technologies (Docker, Kubernetes)
· Familiarity with CI/CD pipelines for data and ML deployments
· Experience with stream processing technologies (e.g., Kafka, Kinesis)
· Knowledge of data visualization tools (e.g., Tableau, Power BI, Looker)
· Contributions to open-source projects or a strong portfolio of personal projects
· Experience with [specific domain knowledge relevant to your company, e.g., financial data, healthcare data, e-commerce data]

Language skills
· Fluent English

What’s in it for the candidate
· Be part of a highly motivated team of explorers
· Help make a difference and thrive in Cloud and AI technology
· Chart your own course and build a fantastic career
· Have fun and enjoy life with an industry-leading remuneration pack

About us

Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.
At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
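The monitoring-and-alerting responsibility in the listing above comes down to computing pipeline health metrics and firing an alert when one crosses a threshold. A minimal pure-Python sketch under stated assumptions (a production setup would publish the metric to a service such as CloudWatch; the field name and 10% threshold are hypothetical):

```python
def check_null_rate(rows, field, threshold=0.1):
    """Return (null_rate, alert) for one batch of pipeline records.
    alert is True when the share of missing values exceeds threshold."""
    if not rows:
        return 0.0, False  # empty batch: nothing to alert on
    nulls = sum(1 for r in rows if r.get(field) is None)
    rate = nulls / len(rows)
    return rate, rate > threshold

# One record in four is missing the monitored field.
batch = [{"price": 10.0}, {"price": None}, {"price": 12.5}, {"price": 11.0}]
rate, alert = check_null_rate(batch, "price")
print(rate, alert)  # 0.25 True -> alert fires at the default 10% threshold
```

The same check slots naturally into an Airflow task or a CI quality gate: compute the metric per batch, log it, and page only when the threshold is breached rather than on every missing value.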

Posted 4 weeks ago

Apply

2.0 - 6.0 years

1 - 3 Lacs

Ahmedabad

On-site

About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical 
Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. 
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317426
Posted On: 2025-07-06
Location: Gurgaon, Haryana, India
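The production ML-deployment requirement in the posting above (e.g., SageMaker endpoints) can be sketched with the four handler functions a SageMaker-style inference script exposes. This is a hedged illustration: the toy linear model and its weights are hypothetical, and a real `model_fn` would deserialize trained artifacts from `model_dir`.

```python
import json

# Hedged sketch of SageMaker-style inference handlers (model_fn, input_fn,
# predict_fn, output_fn). The toy linear model is hypothetical; a real
# model_fn would load serialized artifacts from model_dir.

def model_fn(model_dir):
    # Stand-in for deserializing weights from model_dir.
    return {"weights": [0.5, -0.2]}

def input_fn(request_body, content_type="application/json"):
    # Deserialize the request payload into model inputs.
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    return json.loads(request_body)["features"]

def predict_fn(features, model):
    # Dot product of the hypothetical linear weights with the input features.
    return sum(w * x for w, x in zip(model["weights"], features))

def output_fn(prediction, accept="application/json"):
    # Serialize the prediction for the response.
    return json.dumps({"score": prediction})
```

In a real deployment, functions like these would live in an inference script packaged with the model, while batch scoring could reuse the same `predict_fn` from a scheduled job.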

Posted 4 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description - ML Engineer:
Strong experience of at least 2-3 years in Python.
2+ years' experience working on feature/data pipelines and feature stores using PySpark.
Exposure to AWS cloud services such as SageMaker, Bedrock, Kendra, etc.
Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices.
Knowledge of Docker and Kubernetes.
Experience with orchestration/scheduling tools like Argo.
Experience building and consuming data from REST APIs.
Demonstrable ability to think outside of the box and not be dependent on readily available tools.
Excellent communication, presentation and interpersonal skills are a must.
PySpark AWS Engineer:
Good hands-on experience with Python and Bash scripts.
4+ years of good hands-on exposure to Big Data technologies – PySpark (DataFrame and Spark SQL), Hadoop, and Hive.
Hands-on experience with cloud-platform Big Data services (e.g. Glue, EMR, Redshift, S3, Kinesis).
Ability to write Glue jobs and utilise the different core functionalities of Glue.
Good understanding of SQL and data warehouse tools like Redshift.
Experience with orchestration/scheduling tools like Airflow.
Strong analytical, problem-solving, data analysis and research skills.
Demonstrable ability to think outside of the box and not be dependent on readily available tools.
Excellent communication, presentation and interpersonal skills are a must.
Roles & Responsibilities:
Collaborate with data engineers & architects to implement and deploy scalable solutions.
Provide technical guidance and code review of the deliverables.
Play an active role in estimation and planning.
Communicate results to diverse technical and non-technical audiences.
Generate actionable insights for business improvements.
Ability to understand business requirements.
Use-case derivation and solution creation from structured/unstructured data.
Actively drive a culture of knowledge-building and sharing within the team.
Encourage continuous innovation and out-of-the-box thinking.
Good To Have:
ML Engineer: Experience researching and applying large language and generative AI models. Experience with LangChain, LlamaIndex, and performance-evaluation frameworks. Experience working with model registry, model deployment and monitoring tools (e.g. MLflow, application-monitoring tools).
PySpark AWS Engineer: Experience in migrating workloads from on-premises to cloud and in cloud-to-cloud migrations. Experience with data quality frameworks.
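The feature-pipeline requirement above typically reduces to a groupBy/agg over event data. As a hedged, stdlib-only illustration (the event schema and feature names are hypothetical, and the PySpark equivalent in the comment is only indicative), a per-user aggregation step might look like:

```python
from collections import defaultdict

# Hedged, stdlib-only sketch of one feature-pipeline step: per-user aggregate
# features from raw events. Schema and feature names are hypothetical.
# In PySpark this would be roughly:
#   df.groupBy("user").agg(F.sum("amount").alias("total"),
#                          F.avg("amount").alias("avg"))

def build_features(rows):
    totals, counts = defaultdict(float), defaultdict(int)
    for r in rows:
        totals[r["user"]] += r["amount"]
        counts[r["user"]] += 1
    # One feature vector per user: total spend and average spend.
    return {u: {"total": totals[u], "avg": totals[u] / counts[u]} for u in totals}

events = [
    {"user": "u1", "amount": 10.0},
    {"user": "u1", "amount": 30.0},
    {"user": "u2", "amount": 5.0},
]
features = build_features(events)
```

In a feature store, vectors like these would be keyed by entity ID and timestamp so that training and serving read the same point-in-time values.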

Posted 4 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Posted 4 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Posted 4 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Posted 4 weeks ago

Apply

0.0 - 6.0 years

0 Lacs

Gurugram, Haryana

On-site

About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you:
· Be part of a global company and deliver solutions at enterprise scale
· Collaborate with a hands-on, technically strong team (including leadership)
· Solve high-complexity, high-impact problems end-to-end
· Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities:
· Develop, deploy, and operate data extraction and automation pipelines in production
· Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
· Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
· Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
· Define and evolve platform standards and best practices for code, testing, and deployment
· Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
· Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements:
· Expert proficiency in Python, including building extraction libraries and RESTful APIs
· Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
· Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
· Containerization and orchestration
· Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
· Proficient in writing tests (unit, integration, load) and enforcing high coverage
· Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
· Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
· Strong debugging, performance tuning, and automation skills
· Openness to evaluate and adopt emerging tools and languages as needed

Good to have:
· Master's or Bachelor's degree in Computer Science, Engineering, or related field
· 2–6 years of relevant experience in data engineering, automation, or ML deployment
· Prior contributions on GitHub, technical blogs, or open-source projects
· Basic familiarity with GenAI model integration (calling LLM or embedding APIs)

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
· Health & Wellness: Health care coverage designed for the mind and body.
· Flexible Downtime: Generous time off helps keep you energized for your time on.
· Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
· Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
· Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
· Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

-----------------------------------------------------------

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

-----------------------------------------------------------

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317426
Posted On: 2025-07-06
Location: Gurgaon, Haryana, India
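As a rough illustration of the retry-and-observability patterns this listing describes for production extraction pipelines, here is a minimal stdlib-only sketch. The `flaky_extract` function and `with_retries` helper are hypothetical names invented for this example; in a real deployment this logic would typically live inside a Celery task or an Airflow operator rather than plain Python.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("extraction")

def with_retries(task, attempts=3, backoff_s=0.01):
    """Run an extraction task, retrying with exponential backoff and
    logging each failure (the 'logging, metrics, and automated alerts'
    side of the job, reduced to its simplest form)."""
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:  # real code would catch narrower errors
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Usage: a flaky "extractor" that succeeds on its third call.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream failure")
    return {"records": 42}

result = with_retries(flaky_extract)
```

Celery and Airflow both offer declarative equivalents of this loop (task retry policies), which is usually the better choice than hand-rolling it.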

Posted 4 weeks ago

Apply


3.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: MLOps Engineer
Location: Remote
Experience Required: 3+ years
Employment Type: Full-Time

About The Role
We are seeking an experienced MLOps Engineer to join our team and drive the deployment, scaling, and maintenance of machine learning systems in production. In this role, you will bridge the gap between data science and operations to ensure our AI solutions are robust, reliable, and scalable.

Key Responsibilities
· Design, build, and maintain end-to-end ML pipelines including data ingestion, model training, validation, and deployment.
· Develop and manage CI/CD workflows for machine learning models.
· Automate model monitoring, logging, and performance tracking in production environments.
· Implement model versioning, reproducibility, and governance best practices.
· Manage containerized deployments using Docker and orchestration platforms like Kubernetes.
· Work closely with data scientists and software engineers to productionize ML solutions.
· Optimize infrastructure costs and ensure scalability, security, and reliability.
· Create and maintain documentation, guidelines, and technical standards.

Key Skills & Qualifications
· 3+ years of hands-on experience in MLOps, DevOps, or related roles in production environments.
· Strong experience with cloud platforms (AWS, Azure, or GCP) and their ML services (SageMaker, Vertex AI, Azure ML).
· Proficiency with Python and familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn).
· Solid understanding of containerization and orchestration tools (Docker, Kubernetes).
· Experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI) applied to ML workflows.
· Knowledge of data pipeline frameworks (Airflow, Prefect, Kubeflow).
· Familiarity with monitoring tools and model drift detection.
· Strong problem-solving skills and the ability to work independently in a remote environment.

Nice To Have
· Experience with infrastructure-as-code (Terraform, CloudFormation).
· Understanding of feature stores and model registries.
· Exposure to big data technologies (Spark, Databricks).
· Knowledge of security and compliance in ML deployments.

Benefits
· 100% Remote work flexibility.
· Work on cutting-edge AI solutions with a collaborative team.
· Learning and development support.
· Competitive compensation and benefits.

(ref:hirist.tech)
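To make "model drift detection" from the skills list above concrete, here is the simplest possible sketch of the idea, assuming stdlib Python only: flag an alert when a live feature's mean moves too far from a reference sample. Real monitoring stacks use per-feature PSI or Kolmogorov–Smirnov tests rather than this toy z-score check.

```python
import statistics

def mean_shift_alert(reference, live, threshold=2.0):
    """Return True when the live mean sits more than `threshold`
    reference standard deviations away from the reference mean.
    A deliberately minimal stand-in for production drift detectors."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    z = abs(statistics.fmean(live) - ref_mean) / ref_std
    return z > threshold

reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]   # training-time sample
stable = [1.0, 1.02, 0.98, 1.01]               # live data, no drift
shifted = [1.8, 1.9, 2.1, 2.0]                 # live data, clear drift

stable_alert = mean_shift_alert(reference, stable)    # False
shifted_alert = mean_shift_alert(reference, shifted)  # True
```

In a real pipeline this check would run on a schedule (Airflow, Prefect) and push its result to the monitoring system rather than return a bool.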

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are a technology-led healthcare solutions provider. We are driven by our purpose to enable healthcare organizations to be future-ready. We offer accelerated, global growth opportunities for talent that's bold, industrious, and nimble. With Indegene, you gain a unique career experience that celebrates entrepreneurship and is guided by passion, innovation, collaboration, and empathy. To explore exciting opportunities at the convergence of healthcare and technology, check out www.careers.indegene.com.

We understand how important the first few years of your career are, which create the foundation of your entire professional journey. At Indegene, we promise you a differentiated career experience. You will not only work at the exciting intersection of healthcare and technology but also be mentored by some of the most brilliant minds in the industry. We are offering a global fast-track career where you can grow along with Indegene's high-speed growth. We are purpose-driven. We enable healthcare organizations to be future-ready, and our customer obsession is our driving force. We ensure that our customers achieve what they truly want. We are bold in our actions, nimble in our decision-making, and industrious in the way we work. If this excites you, then apply below.

Position: Hands-On Full-Stack Technology Architect with AI and LLM Expertise

The Hands-On Full-Stack Technology Architect with AI and LLM expertise is a senior technical leader who designs scalable, intelligent systems while actively contributing across the tech stack. This role is ideal for someone who combines modern web and backend development with deep experience in integrating and scaling Large Language Model (LLM) platforms such as OpenAI, Claude, Mistral, or custom models via Hugging Face, AWS Bedrock, or Vertex AI.

Key Responsibilities:

Architecture & Design
· Define robust, scalable, cloud-native, cloud-agnostic architectures for AI platforms.
· Architect and prototype AI systems/platforms.
· Design and document system architecture (component diagrams, API contracts, deployment topology).
· Champion modern design patterns (microservices, micro frontends, event-driven, DDD, etc.).
· Build composable architecture patterns for integrating LLM APIs with microservices and UI layers.
· Create end-to-end architectures; build distributions and environments, and deploy solutions.

Hands-On Development
· Build and integrate reusable components, LLM-enabled APIs, and workflows with stateful orchestration where needed (e.g., using LangChain or semantic pipelines).
· Develop and iterate on PoCs and reference implementations of LLM-based features.
· Write maintainable, high-quality code in both frontend (React) and backend (Python, Node.js, Java, Go) technologies.
· Build reusable UI components, design systems, and frontend microservices where applicable.
· Create proof-of-concepts and reference implementations to de-risk architectural decisions.
· Contribute to CI/CD automation, DevOps pipelines, and infrastructure-as-code setups (Terraform, Helm).

Frontend Engineering
· Architect and implement responsive UIs with state management (Redux, Zustand, or similar).
· Integrate RESTful and GraphQL APIs; implement frontend performance tuning and security best practices (XSS, CSP, etc.).
· Define and enforce front-end development standards and reusable design patterns.

LLM Engineering & Integration
· Integrate LLMs via OpenAI, Claude, Mistral, Vertex AI, or AWS Bedrock APIs.
· Apply techniques like prompt engineering, embeddings, RAG (retrieval-augmented generation), and fine-tuning where applicable.
· Design and implement vector database-backed search pipelines using Pinecone, FAISS, Weaviate, or Vespa.
· Collaborate with data scientists and MLOps teams to bring custom LLMs to production securely and reliably.

Team Collaboration & Leadership
· Work with product owners, UX/UI designers, QA, and devs across teams to shape and implement feature architecture.
· Mentor engineers across levels through pair programming, design reviews, and architectural guidance.
· Participate in sprint planning, story breakdowns, and estimation with a technical mindset.
· Lead data science projects from conception to deployment, ensuring timely and successful delivery.
· Partner with BU teams in driving the collaboration mandates.

Communication: Effectively communicate findings and recommendations to both technical and non-technical audiences.

Required Skills and Qualifications:

Engineering
· Hands-on proficiency in the design and architecture of platforms, with good awareness of design patterns and implementation experience.
· Hands-on proficiency in JavaScript/TypeScript, Python, or Go.
· Experience with React or Angular on the frontend.
· REST/GraphQL API design and integration.
· Familiarity with event-driven architectures and async processing (Kafka, RabbitMQ, etc.).
· Strong with AWS/Azure, containerization (Docker, Kubernetes), and Infrastructure-as-Code.
· Must have worked on production-ready solutions and be abreast of the scaling and complexity involved in creating enterprise solutions.

LLM & AI Platforms
· Experience integrating LLM APIs (OpenAI, Claude, Mistral, Cohere, etc.).
· Knowledge of LangChain, LlamaIndex, or Semantic Kernel for orchestration.
· Hands-on experience with embeddings, vector databases (Pinecone, FAISS, etc.), and prompt engineering.
· Familiarity with AI/ML deployment platforms (SageMaker, Vertex AI, Hugging Face, etc.).

EQUAL OPPORTUNITY
Indegene is proud to be an Equal Employment Employer and is committed to a culture of Inclusion and Diversity. We do not discriminate on the basis of race, religion, sex, colour, age, national origin, pregnancy, sexual orientation, physical ability, or any other characteristics. All employment decisions, from hiring to separation, will be based on business requirements, the candidate's merit, and qualification. We are an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, gender identity, sexual orientation, disability status, protected veteran status, or any other characteristics. Locations: Bangalore, KA, IN
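The "vector database-backed search pipelines" named in this listing all reduce to the same retrieval step: rank documents by similarity of their embeddings to a query embedding. Here is a brute-force stdlib sketch of that step with hypothetical toy 2-d "embeddings"; FAISS, Pinecone, Weaviate, or Vespa replace this scan with approximate nearest-neighbour indexes at scale.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, embedding) pairs. Return the ids of the
    k most similar documents -- the retrieval half of a RAG pipeline,
    whose results would then be stuffed into the LLM prompt."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy corpus: in practice these vectors come from an embedding API.
docs = [
    ("pricing", (1.0, 0.0)),
    ("api-auth", (0.0, 1.0)),
    ("billing", (0.9, 0.1)),
]
query = (1.0, 0.05)  # "embedding" of the user's question
hits = top_k(query, docs)
```

Real embeddings have hundreds or thousands of dimensions, which is exactly why dedicated vector indexes exist; the ranking logic, however, is the same.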

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Summary of Position: A career at Alcon is like no other. We are innovative. We are impactful. We are the largest eye care medical device company, helping restore sight and allowing millions of people worldwide to see brilliantly. We have a long history of industry firsts and a diverse, highly talented team that keeps us on the cutting edge. We are currently hiring a Data Scientist at our AGS Bangalore. This strategic and inquisitive person will develop and run with data-centered projects. In keeping with this overarching aim, the Data Scientist will be required to outline work requirements, provide guidance to the team on active projects, and harness their mastery of Data Science to consult on high-value, high-visibility use cases.

Key Responsibilities:
· Strong analytical skills and technical problem solving
· Solid experience in building code and models
· Solid experience with building mathematical optimization models (e.g., with cvxpy)
· Good experience with NLP and computer vision algorithms
· MLOps experience
· Project management experience
· Experience with tools such as Python, SQL, AWS, Tableau, SageMaker, etc.
· Selecting and employing advanced statistical procedures to obtain actionable insights
· Cross-validating models to ensure their generalizability
· Delegating tasks to junior data analysts or data scientists to realize the successful completion of projects
· Producing and disseminating non-technical reports that detail the successes and limitations of each project (analytics translation)
· Suggesting ways in which insights obtained might be used to inform business strategies
· Staying informed about developments in Data Science and adjacent fields to ensure that outputs are always relevant

Key Requirements/Minimum Requirements:
· Bachelor's degree or equivalent years of directly related experience (B.Tech. + 4 yrs and above; M.S. + 2 yrs)
· The ability to fluently read, write, understand, and communicate in English
· 4+ years of relevant experience
· Experience managing data pipelines and/or model maintenance and registry

Preferred Qualifications/Skills/Experience:
· Strong analytical skills and technical problem solving
· Solid experience in building code and models
· Project management experience
· Experience with tools such as Python, SQL, AWS, SageMaker, etc.

Work hours: 1 PM to 10 PM IST
Relocation assistance: Yes

Employment Scams: Alcon is aware of employment scams which make false use of our company name or leaders' names to defraud job seekers. Alcon does not offer any positions without interview and never asks candidates for money. All our current job openings are displayed here on the Careers section of our website, where you can search for open positions and apply directly. If you have encountered a job posting or been approached with a job offer that you suspect may be fraudulent, we strongly recommend you do not respond, send money or personal information, and check our website for current job openings.

ATTENTION: Current Alcon Employee/Contingent Worker. If you are currently an active employee/contingent worker at Alcon, please click the appropriate link below to apply on the Internal Career site. Find Jobs for Employees. Find Jobs for Contingent Worker.

Alcon is an Equal Opportunity Employer and takes pride in maintaining a diverse environment. We do not discriminate in recruitment, hiring, training, promotion or other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital status, disability, or any other reason.
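"Cross-validating models to ensure their generalizability", listed above, has a simple mechanical core: partition the data into k folds and train on k-1 while testing on the held-out fold. A stdlib sketch of the index bookkeeping follows; in practice scikit-learn's `KFold` (part of the toolset this listing names) does this, so treat this as illustration only.

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples. Earlier folds absorb the remainder when k does not
    divide n evenly, so every sample lands in exactly one test fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

# Usage: 6 samples, 3 folds -> each fold holds out 2 samples.
folds = list(kfold_indices(6, 3))
```

The model's score is then averaged across the k held-out folds, giving an estimate of out-of-sample performance that a single train/test split cannot.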

Posted 4 weeks ago

Apply

3.0 years

1 - 2 Lacs

Bengaluru

On-site

ResMed has always applied the best of technology to improve people's lives. Now our SaaS technology is fueling a new era in the healthcare industry, with dynamic systems that change the way people receive care in settings outside of the hospital–and tools that work every day to help people stay well, longer. We have one of the largest actionable datasets in the industry, creating a complete view of people as they move between care settings. This is how we empower providers–with vital insight to deliver the care people need, right when they need it. We're also ensuring that our health solutions connect to other companies' networks. Because when objectives align, everyone wins. And as we work today to drive better care and lower costs, we're developing more personalized solutions for tomorrow, utilizing machine learning, intelligent care paths, and predictive protocols. If you are an innovator who wants to make an impact, we want to talk to you! We have exciting opportunities supporting Brightree by ResMed and MatrixCare by ResMed!

About the Team
Our innovative Research and Development team is at the forefront of revolutionizing Home Medical Equipment (HME), Durable Medical Equipment (DME), out-of-hospital care, and home health services. Leveraging cutting-edge Artificial Intelligence (AI) and Machine Learning (ML) technologies, we aim to enhance patient outcomes, streamline caregiver workflows, and drive efficiency across the continuum of care. Our intelligent solutions are designed to empower caregivers and healthcare providers. By bridging technology and healthcare, we are shaping the future of care delivery, ensuring it is smarter, more efficient, and more impactful.

About the Role
· Lead the design and development of advanced systems for automated data collection, curation, and ML model training.
· Write production-grade code with proper unit test coverage.
· Collaborate with cross-functional teams, including product, UX, and engineering, to architect ML solutions tailored to the needs of caregivers and home health services.
· Drive innovation by utilizing the latest deep learning libraries and technologies to develop solutions for predictive analytics, workflow optimization, and patient care improvement.
· Mentor and provide technical leadership to ML engineers and data scientists, fostering best practices and technical excellence.
· Build and deploy highly scalable, production-ready ML services, ensuring reliability and high-quality performance.
· Research, implement, and deploy innovative algorithms to address complex healthcare challenges, such as risk prediction and personalized care plans.
· Optimize ML pipelines and ensure their seamless integration into production environments using state-of-the-art deployment tools and practices.
· Analyze large, distributed datasets to uncover actionable insights that improve caregiver workflows and enhance patient care.
· Ensure models meet healthcare industry standards for explainability, fairness, and compliance with regulations like HIPAA.

Let's Talk About You
· Education: Master's or Bachelor's degree in Computer Science, Machine Learning, or a related field.
· Experience: 3+ years of hands-on experience in ML model development, data pipelines, and feature engineering.
· Strong expertise in Python programming and frameworks like FastAPI, pydantic, pandas, numpy.
· Good understanding of Test-Driven Development.
· Ability to follow industry standards in object-oriented software development.
· Experience with cloud platforms, particularly AWS (e.g., SageMaker, S3, Lambda, DynamoDB, API Gateway).
· Proficiency in building CI/CD pipelines and deploying scalable solutions with Kubernetes or similar technologies.
· Expertise in handling and processing large datasets, including distributed systems such as Hadoop or Spark.

Skills:
· Advanced understanding of machine learning techniques, including deep learning, time series analysis, and recommendation systems.
· Ability to design end-to-end ML workflows, from data ingestion to production deployment and monitoring.
· Exceptional problem-solving skills and a passion for leveraging AI to improve home healthcare services.

What You Can Expect
· A supportive environment that focuses on people's development and best implementation.
· Opportunity to design, influence, and be innovative.
· Work with inclusive global teams and the open sharing of new ideas. We want your ideas!
· Be supported both inside and outside of the work environment.
· The opportunity to build something meaningful and see a direct positive impact on people's lives!
· Dream big, iterate and experiment to drive innovation.

Joining us is more than saying "yes" to making the world a healthier place. It's discovering a career that's challenging, supportive and inspiring. Where a culture driven by excellence helps you not only meet your goals, but also create new ones. We focus on creating a diverse and inclusive culture, encouraging individual expression in the workplace and thrive on the innovative ideas this generates. If this sounds like the workplace for you, apply now! We commit to respond to every applicant.
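The listing pairs pydantic with FastAPI for validated ML service inputs. As a rough stdlib analogue of what a pydantic model does at the boundary of an inference endpoint, here is a frozen dataclass with range checks; `VitalsReading` and its fields are invented for this sketch, and pydantic adds type coercion and JSON-schema generation on top of this idea.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VitalsReading:
    """Validated input record for a hypothetical ML inference endpoint.
    Rejecting malformed input here keeps bad data out of the model."""
    patient_id: str
    heart_rate: float  # beats per minute
    spo2: float        # oxygen saturation, percent

    def __post_init__(self):
        if not self.patient_id:
            raise ValueError("patient_id is required")
        if not 20 <= self.heart_rate <= 300:
            raise ValueError("heart_rate out of physiological range")
        if not 0 <= self.spo2 <= 100:
            raise ValueError("spo2 must be a percentage")

# Usage: a valid reading passes, an impossible one is rejected.
ok = VitalsReading("p-001", 72.0, 97.5)
try:
    VitalsReading("p-002", 72.0, 140.0)  # spo2 cannot exceed 100%
    rejected = False
except ValueError:
    rejected = True
```

The same checks double as natural unit-test cases, which is how the validation layer supports the Test-Driven Development practice the listing asks for.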

Posted 4 weeks ago

Apply