2993 DynamoDB Jobs - Page 46

Set up a job alert

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

As a Fullstack SDE - II at NxtWave, you: Build applications at scale and see them released quickly to the NxtWave learners (within weeks). Get to take ownership of the features you build and work closely with the product team. Work in a great culture that continuously empowers you to grow in your career. Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster). NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly. Build in a world-class developer environment by applying clean coding principles, code architecture, etc. Responsibilities: Lead design and delivery of complex end-to-end features across frontend, backend, and data layers. Make strategic architectural decisions on frameworks, datastores, and performance patterns. Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns. Build and maintain shared UI component libraries and backend service frameworks for team reuse. Identify and eliminate performance bottlenecks in both browser rendering and server throughput. Instrument services with metrics and logging, driving SLIs, SLAs, and observability. Define and enforce comprehensive testing strategies: unit, integration, and end-to-end. Own CI/CD pipelines, automating builds, deployments, and rollback procedures. Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Partner with Product, UX, and Ops to translate business objectives into technical roadmaps. Facilitate sprint planning, estimation, and retrospectives for predictable deliveries. Mentor and guide SDE-1s and interns; participate in hiring. Qualifications & Skills: 3–5 years building production full-stack applications end-to-end with measurable impact. Proven leadership in Agile/Scrum environments with a passion for continuous learning. Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies. Proficient in Node.js (Express/NestJS) or Python (Django/Flask/FastAPI) or Java (Spring Boot). Expert in designing RESTful and GraphQL APIs and scalable database schemas. Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis). Knowledge of containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc. Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright). Frontend profiling (Lighthouse) and backend tracing for performance tuning. Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes. Strong communicator able to convey technical trade-offs to non-technical stakeholders. Experience in reviewing pull requests and providing constructive feedback to the team. Qualities we'd love to find in you: The attitude to always strive for the best outcomes and an enthusiasm to deliver high quality software. Strong collaboration abilities and a flexible & friendly approach to working with teams. Strong determination with a constant eye on solutions. Creative ideas with a problem-solving mind-set. Be open to receiving objective criticism and improving upon it. Eagerness to learn and zeal to grow. Strong communication skills are a huge plus. Work Location: Hyderabad. About NxtWave: NxtWave is one of India’s fastest-growing ed-tech startups, revolutionizing the 21st-century job market.
NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave is founded by Rahul Attuluri (Ex Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner for NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence. Some of its prestigious recognitions include: Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally; ‘Startup Spotlight Award of the Year’ by T-Hub in 2023; ‘Best Tech Skilling EdTech Startup of the Year 2022’ by Times Business Awards; ‘The Greatest Brand in Education’ in a research-based listing by URS Media. NxtWave founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education. NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2,000 companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more. Know more about NxtWave: https://www.ccbp.in Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

On-site

We seek a Principal Backend Software Engineer with leadership experience. You will work closely with the product owner, area architect, and your team members to clarify business needs and technical requirements and define how best to support them. As our Principal Backend Software Engineer: With your team and Product Owner, you will work in a challenging Agile environment in close collaboration with other teams. You will lead the team’s technical/architectural discussions and decisions. You will develop and continuously deliver applications for one of the most successful pet e-commerce platforms in Europe. You will participate in the complete software development life cycle from discovery through coding, testing, deployment, and maintenance. Daily, you will learn and grow your skills, striving for mastery using state-of-the-art technologies and practices such as AWS, microservices, Docker and much more! In close collaboration with other teams, your team will introduce new features and improvements to the process to provide a better experience for more than 11 million of our customers! MUST HAVE Qualifications Excellent English verbal and written communication skills Around 10 years of experience in Software Development in JVM-related languages Experience and exposure in Microservices, AWS, Lambda, Kibana, Elastic, etc. A profound understanding of software engineering and design fundamentals, complemented by hands-on design and development expertise A track record of successful technical leadership Expertise in agile methodologies and practices Experience working in an internationally distributed environment Any experience with the development of an integration platform will be a plus Ready to work with on-call duties - approximately 1 week on call every 8 weeks. Nice to have Good understanding of software architecture principles and design patterns Experience with Apache Camel Experience in the e-commerce domain. Modelling experience (UML, C4) Live coding requirements TBD Your responsibilities Have a strong commitment to maintaining a high standard of technical excellence by emphasizing best practices and industry trends Actively code and contribute to ongoing features and issues Provide team leadership, technical guidance, and direction for the integration platform Work with all stakeholders and enterprise architects to come up with the road map Collaborate with different stakeholders, run the scrum, manage the backlog Support your teams as an agile driver & coach of the software delivery process Actively participate in the recruitment and retention process, ensuring a healthy composition of the team Monitor and optimize budget costs related to product expenses, such as AWS, licenses etc. Technologies we leverage: • Java 11+, Spring framework (Boot, Hibernate) • Apache Camel • Oracle, PostgreSQL • CI/CD with Jenkins pipeline • InfluxDB, Grafana, Sensu, ELK stack • Infrastructure as code, one-click deployment, C4 diagrams • Mesos/Marathon, Docker, Kubernetes • Amazon Web Services and cloud deployments (S3, SNS, SQS, RDS, DynamoDB, etc.), using tools such as Terraform or the AWS CLI • Git, Scrum, Pair Programming, Peer Reviewing

Posted 1 month ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Introduction To Team Our team is looking for passionate innovators to enhance our next-generation partner connectivity platform. This cutting-edge platform connects with over 6,500 supply partners, supporting millions of users every year in finding the perfect travel products. You'll work with technologies such as Java, Kotlin, Spring, MSSQL, DynamoDB, Redis, and more. Our platform is the backbone of Expedia's business, creating an efficient two-way marketplace that bridges supply and demand. It supports all primary lines of business—Lodging, Air, Cars, Cruise, Ground Transport, and Activities—with diverse integration methods, from direct connections to sourcing via GDSs. With over 50 Tier-1 services and more than 8TB of data produced each month, our organisation thrives on innovation, offering you the chance to challenge yourself and make a significant impact. In This Role, You Will Design and implement products and solutions that are highly scalable with high-quality, clean, maintainable, and well-documented code across the full technology stack Work collaboratively with all members of the technical staff and other partners to build and ship outstanding software in a fast-paced environment Find opportunities for process and technology improvements, and work towards adoption and implementation Assist with supporting Production systems (includes investigating issues and working towards resolution) Exercise creativity and provide alternative solutions to a given problem, removing roadblocks and driving issues to closure Create/update documentation for the purpose of sharing knowledge between team members Actively participate in group technology reviews to critique the work of self and others Experience And Qualifications Bachelor's or Master's in Computer Science or a related technical field; or equivalent related professional experience 2+ years of experience in the field of Software Engineering Proficient with Object Oriented Programming concepts with a strong understanding of Data Structures, Algorithms, and Computer Science fundamentals Experience with Java/Kotlin and Spring MVC along with a good grasp of OO design with strong design patterns knowledge. Experience with cloud-computing platforms such as Amazon Web Services Understanding of testing and monitoring tools Ability to debug applications Understanding how teams’ goals fit a business need Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.
We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities Your future team To become a 100 year company, we need a world-class engineering organisation of empowered teams with the tools and infrastructure to do the best work of their careers. As a part of a unified R&D team, Engineering is prioritising key initiatives which support our customers as they increase their adoption of Atlassian Cloud products and services while continuing to support their current needs at extreme enterprise scale. We're looking for people who want to write the future and who believe that we can accomplish so much more together. You will report to one of the Engineering Managers of the R&D teams. What You'll Do Build and ship features and capabilities daily in highly scalable, cross-geo distributed environment Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge sharing sessions Ensure full visibility, error reporting, and monitoring of high performing backend services Participate in Agile software development including daily stand-ups, sprint planning, team retrospectives, show and tell demo sessions Your background 4+ years of experience building and developing backend applications Bachelor's or Master's degree with a preference for Computer Science degree Experience crafting and implementing highly scalable and performant RESTful micro-services Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.) Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) Strong understanding of CI/CD reliability principles, including test strategy, security, and performance benchmarking. Real passion for collaboration and strong interpersonal and communication skills Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure) Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality Qualifications Benefits & Perks Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits . About Atlassian At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. 
To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .

Posted 1 month ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. This is a remote position. Our office is in Bengaluru, but we offer flexibility for eligible candidates to work remotely across India. Whatever your preference - working from home, an office, or in between - you can choose the place that's best for your work and your lifestyle. Your future team To become a 100 year company, we need a world-class engineering organisation made up of empowered teams who have the tools and infrastructure to do the best work of their careers. As a part of a unified R&D team, Engineering is prioritising key initiatives which support our customers in moving to cloud while simultaneously continuing to bring the most value to our customers through investments across our core product suite – such as Jira, Confluence, Trello, and Bitbucket. We're looking for people who want to write the future and who believe that we can accomplish so much more together. You will report to one of the Engineering Managers of the R&D teams. Responsibilities What you'll do: Build and ship features and capabilities daily in highly scalable, cross-geo distributed environment Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge sharing sessions Ensure full visibility, error reporting, and monitoring of high performing backend services Participate in Agile software development including daily stand-ups, sprint planning, team retrospectives, show and tell demo sessions Qualifications Your background: 4+ years of experience building and developing backend applications Bachelor's or Master's degree with a preference for Computer Science degree Experience crafting and implementing highly scalable and performant RESTful micro-services Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.) Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) Real passion for collaboration and strong interpersonal and communication skills Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure) Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality

Posted 1 month ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Zenwork stands at the forefront of cloud/API-based Tax Automation and Governance, Risk Compliance (GRC) technology, pioneering the future of Tax Tech and GRC Automation. Our comprehensive suite of top-tier AI-SaaS solutions serves a vast clientele of over 500,000, providing effortless tax automation through our APIs for major enterprises. In terms of numbers, during the tax year 2022, we reported over $413 billion to the Internal Revenue Service, spanning over 30 million transactions for some of the globe's leading and most forward-thinking firms. As a rapidly expanding digital compliance AI-SaaS Product company, Zenwork boasts a customer base that spans all sizes, partnering with industry giants like Intuit, Bill.com, Xero, and Sage Intacct. Recognized as one of the fastest-growing companies in the U.S. by Inc. magazine and a consecutive Accountex award recipient, Zenwork has garnered significant acclaim. Backed by Spectrum Equity Partners, Zenwork has successfully raised over $163M in funding, maintaining profitability as a late-stage entity with operations in both the U.S. and India. Location: Zenwork, Financial District, Manikonda, Hyderabad Experience: 9+ Years Job Type: Full-time Employment Type: Full-time | Work-From-Office About the Role We are seeking a highly skilled Software Architect to lead the design and development of scalable, high-performance applications for our product-based software company. The ideal candidate should have deep expertise in .NET, .NET Core, SQL, Redis, queuing systems, and AWS, with a strong foundation in modern software design principles, cloud-native solutions, and distributed architectures. Key Responsibilities Architect & Design: Develop scalable, high-performance software architectures for enterprise applications. Technology Leadership: Guide development teams in best practices for .NET, .NET Core, microservices, and cloud-based architectures. Cloud & Infrastructure: Design cloud-native solutions using AWS (EC2, S3, Lambda, RDS, DynamoDB, etc.). Database Management: Optimize performance and scalability of SQL Server and Redis. Performance Optimization: Implement caching (Redis), queuing (Kafka/RabbitMQ/Azure Service Bus), and event-driven architectures. Security & Compliance: Ensure best practices for security, data protection, and compliance. Mentorship: Lead engineering teams, conduct code reviews, and enforce architectural standards. Innovation & Research: Stay updated with emerging technologies and integrate them into system design. Required Skills & Experience 10+ years of software development experience, with at least 3+ years as a Software Architect. Strong expertise in .NET, .NET Core, C#, and microservices architecture. Proficiency in SQL Server, Redis, and NoSQL databases. Hands-on experience with AWS cloud services. Expertise in event-driven architectures, queuing systems (Kafka, RabbitMQ, Azure Service Bus, SQS, etc.). Understanding of DevOps, CI/CD, containerization (Docker, Kubernetes) is a plus. Excellent problem-solving and decision-making skills. Strong leadership and communication skills to drive collaboration across teams. Why Join Us? Work in an innovative product-based company solving real-world challenges. Collaborate with top engineering talent and drive technology decisions. Competitive compensation, career growth opportunities, and work-life balance.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and Data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML / AI Pipeline Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL Database administration Data modelling RDS & DMS Serverless Architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift, Athena experience. · Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. · 3+ Years of Data Warehouse Experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing · Extensive experience working with cloud services (AWS or MS Azure or GCS etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3) etc. · Experience/Exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and Data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML / AI Pipeline Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL Database administration Data modelling RDS & DMS Serverless Architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift, Athena experience. · Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. · 3+ Years of Data Warehouse Experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing · Extensive experience working with cloud services (AWS or MS Azure or GCS etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3) etc. · Experience/Exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Software Engineer/ Developer - 2 Responsibilities: Lead design and delivery of complex end-to-end features across frontend, backend, and data layers. Make strategic architectural decisions on frameworks, datastores, and performance patterns. Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns. Build and maintain shared UI component libraries and backend service frameworks for team reuse. Identify and eliminate performance bottlenecks in both browser rendering and server throughput. Instrument services with metrics and logging, driving SLIs, SLAs, and observability. Define and enforce comprehensive testing strategies: unit, integration, and end-to-end. Own CI/CD pipelines, automating builds, deployments, and rollback procedures. Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Partner with Product, UX, and Ops to translate business objectives into technical roadmaps. Facilitate sprint planning, estimation, and retrospectives for predictable deliveries. Mentor and guide SDE-1s and interns; participate in hiring. Qualifications & Skills: 3–5 years building production Full stack applications end-to-end with measurable impact. Proven leadership in Agile/Scrum environments with a passion for continuous learning. Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies. Proficient in Node.js (Express/NestJS) or Python (Django/Flask/FastAPI) or Java (Spring Boot). Expert in designing RESTful and GraphQL APIs and scalable database schemas. Knowledge of MySQL/PostgreSQL indexing, NoSQL (ElasticSearch/DynamoDB), and caching (Redis). Knowledge of Containerization (Docker) and commonly used AWS services such as lambda, ec2, s3, apigateway etc. Skilled in unit/integration (Jest, pytest) and E2E testing (Cypress, Playwright). Frontend profiling (Lighthouse) and backend tracing for performance tuning. Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes. Strong communicator able to convey technical trade-offs to non-technical stakeholders. Experience in reviewing pull requests and providing constructive feedback to the team. Qualities we'd love to find in you: The attitude to always strive for the best outcomes and an enthusiasm to deliver high quality software Strong collaboration abilities and a flexible & friendly approach to working with teams Strong determination with a constant eye on solutions Creative ideas with problem solving mind-set Be open to receiving objective criticism and improving upon it Eagerness to learn and zeal to grow Strong communication skills is a huge plus Work Location: Hyderabad

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and Data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML / AI Pipeline Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL Database administration Data modelling RDS & DMS Serverless Architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift, Athena experience. · Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. · 3+ Years of Data Warehouse Experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing · Extensive experience working with cloud services (AWS or MS Azure or GCS etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3) etc. · Experience/Exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Experience: 7 to 13 Years Only Job Location: PAN India TCS Hiring for AWS Cloud Data Engineer_Redshift_PAN India Required Technical Skill Set: Working on EMR, good knowledge of CDK and setting up ETL and Data pipelines Coding - Python AWS EMR, Athena, Glue, SageMaker, SageMaker Studio Data security & encryption ML / AI Pipeline Redshift AWS Lambda Nice to have skills & experience: Oracle/SQL Database administration Data modelling RDS & DMS Serverless Architecture DevOps 3+ years of industry experience in Data Engineering on AWS cloud with Glue, Redshift, Athena experience. · Ability to write high-quality, maintainable, and robust code, often in SQL, Scala and Python. · 3+ Years of Data Warehouse Experience with Oracle, Redshift, PostgreSQL, etc. Demonstrated strength in SQL, Python/PySpark scripting, data modeling, ETL development, and data warehousing · Extensive experience working with cloud services (AWS or MS Azure or GCS etc.) with a strong understanding of cloud databases (e.g. Redshift/Aurora/DynamoDB), compute engines (e.g. EMR/Glue), data streaming (e.g. Kinesis), storage (e.g. S3) etc. · Experience/Exposure using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.) Kind Regards, Priyankha M

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Roles & Responsibilities Java Developer/Senior Developer Responsibilities Design and develop software applications using Java and related technologies Develop new product features and enhancements to the existing features Work with development teams and product managers for software solutions Design & develop client-side and server-side components. Develop and manage well-functioning databases and applications Write effective APIs, database queries, server-side code, and client-side code Test software to ensure responsiveness and efficiency Troubleshoot, debug, and upgrade software Create security and data protection settings Build features and applications with a mobile responsive design Write technical documentation Excellent verbal communication skills. Good analytical & problem-solving skills. Qualifications Fully hands-on frontend & backend. 6 to 10 years of professional experience in software development using Java Proficiency in Spring, Hibernate, Spring Boot, Oracle, DynamoDB, Redis Extensive experience with front-end technologies such as JavaScript/ExtJS/Angular/ReactJS Experience with AWS is an added advantage; appropriate training can be provided Bachelor's or Master's degree in Computer Science or a related field Strong understanding of object-oriented programming principles and design patterns Ability to work both independently and as part of a team. Strong problem-solving and critical-thinking skills Strong attention to detail and ability to write clean, maintainable code. JD: 6-8 years of experience in Core Java and databases, along with experience in one or more front-end technologies Develop, write, and implement software programming applications following established architecture standards as well as software development methodologies. Write high-quality code using the guidelines and effectively debug the code. Document the code changes, package the code, and unit test it. Perform assigned tasks as per their prioritization and implement solutions. Act upon the suggestions provided during the code reviews. Work with the application development team in delivering the project and deliverables within time and with good quality. Assess requirements for new and enhanced functionalities; identify the impact on existing applications, operating systems, hardware, and network. Perform unit testing and ensure quality assurance of applications through system testing. Keep up to date with the latest technologies and trends and provide inputs/recommendations to the project manager or the architect as required. Coordinate and communicate with the other tracks and disciplines involved in the project. Knowledge And Skills Must-Have: Working experience on Core Java, REST API, JUnit/Mockito, Spring/Hibernate. Strong in coding and good with concepts like OOP, Data Structures, Multithreading, and Design Patterns Spring Boot 2.0.x (web packages) for building RESTful application components. Exposure to any front-end tech like ExtJS/Angular/ReactJS/JavaScript Good exposure to database queries and comfortable with joins. Practical exposure to operating code repositories like Bitbucket, GitHub, etc.
Must have exposure to different application servers, configuration systems Experience 6-8 Years Skills Primary Skill: Java Development Sub Skill(s): Java Development Additional Skill(s): Core Java, Eclipse, Hibernate, J2EE, Java Application/Web Server, Spring, Spring Boot, JavaScript Development About The Company Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TCS Hiring for AWS DevOps with Terraform_PAN India Experience: 5 to 10 Years Only Job Location: PAN India TCS Hiring for AWS DevOps with Terraform_PAN India Required Technical Skill Set: Desired Competencies (Technical/Behavioral Competency) Must-Have: • Should have expertise in AWS services such as IAM, EC2, S3, RDS, ELB, EBS, CloudWatch, CloudTrail, ACM, VPC, Lambda, DynamoDB, SNS • Understanding of AWS networking: AWS Load Balancer, VPC, Hosted Zone, Direct Connect, Virtual Interface, VPC Flow Logs • Proficient in Infrastructure as Code (Terraform/CloudFormation) Good-to-Have • Basic knowledge of Red Hat Linux and Windows Operating Systems • Good with the AWS Console, CLI, and APIs • Experience in migration using AWS migration tools. • Hands-on experience with DevOps tools like Jenkins and Git will be an added advantage. Kind Regards, Priyankha M

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS Hiring for AWS DevOps with Terraform_PAN India Experience: 5 to 10 Years Only Job Location: PAN India TCS Hiring for AWS DevOps with Terraform_PAN India Required Technical Skill Set: Desired Competencies (Technical/Behavioral Competency) Must-Have: • Should have expertise in AWS services such as IAM, EC2, S3, RDS, ELB, EBS, CloudWatch, CloudTrail, ACM, VPC, Lambda, DynamoDB, SNS • Understanding of AWS networking: AWS Load Balancer, VPC, Hosted Zone, Direct Connect, Virtual Interface, VPC Flow Logs • Proficient in Infrastructure as Code (Terraform/CloudFormation) Good-to-Have • Basic knowledge of Red Hat Linux and Windows Operating Systems • Good with the AWS Console, CLI, and APIs • Experience in migration using AWS migration tools. • Hands-on experience with DevOps tools like Jenkins and Git will be an added advantage. Kind Regards, Priyankha M

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role Grade Level (for internal use): 10 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative. What’s In It For You Drive solutions at enterprise scale within a global organization Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers) Solve high-complexity, high-impact problems from end to end Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines Responsibilities Architect, develop, and operate robust data extraction and automation pipelines in production Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring) Lead full lifecycle delivery of complex data projects, including: Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB) Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation Define and continuously improve platform standards, coding guidelines, and operational runbooks Conduct code reviews, pair programming sessions, and provide technical mentorship Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs Technical Requirements 4-8 years' hands-on experience in data engineering, with a proven track record on critical projects Expert in Python for building extraction libraries, RESTful APIs, and automation scripts Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced) Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints) Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies Advanced testing practices: unit, integration, and load testing; high coverage enforcement Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack) Excellent debugging, performance-tuning, and automation capabilities Openness to evaluate and adopt emerging tools, languages, and frameworks Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or a related field Prior contributions to open-source projects, GitHub repos, or technical publications Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi) Familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For
You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
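
For readers unfamiliar with the stack this listing names, the sketch below shows, in rough outline, how a Celery task on a Redis broker might fetch a source page, run extraction, and persist the result to S3 with boto3. It is a minimal illustration only, not S&P Global's code; the worker name, bucket, and parse_document helper are hypothetical.

```python
# Minimal sketch of an extraction task on the stack named in the listing above:
# Celery with a Redis broker, results written to S3 via boto3. Illustrative only.
import json

import boto3
import requests
from celery import Celery

app = Celery("extraction-worker", broker="redis://localhost:6379/0")
s3 = boto3.client("s3")


def parse_document(html: str) -> dict:
    # Placeholder for real extraction logic (XPath, regex, ML inference, ...).
    return {"length": len(html)}


@app.task(bind=True, max_retries=3, default_retry_delay=30)
def extract_and_store(self, url: str, bucket: str, key: str) -> str:
    """Fetch a source page, extract fields, and persist the result to S3."""
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        record = parse_document(response.text)
        s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(record).encode("utf-8"))
        return f"s3://{bucket}/{key}"
    except requests.RequestException as exc:
        # Transient network failures are retried by Celery with the delay above.
        raise self.retry(exc=exc)
```

In practice such tasks are fanned out by an orchestrator (Airflow in this listing) and monitored through the logging and metrics the role also covers.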

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

TCS Hiring for AWS DevOps with Terraform_PAN India Experience: 5 to 10 Years Only Job Location: PAN India Required Technical Skill Set / Desired Competencies (Technical/Behavioral): Must-Have: • Expertise in AWS services such as IAM, EC2, S3, RDS, ELB, EBS, CloudWatch, CloudTrail, ACM, VPC, Lambda, DynamoDB, and SNS • Understanding of AWS networking: load balancers, VPCs, hosted zones, Direct Connect, virtual interfaces, and VPC Flow Logs • Proficiency in Infrastructure as Code (Terraform/CloudFormation) Good-to-Have: • Basic knowledge of Red Hat Linux and Windows operating systems • Comfortable with the AWS Management Console, the AWS CLI, and APIs • Experience with migrations using AWS migration tools • Hands-on experience with DevOps tools such as Jenkins and Git is an added advantage. Kind Regards, Priyankha M
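
As a rough illustration of the AWS API familiarity the listing above asks for (alongside Terraform/CloudFormation, which define the infrastructure itself), the boto3 snippet below simply inventories a few of the named services. It assumes credentials and a region are already configured and is not part of the TCS role description.

```python
# Hedged sketch: read-only boto3 inventory of a few services named above
# (VPC, EC2, S3, CloudWatch). The region is an assumption.
import boto3

session = boto3.Session(region_name="ap-south-1")
ec2 = session.client("ec2")
s3 = session.client("s3")
cloudwatch = session.client("cloudwatch")

# VPCs and their CIDR ranges
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(vpc["VpcId"], vpc["CidrBlock"])

# Running EC2 instances
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])

# S3 buckets and existing CloudWatch metric alarms
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
print([a["AlarmName"] for a in cloudwatch.describe_alarms()["MetricAlarms"]])
```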

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

TCS Hiring for AWS DevOps with Terraform_PAN India Experience: 5 to 10 Years Only Job Location: PAN India Required Technical Skill Set / Desired Competencies (Technical/Behavioral): Must-Have: • Expertise in AWS services such as IAM, EC2, S3, RDS, ELB, EBS, CloudWatch, CloudTrail, ACM, VPC, Lambda, DynamoDB, and SNS • Understanding of AWS networking: load balancers, VPCs, hosted zones, Direct Connect, virtual interfaces, and VPC Flow Logs • Proficiency in Infrastructure as Code (Terraform/CloudFormation) Good-to-Have: • Basic knowledge of Red Hat Linux and Windows operating systems • Comfortable with the AWS Management Console, the AWS CLI, and APIs • Experience with migrations using AWS migration tools • Hands-on experience with DevOps tools such as Jenkins and Git is an added advantage. Kind Regards, Priyankha M

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS Hiring for AWS DevOps with Terraform_PAN India Experience: 5 to 10 Years Only Job Location: PAN India Required Technical Skill Set / Desired Competencies (Technical/Behavioral): Must-Have: • Expertise in AWS services such as IAM, EC2, S3, RDS, ELB, EBS, CloudWatch, CloudTrail, ACM, VPC, Lambda, DynamoDB, and SNS • Understanding of AWS networking: load balancers, VPCs, hosted zones, Direct Connect, virtual interfaces, and VPC Flow Logs • Proficiency in Infrastructure as Code (Terraform/CloudFormation) Good-to-Have: • Basic knowledge of Red Hat Linux and Windows operating systems • Comfortable with the AWS Management Console, the AWS CLI, and APIs • Experience with migrations using AWS migration tools • Hands-on experience with DevOps tools such as Jenkins and Git is an added advantage. Kind Regards, Priyankha M

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Company: Our Client is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries. Job Title: Java Developer Location: Hyderabad, Pune Experience: 5+ yrs Job Type: Contract-to-hire Notice Period: Immediate joiners Mandatory Skills: Java, AWS, Spring Boot Key Requirements: • 5+ years of software development experience with a focus on AWS cloud development and distributed application development with Java & J2EE. • 1+ years of experience in AWS development using TypeScript, or a willingness to learn it, as TypeScript is the preferred language for AWS development per Principal standards. • Hands-on experience deploying applications on AWS cloud infrastructure (e.g., EC2, Lambda, S3, DynamoDB, RDS, API Gateway, EventBridge, SQS, SNS, Fargate, etc.). • Strong hands-on experience in Java/J2EE, Spring, and Spring Boot development, and a good understanding of serverless computing. • Experience with REST APIs and Java shared libraries. Good to have: • AWS Cloud Practitioner, AWS Certified Developer, or AWS Certified Solutions Architect certification is a plus. Responsibilities: • Implement cloud applications using AWS services, TypeScript, and Java. • Write clean, maintainable, and efficient code while adhering to best practices and coding standards. • Work closely with product managers and engineers to define and refine requirements. • Provide technical guidance and mentorship to junior engineers on the team. • Troubleshoot and resolve complex technical issues and performance bottlenecks. • Create and maintain technical documentation for code and processes. • Stay up-to-date with industry trends and emerging technologies to continuously improve our development practices. Qualifications: • Bachelor's degree in Engineering or a related field.
Seniority Level: Mid-Senior level Industry: IT Services and IT Consulting Employment Type: Contract Job Functions: Business Development, Consulting Skills: Java, AWS, Spring Boot

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About The Role Grade Level (for internal use): 10 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative. What’s In It For You Drive solutions at enterprise scale within a global organization Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers) Solve high-complexity, high-impact problems from end to end Shape the future of our data platform-build, test, deploy, and maintain production-ready pipelines Responsibilities Architect, develop, and operate robust data extraction and automation pipelines in production Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring) Lead full lifecycle delivery of complex data projects, including: Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB) Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation Define-and continuously improve-platform standards, coding guidelines, and operational runbooks Conduct code reviews, pair programming sessions, and provide technical mentorship Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs Technical Requirements 4-8 years' hands-on experience in data engineering, with proven track record on critical projects Expert in Python for building extraction libraries, RESTful APIs, and automation scripts Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced) Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints) Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies Advanced testing practices: unit, integration, and load testing; high coverage enforcement Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack) Excellent debugging, performance-tuning, and automation capabilities Openness to evaluate and adopt emerging tools, languages, and frameworks Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or a related field Prior contributions to open-source projects, GitHub repos, or technical publications Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi) Familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For 
You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
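
A minimal sketch of the Airflow orchestration this role mentions, assuming Airflow 2.4+ and purely hypothetical task names; real pipelines would call the extraction and ML-scoring code rather than print placeholders.

```python
# Illustrative Airflow DAG only (not S&P Global's code): a daily
# extract -> score -> load flow of the kind the listing above describes.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull raw documents from the source systems")


def score(**context):
    print("run batch inference with the deployed ML model")


def load(**context):
    print("write curated records to the warehouse / S3")


default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="extraction_ml_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    score_task = PythonOperator(task_id="score", python_callable=score)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> score_task >> load_task
```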

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
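
For the testing requirement this listing repeats (pytest with high coverage), a small illustrative test module might look like the following; the parse_price helper is invented purely to show the structure, not taken from the posting.

```python
# Illustrative unit tests for a toy extraction helper, in the pytest style the
# listing asks for. The helper itself is hypothetical.
import pytest


def parse_price(raw: str) -> float:
    """Normalise a price string such as '1,234.50 USD' into a float."""
    cleaned = raw.replace(",", "").replace("USD", "").strip()
    if not cleaned:
        raise ValueError("empty price field")
    return float(cleaned)


@pytest.mark.parametrize(
    "raw, expected",
    [
        ("1,234.50 USD", 1234.50),
        ("99", 99.0),
        ("  0.5 USD ", 0.5),
    ],
)
def test_parse_price_valid(raw, expected):
    assert parse_price(raw) == pytest.approx(expected)


def test_parse_price_rejects_empty():
    with pytest.raises(ValueError):
        parse_price("   ")
```

Running such a suite under pytest with a coverage plugin is one common way teams enforce the high-coverage expectation the listing mentions.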

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Gurugram, Haryana

On-site

About the Role: Grade Level (for internal use): 10 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative. What’s in it for you: Drive solutions at enterprise scale within a global organization Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers) Solve high-complexity, high-impact problems from end to end Shape the future of our data platform-build, test, deploy, and maintain production-ready pipelines Responsibilities: Architect, develop, and operate robust data extraction and automation pipelines in production Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring) Lead full lifecycle delivery of complex data projects, including: Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB) Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation Define-and continuously improve-platform standards, coding guidelines, and operational runbooks Conduct code reviews, pair programming sessions, and provide technical mentorship Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs Technical Requirements: 4-8 years' hands-on experience in data engineering, with proven track record on critical projects Expert in Python for building extraction libraries, RESTful APIs, and automation scripts Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced) Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints) Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies Advanced testing practices: unit, integration, and load testing; high coverage enforcement Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack) Excellent debugging, performance-tuning, and automation capabilities Openness to evaluate and adopt emerging tools, languages, and frameworks Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field Prior contributions to open-source projects, GitHub repos, or technical publications Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi) Familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It 
For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
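
One way to picture the "ML models behind ECS/Lambda endpoints" requirement in the listing above is a Lambda handler that loads a pickled model from S3 and serves predictions through API Gateway. The sketch below is hedged and illustrative: the bucket, key, and the scikit-learn-style predict interface are assumptions, not details from the posting.

```python
# Hedged sketch of a Lambda-style inference endpoint. Model location and the
# sklearn-like .predict() interface are assumptions for illustration only.
import json
import os
import pickle

import boto3

_MODEL = None  # cached across warm invocations


def _load_model():
    global _MODEL
    if _MODEL is None:
        s3 = boto3.client("s3")
        obj = s3.get_object(
            Bucket=os.environ.get("MODEL_BUCKET", "example-model-bucket"),
            Key=os.environ.get("MODEL_KEY", "models/classifier.pkl"),
        )
        _MODEL = pickle.loads(obj["Body"].read())
    return _MODEL


def handler(event, context):
    """API Gateway proxy event in, JSON prediction out."""
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])
    model = _load_model()
    prediction = model.predict([features])[0]  # assumes an sklearn-style model
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": float(prediction)}),
    }
```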

Posted 1 month ago

Apply

0.0 - 6.0 years

0 Lacs

Gurugram, Haryana

On-site

Data Engineer Gurgaon, India; Ahmedabad, India; Hyderabad, India; Virtual, Gurgaon, India Information Technology 317425 Job Description About The Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. 
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. 
Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
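
The observability bullet in this listing (logging, metrics, and automated alerts) can be made concrete with a small, assumed example: publishing custom CloudWatch metrics after each pipeline run, which an alarm could then watch. The namespace, metric names, and pipeline name below are hypothetical.

```python
# Illustrative sketch: publish custom CloudWatch metrics for an extraction run.
# An alarm on FailedDocuments would provide the "automated alerts" named above.
import boto3

cloudwatch = boto3.client("cloudwatch")


def report_extraction_run(pipeline: str, records: int, failures: int) -> None:
    cloudwatch.put_metric_data(
        Namespace="DataExtraction",
        MetricData=[
            {
                "MetricName": "RecordsExtracted",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": records,
                "Unit": "Count",
            },
            {
                "MetricName": "FailedDocuments",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": failures,
                "Unit": "Count",
            },
        ],
    )


if __name__ == "__main__":
    report_extraction_run("filings-crawler", records=1250, failures=3)
```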

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies