5.0 - 7.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQS. Strong skills in data engineering on AWS, with proficiency in Python, PySpark, and SQL. Experience with batch job scheduling and managing data dependencies. Knowledge of data processing tools like Spark and Airflow. Automate repetitive tasks and build reusable frameworks to improve efficiency. Provide Run/DevOps support and manage the ongoing operation of data services. (Immediate joiners.) Location: Bengaluru, Mumbai, Pune, Chennai, Kolkata, Hyderabad.
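The batch job scheduling and dependency management this role asks for boils down to running tasks in an order that respects their upstream dependencies, the same model Airflow DAGs use. A minimal sketch with the standard library's `graphlib` (job names are illustrative, not from the posting):

```python
from graphlib import TopologicalSorter

# Hypothetical batch jobs mapped to their upstream dependencies:
# "transform" cannot start until "extract" has finished, and so on.
dependencies = {
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"transform", "validate"},
}

def run_order(deps):
    """Return one valid execution order respecting all dependencies."""
    return list(TopologicalSorter(deps).static_order())

print(run_order(dependencies))  # ['extract', 'transform', 'validate', 'load']
```

An orchestrator like Airflow adds retries, schedules, and parallelism on top, but the ordering guarantee is exactly this topological sort.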
Posted 3 months ago
5.0 - 7.0 years
2 - 6 Lacs
Mumbai
Work from Office
Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQS. Strong skills in data engineering on AWS, with proficiency in Python, PySpark, and SQL. Experience with batch job scheduling and managing data dependencies. Knowledge of data processing tools like Spark and Airflow. Automate repetitive tasks and build reusable frameworks to improve efficiency. Provide Run/DevOps support and manage the ongoing operation of data services. (Immediate joiners.)
Posted 3 months ago
6.0 - 11.0 years
12 - 20 Lacs
Coimbatore
Remote
Job Description: We are hiring Python Developers across multiple levels to build and scale modern applications. The role involves working on backend systems, microservices, and data platforms while collaborating with cross-functional teams. Responsibilities: Design, develop, and optimize scalable backend systems using Python or C#. Implement and manage microservices architecture with FastAPI, Kafka, PostgreSQL, and Redis. Work with modern data stacks (DBT, Snowflake, AWS Glue). Collaborate with product and engineering teams for solution design and delivery. Ensure system performance, reliability, and scalability. Required Skills: Strong hands-on experience in Python or C#, FastAPI, Kafka, Po...
Posted 3 months ago
6.0 - 9.0 years
10 - 15 Lacs
Noida
Work from Office
Design and implement scalable data processing solutions using Apache Spark and Java. Develop and maintain high-performance backend services and APIs. Collaborate with data scientists, analysts, and other engineers to understand data requirements. Optimize Spark jobs for performance and cost-efficiency. Ensure code quality through unit testing, integration testing, and code reviews. Work with large-scale datasets in distributed environments (e.g., Hadoop, AWS EMR, Databricks). Monitor and troubleshoot production systems and pipelines. Experience in Agile development process. Experience leading a 3-5 member team on the technology front. Excellent communication skills, problem solving and debugg...
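The Spark optimization work described above often comes down to aggregating within each partition before shuffling (what `reduceByKey` does versus `groupByKey`). A plain-Python sketch of that map-side/reduce-side split; the records and partition split are illustrative, and a real job would use the Spark RDD or DataFrame API:

```python
from functools import reduce

# Illustrative (key, value) records; in Spark these would live in a
# partitioned RDD/DataFrame rather than a Python list.
records = [("web", 3), ("app", 5), ("web", 2), ("app", 1), ("api", 4)]

def partial_counts(partition):
    """Map side: aggregate within one partition before any shuffle."""
    acc = {}
    for key, value in partition:
        acc[key] = acc.get(key, 0) + value
    return acc

def merge(a, b):
    """Reduce side: combine per-partition results (like reduceByKey)."""
    out = dict(a)
    for key, value in b.items():
        out[key] = out.get(key, 0) + value
    return out

# Split into two "partitions" and combine, mimicking Spark's shuffle.
partitions = [records[:3], records[3:]]
totals = reduce(merge, (partial_counts(p) for p in partitions))
print(totals)  # {'web': 5, 'app': 6, 'api': 4}
```

The cost saving is in the shuffle: only one small dict per partition crosses the network instead of every raw record.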
Posted 3 months ago
6.0 - 10.0 years
5 - 9 Lacs
Noida
Work from Office
AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills; can deliver independently. Mandatory competencies: Cloud - AWS (TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift); Big Data (PySpark); Behavioural (Communication); Database (SQL programming). Perks and benefits: for Irisians, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health ...
Posted 3 months ago
10.0 - 20.0 years
50 - 65 Lacs
Hyderabad
Hybrid
Key Responsibilities: Team Leadership & Technical Mentorship: Lead, manage, and mentor a team of data engineers, fostering a culture of technical excellence and collaborative problem-solving. Conduct code reviews, provide architectural guidance. Hands-On Technical Contribution: Architecting and implementing critical components of the data platform. Expertise with PySpark, Scala, and SQL will be essential. Architectural Ownership: Design and own the roadmap for the data ecosystem. Lead the architecture of scalable and resilient data solutions, including Data Lakehouse, real-time streaming services using Kafka, and large-scale batch processing jobs using Apache Spark. Platform Modernization...
Posted 3 months ago
7.0 - 12.0 years
30 - 40 Lacs
Hyderabad, Pune, Delhi/NCR
Hybrid
Support enhancements to the MDM and Performance platform. Track system performance, troubleshoot issues, and resolve production issues. Required candidate profile: 5+ years in Python and advanced SQL, including profiling and refactoring; experience with REST APIs and hands-on AWS Glue, EMR, etc.; experience with Markit EDM, Semarchy, or another MDM tool will be a plus.
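The SQL profiling this profile asks for usually starts with reading the query plan to confirm an index is actually used. A minimal sketch using `sqlite3` from the standard library as a stand-in for the production database; the table, index, and query are illustrative:

```python
import sqlite3

# SQLite stands in for the production database; EXPLAIN QUERY PLAN
# shows whether the planner will use an index, the first thing to
# check when profiling a slow lookup before refactoring it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (account_id INTEGER, symbol TEXT, qty REAL)")
conn.execute("CREATE INDEX idx_positions_account ON positions (account_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT symbol, qty FROM positions WHERE account_id = ?",
    (42,),
).fetchall()
detail = plan[0][-1]  # human-readable plan text, names the index used
print(detail)
```

On PostgreSQL the equivalent is `EXPLAIN (ANALYZE, BUFFERS)`, which also reports actual row counts and timing rather than just the chosen plan.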
Posted 3 months ago
9.0 - 14.0 years
20 - 35 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant, Data Engineer (AWS, Python, Spark, Kafka) for ETL! Responsibilities: Develop, deploy, and manage ETL...
Posted 3 months ago
5.0 - 10.0 years
15 - 25 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant, Data Engineer (AWS, Python, Spark, Kafka) for ETL! Responsibilities: Develop, deploy, and manage ETL...
Posted 3 months ago
7.0 - 12.0 years
15 - 25 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant, Data Engineer (AWS, Python, Spark, Kafka) for ETL! Responsibilities: Develop, deploy, and manage ETL...
Posted 3 months ago
7.0 - 12.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must-have skills: AWS Glue. Good-to-have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As part of a Data Transformation programme, you will join the Data Marketplace team. In this team you will be responsible for the design and implementation of a dashboard for assessing compliance with controls and policies at various stages of the data product lifecycle, with centralised compliance scoring. Experience with the data product lifecycle is preferable. Example sk...
Posted 3 months ago
4.0 - 6.0 years
8 - 15 Lacs
Hyderabad
Work from Office
Job Summary: We are seeking an experienced ETL Developer with 4+ years of expertise in building scalable data pipelines using AWS Glue, PySpark, and Python. The role involves migrating data from sources like S3 and SQL Server to PostgreSQL, ensuring high performance, data quality, and compliance within a modern AWS-based data ecosystem. Key Responsibilities: Design, develop, and maintain ETL pipelines using AWS Glue, PySpark, and Python. Migrate large-scale datasets from Amazon S3 or SQL Server to PostgreSQL (RDS/on-prem/cloud). Optimize data extraction, transformation, and loading for performance and cost efficiency. Implement data validation, reconciliation, ...
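The data validation and reconciliation step in a migration like the one above typically compares row counts, key sets, and a summed measure between source and target after each load. A minimal sketch with in-memory lists standing in for the S3/SQL Server source and the PostgreSQL target (the column names are illustrative):

```python
# In-memory rows stand in for the source and target tables.
source_rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 7.5}, {"id": 3, "amount": 3.25}]
target_rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 7.5}, {"id": 3, "amount": 3.25}]

def reconcile(source, target, key="id", measure="amount"):
    """Compare row counts, key coverage, and a summed measure column."""
    report = {
        "count_match": len(source) == len(target),
        "missing_keys": {r[key] for r in source} - {r[key] for r in target},
        "sum_delta": round(sum(r[measure] for r in source)
                           - sum(r[measure] for r in target), 6),
    }
    report["ok"] = (report["count_match"]
                    and not report["missing_keys"]
                    and report["sum_delta"] == 0)
    return report

print(reconcile(source_rows, target_rows)["ok"])  # True
```

In production the same three checks would run as SQL aggregates on both databases, so only the summary numbers cross the network rather than every row.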
Posted 3 months ago
5.0 - 7.0 years
2 - 6 Lacs
Pune
Work from Office
Hands-on experience with AWS services including S3, Lambda, Glue, API Gateway, and SQS. Strong skills in data engineering on AWS, with proficiency in Python, PySpark, and SQL. Experience with batch job scheduling and managing data dependencies. Knowledge of data processing tools like Spark and Airflow. Automate repetitive tasks and build reusable frameworks to improve efficiency. Provide Run/DevOps support and manage the ongoing operation of data services. (Immediate joiners.)
Posted 3 months ago
3.0 - 7.0 years
8 - 12 Lacs
Pune, Chennai, Bengaluru
Work from Office
About The Role Location: Bangalore Experience: 6 - 9 Years Role Overview: We are looking for a Data Engineer with strong expertise in Python, PySpark, AWS Glue, and Lambda. The ideal candidate should have experience in data processing, transformation, and cloud-based data workflows. A strong foundation in PL/SQL and PostgreSQL is required, and experience with AWS services such as SNS and Step Functions, along with Java API development, would be a plus. Key Responsibilities: Develop and optimize ETL pipelines using AWS Glue, Lambda, and PySpark. Work with large-scale data processing and transformation using Python and Spark. Design and implement data workflows for structured and unstruct...
Posted 3 months ago
5.0 - 10.0 years
4 - 8 Lacs
Noida, Hyderabad, Bengaluru
Work from Office
Your Role: IT experience with a minimum of 5+ years in creating data warehouses, data lakes, ETL/ELT, and data pipelines on the cloud. Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP; preferably in the Life Sciences domain. Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure...
Posted 3 months ago
7.0 - 10.0 years
25 - 32 Lacs
Bengaluru
Work from Office
This is a 100% WFO opportunity at our Whitefield, Bengaluru office. Please don't apply if you can't comply. Position Overview: We seek a highly skilled and experienced Data Engineering Lead to join our team. This role demands deep technical expertise in Apache Spark, Hive, Trino (formerly Presto), Python, AWS Glue, and the broader AWS ecosystem. The ideal candidate will possess strong hands-on skills and the ability to design and implement scalable data solutions, optimise performance, and lead a high-performing team to deliver data-driven insights. Key Responsibilities: Technical Leadership: Lead and mentor a team of data engineers, fostering best practices in coding, design, and delivery. Dri...
Posted 3 months ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Developer in AWS Data Engineering, you will have the opportunity to design, build, and optimize large-scale data pipelines on AWS, enabling high-performance analytics and AI-ready data platforms. **Key Responsibilities:** - Build & manage data pipelines using Glue, Spark & S3 - Optimize Redshift queries, schema & performance - Support large-scale data ingestion & transformation - Integrate with upstream APIs & external platforms - Ensure data reliability, observability & monitoring **Qualifications Required:** - Strong expertise in AWS Glue, S3 & Redshift - Proficiency in Python, SQL & Spark - Solid understanding of data warehouse & lake concepts - Experience with incremental pip...
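The "incremental pipelines" this role mentions usually mean ingesting only rows newer than a persisted high-water mark, instead of reprocessing everything. A plain-Python sketch under that assumption; in practice the records would come from an upstream API or an S3 landing zone, and the watermark would be stored in a control table:

```python
from datetime import datetime

# Illustrative change records; in production these would come from an
# upstream API or S3 landing zone rather than a Python list.
records = [
    {"id": 1, "updated_at": "2024-01-01T00:00:00"},
    {"id": 2, "updated_at": "2024-01-02T12:00:00"},
    {"id": 3, "updated_at": "2024-01-03T08:30:00"},
]

def incremental_batch(rows, watermark):
    """Select rows newer than the last successful watermark and
    return the new watermark to persist for the next run."""
    fresh = [r for r in rows
             if datetime.fromisoformat(r["updated_at"]) > watermark]
    new_watermark = max(
        (datetime.fromisoformat(r["updated_at"]) for r in fresh),
        default=watermark,  # no new rows: keep the old watermark
    )
    return fresh, new_watermark

batch, wm = incremental_batch(records, datetime(2024, 1, 2))
print([r["id"] for r in batch])  # [2, 3]
```

Advancing the watermark only after a successful load is what makes the pipeline safe to re-run: a failed run simply reprocesses the same window.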
Posted 3 months ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Engineer at our company, you will play a crucial role in leading the design, development, and optimization of scalable, secure, and high-performance data solutions in Databricks. Your responsibilities will include: - Leading Data Architecture & Development by designing and optimizing data solutions in Databricks - Building and maintaining ETL/ELT pipelines, integrating Databricks with Azure Data Factory or AWS Glue - Implementing machine learning models and AI-powered solutions for business innovation - Collaborating with data scientists, analysts, and business teams to translate requirements into technical designs - Enforcing data validation, governance, and security best p...
Posted 3 months ago
10.0 - 15.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Duties: Wyndham Group of Hotels is advancing its modern data platform on Databricks (AWS) and is seeking a senior offshore data engineering lead to build high-performance, production-grade data pipelines. This role is not migration-focused initially but will evolve into one. The current mandate is to ingest semi-structured JSON data dropped via SFTP to S3, transform and validate it using Databricks, and deliver curated datasets into Amazon Redshift. This is a player-coach role, ideal for a technically hands-on leader capable of delivering pipelines while mentoring junior engineers and ensuring best practices across the offshore team. Initial Scope and Responsibilities: Develop high-perfor...
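The validate-then-curate step for semi-structured JSON described above can be sketched in a few lines: parse each incoming line, keep records that satisfy the expected schema, and quarantine the rest for inspection. The field names below are illustrative, not the actual hotel schema:

```python
import json

# Hypothetical required fields for a curated booking record.
REQUIRED = {"booking_id", "hotel_code", "checkin"}

def curate(raw_lines):
    """Parse JSON lines, keep schema-valid records, quarantine the rest."""
    valid, rejected = [], []
    for line in raw_lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            rejected.append(line)  # unparseable: send to a reject bucket
            continue
        if REQUIRED <= rec.keys():
            valid.append(rec)
        else:
            rejected.append(line)  # parseable but incomplete
    return valid, rejected

lines = [
    '{"booking_id": 1, "hotel_code": "BLR01", "checkin": "2024-05-01"}',
    '{"booking_id": 2}',
    'not json',
]
good, bad = curate(lines)
print(len(good), len(bad))  # 1 2
```

In Databricks the same split is usually expressed with a schema on read plus a `badRecordsPath` or rescue column, but the contract is identical: curated rows go forward to Redshift, rejects go to a quarantine location.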
Posted 3 months ago
1.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Duties: Migrate ETL workflows from SAP BODS to AWS Glue/dbt/Talend. Develop and maintain scalable ETL pipelines in AWS. Write PySpark scripts for large-scale data processing. Optimize SQL queries and transformations for AWS PostgreSQL. Work with Cloud Engineers to ensure smooth deployment and performance tuning. Integrate data pipelines with existing Unix systems. Document ETL processes and migration steps. Minimum Skills Required: Strong hands-on experience with SAP BODS. Proficiency in PySpark and Python scripting. Experience with AWS PostgreSQL (schema design, performance tuning, migration). Strong SQL and data modelling skills. Experience with Unix/Linux and shell scripting. Knowledg...
Posted 3 months ago
3.0 - 8.0 years
6 - 15 Lacs
Bengaluru
Remote
We are looking for an experienced Data Engineer to join our team and support the Data Analytics Platform. This role focuses on building and maintaining scalable, secure, and high-performance data infrastructure using AWS services.
Posted 3 months ago
7.0 - 12.0 years
25 - 40 Lacs
Kochi, Bengaluru
Work from Office
Expertise in Tableau. Experience integrating with Redshift or other DWH platforms and an understanding of DWH concepts. Python or similar scripting languages. Experience with AWS Glue, Athena, or other AWS data services. Strong SQL skills; AWS QuickSight.
Posted 3 months ago
7.0 - 12.0 years
15 - 25 Lacs
Chennai
Hybrid
Senior Data Engineer - Cloud Data Platform Position Overview We are seeking a highly skilled Senior Data Engineer to join our data platform team, responsible for designing and implementing scalable, cloud-native data solutions. This role focuses on building modern data infrastructure using AWS and Azure services to support our growing analytics and machine learning initiatives. Key Responsibilities Business Translation : Collaborate with business stakeholders to understand requirements and translate them into scalable data models, architectures, and robust end-to-end data pipelines ETL/ELT Implementation : Design, develop, and maintain high-performance batch and real-time data pipelines usin...
Posted 3 months ago
7.0 - 10.0 years
25 - 32 Lacs
Bengaluru
Work from Office
Position Overview: We seek a highly skilled and experienced Data Engineering Lead to join our team. This role demands deep technical expertise in Apache Spark, Hive, Trino (formerly Presto), Python, AWS Glue, and the broader AWS ecosystem. The ideal candidate will possess strong hands-on skills and the ability to design and implement scalable data solutions, optimise performance, and lead a high-performing team to deliver data-driven insights. Key Responsibilities: Technical Leadership Lead and mentor a team of data engineers, fostering best practices in coding, design, and delivery. Drive the adoption of modern data engineering frameworks, tools, and methodologies to ensure high-quality and...
Posted 3 months ago
3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Role Overview: We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions. Required Skills and Experience: 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS. Proficiency in building and optimizing Spark pipelines (batch and streaming). Strong experience implementing bronze/silver/gold data models. Working knowledge of cloud storage systems (ADLS, S3) and comput...
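The bronze-silver-gold (medallion) model this role centres on is a staged refinement: raw as-ingested data (bronze), cleaned and deduplicated data (silver), and aggregated, consumption-ready data (gold). A plain-Python sketch of the flow with illustrative records; in Databricks each stage would be a Delta table and the transforms would be Spark jobs:

```python
# Bronze: raw, as-ingested rows (duplicates and bad values included).
bronze = [
    {"id": "1", "city": " pune ", "amount": "10"},
    {"id": "1", "city": " pune ", "amount": "10"},
    {"id": "2", "city": "Chennai", "amount": "bad"},
    {"id": "3", "city": "bengaluru", "amount": "5"},
]

def to_silver(rows):
    """Silver: trim strings, cast types, drop bad rows, deduplicate by id."""
    seen, out = set(), []
    for r in rows:
        try:
            rec = {"id": int(r["id"]),
                   "city": r["city"].strip().title(),
                   "amount": float(r["amount"])}
        except ValueError:
            continue  # a real pipeline would quarantine this row
        if rec["id"] not in seen:
            seen.add(rec["id"])
            out.append(rec)
    return out

def to_gold(rows):
    """Gold: aggregate for consumption, total amount per city."""
    totals = {}
    for r in rows:
        totals[r["city"]] = totals.get(r["city"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'Pune': 10.0, 'Bengaluru': 5.0}
```

Keeping bronze immutable is the key design choice: silver and gold can always be rebuilt from it when cleaning rules or aggregations change.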
Posted 3 months ago