Jobs
Interviews

869 AWS Glue Jobs - Page 19

Set up a Job Alert
JobPe aggregates listings for convenient browsing, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

25 - 35 Lacs

Gurugram

Remote

Job description: Data Engineer III/IV - IN. Work Location: Remote. Job Description Summary: The Data Engineer is responsible for managing and operating Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau. The engineer will work closely with the customer and team to manage and operate the cloud data platform. Job Description: Provides Level 3/4 operational coverage: troubleshooting incidents/problems, which includes collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches and infrastructure-related items such as connectivity to source, network issues, etc.). Knowledge Management: Create/update runbooks as needed. Entitlements Governance: Watch ...

Posted 2 months ago

Apply

8.0 - 12.0 years

12 - 18 Lacs

Noida

Work from Office

General Roles & Responsibilities: Technical Leadership: Demonstrate leadership and the ability to guide business and technology teams in the adoption of best practices and standards. Design & Development: Design, develop, and maintain a robust, scalable, and high-performance data estate. Architecture: Architect and design robust data solutions that meet business requirements, including scalability, performance, and security. Quality: Ensure the quality of deliverables through rigorous reviews and adherence to standards. Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement. Collaboration: Work closely with system architects, ...

Posted 2 months ago

Apply

6.0 - 7.0 years

6 - 11 Lacs

Noida

Work from Office

Responsibilities: Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality. AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc. Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options. Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services. Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources. Data Quality: Implement data quality frameworks and best pra...
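The "partitioning strategies" this posting asks about often mean Hive-style S3 prefixes that AWS Glue crawlers and Athena can prune, so queries scan only the days they need. A minimal sketch, assuming a hypothetical bucket and table name:

```python
from datetime import datetime, timezone

def partition_key(bucket: str, table: str, event_time: datetime) -> str:
    """Build a Hive-style S3 partition prefix (year=/month=/day=) for a record."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={event_time.year:04d}/"
        f"month={event_time.month:02d}/"
        f"day={event_time.day:02d}/"
    )

# Hypothetical bucket and table names, purely for illustration:
key = partition_key("my-data-lake", "orders", datetime(2025, 7, 1, tzinfo=timezone.utc))
# -> s3://my-data-lake/orders/year=2025/month=07/day=01/
```

Zero-padding the month and day keeps prefixes lexicographically sortable, which makes range listing and partition pruning predictable.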

Posted 2 months ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Noida

Work from Office

Responsibilities: Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality. AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc. Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options. Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services. Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources. Data Quality: Implement data quality frameworks and best pra...

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 11 Lacs

Noida

Work from Office

5+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in: Amazon S3 for scalable data storage; AWS Glue for ETL and serverless data integration; DataSync, EMR, and Redshift for data warehousing and analytics. Proficiency in SQL, Python, or PySpark for data processing. Experience with data modeling, partitioning strategies, and performance optimization. Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows. Strong understanding of data lake and data warehouse architectures. Excellent problem-solving and communication skills. Mandatory Competencies: Beh - Communication; ETL - ETL - AWS Glue; Bi...

Posted 2 months ago

Apply

4.0 - 8.0 years

20 - 27 Lacs

Chennai

Hybrid

Key Responsibilities: Design, develop, and maintain ETL pipelines using IBM DataStage (CP4D) and AWS Glue/Lambda for ingestion from varied sources such as flat files, APIs, Oracle, DB2, etc. Build and optimize data flows for loading curated datasets into Snowflake, leveraging best practices for schema design, partitioning, and transformation logic. Participate in code reviews, performance tuning, and defect triage sessions. Work closely with data governance teams to ensure lineage, privacy tagging, and quality controls are embedded within pipelines. Contribute to CI/CD integration of ETL components using Git, Jenkins, and parameterized job configurations. Troubleshoot and resolve issues in QA/...
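As a sketch of the parameterized Snowflake loading this role describes: rendering a `COPY INTO` statement from job parameters, so the same ETL component can be reused across tables via CI/CD configuration. The table and stage names below are hypothetical, not from the posting:

```python
def build_copy_statement(table: str, stage_path: str, file_format: str = "PARQUET") -> str:
    """Render a parameterized Snowflake COPY INTO statement.

    In a Jenkins/Git setup, `table` and `stage_path` would come from a
    parameterized job configuration rather than being hard-coded.
    """
    return (
        f"COPY INTO {table}\n"
        f"FROM @{stage_path}\n"
        f"FILE_FORMAT = (TYPE = {file_format})\n"
        f"MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;"
    )

# Hypothetical target table and internal stage path:
sql = build_copy_statement("analytics.curated.orders", "ingest_stage/orders/2025/07/")
```

Generating the statement in one place keeps transformation logic reviewable and lets defect triage compare the exact SQL each job ran.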

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 37 Lacs

Pune

Hybrid

Job Title: Data Engineer. Job Description: Job Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to: Design and implement data engineering solutions that are scalable, reliable, and secure in the cloud environment. Understand and translate business needs into data engineering solutions. Build large-scale data pipelines that can handle big data sets using distributed data processing techniques that support the e...

Posted 2 months ago

Apply

6.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Key Responsibilities Infrastructure as Code (IaC): Develop, manage, and maintain infrastructure using tools like AWS CloudFormation and Terraform. Continuous Integration/Continuous Delivery (CI/CD): Implement and manage CI/CD pipelines using Jenkins to automate the build, test, and deployment processes. Serverless Computing: Design and deploy serverless applications using AWS Lambda to ensure scalability and cost-efficiency. Data Management: Utilize AWS S3 for data storage, backups, and content distribution, and AWS Glue for data integration and preparation. Security and Access Management: Manage IAM roles and policies to control access to AWS services and resources, ensuring a secure cloud ...
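For the IaC responsibilities above, a minimal sketch of a CloudFormation template declaring a versioned S3 bucket. It is built here as a Python dict purely for illustration; the logical resource name is hypothetical, and real projects would typically author this in YAML (CloudFormation) or HCL (Terraform):

```python
import json

# Minimal CloudFormation template: one versioned S3 bucket.
# "RawDataBucket" is a hypothetical logical ID, not from any posting.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "RawDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
            },
        }
    },
}

# CloudFormation accepts JSON templates directly, so this renders a
# deployable document (deployment itself requires AWS credentials).
rendered = json.dumps(template, indent=2)
```

Keeping templates as data like this makes them easy to lint and diff in the same CI/CD pipelines Jenkins drives for application code.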

Posted 2 months ago

Apply

5.0 - 8.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists. Do: Oversee and support the process by reviewing daily transactions against performance parameters. Review the performance dashboard and the scores for the team. Support the team in improving performance parameters by providing technical support and process guidance. Record, track, and document all queries received, the problem-solving steps taken, and the total successful and unsuccessful resolutions. Ensure standard processes and procedures are followed to resolve all client queries. Resolve cl...

Posted 2 months ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Chennai

Work from Office

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As an AWS Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. In this role, you'll be engin...

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

DevOn is a leading provider of innovative technology solutions focused on data-driven decision-making, cloud computing, and advanced analytics, with a dynamic team dedicated to solving complex business problems through technology. We are currently seeking a skilled and motivated Data Engineer Lead to join our team. As a Data Engineer Lead, your primary responsibility will be to lead the design, development, and maintenance of data pipelines and ETL workflows utilizing modern cloud technologies. You will collaborate closely with cross-functional teams to ensure data availability, reliability, and scalability, facilitating data-driven decision-making throughout the organization. This role ne...

Posted 2 months ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Cloud Data Integration Consultant, you will be responsible for leading a complex data integration project that involves API frameworks, a data lakehouse architecture, and middleware solutions. The project focuses on technologies such as AWS, Snowflake, Oracle ERP, and Salesforce, with a high transaction volume POS system. Your role will involve building reusable and scalable API frameworks, optimizing middleware, and ensuring security and compliance in a multi-cloud environment. Your expertise in API development and integration will be crucial for this project. You should have deep experience in managing APIs across multiple systems, building reusable components, and ensuring bid...

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Kolkata, Hyderabad, Pune

Work from Office

Airflow Data Engineer on the AWS platform. Job Title: Apache Airflow Data Engineer ("ROLE" as per TCS Role Master) • 4-8 years of experience in AWS, Apache Airflow (on the Astronomer platform), Python, PySpark, SQL • Good hands-on knowledge of SQL and the Data Warehousing life cycle is an absolute requirement. • Experience in creating data pipelines and orchestrating them using Apache Airflow • Significant experience with data migrations and development of Operational Data Stores, Enterprise Data Warehouses, Data Lakes, and Data Marts. • Good to have: Experience with cloud ETL and ELT in one of the tools like DBT/Glue/EMR or Matillion or any other ELT tool • Excellent communication skills to liaise with Business & ...
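Several of these listings, this one included, call for orchestrating pipelines with Apache Airflow. A real DAG file needs an Airflow install, but the core idea — tasks executed only after their dependencies complete — can be sketched with the standard library alone. Task names here are hypothetical, not from the posting:

```python
from graphlib import TopologicalSorter

# Dependency graph for a hypothetical daily pipeline: each key runs only
# after every task in its value set has finished -- the same shape an
# Airflow DAG expresses with the >> operator.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
}

# A valid execution order: both extracts before the join, the join
# before the warehouse load.
run_order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and backfills on top, but reasoning about pipelines as a topologically sorted graph is the transferable skill.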

Posted 2 months ago

Apply

5.0 - 8.0 years

18 - 22 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Job Title: AWS Data Engineer. Experience Required: 5+ years. Interested? Send your resume to: aditya.rao@estrel.ai. Kindly include: updated resume, current CTC, expected CTC, notice period/availability (looking only for immediate joiners), LinkedIn profile. Job Overview: We are seeking a skilled and experienced Data Engineer with a minimum of 5 years of experience in Python-based data engineering solutions, real-time data processing, and AWS Cloud technologies. The ideal candidate will have hands-on expertise in designing, building, and maintaining scalable data pipelines, implementing best practices, and working within CI/CD environments. Key Responsibilities: Design and implement scalable and rob...

Posted 2 months ago

Apply

4.0 - 6.0 years

13 - 18 Lacs

Bengaluru

Remote

About BNI: Established in 1985, BNI is the world’s largest business referral network. With over 325,000 small- to medium-sized business Members in over 11,000 Chapters across 77 Countries, we are a global company with local footprints. Our proven approach provides Members with a structured, positive, and professional referral program that enables them to sharpen their business skills, develop meaningful, long-term relationships, and experience business growth. Visit to learn how BNI has impacted the lives of our Members and how it can help you achieve your business goals. Position Summary: The Database Developer will be a part of BNI’s Global Information Technology Team and will primarily have ...

Posted 2 months ago

Apply

7.0 - 9.0 years

7 - 17 Lacs

Pune

Remote

Requirements for the candidate: The role requires deep knowledge of data engineering techniques to create data pipelines and build data assets. At least 4+ years of strong hands-on programming experience with PySpark/Python/Boto3, including Python frameworks and libraries, following Python best practices. Strong experience in code optimization using Spark SQL and PySpark. Understanding of code versioning, Git repositories, and JFrog Artifactory. AWS architecture knowledge, especially of S3, EC2, Lambda, Redshift, CloudFormation, etc., and the ability to explain the benefits of each. Code Refactoring of a Legacy Codebase: Clean, modernize, and improve readability and maintainability. Unit Tests/TDD: Write tests...
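The refactoring and TDD expectations above hinge on keeping transformation logic pure, so it can be tested without a Spark cluster or AWS credentials. A minimal sketch, with function and field names that are hypothetical:

```python
import unittest

def normalize_record(record: dict) -> dict:
    """Pure transform: trim string fields and lowercase the email field.

    Because it touches no Spark or boto3 APIs, it can be unit-tested
    locally and reused inside a PySpark UDF or a plain batch job.
    """
    out = {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}
    if isinstance(out.get("email"), str):
        out["email"] = out["email"].lower()
    return out

class NormalizeRecordTest(unittest.TestCase):
    def test_trims_and_lowercases(self):
        rec = {"name": "  Ada ", "email": "Ada@Example.COM "}
        self.assertEqual(
            normalize_record(rec),
            {"name": "Ada", "email": "ada@example.com"},
        )

# Run with: python -m unittest <module_name>
```

Extracting transforms like this out of legacy job scripts is often the first step of the "clean, modernize, improve maintainability" work the posting describes.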

Posted 2 months ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You will be joining a company that values innovation and maintains an open, friendly culture while benefiting from the support of a well-established parent company with a strong ethical reputation. The company is dedicated to guiding customers towards the future by leveraging the potential of their data and applications to address digital challenges, ultimately delivering positive outcomes for both business and society. As an Infor M3 Support professional, you will play a crucial role in providing technical and functional support for the Infor M3 Cloud platform. Your responsibilities will include expertise in M3 integrations, data engineering, analytics, and cloud technologies. You will be p...

Posted 2 months ago

Apply

6.0 - 11.0 years

22 - 27 Lacs

Hyderabad, Bengaluru

Work from Office

Job description: 8+ years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in Python and PySpark for data processing and transformation tasks. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2. Technical Skills: Deep understanding of ETL concepts and best practices. Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with Data Warehousing and Big Data technologies, specifically within AWS. Additional Skills: Experience with...
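As an illustration of the SQL skills listed above, a tiny in-memory sqlite3 aggregation of the kind a warehouse transformation layer runs constantly (table and column names hypothetical; in this role the same query shape would target Redshift or Athena):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# Per-customer totals, largest first.
rows = conn.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
).fetchall()
# rows == [('acme', 200.0), ('globex', 50.0)]
```

Prototyping query logic against an in-memory database like this is a cheap way to validate transformations before pointing them at a warehouse.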

Posted 2 months ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Hi all, we are looking for Python-with-AWS experience for one of our banking clients. Experience: 6+ years. Notice period: 30 days or immediate. Max CTC: 22 LPA. Location: Hyderabad. Interview mode: face-to-face (2 rounds on the same day). Work mode: 5 days work from office. Client: Banking. JD: AWS working experience; AWS Glue or equivalent product experience; Lambda functions; Python programming; Kubernetes knowledge.
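Since the JD above asks for Python Lambda functions: a Lambda handler is an ordinary Python function, so it can be exercised locally before deployment. The event shape, handler name, and response fields here are a hypothetical sketch (the response dict shape shown is the one API Gateway proxy integrations expect):

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler: return a greeting for the given name.

    AWS Lambda invokes this with the request payload as `event`;
    `context` carries runtime metadata and is unused here.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation -- no AWS account needed for a unit-level check:
resp = lambda_handler({"name": "glue"}, None)
```

Calling the handler directly like this is the simplest form of the unit testing that usually gates Lambda deployments in CI.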

Posted 2 months ago

Apply

4.0 - 7.0 years

24 - 40 Lacs

Hyderabad

Work from Office

Design and optimize scalable data pipelines using Python, Scala, and SQL. Work with AWS services, Redshift, Terraform, Docker, and Jenkins. Implement CI/CD, manage infrastructure as code, and ensure efficient data flow across systems.

Posted 2 months ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl...

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for developing and modifying programs using Python, AWS Glue/Redshift, and PySpark technologies. Your role will involve writing effective and scalable code, as well as identifying areas for program modification. Additionally, you must have a strong understanding of AWS cloud technologies such as CloudWatch, Lambda, DynamoDB, API Gateway, and S3. Experience in creating APIs from scratch and integrating with 3rd-party APIs is also required. This is a full-time position based in Hyderabad/Chennai/Bangalore, and the ideal candidate should have a maximum notice period of 15 days.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for this position should have advanced proficiency in Python, with a solid understanding of inheritance and classes. Additionally, the candidate should be well-versed in EMR, Athena, Redshift, AWS Glue, IAM roles, CloudFormation (CFT is optional), Apache Airflow, Git, SQL, PySpark, OpenMetadata, and Data Lakehouse concepts. Experience with metadata management is highly desirable, particularly with AWS services such as S3. The candidate should possess the following key skills: - Creation of ETL pipelines - Deploying code on EMR - Querying in Athena - Creating Airflow DAGs for scheduling ETL pipelines - Knowledge of AWS Lambda and the ability to create Lambda functions This role is fo...

Posted 2 months ago

Apply

6.0 - 12.0 years

0 Lacs

Karnataka

On-site

Your role as a Supervisor at Koch Global Services India (KGSI) will involve being part of a global team dedicated to creating new solutions and enhancing existing ones for Koch Industries. With over 120,000 employees worldwide, Koch Industries is a privately held organization engaged in manufacturing, trading, and investments. KGSI is being established in India to expand its IT operations and serve as an innovation hub within the IT function. This position offers the chance to join at the inception of KGSI and play a pivotal role in its development over the coming years. You will collaborate closely with international colleagues, providing valuable global exposure to the team. In this role, ...

Posted 2 months ago

Apply

8.0 - 13.0 years

30 - 40 Lacs

Noida, Hyderabad

Hybrid

Job Title: Data Engineer Location : Noida / Hyderabad (Hybrid 3 days/week) Shift Timings : 2:30 PM to 10:30 PM IST Start Date : Immediate / July 2025 Experience : 8+ years Tech Stack : AWS, Python, PySpark, EMR, Athena, Glue, Lambda, EC2, S3, Git, Data Warehousing, Parquet, Avro, ORC Job Description : We're hiring experienced Data Engineers with a strong background in building scalable data pipelines using AWS and PySpark. You'll work with distributed systems, big data tools, and analytics services to deliver solutions for high-volume data processing. Key Responsibilities : Build and optimize PySpark applications Work with AWS services: EMR, Glue, Lambda, Athena, etc. Implement data modeling...

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
