1831 AWS Glue Jobs - Page 5

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

10 - 14 Lacs

bengaluru

Hybrid

We are looking for a skilled AWS Data Engineer with 3 to 5 years of experience in designing, building, and maintaining scalable data pipelines and analytics solutions on the AWS cloud platform. The ideal candidate should have strong hands-on experience in Python, SQL, AWS services, PySpark, and Pandas, with the ability to process large-scale data and support data-driven business initiatives.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines on AWS.
- Build and optimize ETL/ELT workflows using Python and PySpark.
- Write complex, performance-optimized SQL queries for analytics and reporting.
- Work with large datasets using PySpark and Pandas.
- Implement data inge...
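
For context, a minimal sketch of the kind of PySpark ETL step this role describes; it is not taken from the posting, and the bucket paths and column names are hypothetical.

```python
# Illustrative only: bucket paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Read raw CSV files landed in S3 (placeholder path)
orders = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Basic typing, cleansing, and a daily aggregate for reporting
daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

# Write partitioned Parquet to the curated zone (placeholder path)
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders_daily/"
)
```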

Posted 1 week ago

4.0 - 8.0 years

12 - 20 Lacs

kolkata, hyderabad, chennai

Work from Office

Must have at least 5 years of overall IT experience and 3+ years of experience as a Data Engineer on the AWS platform with Python and PySpark programming and SQL.
Python application development: object-oriented programming, test-driven development using Pytest, implementing modules and packages, developing production-ready frameworks, working with core and extended Python libraries, working with files, databases, and APIs, familiarity with PEP coding standards, and Git version control.
Hands-on experience developing data pipelines using AWS data services like S3, Lambda, Glue, Step Functions, RDS, Redshift, EventBridge, DynamoDB, SNS, SQS, IAM, EMR, EC2, ECS, Fargate, SSM, Secrets Manager, Co...
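
For illustration only, a small sketch of the Pytest-style test-driven development the posting asks for; the normalize_record helper and its test cases are invented for the example.

```python
# Hypothetical module and Pytest test in the TDD style the posting describes.
import pytest


def normalize_record(record: dict) -> dict:
    """Lower-case keys and strip whitespace from string values."""
    return {
        key.lower(): value.strip() if isinstance(value, str) else value
        for key, value in record.items()
    }


@pytest.mark.parametrize(
    "raw, expected",
    [
        ({"Name": "  Asha "}, {"name": "Asha"}),
        ({"AGE": 30}, {"age": 30}),
    ],
)
def test_normalize_record(raw, expected):
    # Run with: pytest -q
    assert normalize_record(raw) == expected
```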

Posted 1 week ago

6.0 - 8.0 years

20 - 25 Lacs

hyderabad, chennai

Work from Office

5+ years of experience in data engineering, with a strong focus on PySpark/Spark for big data processing. Expertise in building data pipelines and ingestion frameworks from relational, semi-structured (JSON, XML), and unstructured sources (logs, PDFs). Proficiency in Python with strong knowledge of data processing libraries. Strong SQL skills for querying and validating data in platforms like Amazon Redshift, PostgreSQL, or similar. Experience with distributed computing frameworks (e.g., Spark on EMR, Databricks). Familiarity with workflow orchestration tools (e.g., AWS Step Functions or similar). Solid understanding of data lake / data warehouse architectures and data modeling basics.
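
For context, a hedged sketch of ingesting semi-structured JSON with an explicit PySpark schema, as this role describes; the schema, field names, and S3 paths are assumptions, not details from the ad.

```python
# Sketch only: the schema, field names, and S3 paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# An explicit schema avoids costly inference on large semi-structured inputs
schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StructType([
        StructField("user_id", StringType()),
        StructField("tags", ArrayType(StringType())),
    ])),
])

events = spark.read.schema(schema).json("s3://example-landing/events/")

# Flatten the nested payload and explode the tag array for downstream SQL validation
flat = events.select(
    "event_id",
    F.col("payload.user_id").alias("user_id"),
    F.explode_outer("payload.tags").alias("tag"),
)

flat.write.mode("append").parquet("s3://example-staging/events_flat/")
```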

Posted 1 week ago

5.0 - 8.0 years

10 - 13 Lacs

hyderabad

Hybrid

Role & responsibilities: The Data Engineer, Specialist is responsible for building and maintaining scalable data pipelines to ensure seamless integration and efficient data flow across various platforms. This role involves designing, developing, and optimizing ETL (Extract, Transform, Load) processes, managing databases, and leveraging big data technologies to support analytics and business intelligence initiatives. Preferred candidate profile: Design, develop, and maintain scalable ETL processes to efficiently extract data from various structured and unstructured sources, ensuring accuracy, consistency, and performance optimization. Architect and manage database syst...

Posted 1 week ago

6.0 - 8.0 years

13 - 17 Lacs

bengaluru

Hybrid

Databricks Engineer Job Description
Role Overview: The Data Engineer will be responsible for designing, building, and optimizing data pipelines and platforms to enable advanced analytics and business intelligence across the organization. This role requires strong expertise in Databricks, SQL, AWS, and Airflow, combined with a solid foundation in data analysis and engineering best practices. The ideal candidate will collaborate with cross-functional stakeholders and ensure scalable, secure, and high-performing data solutions.
Location: Bengaluru
Key Responsibilities (Technical): Design, develop, and maintain data pipelines and ETL workflows using Databricks and Airflow. Good understanding of D...
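
For illustration, a minimal Airflow DAG skeleton of the kind this role would orchestrate; the DAG id, schedule, and task body are placeholders, and the Databricks submission itself is only stubbed out.

```python
# Minimal Airflow DAG skeleton; DAG id, schedule, and task logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_transform(**context):
    # Placeholder: in practice this might submit a Databricks job or notebook run
    print("submitting transform for", context["ds"])


with DAG(
    dag_id="example_daily_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ scheduling argument
    catchup=False,
) as dag:
    transform = PythonOperator(
        task_id="run_transform",
        python_callable=run_transform,
    )
```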

Posted 1 week ago

8.0 - 13.0 years

19 - 32 Lacs

hyderabad

Hybrid

About the Role: We are seeking an experienced AWS Data Engineer to design, build, and maintain scalable data pipelines and cloud-based data platforms on Amazon Web Services. The ideal candidate will have strong experience with big data technologies, data modeling, ETL/ELT development, and cloud-oriented architecture.
Key Responsibilities: Design, develop, and maintain data pipelines using AWS services such as Glue, Lambda, Step Functions, EMR, Kinesis, and Data Pipeline. Build and optimize data lakes, data warehouses, and analytics platforms using S3, Redshift, Athena, Lake Formation, and the Glue Catalog. Develop and implement ETL/ELT workflows for structured and unstructured data. Work with...
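
As a hedged illustration of the orchestration work described, the snippet below starts a Step Functions state machine with boto3; the ARN, execution name, and payload are placeholders.

```python
# Placeholder ARN, execution name, and payload; illustrative only.
import json

import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:example-etl",
    name="example-run-2024-01-01",
    input=json.dumps({"run_date": "2024-01-01", "source": "s3://example-raw/orders/"}),
)
print("Started execution:", response["executionArn"])
```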

Posted 2 weeks ago

10.0 - 20.0 years

35 - 45 Lacs

hyderabad

Remote

Company Name: Egen
Job Title: AWS Senior Data Engineer
Experience: 10+ Years
Location: Hyderabad / Remote
Work Mode: WFO
Salary: Competitive
Employment Type: Full-time
Role Overview: The Senior Data Engineer designs, builds, and optimizes scalable, high-performance data platforms on AWS using Python. The role owns end-to-end data pipelines across batch and streaming, enabling analytics and AI workloads for cross-functional teams. It emphasizes modern data architecture, data quality, governance, and automation with a strong focus on cost and performance. The engineer leads technical decisions, conducts code reviews, mentors junior staff, and sets best practices for reliability and observability...

Posted 2 weeks ago

3.0 - 7.0 years

0 Lacs

chennai, all india

On-site

As an AWS Data Engineer at Viraaj HR Solutions, you will play a crucial role in designing and implementing data pipelines using AWS tools and services. Your responsibilities will include developing ETL processes to efficiently integrate data from various sources, creating and maintaining data models to support analytics and reporting needs, and optimizing SQL queries for data retrieval performance. It will be your duty to ensure data quality and integrity throughout the data lifecycle, collaborate with data scientists and analysts to understand data requirements, and support data warehousing initiatives by designing database schemas. Additionally, you will monitor data ingestion processes, t...

Posted 2 weeks ago

5.0 - 9.0 years

0 Lacs

chennai, all india

On-site

As a Data Engineer at our company, you will play a crucial role in designing, building, and maintaining data pipelines and systems to ensure the seamless collection, management, and conversion of raw data into actionable insights for our business analysts.
Key Responsibilities:
- Develop and maintain scalable data pipelines to support data processing
- Extract, transform, and load (ETL) data from multiple sources efficiently
- Design and implement data infrastructure for optimized data delivery
- Automate manual processes to enhance data quality and operational efficiency
- Create analytical tools for generating valuable insights on business performance metrics
- Collaborate with stakeho...

Posted 2 weeks ago

5.0 - 9.0 years

0 Lacs

chennai, all india

On-site

Role Overview: As a Data Engineer at the company, your primary responsibility will be to build and deploy infrastructure for ingesting high-volume support data from consumer interactions, devices, and apps. You will design and implement processes to turn data into insights, and model and mine the data to describe system behavior and predict future actions. Your role will also involve enabling data-driven change by creating effective visualizations and reports for stakeholders, both internal and external. Additionally, you will develop and maintain data-related code in an automated CI/CD environment and generate specific reports for insights into agent performance and business effectivenes...

Posted 2 weeks ago

5.0 - 10.0 years

8 - 18 Lacs

hyderabad, pune, bengaluru

Hybrid

Educational Requirements: Bachelor of Engineering, BTech, BCA, MTech, MCA, B.Sc, M.Sc
Job Description: Full Stack Data Engineer - AWS
Mandatory Skills: AWS, AWS Glue ETL, Python/SQL
Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code r...

Posted 2 weeks ago

8.0 - 13.0 years

18 - 30 Lacs

bengaluru

Work from Office

Role & responsibilities: Need Java plus very strong AWS experience (at least 8 years), including AWS services such as EC2, S3, ALB, Lambda, and API Gateway, plus serverless architectures, microservices, and containerization (ECS, EKS, Docker, Kubernetes).
Preferred candidate profile:
Key Responsibilities: Design and implement scalable, reliable, and secure AWS cloud architectures. Evaluate and recommend AWS services to meet business and technical requirements. Optimize AWS usage for performance, cost, and operational efficiency. Work with cross-functional teams and stakeholders to understand business requirements and translate them into technical solutions. Develop and maintain comprehensive documentation for cloud archit...

Posted 2 weeks ago

9.0 - 16.0 years

25 - 30 Lacs

bengaluru

Work from Office

Design and develop data integration solutions using AWS services such as Glue, Airflow, and DynamoDB. Collaborate with cross-functional teams to identify business requirements and develop technical solutions. Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale. Optimize and troubleshoot AWS Glue jobs for performance and reliability. Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process. Create and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using AWS native services. Participate in client design workshops and provide tradeoffs and recommendation...
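
For context, a skeleton of an AWS Glue PySpark job script of the sort described here; it only runs inside a Glue job environment, and the catalog database, table, mappings, and S3 path are hypothetical.

```python
# Glue job skeleton; runs only inside an AWS Glue job. Names and paths are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table)
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_raw_db", table_name="orders"
)

# Cast a field and keep only the columns needed downstream
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write Parquet to the curated zone (hypothetical path)
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)

job.commit()
```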

Posted 2 weeks ago

5.0 - 12.0 years

0 Lacs

kolkata, all india

On-site

As a Data Services AWS professional at LTIMindtree, you will be part of a dynamic team working to leverage transformative technological advancements to empower enterprises, people, and communities to build a better future, faster. Your role will involve the following key responsibilities:
- Utilize AWS Glue for data preparation and transformation tasks
- Develop serverless applications using AWS Lambda
- Manage and optimize AWS RDS databases
- Implement scalable storage solutions using AWS S3
- Work with DynamoDB for NoSQL database requirements
- Utilize PySpark for data processing and analysis
In addition to the exciting role overview and key responsibilities, LTIMindtree is dedic...
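
For illustration, a small sketch combining two of the services listed (Lambda and DynamoDB): an S3-event handler that audits newly landed objects. The table and attribute names are invented for the example.

```python
# Illustrative Lambda handler; table, bucket, and attribute names are invented.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-file-audit")


def handler(event, context):
    # Record each newly landed S3 object for downstream processing
    records = event.get("Records", [])
    for record in records:
        table.put_item(
            Item={
                "object_key": record["s3"]["object"]["key"],
                "bucket": record["s3"]["bucket"]["name"],
                "status": "received",
            }
        )
    return {"processed": len(records)}
```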

Posted 2 weeks ago

3.0 - 5.0 years

10 - 15 Lacs

pune

Work from Office

Role: Data Engineer
Experience: 3 to 5 Years
Location: Pune (Baner)
Type: Full-time
Role Overview: We are looking for a skilled Data Engineer with deep AWS expertise to help operationalize and scale our SaaS platform. The ideal candidate should have strong hands-on experience in AWS analytics services (EMR, Glue, Athena, S3, Postgres/RDS), data pipeline development, and cloud-native architectures. You will design, build, and maintain scalable data pipelines, implement secure cloud data architectures, optimize storage and compute performance, and ensure end-to-end operational reliability of our SaaS platform. While the role is primarily AWS-focused, some exposure to AI, vector search, or mo...
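
As a hedged example of the Athena-over-S3 analytics mentioned above, the snippet below runs a query with boto3 and polls for completion; the database, SQL, and result location are placeholders.

```python
# Placeholder database, query, and output location; polling simplified for brevity.
import time

import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM orders_daily GROUP BY region",
    QueryExecutionContext={"Database": "example_curated_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes (production code would back off and handle errors)
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"Returned {len(rows) - 1} data rows")  # first row is the header
```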

Posted 2 weeks ago

5.0 - 10.0 years

0 - 1 Lacs

pune, chennai, bengaluru

Work from Office

Skill: AWS Data Engineer
Experience: 5 to 10 years
Location: Bangalore, Chennai, Hyderabad, Pune, Kochi, Bhubaneswar, Kolkata
Notice period: 30 days only
Key Skills: AWS Lambda, Python, Boto3
Must-have skills: Strong experience in Python to package, deploy, and monitor data science apps; knowledge of Python-based automation; knowledge of Boto3 and related Python packages; working experience in AWS and AWS Lambda.
Good to have: Bash scripting and Unix; data science model testing, validation, and test automation; knowledge of AWS SageMaker.
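
For illustration of the Boto3 + Lambda skills listed, a minimal sketch that invokes a hypothetical model-scoring Lambda and reads its response; the function name and payload are placeholders.

```python
# The function name and payload are placeholders, not a real deployment.
import json

import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="example-model-scorer",
    InvocationType="RequestResponse",
    Payload=json.dumps({"features": [0.2, 1.7, 3.4]}).encode("utf-8"),
)

# The Payload stream holds the function's JSON response
result = json.loads(response["Payload"].read())
print("Model score:", result)
```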

Posted 2 weeks ago

3.0 - 8.0 years

15 - 25 Lacs

bengaluru

Hybrid

Data Engineer / Python Developer: Design, build & maintain data pipelines. Develop Python apps, ETL workflows, and APIs. Work with SQL, cloud, and big data tools. Ensure data quality, performance & scalability. Collaborate with analytics & product teams.
Required candidate profile: BE/BTech or equivalent with 3–8 yrs of experience. Strong Python, SQL & ETL. Experience with cloud platforms, data warehouses & big data tools. Good problem-solving, analytical, and communication skills.

Posted 2 weeks ago

8.0 - 12.0 years

0 Lacs

bangalore rural, bengaluru

Work from Office

Role & responsibilities: Administer and manage the AWS Data Lake infrastructure, ensuring high availability, security, and performance. Configure and manage AWS S3, Lake Formation, and the Glue Data Catalog to organize, secure, and catalog data within the data lake. Set up and manage Redshift Spectrum for querying and analyzing data stored in S3 using SQL and Redshift. Implement and manage data ingestion pipelines to ingest structured and unstructured data from various sources into the data lake using AWS services like Glue ETL, Lambda, and other orchestration tools. Define and enforce data governance policies, access control, and security measures using AWS Lake Formation, ensuring compliance w...
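
As a hedged sketch of the Glue Data Catalog administration described, the snippet below creates a catalog database and a crawler over an S3 prefix with boto3; the names, IAM role ARN, and path are assumptions.

```python
# Names, role ARN, and S3 path are assumptions for the sketch.
import boto3

glue = boto3.client("glue")

# Create a catalog database to hold tables discovered in the data lake
glue.create_database(DatabaseInput={"Name": "example_lake_db"})

# Crawl an S3 prefix so its schema is registered in the Glue Data Catalog
glue.create_crawler(
    Name="example-orders-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-crawler-role",
    DatabaseName="example_lake_db",
    Targets={"S3Targets": [{"Path": "s3://example-datalake/raw/orders/"}]},
)
glue.start_crawler(Name="example-orders-crawler")
```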

Posted 2 weeks ago

8.0 - 13.0 years

10 - 20 Lacs

pune, chennai, bengaluru

Hybrid

Hi, urgent opening for AWS Cloud Native - Full Stack Architect with EY GDS (pan-India locations).
Experience: 8-14 years. Mode: Hybrid.
Required Skills:
AWS Services: Proficiency in AWS services such as Glue, Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, OpenSearch, Athena, EventBridge, Redshift, and EMR.
Programming: Strong programming skills in languages such as Python or Angular/React/TypeScript.
CI/CD: Experience with CI/CD tools and practices, including GitHub Actions, AWS CodePipeline, CodeBuild, and CodeDeploy.
Infrastructure as Code: Familiarity with IaC tools like AWS CloudFormation or Terraform for automating application infrastructure.
Security: Understanding of AWS security best practic...

Posted 2 weeks ago

4.0 - 7.0 years

5 - 15 Lacs

pune, chennai, bengaluru

Hybrid

Hi, urgent opening for AWS Cloud Native - Full Stack Engineer (Senior) with EY GDS (pan-India locations).
Experience: 4-7 years. Mode: Hybrid.
Required Skills:
AWS Services: Proficiency in AWS services such as Glue, Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, OpenSearch, Athena, EventBridge, Redshift, and EMR.
Programming: Strong programming skills in languages such as Python or Angular/React/TypeScript.
CI/CD: Experience with CI/CD tools and practices, including GitHub Actions, AWS CodePipeline, CodeBuild, and CodeDeploy.
Infrastructure as Code: Familiarity with IaC tools like AWS CloudFormation or Terraform for automating application infrastructure.
Security: Understanding of AWS security best ...

Posted 2 weeks ago

5.0 - 10.0 years

8 - 18 Lacs

kolkata, hyderabad

Work from Office

Role: AWS Data Engineer
Location: Hyderabad, Kolkata
Experience: 5-8 years

Posted 2 weeks ago

5.0 - 8.0 years

15 - 20 Lacs

hyderabad

Hybrid

Required Qualifications: 5+ years of professional work experience as described below. Strong communication skills with the ability to coordinate meetings, work independently, and drive results. Expert-level SQL coding skills. RDBMS experience in any of Teradata, Oracle, MS SQL, DB2, Redshift, Snowflake, etc. Scripting languages (shell scripting, PL/SQL, Python, Java, Spark, Scala). Experience with AWS cloud-native services such as S3, RDS, Glue, Athena, etc. Hands-on experience building data solutions on big data / cloud platforms. Experience with tools such as Talend, Bitbucket, Autosys. Performance tuning and optimization of code, and peer code reviews. Ability to work in a high-paced te...

Posted 2 weeks ago

3.0 - 9.0 years

0 Lacs

hyderabad, all india

On-site

As a Lead Data Engineer with a focus on Informatica Cloud and hybrid technologies, you will be responsible for the following key areas:
- Utilize your 10+ years of hands-on experience in the data engineering domain to drive forward innovative solutions within the organization.
- Demonstrate your deep understanding and 5+ years of practical experience with Informatica IICS (Informatica Intelligent Cloud Services) to lead data engineering projects effectively.
- Leverage your 5+ years of experience working with Cloud Data Warehouse Platforms such as Snowflake to architect robust data solutions.
- Showcase your expertise in utilizing tools like Spark, Athena, AWS Glue, Python/PySpark, etc., to ...

Posted 2 weeks ago

9.0 - 13.0 years

0 Lacs

all india, gurugram

On-site

Role Overview: At Markovate, we are looking for a highly experienced and innovative Senior Data Engineer who can design, build, and optimize robust, scalable, and production-ready data pipelines across both AWS and Azure platforms. As a Senior Data Engineer, you will be responsible for developing hybrid ETL/ELT pipelines, processing files from AWS S3 and Azure Data Lake Gen2, implementing event-based orchestration, creating scalable ingestion workflows, integrating with metadata and lineage tools, and collaborating with cross-functional teams to align solutions with AI/ML use cases. Key Responsibilities: - Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF...
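
For context, a minimal sketch of the event-based orchestration mentioned above: an S3-triggered Lambda that starts a Glue job run. The job name and argument key are hypothetical.

```python
# Job name and argument key are hypothetical; triggered by an S3 event notification.
import boto3

glue = boto3.client("glue")


def handler(event, context):
    # Build the S3 URI of the newly arrived object and hand it to a Glue job
    record = event["Records"][0]
    s3_path = f"s3://{record['s3']['bucket']['name']}/{record['s3']['object']['key']}"
    run = glue.start_job_run(
        JobName="example-ingest-job",
        Arguments={"--input_path": s3_path},
    )
    return {"job_run_id": run["JobRunId"]}
```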

Posted 2 weeks ago

8.0 - 12.0 years

0 Lacs

navi mumbai, all india

On-site

Role Overview: As a Data Catalog Implementation Lead, you will be responsible for overseeing the end-to-end implementation of a data cataloging solution within AWS. Your role will involve establishing and managing metadata frameworks for structured and unstructured data assets in data lake and data warehouse environments. Additionally, you will collaborate with various project teams to define metadata standards, data classifications, and stewardship processes.
Key Responsibilities:
- Lead the implementation of a data cataloging solution within AWS, preferably AWS Glue Data Catalog or third-party tools like Apache Atlas, Alation, Collibra, etc.
- Establish and manage metadata frameworks for s...
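
For illustration of the metadata-stewardship side of this role, a small boto3 sketch that scans the Glue Data Catalog and flags tables without a description; purely illustrative, with no assumptions about specific catalog contents.

```python
# Stewardship check: list catalog tables that are missing a description.
import boto3

glue = boto3.client("glue")

for database in glue.get_databases()["DatabaseList"]:
    for table in glue.get_tables(DatabaseName=database["Name"])["TableList"]:
        if not table.get("Description"):
            print(f"Missing description: {database['Name']}.{table['Name']}")
```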

Posted 2 weeks ago


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
