AWS Data Engineer

4 - 9 years

9 - 19 Lacs

Posted: 11 hours ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Responsibilities:

  • Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, and Secrets Manager.
  • Build and maintain ETL/ELT pipelines for both batch and streaming data (see the sketch after this list).
  • Work with structured and unstructured datasets at scale. Apply data modeling principles and advanced SQL techniques. Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
  • Collaborate with product teams to understand requirements and deliver optimized data solutions. Use CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
  • Work independently with minimal supervision, taking strong ownership of deliverables.
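
For orientation, here is a minimal sketch of the kind of batch pipeline the responsibilities above describe: reading raw data from Amazon S3 with PySpark, applying a simple transform, and writing a partitioned Delta table. All bucket paths, table names, and columns are hypothetical placeholders, and the snippet assumes a Databricks on AWS runtime where Delta Lake is available by default.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists; getOrCreate() reuses it.
spark = (
    SparkSession.builder
    .appName("orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Read raw batch data from S3 (hypothetical bucket and prefix).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Basic cleaning and derivation, typical of a bronze-to-silver ELT step.
cleaned = (
    raw
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Write a partitioned Delta table (Delta is on the classpath by default
# on Databricks clusters).
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-curated-bucket/orders_silver/")
)
```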

Must Have:

  • 4+ years of experience in Data Engineering on AWS Cloud.
  • Hands-on expertise in:
      ◦ Apache Spark (PySpark, SparkSQL)
      ◦ Delta Lake / Iceberg formats
      ◦ Databricks on AWS
      ◦ AWS Glue, Amazon Athena, Amazon Redshift
  • Strong SQL skills and performance-tuning experience on large datasets (see the sketch after this list).
  • Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
  • Experience with environment setup, cluster management, user roles, and authentication in Databricks.
  • Databricks Certified Data Engineer Professional certification (mandatory).
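
As context for the SQL performance-tuning expectation above, this is a minimal, non-authoritative sketch of routine Delta table maintenance plus a pruning-friendly query, run through spark.sql on a Databricks cluster. The sales.orders_silver table, its columns, and the date cutoff are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a common filter column,
# which narrows the data each query must scan.
spark.sql("OPTIMIZE sales.orders_silver ZORDER BY (customer_id)")

# Refresh column statistics so the optimizer can prune files and
# choose better join orders.
spark.sql("ANALYZE TABLE sales.orders_silver COMPUTE STATISTICS FOR ALL COLUMNS")

# Filter on the partition column so Spark reads only matching partitions.
top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM sales.orders_silver
    WHERE order_date >= DATE '2024-01-01'
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 100
""")
top_customers.show()
```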

Good To Have:

  • Experience migrating ETL pipelines from on-premises or other clouds to AWS Databricks.
  • Experience with Databricks ML or Spark 3.x upgrades.
  • Familiarity with Airflow, Step Functions, or other orchestration tools.
  • Experience integrating Databricks with AWS services in a secured, production-ready environment.
  • Experience with monitoring and cost optimization in AWS.

Key Skills:

  • Languages: Python, SQL, PySpark
  • Big Data Tools: Apache Spark, Delta Lake, Iceberg
  • Platform: Databricks on AWS
  • AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
  • Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
  • Other: Data Modeling, ETL Methodology, Performance Optimization

Celebal Technologies

Technology Consulting and Services

Ahmedabad
