Posted: 2 weeks ago | Platform: LinkedIn

Work Mode: On-site
Job Type: Full Time

Job Description

Job Title:
Location:
Experience:
Employment Type:
Notice Period:

About the Company

A global digital transformation leader.


We are looking for an experienced AWS Data Engineer with strong expertise in building scalable data pipelines, cloud data platforms, and analytics-ready datasets. The candidate should have hands-on experience with core AWS data services, ETL/ELT processes, data modeling, and performance optimization.

Key Responsibilities

1. Data Pipeline Development

  • Design, build, and maintain end-to-end ETL/ELT pipelines using AWS services.
  • Implement batch and real-time ingestion pipelines using:
    • AWS Glue
    • AWS Lambda
    • AWS Kinesis / Kafka
    • AWS Database Migration Service (DMS)
  • Optimize pipelines for scalability, performance, and reliability.
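As a rough sketch of what one real-time ingestion step can look like, here is a minimal Lambda-style handler for a Kinesis stream, assuming each record carries a JSON payload; the field names (`user_id`, `amount`) are illustrative, not taken from the posting.

```python
import base64
import json

def handler(event, context=None):
    """Decode Kinesis records delivered to a Lambda function.

    Kinesis delivers record data base64-encoded; each payload here is
    assumed to be a JSON document with illustrative fields.
    """
    rows = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        rows.append({
            "user_id": str(doc["user_id"]),   # normalize to string keys
            "amount": float(doc["amount"]),   # normalize to numeric
        })
    return rows
```

In a real pipeline the returned rows would typically be written onward to S3 or a Redshift staging table rather than returned to the caller.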

2. Data Architecture & Modeling

  • Design and implement data models for:
    • Data Lake
    • Data Warehouse
    • Analytics
  • Work with star schema, snowflake schema, and normalized models.
  • Build efficient storage solutions using S3, Redshift, DynamoDB, RDS, and Aurora.
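As a toy illustration of the star-schema modeling mentioned above (a fact table joined to its dimensions), here is a sketch using an in-memory SQLite database; the table and column names are invented for the example.

```python
import sqlite3

# Star schema in miniature: one fact table referencing one dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    name         TEXT,
    region       TEXT
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount       REAL
);
INSERT INTO dim_customer VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'APAC');
INSERT INTO fact_sales VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Analytics queries aggregate facts grouped by dimension attributes.
totals = conn.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer d USING (customer_key)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
```

The same shape scales up directly: on Redshift the dimension would typically be small and broadcast (DISTSTYLE ALL) while the fact table is distributed on its join key.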

3. AWS Cloud Engineering

  • Develop data solutions leveraging:
    • S3, Athena, Glue Catalog
    • Redshift / Redshift Spectrum
    • EMR / PySpark
    • Step Functions, CloudWatch
  • Implement CI/CD for data pipelines using CodePipeline, CodeBuild, Jenkins, GitHub Actions, etc.

4. Data Quality & Governance

  • Implement data validation, reconciliation, and quality frameworks.
  • Work with metadata management and data catalogs.
  • Ensure compliance with security and governance standards.
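A data-quality framework of the kind described above often starts with simple row-level checks. The sketch below splits rows into valid and invalid sets against a small rule set; the column names and rules are assumptions for illustration only.

```python
def validate_rows(rows):
    """Split rows into (valid, invalid) using simple quality rules.

    Illustrative rules: 'id' must be present and non-empty, and
    'amount' must parse as a non-negative number. Each returned row
    carries an 'errors' list for downstream reconciliation reports.
    """
    valid, invalid = [], []
    for row in rows:
        errors = []
        if not row.get("id"):
            errors.append("missing id")
        try:
            if float(row["amount"]) < 0:
                errors.append("negative amount")
        except (KeyError, TypeError, ValueError):
            errors.append("missing or non-numeric amount")
        (invalid if errors else valid).append({**row, "errors": errors})
    return valid, invalid
```

Routing failed rows to a quarantine location (e.g. an S3 error prefix) rather than dropping them is the usual pattern, so reconciliation counts always balance.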

5. Collaboration

  • Work closely with data analysts, data scientists, and business stakeholders.
  • Translate business requirements into technical data solutions.
  • Participate in architecture reviews and solution design sessions.

6. Performance Optimization

  • Improve query performance on Redshift, Athena, and Spark.
  • Optimize storage formats (Parquet, ORC, Delta).
  • Apply partitioning, bucketing, and compression strategies.
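Partitioning in this context usually means Hive-style key=value prefixes on S3, which lets Athena and Spark prune whole partitions at query time instead of scanning the full dataset. A minimal sketch, with an invented bucket and key layout:

```python
from datetime import date

def partition_prefix(bucket: str, table: str,
                     event_date: date, country: str) -> str:
    """Build a Hive-style partition prefix such as
    s3://bucket/table/event_date=YYYY-MM-DD/country=XX/.

    Writers place files under these prefixes; queries that filter on
    event_date or country can then skip non-matching prefixes entirely.
    """
    return (
        f"s3://{bucket}/{table}/"
        f"event_date={event_date.isoformat()}/"
        f"country={country}/"
    )
```

Combined with a columnar format such as Parquet and a splittable compression codec, this layout is what keeps Athena scan costs and Spark job times down as data volume grows.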

Core Skills Required

AWS Services (Mandatory)

  • S3
  • Glue (ETL & Catalog)
  • Lambda
  • EMR / Spark
  • Redshift
  • Kinesis
  • IAM, CloudWatch, Step Functions

Programming

  • Python (mandatory)
  • PySpark / Spark
  • SQL (advanced level)

Tools & Frameworks

  • Airflow (or similar orchestration tools)
  • Terraform / CloudFormation (infrastructure-as-code)
  • Git, Jenkins, Docker (nice to have)

Data Expertise

  • ETL/ELT pipeline development
  • Data warehousing concepts
  • Data lake architecture
  • Performance tuning
  • Data modeling (OLTP & OLAP)

Qualifications

  • Bachelor’s/Master’s degree in Computer Science, Information Technology, or related field.
  • 7+ years of experience in data engineering.
  • Strong knowledge of AWS cloud data ecosystem.
  • Experience working in Agile environments.
