Senior Data Engineer -- Python & AWS || Top MNC || Hyderabad

6 - 10 years

25 - 30 Lacs

Posted: 6 days ago | Platform: Naukri


Work Mode: Hybrid

Job Type: Full Time

Job Description

Job Title: Senior Data Engineer -- Python & AWS

Experience: 6 - 10 years

Location: Hyderabad

Employment Type: Full Time

Notice Period:

About the Role:

Senior Data Engineer

Key Responsibilities:

  • Architect and implement end-to-end data pipelines using AWS Glue, Lambda, EMR, Step Functions, and Redshift.
  • Design and manage data lakes and warehouses on Amazon S3, Redshift, and Athena.
  • Develop Python-based ETL/ELT frameworks and reusable transformation modules.
  • Integrate diverse data sources (RDBMS, APIs, Kafka/Kinesis, SaaS platforms) into unified data models.
  • Lead data modeling, partitioning, and schema design for performance and cost optimization.
  • Ensure data quality, observability, and lineage using AWS Data Catalog, Glue Data Quality, or similar tools.
  • Implement data governance, security, and compliance best practices (IAM, encryption, access control).
  • Collaborate with Data Science, Analytics, Product, and DevOps teams to support analytical and ML workloads.
  • Set up CI/CD pipelines for data workflows using AWS CodePipeline, GitHub Actions, or Cloud Build.
  • Provide technical leadership, conduct code reviews, and mentor junior data engineers.
  • Monitor data infrastructure performance and handle troubleshooting and capacity planning.

Required Skills & Qualifications:

  • 5–10 years of experience in data engineering or data platform development.
  • Strong expertise in Python (pandas, PySpark, boto3, SQLAlchemy).
  • Advanced experience with AWS Data Services:
    • Glue, Lambda, EMR, Step Functions, Redshift, Athena, S3, Kinesis.
    • IAM, CloudWatch, CloudFormation/Terraform for infrastructure automation.
  • Strong proficiency in SQL, data modeling, and performance tuning.
  • Proven experience with ETL/ELT, data lakes, warehousing, and streaming architectures.
  • Hands-on with Git and CI/CD for data pipelines.
  • Knowledge of Docker/Kubernetes and DevOps concepts is a plus.
  • Excellent communication, analytical, and problem-solving skills.

Preferred Skills (Good to Have):

  • Experience with Apache Spark / PySpark on AWS EMR or Glue.
  • Familiarity with Airflow, dbt, or Dagster for orchestration.
  • Exposure to Kafka, Kinesis Data Streams, or Firehose.
  • Knowledge of AWS Lake Formation, Glue Studio, or DataBrew.
  • Experience integrating with SageMaker or QuickSight for ML/analytics.
  • Certifications: AWS Certified Data Analytics – Specialty or AWS Solutions Architect.

Soft Skills:

  • Strong ownership and accountability mindset.
  • Excellent collaboration and mentoring abilities.
  • Clear communication with technical and non-technical stakeholders.
  • Agile and team-oriented working approach.
