DE&A - Core - AWS Data Engineer

Experience: 6 - 11 years

Salary: 2 - 13 Lacs

Posted: 1 week ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

We are seeking a highly skilled AWS Data Engineer with strong expertise in designing, building, and optimizing large-scale data pipelines and data lake/warehouse solutions on AWS. The ideal candidate will have extensive experience in data engineering, ETL development, cloud-based data platforms, and modern data architecture practices.

Key Responsibilities

- Design, build, and maintain scalable data pipelines and ETL workflows using AWS services (a sketch follows this list).
- Develop, optimize, and maintain data lake and data warehouse solutions (e.g., S3, Redshift, Glue, Athena, EMR, Snowflake on AWS).
- Work with structured and unstructured data from multiple sources, ensuring data quality, governance, and security.
- Collaborate with data scientists, analysts, and business stakeholders to enable analytics and AI/ML use cases.
- Implement best practices for data ingestion, transformation, storage, and performance optimization.
- Monitor and troubleshoot data pipelines to ensure reliability and scalability.
- Contribute to data modeling, schema design, partitioning, and indexing strategies.
- Support real-time and batch data processing using tools like Kinesis, Kafka, or Spark.
- Ensure compliance with security and regulatory standards (IAM, encryption, GDPR, HIPAA, etc.).
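
For illustration, a minimal PySpark sketch of the kind of batch ETL pipeline described above: read raw JSON from an S3 landing zone, apply light cleansing, and write date-partitioned Parquet to a curated zone that Athena or Redshift Spectrum can query. The bucket, column, and app names here are hypothetical, not part of this role's actual stack.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical bucket and column names, for illustration only.
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Ingest raw JSON events from the S3 landing zone.
    raw = spark.read.json("s3://example-landing-zone/orders/")

    # Light cleansing: drop rows missing required keys, then derive a
    # date column to partition the output by.
    orders = (
        raw.dropna(subset=["order_id", "order_ts"])
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Write date-partitioned Parquet to the curated zone; partitioning by
    # date lets downstream query engines prune partitions instead of
    # scanning the full dataset.
    (orders.write
        .mode("append")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))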

Required Skills & Experience

- 6+ years of experience in Data Engineering, with at least 3+ years on the AWS cloud ecosystem.
- Strong programming skills in Python, PySpark, or Scala.
- Hands-on experience with AWS services:
  - Data Storage: S3, DynamoDB, RDS, Redshift
  - Data Processing: Glue, EMR, Lambda, Step Functions
  - Query & Analytics: Athena, Redshift Spectrum, QuickSight
  - Streaming: Kinesis / MSK (Kafka)
- Strong experience with SQL (query optimization, stored procedures, performance tuning); a query sketch follows this list.
- Knowledge of ETL/ELT tools (Glue, AWS Data Pipeline, Informatica, Talend; dbt preferred).
- Experience with data modeling (dimensional, star/snowflake schemas).
- Knowledge of DevOps practices for data (CI/CD, IaC using Terraform/CloudFormation).
- Familiarity with monitoring & logging tools (CloudWatch, Datadog, ELK, Prometheus).
- Strong understanding of data governance, lineage, and cataloging (Glue Data Catalog, Collibra, Alation).
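
As a concrete example of the query-and-analytics side, a minimal boto3 sketch that submits a partition-pruned SQL query to Athena. The database, table, and result bucket are hypothetical; Athena runs queries asynchronously, so a real job would poll get_query_execution for completion.

    import boto3

    # Hypothetical database, table, and result bucket, for illustration only.
    athena = boto3.client("athena", region_name="us-east-1")

    # Filtering on the partition column (order_date) lets Athena prune
    # partitions and scan only the matching S3 prefixes.
    query = """
        SELECT order_id, amount
        FROM orders
        WHERE order_date = DATE '2024-01-15'
    """

    response = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )

    # Poll get_query_execution with this ID to track query status.
    print(response["QueryExecutionId"])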

Preferred Skills (Good to Have)

- Experience with Snowflake, Databricks, or Apache Spark on AWS.
- Exposure to machine learning pipelines (SageMaker, Feature Store).
- Knowledge of containerization & orchestration (Docker, Kubernetes, ECS, EKS).
- Exposure to Agile methodology and DataOps practices.
- AWS certifications (AWS Certified Data Analytics - Specialty / Solutions Architect / Big Data).

Education

Bachelor's/Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.

Zensar

Information Technology and Services

Mumbai
