Posted: 9 hours ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Industry & Sector

IT Services — Cloud & Data Engineering practice focused on building enterprise-scale analytics platforms, cloud data lakes, and production-grade ETL/ELT solutions on AWS for large commercial and enterprise clients.

Location: India (On-site)

Role: AWS Data Engineer — hands-on contributor building scalable, secure data pipelines and analytics back-ends.

Role & Responsibilities
  • Design, build and operate scalable ETL/ELT data pipelines on AWS (S3, Glue, Redshift, Athena) to ingest, transform and serve structured and semi-structured data.
  • Develop and optimise PySpark / Python jobs for performance, cost-efficiency and data quality; troubleshoot failures and implement retry/backfill strategies (a minimal PySpark sketch follows this list).
  • Implement orchestration and workflow automation using Airflow or AWS Step Functions; manage scheduling, dependencies and SLA monitoring (see the Airflow sketch below).
  • Define data models, partitioning strategies and storage formats (Parquet/ORC), and manage metadata with Glue Data Catalog or Lake Formation.
  • Instrument pipelines with monitoring, alerting and observability (CloudWatch, logging, metrics) and own incident resolution and root-cause analysis (see the CloudWatch sketch below).
  • Apply Infrastructure-as-Code (Terraform/CloudFormation) and CI/CD practices to deploy pipelines securely (IAM, VPC, encryption) and reproducibly.
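
To illustrate the first two responsibilities, here is a minimal sketch of an S3-to-Parquet PySpark job. It assumes a plain Spark session and hypothetical bucket paths (example-data-lake) and column names, not this employer's actual environment:

    # Minimal PySpark ETL sketch: ingest raw JSON from S3, cleanse,
    # and serve partitioned Parquet. Paths and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

    # Ingest semi-structured JSON landed in S3.
    raw = spark.read.json("s3://example-data-lake/raw/orders/")

    # Basic cleansing plus a derived partition column.
    cleaned = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_total") > 0)
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Write partitioned Parquet so Athena/Redshift Spectrum scans stay cheap.
    (
        cleaned.write.mode("overwrite")
               .partitionBy("order_date")
               .parquet("s3://example-data-lake/curated/orders/")
    )

Partitioning on a date column as shown is what keeps downstream Athena queries pruning-friendly.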
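
For the orchestration bullet, a minimal Airflow sketch follows. It assumes a recent Airflow 2.x with the Amazon provider package installed; the Glue job and crawler names are hypothetical:

    # Minimal Airflow DAG sketch: daily schedule, retries, and a
    # two-task dependency chain. Job/crawler names are hypothetical.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
    from airflow.providers.amazon.aws.operators.glue_crawler import GlueCrawlerOperator

    with DAG(
        dag_id="orders_etl_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        # Run the Glue job that produces the curated Parquet.
        transform = GlueJobOperator(
            task_id="transform_orders",
            job_name="orders_etl_sketch",
        )

        # Refresh the Glue Data Catalog once the new partitions land.
        crawl = GlueCrawlerOperator(
            task_id="crawl_orders",
            config={"Name": "orders_crawler"},
        )

        transform >> crawl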
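
And for the observability bullet, a minimal boto3 sketch of emitting a custom CloudWatch metric from a pipeline step; the namespace and metric name are hypothetical:

    # Minimal CloudWatch instrumentation sketch: publish a custom metric
    # (e.g. rows written) that an alarm can watch. Names are hypothetical.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="DataPipelines/Orders",
        MetricData=[{
            "MetricName": "RowsWritten",
            "Value": 125000.0,
            "Unit": "Count",
        }],
    )

An alarm on a metric like this (or on its absence) is a simple way to catch silent pipeline failures.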

Skills & Qualifications

Must-Have

  • 3+ years of hands-on experience as a Data/ETL Engineer on AWS with core services: S3, Glue, Redshift, Athena, Lambda.
  • Strong Python and PySpark skills plus advanced SQL for complex transformations and performance tuning.
  • Proven experience designing ETL/ELT patterns, data modeling, partitioning and ensuring data quality and lineage.
  • Familiarity with orchestration tools (Airflow or Step Functions), version control (Git) and production monitoring (CloudWatch).

Preferred

  • Experience with Terraform or CloudFormation, containerization (Docker) and CI/CD pipelines for data workloads.
  • Exposure to streaming technologies (Kinesis, Kafka), data cataloging (Glue Data Catalog, Lake Formation) and Redshift performance tuning.

Benefits & Culture Highlights
  • On-site, hands-on engineering role with high ownership and visible impact on enterprise analytics initiatives.
  • Support for professional development and AWS certifications; learning-focused environment with mentoring.
  • Collaborative, fast-paced team that values engineering excellence, automation and measurable outcomes.

Why apply: This role is ideal for mid-level data engineers who want to take deep technical ownership of cloud-native data platforms on AWS, deliver pipelines end to end, and accelerate their cloud engineering career in an on-site India setting.

Skills: PySpark, Python, AWS, Databricks
