
Senior Data Engineer - AWS

7 years

0 Lacs

Posted: 1 week ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Contractual

Job Description

IMMEDIATE JOINERS ONLY

Job Purpose

As a Senior Data Engineer, you will be responsible for designing, developing, and maintaining robust data pipelines and systems across both AWS-native and open-source stacks. Your role is critical to implementing real-time and batch data processing across the organization, sourcing data from a wide range of systems such as social media APIs, an internal SAP ERP, and ticketing platforms, while ensuring high data availability, integrity, and scalability. You will work closely with other data professionals, analysts, and business teams to enable data-driven decision-making.

Duties and Responsibilities

  • Design and build scalable and secure data pipelines using AWS Glue, Kinesis, and Apache Spark.
  • Develop and manage data warehousing solutions using Amazon Redshift and Apache Iceberg on S3.
  • Implement real-time data ingestion and processing workflows for high-volume datasets.
  • Use Apache Airflow to orchestrate and monitor batch and streaming data pipelines.
  • Maintain and enhance data lake structures and table formats with schema evolution using Iceberg.
  • Collaborate with analytics, data science, and business teams to deliver reliable data products.
  • Ensure data quality, lineage, versioning, and governance standards are met.
  • Troubleshoot and resolve issues related to data flow, transformations, and job failures.
  • Optimize data systems for cost, performance, and scale.
  • Design, develop, and deploy data pipelines through the DevOps lifecycle, enabling automated CI/CD workflows.
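The data-quality duty above can be illustrated with a minimal sketch: a batch step that gates records on simple validity checks before loading, written in plain Python. This is purely illustrative; the role's actual stack uses Glue, Spark, and Airflow, and the record fields and rules below are assumptions, not part of any real schema here.

```python
# Illustrative sketch only: a batch step with a data-quality gate.
# The record shape ("id", "amount") and the rules are hypothetical assumptions;
# a production pipeline would express these checks in Spark/Glue or a tool
# such as Great Expectations.

def validate_record(record: dict) -> bool:
    """Reject records missing required fields or carrying invalid values."""
    return (
        record.get("id") is not None
        and isinstance(record.get("amount"), (int, float))
        and record["amount"] >= 0
    )

def run_batch(records: list) -> tuple:
    """Split a batch into loadable rows and quarantined failures."""
    good = [r for r in records if validate_record(r)]
    bad = [r for r in records if not validate_record(r)]
    return good, bad

batch = [
    {"id": 1, "amount": 10.5},
    {"id": None, "amount": 3.0},   # fails: missing id
    {"id": 2, "amount": -7},       # fails: negative amount
]
good, bad = run_batch(batch)
```

Quarantining failures rather than dropping them silently is what makes the lineage and troubleshooting duties above tractable: bad rows remain inspectable.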

Education and Experience

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 7+ years of experience in data engineering or a similar technical role.
  • Proven experience building on AWS data services, especially Glue, Redshift, Iceberg, and Kinesis.
  • Demonstrated experience working with Apache Spark and Apache Airflow in production environments.


Knowledge, Skills, and Abilities

  • Strong command of Python and SQL.
  • Deep understanding of distributed data systems and real-time architecture.
  • Experience working with structured, semi-structured, and unstructured data.
  • Solid understanding of data formats such as Parquet, ORC, Avro, and JSON.
  • Familiarity with DevOps principles, CI/CD pipelines, and Git version control.
  • Knowledge of metadata management and governance tooling (Glue Catalog, OpenMetadata, and Great Expectations are a plus).
  • Strong analytical and problem-solving skills.
  • Effective communication and stakeholder engagement skills.
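As a small illustration of the semi-structured data handling listed above, the sketch below flattens a nested JSON record into dot-separated tabular columns using only the standard library, the kind of shaping often done before writing columnar formats such as Parquet. The nesting shape and key names are assumptions for illustration only.

```python
import json

# Illustrative only: flatten nested JSON into dot-separated column names.
# The sample record below is hypothetical, not taken from this role's systems.
def flatten(obj: dict, prefix: str = "") -> dict:
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))  # recurse into nested objects
        else:
            flat[name] = value
    return flat

raw = '{"id": 7, "user": {"name": "asha", "geo": {"city": "Pune"}}}'
row = flatten(json.loads(raw))
```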

 
