Data Engineer (AWS + Apache Flink + Java)

Experience: 6 years
Salary: 20–30 Lacs
Posted: 2 weeks ago | Platform: LinkedIn

Work Mode: Remote
Job Type: Contractual

Job Description

Timing: 9 AM – 6 PM
Experience: 6+ Years
Employment Type: Contract
Location: Remote

About The Role

We are seeking a skilled Data Engineer with strong expertise in AWS, real-time streaming, Apache Flink, and Java. The role involves designing and building highly scalable streaming pipelines, optimizing distributed data systems, and collaborating closely with architecture, product, and data science teams to enable low-latency data processing.

Key Responsibilities

  • Real-Time Pipeline Development
    • Build and maintain real-time data pipelines using Apache Flink on AWS.
    • Develop complex Flink DataStream / SQL applications using Java (see the sketch after this list).
    • Implement stateful processing, event-time handling, windowing, and fault tolerance.
  • AWS Data Engineering
    • Work with AWS services: Kinesis, MSK (Kafka), S3, Glue, EMR, Lambda, Redshift, DynamoDB, Athena.
    • Optimize cloud infrastructure for cost, scalability, and performance.
  • Data Integration & Architecture
    • Integrate pipelines with S3, data warehouses, NoSQL stores, and APIs.
    • Ensure data quality, schema evolution, and efficient serialization (Avro/Parquet).
    • Contribute to real-time data architecture and ingestion strategies.
  • Optimization, Monitoring & Reliability
    • Tune Flink jobs for latency, throughput, checkpointing, and parallelism.
    • Implement monitoring via CloudWatch, Prometheus, Grafana, etc.
    • Troubleshoot distributed streaming systems.
  • Collaboration & Best Practices
    • Work with architects, analysts, and ML teams.
    • Follow CI/CD, automated testing, and IaC (Terraform/CloudFormation) practices.
    • Maintain documentation and reusable libraries.
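
By way of illustration, the sketch below shows the kind of Flink DataStream job referenced above: a Java application that reads from a Kinesis stream, assigns event-time watermarks, and runs a keyed, windowed aggregation with exactly-once checkpointing. This is a minimal sketch, not part of the posting: the ClickEvent type, the "click-events" stream name, the us-east-1 region, and the CSV record format are hypothetical placeholders, and FlinkKinesisConsumer is the legacy Kinesis connector, so the exact source class will depend on the Flink version in use.

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class ClickCountJob {

    /** Hypothetical event type; a production schema would typically be an Avro-backed POJO. */
    public static class ClickEvent {
        public String userId;
        public long timestampMillis;
        public long count = 1L;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Fault tolerance: exactly-once checkpoints every 60 seconds.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // Source: a Kinesis stream (stream name and region are placeholders).
        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");
        DataStream<String> raw = env.addSource(
                new FlinkKinesisConsumer<>("click-events", new SimpleStringSchema(), consumerConfig));

        // Parse records, assign event-time timestamps, and tolerate 5 s of out-of-orderness.
        DataStream<ClickEvent> clicks = raw
                .map(ClickCountJob::parse)
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy.<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                                .withTimestampAssigner((e, ts) -> e.timestampMillis));

        // Stateful, keyed aggregation over 1-minute event-time tumbling windows.
        clicks.keyBy(e -> e.userId)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                .reduce((a, b) -> {
                    ClickEvent merged = new ClickEvent();
                    merged.userId = a.userId;
                    merged.timestampMillis = Math.max(a.timestampMillis, b.timestampMillis);
                    merged.count = a.count + b.count;
                    return merged;
                })
                .map(e -> e.userId + "," + e.count)
                .print(); // a real job would write to Kinesis, Kafka, or S3 instead of printing

        env.execute("click-count");
    }

    // Minimal parser for the placeholder "userId,timestampMillis" CSV format.
    private static ClickEvent parse(String line) {
        String[] parts = line.split(",");
        ClickEvent e = new ClickEvent();
        e.userId = parts[0];
        e.timestampMillis = Long.parseLong(parts[1]);
        return e;
    }
}
```

In practice, a job like this would typically consume Avro records from MSK or Kinesis, write results to S3, Redshift, or DynamoDB, and be tuned for checkpoint interval, state backend, and parallelism as described under Optimization, Monitoring & Reliability.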

Required Skills & Experience

  • 3–7+ years as a Data/Streaming Engineer.
  • Strong in Java (multithreading, functional programming).
  • Mandatory: Hands-on Apache Flink development.
  • Proficient with AWS data stack: Kinesis/MSK, S3, Glue, Redshift, EMR, Lambda.
  • Solid knowledge of distributed systems, streaming patterns, SQL, data lakes.
  • Experience with Git, Jenkins, GitHub Actions, IaC.

Good to Have

  • Kafka internals.
  • Flink Table/SQL API, PyFlink.
  • Python scripting.
  • Docker, Kubernetes, AWS EKS.
  • Observability tools (Datadog, New Relic, OpenTelemetry).

Education

Bachelor’s or Master’s in Computer Science, Engineering, Information Systems, or equivalent.

Skills: S3, Flink, Apache, pipelines, Java, architecture, SQL, Glue, data, AWS
