AWS Data Engineer (Spark Scala)

5 - 10 years

10.0 - 20.0 Lacs P.A.

Chennai, Bengaluru, Hyderabad

Posted: 2 months ago | Platform: Naukri


Skills Required

Scala, ETL, AWS, Apache Spark

Work Mode

Work from Office

Job Type

Full Time

Job Description

Location: Chennai, Bangalore, Hyderabad, Pune & Gurgaon

Role & responsibilities

- Design and develop scalable data pipelines using Apache Spark and Scala.
- Implement ETL processes to ingest, transform, and store data efficiently.
- Work with AWS services such as S3, Glue, EMR, Lambda, Redshift, DynamoDB, and Kinesis.
- Optimize big data processing for performance, scalability, and cost-effectiveness.
- Develop and maintain data lake architectures and real-time streaming solutions.
- Ensure data quality, governance, and security within the AWS ecosystem.
- Work closely with data scientists, analysts, and business teams to provide data-driven insights.
- Automate data workflows using CI/CD pipelines and Infrastructure as Code (IaC) tools.
- Troubleshoot and resolve performance bottlenecks in data processing.
- Stay updated with the latest big data, cloud, and analytics trends.
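For candidates gauging fit, a minimal sketch of the kind of Spark/Scala ETL pipeline the responsibilities describe. The bucket paths, column names, and app name are hypothetical placeholders, not details from this posting; it assumes data lands as CSV in S3 and is curated as partitioned Parquet for downstream AWS consumers.

```scala
// Minimal Spark/Scala ETL sketch: read raw CSV events from S3,
// clean and aggregate them, then write partitioned Parquet back to S3.
// All paths and column names below are illustrative placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-etl")
      .getOrCreate()

    // Extract: raw events landed in S3 by an upstream producer
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/events/")

    // Transform: drop malformed rows, derive a date column,
    // and aggregate event counts per user per day
    val daily = raw
      .filter(col("user_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy("user_id", "event_date")
      .agg(count("*").as("event_count"))

    // Load: write date-partitioned Parquet for downstream
    // consumers (e.g. Redshift Spectrum or Athena over Glue Catalog)
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_events/")

    spark.stop()
  }
}
```

In production, a job like this would typically run on EMR or as a Glue Spark job, with the schedule and infrastructure managed through IaC and CI/CD as the responsibilities above note.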
