
6 AWS Data Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

17 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Location: Pune, Mumbai, Gurugram, Bangalore, Hyderabad
Experience: 6 to 10 years

Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, and Databricks. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing, excellent knowledge of PySpark or Spark, and a proven ability to optimize Spark job performance.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines across cloud platforms including AWS, Azure, and Databricks.
- Implement data ingestion and transformation processes to support efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
  - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
  - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
- Optimize Spark job performance to ensure high efficiency and reliability.
- Proactively learn and adopt new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as needed.

Qualifications:
- 3-8 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge of and hands-on experience with cloud services (AWS/Azure).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS, Azure) is a plus.
- Familiarity with Spark Streaming is a bonus.

Posted 4 weeks ago

Apply

8.0 - 12.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Lead design, development, and deployment of cloud-native and hybrid solutions on AWS and GCP. Ensure robust infrastructure using services like GKE, GCE, Cloud Functions, Cloud Run (GCP) and EC2, Lambda, ECS, S3, etc. (AWS).

Posted 1 month ago

Apply

6.0 - 11.0 years

25 - 40 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

- Design, build, and deploy cloud-native and hybrid solutions on AWS and GCP.
- Experience in Glue, Athena, PySpark, Step Functions, Lambda, SQL, ETL, DWH, Python, EC2, EBS/EFS, CloudFront, ECS, S3 (AWS), and Cloud Functions, Cloud Run, GKE, GCE (GCP), etc.

Posted 1 month ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Location: Pune, Mumbai, Gurugram, Bangalore, Hyderabad
Experience: 3.5 to 8 years

Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, and Databricks. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing, excellent knowledge of PySpark or Spark, and a proven ability to optimize Spark job performance.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines across cloud platforms including AWS, Azure, and Databricks.
- Implement data ingestion and transformation processes to support efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
  - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
  - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
- Optimize Spark job performance to ensure high efficiency and reliability.
- Proactively learn and adopt new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as needed.

Qualifications:
- 3-8 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge of and hands-on experience with cloud services (AWS/Azure).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS, Azure) is a plus.
- Familiarity with Spark Streaming is a bonus.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Pune

Work from Office

We are organizing a direct walk-in drive at the Pune location. Please find below the details and skills for the walk-in at TCS - Pune on 21st June 2025.

Experience: 5-10 years

Skills:
(1) Dot Net
(2) AWS Data, PySpark, Redshift
(3) AWS Node JS
(4) Azure DevOps with Terraform
(5) Java Spring Boot Microservices
(6) Mainframe, CICS, COBOL, DB2

Posted 1 month ago

Apply

5 - 10 years

0 Lacs

Kolkata

Hybrid

Skills Required: AWS Data Engineer - DATA-SERVICES-AWS
Skills: AWS Glue; AWS Lambda; AWS RDS; AWS S3; DynamoDB; PySpark
Location: Bangalore
Experience: 5+ years only

If interested, please share your updated resume with the below details to meeta.padaya@ltimindtree.com:
- Total Experience:
- Relevant Experience in AWS Data:
- AWS PaaS Services:
- PySpark:
- Available for F2F interview on 26th Apr, Saturday (Yes/No):
- Company:
- CCTC:
- ECTC:
- NP (if serving, kindly mention LWD):
- Current / Preferred Location:

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies