Experience: 5.0 - 10.0 years
Salary: 15 - 25 Lacs
Location: Pune
Work mode: Work from Office
Develop and optimize Big Data solutions using Apache Spark.
Work extensively with PySpark and data engineering tools.
Handle real-time data processing using Kafka and Spark Streaming.
Design and implement ETL pipelines and migrate existing workflows to Spark (a minimal streaming sketch follows below).

Required Candidate Profile
Hands-on experience with Hadoop, HDFS, and YARN.
Strong programming skills in Scala, Java, and Python.
Exposure to CI/CD automation for Big Data workflows.
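The listing names PySpark, Kafka, and Spark Streaming but gives no further detail, so the following is only a minimal sketch of the kind of Structured Streaming job the role describes. The broker address, the "events" topic name, and the JSON schema are all assumptions made for illustration, and the Kafka source requires the spark-sql-kafka connector package on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = (
    SparkSession.builder
    .appName("kafka-streaming-sketch")
    .getOrCreate()
)

# Hypothetical schema of the incoming JSON payload.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("ts", LongType()),
])

# Read a stream of records from Kafka; broker and topic are assumptions.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse it as JSON.
events = (
    raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Console sink keeps the sketch self-contained; a real pipeline would write
# to a durable sink (e.g. Parquet or Delta) with checkpointing enabled.
query = (
    events.writeStream
    .format("console")
    .outputMode("append")
    .option("truncate", "false")
    .start()
)

query.awaitTermination()
```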
Posted 1 day ago
Accenture: 16869 Jobs | Dublin
Wipro: 9024 Jobs | Bengaluru
EY: 7266 Jobs | London
Amazon: 5652 Jobs | Seattle, WA
Uplers: 5629 Jobs | Ahmedabad
IBM: 5547 Jobs | Armonk
Oracle: 5387 Jobs | Redwood City
Accenture in India: 5156 Jobs | Dublin 2
Capgemini: 3242 Jobs | Paris, France
Tata Consultancy Services: 3099 Jobs | Thane