Posted: 1 day ago
Work from Office | Full Time
Role overview:
- Develop and optimize Big Data solutions using Apache Spark.
- Work extensively with PySpark and data engineering tools.
- Handle real-time data processing using Kafka and Spark Streaming.
- Design and implement ETL pipelines and migrate workflows to Spark.

Required candidate profile:
- Hands-on experience with Hadoop, HDFS, and YARN.
- Strong programming skills in Scala, Java, and Python.
- Exposure to CI/CD automation for Big Data workflows.
Talent Corner Hr Services
Salary: 15.0 - 25.0 Lacs P.A.
Location: Hyderabad