Posted: 1 week ago
Work from Office
Full Time
Design, develop, and optimize large-scale data processing pipelines using PySpark. Work with Apache ecosystem tools and frameworks (Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
32.5 - 40.0 Lacs P.A.
Hyderabad, Pune