Posted: 2 months ago | Hybrid | Full Time
Responsibilities:
- Design and implement high-performance data pipelines using Apache Spark and Scala.
- Optimize Spark jobs for efficiency and scalability.
- Work with diverse data sources and collaborate with cross-functional teams to deliver valuable insights.
- Monitor and troubleshoot production pipelines to ensure smooth operations.
- Maintain thorough documentation for all systems and code.
Requirements:
- Minimum of 3 years of hands-on experience with Apache Spark and Scala.
- Strong grasp of distributed computing principles and Spark internals.
- Proficiency in working with big data technologies like HDFS, Hive, Kafka, and HBase.
- Ability to write well-optimized Spark jobs in Scala.
Xoriant
Location: Hyderabad, Chennai | Salary: 0.5 - 23.0 Lacs P.A.