10.0 - 15.0 years
10 - 15 Lacs
Ahmedabad, Gujarat, India
On-site
What You'll Do:
- Design and develop analytics workloads using Apache Spark and Scala for big data processing.
- Create and optimize data transformation pipelines using Spark or Apache Flink.
- Migrate workloads from cloud platforms to open-source Apache Spark infrastructure on Kubernetes.
- Implement performance optimization techniques for large-scale data processing.

Expertise You'll Bring:
- Scala programming, with a focus on the functional programming paradigm.
- Apache Spark: extensive experience with core concepts and APIs, including:
  - Spark SQL and DataFrame APIs
  - Spark Structured Streaming
  - Spark MLlib for analytics
- Distributed computing: strong understanding of big data processing frameworks.
- Data modeling: expertise in optimization techniques for large-scale datasets.
- Performance tuning: proficiency in optimizing Spark jobs.
- Lakehouse storage: good understanding of technologies such as Delta Lake and Apache Iceberg.
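For illustration only, here is a minimal sketch of the kind of Spark/Scala batch transformation this role describes: a DataFrame aggregation written to a Delta Lake table. The input and output paths and column names are hypothetical, and the Delta write assumes the delta-spark package is available on the classpath.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object SampleTransformation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sample-transformation")
      .getOrCreate()

    // Hypothetical source: raw event data stored as Parquet.
    val events = spark.read.parquet("/data/events")

    // Aggregate events per user per day using the DataFrame API.
    val daily = events
      .groupBy(F.col("user_id"), F.to_date(F.col("event_ts")).as("event_date"))
      .agg(F.count("*").as("event_count"))

    // Write the result as a Delta table (assumes delta-spark is on the classpath).
    daily.write
      .format("delta")
      .mode("overwrite")
      .save("/data/daily_event_counts")

    spark.stop()
  }
}
```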
Posted 2 days ago