Posted: 4 days ago
Platform: On-site
Must-Have:
1. 5+ years of experience in Spark and Scala development
2. Experience designing and developing Big Data solutions using Hadoop ecosystem technologies such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
3. Good experience writing and optimizing Spark jobs and Spark SQL; should have worked on both batch and streaming data processing
4. Experience writing and optimizing complex Hive and SQL queries to process large volumes of data; proficient with UDFs, tables, joins, views, etc.
5. Experience debugging Spark code
6. Working knowledge of basic UNIX commands and shell scripting
7. Experience with Autosys and Gradle
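To illustrate the kind of work requirements 1–4 describe, here is a minimal sketch of a Spark Scala batch job that registers a UDF and runs a Spark SQL query. It assumes spark-sql is on the classpath; all names (OrdersJob, normalizeCity, the orders data) are illustrative and not from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object OrdersJob {
  // Pure helper kept separate from Spark wiring so the transformation
  // logic can be unit-tested without a cluster.
  def normalizeCity(city: String): String =
    Option(city).map(_.trim.toLowerCase).getOrElse("unknown")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-batch-job")
      .getOrCreate()
    import spark.implicits._

    // Register the Scala function as a Spark SQL UDF.
    spark.udf.register("normalize_city", udf(normalizeCity _))

    // Illustrative in-memory data standing in for a real source table.
    val orders = Seq(("o1", "  Mumbai "), ("o2", null)).toDF("id", "city")
    orders.createOrReplaceTempView("orders")

    // Spark SQL query using the registered UDF; output written as Parquet.
    spark.sql("SELECT id, normalize_city(city) AS city FROM orders")
      .write.mode("overwrite").parquet("/tmp/orders_normalized")

    spark.stop()
  }
}
```

In real jobs the optimization work the posting mentions typically centers on partitioning, join strategies, and avoiding unnecessary shuffles rather than on the UDF itself.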
Tata Consultancy Services
Location: Mumbai, Maharashtra
Salary: Not disclosed