Posted: 5 days ago
On-site | Contractual
Our client is a global technology company headquartered in Santa Clara, California. It helps organisations harness the power of data to drive digital transformation, enhance operational efficiency, and achieve sustainability. With over 100 years of experience in operational technology (OT) and more than 60 years in IT, it unlocks the power of data from businesses, people and machines, helping enterprises store, enrich, activate and monetise their data to improve their customers' experiences, develop new revenue streams and lower their business costs. Over 80% of the Fortune 100 trust our client for data solutions.
The company's consolidated revenues for fiscal 2024 (ended March 31, 2024) were approximately USD 57.5 billion, and it has approximately 296,000 employees worldwide. It delivers digital solutions utilising Lumada in five sectors (Mobility, Smart Life, Industry, Energy and IT) to increase customers' social, environmental and economic value.
1. Create Scala/Spark/PySpark jobs for data transformation and aggregation; produce unit tests for Spark transformations and helper methods; use Spark and Spark SQL to read parquet data and create Hive tables via the Scala API.
2. Work closely with the Business Analyst team to review test results and obtain sign-off.
3. Prepare necessary design/operations documentation for future use; perform peer code reviews.
4. Act as gatekeeper for code quality checks; hands-on coding, usually in a pair programming environment.
5. Work in highly collaborative teams and build quality code.
6. The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation.
7. Familiarity with Oracle, Spark Streaming, Kafka, and ML.
8. Develop applications using the Hadoop tech stack and deliver them effectively, efficiently, on time, to specification and in a cost-effective manner.
9. Ensure smooth production deployments as per plan, and perform post-deployment verification.
10. This Hadoop Developer will play a hands-on role, developing quality applications within the desired timeframes and resolving team queries.
11. Requirements: Hadoop data engineer with 4-6 or 6-9 years of total experience; strong experience in Hadoop, Spark, Scala, Java, Hive, Impala, CI/CD, Git, Jenkins, Agile methodologies, DevOps, and the Cloudera Distribution; strong knowledge of data warehousing methodology.
People Prime Worldwide