Posted: 5 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Contractual

Job Description

Our client is a global technology company headquartered in Santa Clara, California. It focuses on helping organisations harness the power of data to drive digital transformation, enhance operational efficiency, and achieve sustainability. It draws on over 100 years of experience in operational technology (OT) and more than 60 years in IT to unlock the power of data from your business, your people and your machines. We help enterprises store, enrich, activate and monetise their data to improve their customers’ experiences, develop new revenue streams and lower their business costs. Over 80% of the Fortune 100 trust our client for data solutions.

The company’s consolidated revenues for fiscal 2024 (ended March 31, 2024) were approximately USD 57.5 billion, and the company has approximately 296,000 employees worldwide. It delivers digital solutions utilising Lumada in five sectors (Mobility, Smart Life, Industry, Energy and IT) to increase customers’ social, environmental and economic value.


Job Title:

Location: Hyderabad

Experience: 4+ years

Job Type:

Notice Period:

Mandatory Skills:

1. Create Scala/Spark/PySpark jobs for data transformation and aggregation; produce unit tests for Spark transformations and helper methods; use Spark and Spark SQL to read parquet data and create Hive tables via the Scala API.

2. Work closely with the Business Analyst team to review test results and obtain sign-off.

3. Prepare necessary design/operations documentation for future use.

4. Perform peer code quality reviews and act as gatekeeper for quality checks; hands-on coding, usually in a pair-programming environment.

5. Work in highly collaborative teams and build quality code.

6. The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation.

7. Familiarity with Oracle, Spark Streaming, Kafka and machine learning (ML).

8. Develop applications using the Hadoop tech stack, delivered effectively, efficiently, on time, to specification and in a cost-effective manner.

9. Ensure smooth production deployments as per plan, and perform post-deployment verification.

10. This Hadoop Developer will play a hands-on role, developing quality applications within the desired timeframes and resolving team queries.

11. Requirements: Hadoop data engineer with 4–6 or 6–9 years of total experience and strong experience in Hadoop, Spark, Scala, Java, Hive, Impala, CI/CD, Git, Jenkins, Agile methodologies, DevOps and the Cloudera Distribution. Strong knowledge of data warehousing methodology.
