Hadoop Lead

7 - 12 years

3 - 12 Lacs

Posted: 2 weeks ago | Platform: Foundit


Skills Required

data-pipeline

Work Mode

On-site

Job Type

Full Time

Job Description

Big Data Engineer

Key Responsibilities

  • Data Pipeline & Transformation: You'll have experience building data pipelines using technologies like Apache Kafka, Storm, Spark, or AWS Lambda. The role requires at least 2 years of experience writing PySpark for data transformation. You'll also work with terabyte-scale data sets using relational databases and SQL.
  • Data Warehousing & ETL: The position demands at least 2 years of experience with data warehouse technical architectures, ETL/ELT processes, and data security. You'll be responsible for designing data warehouse solutions and integrating various technical components.
  • Project Leadership: You'll have 2 or more years of experience leading data warehousing and analytics projects, specifically utilizing AWS technologies like Redshift, S3, and EC2.
  • Methodologies & Tools: You'll use Agile/Scrum methodologies to iterate on product changes and work through backlogs. Exposure to reporting tools like QlikView or Tableau is a plus, as is familiarity with Linux/Unix scripting.
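The ETL/ELT responsibilities above can be sketched in miniature. The snippet below is an illustrative sketch only: Python's stdlib sqlite3 stands in for a warehouse like Redshift, plain SQL stands in for a PySpark transformation, and the table and column names are invented for the example.

```python
import sqlite3

# Extract/load step: raw events land in a staging table.
# (sqlite3 in place of Redshift; schema is hypothetical.)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INTEGER, amount REAL, status TEXT);
    INSERT INTO raw_events VALUES
        (1, 120.0, 'ok'), (1, 80.0, 'ok'),
        (2, 50.0, 'failed'), (2, 30.0, 'ok');
""")

# Transform step: filter out failed records and aggregate per user,
# materializing the result into a reporting table.
conn.executescript("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total, COUNT(*) AS n_events
    FROM raw_events
    WHERE status = 'ok'
    GROUP BY user_id;
""")

rows = conn.execute(
    "SELECT user_id, total, n_events FROM user_totals ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 200.0, 2), (2, 30.0, 1)]
```

In PySpark the same transform would typically be a `filter` plus `groupBy(...).agg(...)` over a DataFrame; the SQL above expresses the identical logic at toy scale.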


