8 - 10 years

8 - 10 Lacs

Delhi, India

Posted: 1 day ago | Platform: Foundit


Work Mode

On-site

Job Type

Full Time

Job Description

Big Data Engineer

Key Responsibilities:

  • Design and develop high-performance, scalable data pipelines for batch and streaming processing.
  • Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
  • Manage large-scale data ingestion from various structured and unstructured data sources.
  • Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
  • Optimize storage and query performance for high-throughput, low-latency systems.
  • Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
  • Ensure data integrity, quality, governance, and security across all systems.
  • Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.
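To illustrate the kind of transformation and data-quality work these responsibilities describe, here is a minimal pure-Python sketch of one batch ETL step. The record fields (user_id, amount, country) are hypothetical examples, not from this posting; in practice this role would express such logic as Spark or Snowpark DataFrame operations rather than plain Python loops.

```python
def transform(records):
    """Normalize raw records and drop rows failing basic quality checks."""
    clean = []
    for rec in records:
        # Quality gate: require a non-empty user_id.
        if not rec.get("user_id"):
            continue
        # Coerce amount to float; drop rows where it is missing or malformed.
        try:
            amount = float(rec.get("amount", "nan"))
        except (TypeError, ValueError):
            continue
        # Reject negative amounts (and NaN, which fails the comparison).
        if not amount >= 0:
            continue
        clean.append({
            "user_id": rec["user_id"],
            "amount": round(amount, 2),                        # normalize precision
            "country": rec.get("country", "unknown").strip().upper(),
        })
    return clean

raw = [
    {"user_id": "u1", "amount": "19.999", "country": " in "},
    {"user_id": "", "amount": "5.00"},                 # dropped: missing user_id
    {"user_id": "u2", "amount": -3, "country": "US"},  # dropped: negative amount
]
print(transform(raw))
# → [{'user_id': 'u1', 'amount': 20.0, 'country': 'IN'}]
```

The same cleanse-and-validate pattern scales to the distributed setting: each check becomes a filter/withColumn step in a Spark or Snowpark pipeline.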

Must-Have Skills:

  • Strong hands-on experience with:
    • Snowflake & Snowpark
    • Apache Spark
    • Hadoop, MapReduce
    • Pig, Sqoop, HBase, Hive
  • Expertise in data ingestion, transformation, and pipeline orchestration
  • In-depth knowledge of distributed computing and big data architecture
  • Experience in data modeling, storage optimization, and query performance tuning