Big Data Engineer

3 years

4 - 6 Lacs

Posted: 7 hours ago | Platform: GlassDoor

Apply

Work Mode

On-site

Job Type

Full Time

Job Description

Position Required: Hadoop & ETL Developer (Big Data Engineer)

Experience Required: 3+ Years

Job location: PMU ICJS Delhi

Job Summary:

We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities:

· Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.

· Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.

· Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.

· Develop and manage workflow orchestration using Apache Airflow.

· Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.

· Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.

· Ensure data quality, governance, and consistency across the pipeline.

· Collaborate with data engineering teams to build scalable and high-performance data solutions.

· Monitor, debug, and enhance big data workflows to improve reliability and efficiency.
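To illustrate the MapReduce model named in the responsibilities above, here is a minimal word-count sketch in plain Python — the canonical MapReduce example, with no Hadoop cluster required. The function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not part of any Hadoop API; the real framework distributes each phase across cluster nodes.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts emitted for each word
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts == {"big": 2, "data": 2, "pipelines": 2}
```

Optimizing real MapReduce or Spark jobs, as the role requires, largely means minimizing this shuffle step (e.g. with combiners or partition-aware joins), since it is the phase that moves data across the network.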

Required Skills & Experience:

· 3+ years of experience in Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).

· Strong expertise in ETL processes, data transformation, and data warehousing.

· Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.

· Proficiency in SQL and handling structured and unstructured data.

· Experience with NoSQL databases like MongoDB.

· Strong programming skills in Python or Scala for scripting and automation.

· Experience in optimizing Spark and MapReduce jobs for high-performance computing.

· Good understanding of data lake architectures and big data best practices.
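Since the role pairs SQL proficiency with ETL work, here is a small extract-transform-load sketch using Python's built-in sqlite3 module. The table and column names are purely illustrative; in this role the same pattern would typically run against Hive or Spark SQL over HDFS-backed tables rather than SQLite.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: load raw rows into a staging table (illustrative schema)
cur.execute("CREATE TABLE staging_events (user_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO staging_events VALUES (?, ?)",
    [(1, 10.0), (1, 5.5), (2, 3.0)],
)

# Transform + Load: aggregate per user into a reporting table
cur.execute(
    "CREATE TABLE user_totals AS "
    "SELECT user_id, SUM(amount) AS total "
    "FROM staging_events GROUP BY user_id"
)

totals = dict(cur.execute("SELECT user_id, total FROM user_totals ORDER BY user_id"))
# totals == {1: 15.5, 2: 3.0}
```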

Qualification:

UG: B.Tech/B.E. in Information Technology, Computer Science & Engineering, or Electronics and Communication Engineering

PG: M.Tech in Computer Science & Engineering, Information Technology, or Electronics and Communication Engineering

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months

Pay: ₹40,000.00 - ₹50,000.00 per month

Work Location: In person
