Data Engineer (Spark + Databricks + Python/Scala)

Experience: 8 - 12 years

Salary: 10 - 20 Lacs

Posted: 2 hours ago | Platform: Naukri

Work Mode: Hybrid

Job Type: Full Time

Job Description

Data Engineer

Key Responsibilities

  • Design, build, and support ETL pipelines using Spark & Databricks.
  • Analyze requirements, create technical specifications, and deliver high-quality code.
  • Ensure solutions follow enterprise best practices, architecture, and compliance standards.
  • Collaborate with global teams to ensure consistent and high-quality deliverables.
  • Conduct unit testing, debugging, deployment, and code management.
  • Work on migration of legacy ETL pipelines into Spark-based modern frameworks.
  • Ensure data governance, quality checks, and data lineage across the pipeline.
  • Handle incident, change, and problem management activities.
  • Work on cloud data engineering solutions using AWS.
  • Interface with cross-functional teams, downstream systems, and cybersecurity teams.
  • Actively participate in Agile ceremonies and follow Agile best practices.
  • Stay updated with the latest cloud & big data technologies; contribute to continuous improvement.
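As an illustration only (not part of the role description), the data quality checks mentioned above might be sketched framework-free in plain Python as follows; in practice they would run over Spark DataFrames on Databricks, and every function name, field name, and threshold here is hypothetical:

```python
# Hypothetical sketch of pipeline data-quality checks of the kind described
# in the responsibilities above. Works on plain lists of dicts for clarity.

def null_rate(rows, field):
    """Fraction of rows where `field` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(field) is None)
    return missing / len(rows)

def run_quality_checks(rows, required_fields, max_null_rate=0.01):
    """Return human-readable failures; an empty list means all checks passed."""
    failures = []
    if not rows:
        failures.append("dataset is empty")
        return failures
    for field in required_fields:
        rate = null_rate(rows, field)
        if rate > max_null_rate:
            failures.append(
                f"{field}: null rate {rate:.1%} exceeds {max_null_rate:.1%}"
            )
    return failures

# Toy usage: one of three `amount` values is missing, so that check fails.
records = [
    {"id": 1, "amount": 10.5},
    {"id": 2, "amount": None},
    {"id": 3, "amount": 7.0},
]
print(run_quality_checks(records, required_fields=["id", "amount"]))
```

A real pipeline would typically express the same checks as Spark SQL aggregations and wire the failures into alerting, but the control flow is the same.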

Required Skills & Experience

  • 7+ years of IT experience in application development & production support.
  • 7+ years of experience designing & developing ETL pipelines.
  • 5+ years of hands-on experience in Spark, Databricks, and Python/Scala.
  • Strong experience in Spark SQL, Hive, and data warehousing/data modeling concepts.
  • Experience with AWS Cloud services (S3, EC2, EMR, etc.).
  • Good understanding of Kafka or any streaming framework.
  • Knowledge of Core Java, Linux, SQL, and scripting languages.
  • Experience working with relational databases (Oracle preferred).
  • Exposure to CI/CD pipelines, containers, and orchestration tools.
  • Experience working in Agile/Scrum environments.
  • Excellent communication skills and ability to work with global teams.
  • Fast learner with ability to adapt to new technologies quickly.

Good to Have

  • Experience with visualization or analytics tools (Tableau, R).
  • Prior experience modernizing legacy ETL pipelines.
  • Knowledge of data governance, observability, and quality frameworks.

Education

  • Bachelor's degree in Computer Science, Engineering, or a related field.

Elabs Infotech

Information Technology and Services

New York
