8 - 13 years

20 - 35 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Title: Data Engineer

Role Overview

  • Design, develop, and maintain scalable ETL pipelines and data workflows using Databricks and Apache Spark.
  • Build and optimize batch and streaming data pipelines to ensure high performance, reliability, and data quality.
  • Work with structured and unstructured data using SQL, Python, and NoSQL databases.
  • Implement data transformations using Delta Lake and ensure efficient data storage and retrieval.
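The pipeline responsibilities above boil down to an extract–validate–transform pattern. As a rough, pure-Python sketch of that pattern (field names like `user_id` and `amount` are hypothetical; in a real Databricks pipeline this logic would be expressed as PySpark DataFrame operations against Delta tables):

```python
# Minimal batch-ETL sketch: take raw rows, drop records that fail
# data-quality checks, apply a transformation, return the clean batch.
# All field names and the cents-to-units rescale are illustrative only.

def is_valid(row):
    """Data-quality gate: require a user_id and a non-negative amount."""
    return row.get("user_id") is not None and row.get("amount", -1) >= 0

def transform(row):
    """Example transformation: normalize amount from cents to units."""
    return {**row, "amount": row["amount"] / 100}

def run_batch(rows):
    """ETL core: filter out invalid records, then transform the survivors."""
    return [transform(r) for r in rows if is_valid(r)]

raw = [
    {"user_id": 1, "amount": 250},
    {"user_id": None, "amount": 100},  # fails quality check: no user_id
    {"user_id": 2, "amount": -5},      # fails quality check: negative amount
]
clean = run_batch(raw)
print(clean)  # only the first record survives, amount rescaled to 2.5
```

The same validate-then-transform shape carries over directly to Spark, where `is_valid` becomes a `filter` and `transform` a `withColumn` expression.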

Databricks Platform Engineering

  • Develop and manage Databricks workspaces, clusters, notebooks, and workflows.
  • Deep understanding of Databricks architecture, including:
    • Databricks SQL
    • Delta Lake
    • Databricks Runtime
    • Databricks Workflows
  • Optimize cluster performance, cost, and availability.
  • Support data exploration, analytics, and development through Databricks notebooks.
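Cluster cost and availability tuning of the kind described above is usually driven by the cluster specification itself. A hypothetical Databricks Clusters API payload (all names and values are illustrative, not from this posting) that enables autoscaling and auto-termination might look like:

```json
{
  "cluster_name": "nightly-etl",
  "spark_version": "13.3.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "autoscale": { "min_workers": 2, "max_workers": 8 },
  "autotermination_minutes": 30
}
```

Autoscaling bounds worker count to actual load, and auto-termination releases idle clusters — the two most common levers for the cost optimization this role calls out.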

Cloud & Infrastructure

  • Hands-on experience with Azure Cloud and cloud-native data solutions.
  • Work with cloud storage services such as Azure Data Lake Storage (ADLS).
  • Support CI/CD pipelines and Azure infrastructure for data platform deployments.
  • Collaborate with DevOps teams to automate deployments and platform operations.

Security & Governance

  • Implement Databricks security best practices, including:
    • Authentication and authorization
    • Role-based access control (RBAC)
    • Data encryption (at rest and in transit)
  • Ensure secure access to data and protect API endpoints.
  • Stay updated on security vulnerabilities and compliance standards.

Collaboration & Version Control

  • Collaborate with cross-functional teams including data scientists, analysts, and platform engineers.
  • Use Git and Azure DevOps for version control, code reviews, and CI/CD workflows.
  • Follow best practices for code quality, documentation, and release management.
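The CI/CD workflow described above is typically declared in an `azure-pipelines.yml` at the repository root. A minimal sketch, assuming a Python repo with a `requirements.txt` and a `tests/` directory (the trigger branch, image, and steps are illustrative):

```yaml
# Illustrative Azure DevOps CI pipeline: runs on pushes to main,
# installs dependencies, then executes the test suite.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: pip install -r requirements.txt
    displayName: 'Install dependencies'
  - script: pytest tests/
    displayName: 'Run tests'
```

A deployment stage (e.g. publishing notebooks or jobs to a Databricks workspace) would be added as further steps or stages in the same file.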

Required Technical Skills

Core Technologies

  • Databricks – Expert
  • ETL & Data Pipelines – Expert
  • Cloud Platforms (Azure) – Expert
  • Security – Advanced
  • Architecture & System Design – Advanced

Programming & Data

  • Python – Intermediate (for data engineering, automation, and scripting)
  • SQL – Intermediate
  • Apache Spark (PySpark / Scala Spark) – Strong hands-on experience
  • NoSQL Databases – Intermediate

DevOps & Tooling

  • Git – Intermediate
  • Azure DevOps – Intermediate
  • CI/CD & Azure Infrastructure – Intermediate

Big Data & Distributed Systems

  • Strong understanding of distributed computing concepts.
  • Hands-on experience with Apache Spark and its APIs.
  • Exposure to other big data technologies such as Hadoop, Hive, and Kafka is a plus.

Nice to Have

  • Experience with Scala for performance-critical Spark applications.
  • Experience working in large-scale, enterprise data platforms.
  • Knowledge of data governance, monitoring, and observability tools.
