Big Data Engineer

Experience: 8 years



Work Mode: On-site

Job Type: Full Time

Job Description

K&K Talents


Title: Big Data Engineer (Scala & Spark)

Location: PAN India (Hybrid)

Employment Type: Permanent


Notice Period: 15 Days


Required Experience: 8 Years and above


Role: Senior Data Engineer


Responsibilities

  • Design and build scalable, high-performance data pipelines using Azure Databricks, ADF, and Spark.
  • Develop and optimize Spark (Scala/PySpark) jobs for large-volume data processing and transformation (an illustrative sketch follows this list).
  • Manage and integrate data from diverse sources including ADLS, Blob Storage, SQL databases, NoSQL stores, and streaming platforms.
  • Implement and support real-time data streaming using Kafka, Spark Streaming, or Flink.
  • Create and maintain ETL workflows, ensuring data quality, reliability, and consistency.
  • Work with Azure Synapse Analytics, Snowflake, or BigQuery for warehousing and analytics solutions.
  • Build and maintain CI/CD pipelines using Azure DevOps/Jenkins for automated deployments.
  • Collaborate with data architects, analysts, and data scientists to enable analytics, ML workflows, and business insights.
  • Optimize query performance, storage, cost, and compute usage across pipelines and cloud environments.
  • Ensure data governance, security, lineage, and compliance best practices are followed.
  • Participate in Agile processes, code reviews, documentation, and production support.
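
The posting itself contains no code; purely as an illustrative sketch of the kind of Spark (Scala) batch job described in the second bullet above, the example below reads raw CSV files from ADLS Gen2, applies a simple per-customer daily aggregation, and writes partitioned Parquet to a curated zone. The object name, storage account, container names, paths, and column names are all hypothetical and not part of the job description.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object OrdersDailyLoad {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-daily-load")
          .getOrCreate()

        // Raw CSV landed in ADLS Gen2 (hypothetical storage account, container, and path).
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")

        // Keep completed orders and aggregate amounts per customer per day.
        val daily = raw
          .filter(col("order_status") === "COMPLETED")
          .withColumn("order_date", to_date(col("order_ts")))
          .groupBy(col("customer_id"), col("order_date"))
          .agg(sum(col("amount")).as("daily_amount"),
               count(lit(1)).as("order_count"))

        // Write partitioned Parquet to the curated zone (hypothetical path).
        daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("abfss://curated@examplestorage.dfs.core.windows.net/orders_daily/")

        spark.stop()
      }
    }

On Azure Databricks a SparkSession is already provided, and the output would typically be written in Delta format rather than plain Parquet; the sketch keeps to core open-source Spark APIs only.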


Skills & Experience

  • 7–12 years of hands-on experience as a Data Engineer or Big Data Engineer.
  • Strong hands-on experience with Azure cloud, including:
      • Azure Databricks
      • Azure Data Factory (ADF)
      • Azure Data Lake Storage (ADLS/Gen2)
      • Azure Synapse/SQL
  • Expert-level proficiency in Apache Spark, Spark SQL, PySpark, or Scala.
  • Experience with Kafka, Spark Streaming, or Flink for streaming pipelines.
  • Strong SQL skills with experience in performance tuning and query optimization.
  • Hands-on experience with ETL tools (ADF, NiFi, Talend, Sqoop, Airflow, etc.).
  • Good understanding of data modeling, warehouse concepts, and distributed systems.
  • Proficiency with Git, Azure DevOps, Jenkins, or similar CI/CD tools.
  • Strong background in Linux/Unix and Bash/PowerShell scripting.
  • Experience with Snowflake, BigQuery, or other cloud DWH solutions is a plus.
  • Familiarity with NoSQL technologies (MongoDB, HBase, Cassandra, Cosmos DB).


Preferred Qualifications

  • Experience in on-premise to cloud migration projects.
  • Exposure to machine learning feature pipelines or MLOps workflows.
  • Knowledge of data governance, security practices, and metadata/catalog tools.
  • Certifications in Azure Data Engineering or related cloud technologies.
