Lead Data Engineer - Azure and Microsoft Fabric

Experience: 7 - 12 years

Salary: 9 - 14 Lacs


Work Mode: Work from Office

Job Type: Full Time

Job Description

About Us:

We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.

Job Responsibilities:

Design and Develop Data Pipelines:

Develop and optimise scalable data pipelines within Microsoft Fabric, leveraging Fabric-based notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Build robust pipelines using both batch and real-time processing techniques. Integrate with Azure Data Factory or Fabric-native orchestration for seamless data movement.
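
For illustration only, a minimal sketch of the kind of batch ingestion step this covers, written as a PySpark notebook cell; the path and table names are hypothetical and assume a Fabric Lakehouse where Delta is the default table format:

    # Hypothetical batch-ingestion step for a Fabric (or generic Spark) notebook.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

    # Read raw files from the Lakehouse 'Files' landing area (placeholder path).
    raw = (
        spark.read
        .option("header", True)
        .csv("Files/landing/orders/")
    )

    # Light cleanup: typing, de-duplication, and an audit column.
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .dropDuplicates(["order_id"])
           .withColumn("ingested_at", F.current_timestamp())
    )

    # Persist as a Lakehouse Delta table for downstream Power BI / Direct Lake use.
    clean.write.mode("overwrite").format("delta").saveAsTable("orders_clean")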

Microsoft Fabric Architecture:

Work with the Data Architecture team to implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Align models with business needs, promoting performance, security, and cost-efficiency.

Data Pipeline Optimisation:

Continuously monitor, enhance, and optimise Fabric pipelines, notebooks, and lakehouse artifacts for performance, reliability, and cost. Implement best practices for managing large-scale datasets and transformations in a Fabric-first ecosystem.
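
As a hedged illustration of this kind of incremental-load tuning, the sketch below upserts a staging batch into a curated Delta table and then compacts small files; it assumes the delta-spark package is available (as in Fabric or Databricks notebooks) and uses hypothetical table names:

    # Hypothetical incremental upsert plus file compaction on a Delta table.
    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

    updates = spark.table("staging_orders")              # latest batch of new/changed rows
    target  = DeltaTable.forName(spark, "orders_clean")  # curated Lakehouse table

    (
        target.alias("t")
        .merge(updates.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()      # refresh rows that changed
        .whenNotMatchedInsertAll()   # insert rows seen for the first time
        .execute()
    )

    # Compact the small files left by frequent incremental loads (cost/performance).
    spark.sql("OPTIMIZE orders_clean")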

Collaboration with Cross-functional Teams:

Work closely with analysts, BI developers, and data scientists to gather requirements and deliver high-quality, consumable datasets. Enable self-service analytics via certified and reusable Power BI datasets connected to Fabric Lakehouses.

Documentation and Knowledge Sharing:

Maintain clear, up-to-date documentation for all data pipelines, semantic models, and data products. Share knowledge of Fabric best practices and mentor junior team members to support adoption across teams.

Microsoft Fabric Platform Expertise:

Use your expertise in Microsoft Fabric, including Lakehouses, Notebooks, Data Pipelines, and Direct Lake, to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.

Required Skills and Qualifications:

  • Experience in Microsoft Fabric / Azure Ecosystem: 7+ years working with the Azure ecosystem, with relevant experience in Microsoft Fabric, including the Lakehouse, OneLake, Data Engineering, and Data Pipelines components.
  • Proficiency in Azure Data Factory and/or Dataflows Gen2 within Fabric for building and orchestrating data pipelines.
  • Advanced Data Engineering Skills: Extensive experience in data ingestion, transformation, and ELT/ETL pipeline design. Ability to enforce data quality, testing, and monitoring standards in cloud platforms.
  • Cloud Architecture Design: Experience designing modern data platforms using Microsoft Fabric, OneLake, and Synapse or equivalent.
  • Strong, In-depth SQL and Data Modelling: Expertise in SQL and data modelling (e.g., star/snowflake schemas) for data integration/ETL, reporting, and analytics use cases; a minimal star-schema sketch follows this list.
  • Collaboration and Communication: Proven ability to work across business and technical teams, translating business requirements into scalable data solutions.
  • Cost Optimisation: Experience tuning pipelines and cloud resources (Fabric, Databricks, ADF) for cost-performance balance.
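
Purely as an illustration of the star-schema modelling mentioned above, a minimal PySpark sketch that joins a fact table to two conformed dimensions for a reporting rollup; every table and column name here is hypothetical:

    # Hypothetical star-schema rollup: fact_sales joined to date and customer dimensions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

    fact_sales   = spark.table("fact_sales")        # grain: one row per order line
    dim_date     = spark.table("dim_date")          # surrogate key: date_key
    dim_customer = spark.table("dim_customer")      # surrogate key: customer_key

    # Typical analytics query: revenue by month and customer segment.
    revenue_by_segment = (
        fact_sales
        .join(dim_date, "date_key")
        .join(dim_customer, "customer_key")
        .groupBy("calendar_month", "customer_segment")
        .agg(F.sum("net_amount").alias("revenue"))
    )

    revenue_by_segment.show()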

Preferred Skills:

  • Deep understanding of Azure and the Microsoft Fabric ecosystem, including Power BI integration, Direct Lake, and Fabric-native security and governance.
  • Familiarity with OneLake, Delta Lake, and Lakehouse architecture as part of a modern data platform strategy.
  • Experience using Power BI with Fabric Lakehouses and DirectQuery/Direct Lake mode for enterprise reporting.
  • Working knowledge of PySpark, strong SQL, and Python scripting within Fabric or Databricks notebooks.
  • Understanding of Microsoft Purview, Unity Catalog, or Fabric-native governance tools for lineage, metadata, and access control.
  • Experience with DevOps practices for Fabric or Power BI, including version control, deployment pipelines, and workspace management.
  • Knowledge of Azure Databricks: Familiarity with building and optimising Spark-based pipelines and Delta Lake models as part of a modern data platform is an added advantage.

Srijan | Information Technology and Services | New Delhi
