Senior Data Engineer

7 - 12 years

20 - 32 Lacs

Posted: 6 days ago | Platform: Naukri

Work Mode: Remote

Job Type: Full Time

Job Description

Roles & Responsibilities

  • Fabric Data Pipeline Development: Design, build, and maintain pipelines in Microsoft Fabric Data Factory and Dataflows Gen2 to extract, transform, and load (ETL/ELT) data from multiple sources into OneLake.
  • DataOps & Governance: Implement and monitor DataOps practices (observability, quality checks, lineage) using Microsoft Purview, Fabric Monitoring, and Fabric Data Activator, ensuring compliance, reliability, and continuous improvement.
  • CI/CD & DevOps: Create and maintain CI/CD pipelines for Fabric workspaces, notebooks, and data pipelines using Azure DevOps or GitHub Actions, with automated testing and environment promotion.
  • Data Infrastructure Management: Manage Fabric resources, including Lakehouses, Warehouses, and Real-Time Analytics (KQL databases), ensuring optimal performance, scalability, and cost efficiency.
  • Cloud Modernization & Migration: Support migration from legacy Oracle and on-premises systems into Microsoft Fabric/OneLake, covering data ingestion, cleansing, integration, and reporting alignment.
  • Security & Compliance by Design: Apply best practices for secure data storage, role-based access, and compliance with PDPL/GDPR. Ensure Fabric workspaces are designed for scalability, resilience, and reliability.
  • Data Quality & Validation: Implement data quality rules (deduplication, profiling, validation) within Fabric pipelines and integrate them with governance policies to ensure data integrity.
  • Data Modeling: Develop semantic models in Fabric (Power BI datasets, Lakehouse models) that align with business requirements and analytics needs.
  • Performance Optimization: Tune pipelines, warehouses, and models for high performance and efficient compute and storage usage.
  • Collaboration & Business Enablement: Partner with BI teams, analysts, and business units to translate business requirements into Fabric-based technical solutions, enabling self-service BI and AI adoption.

Key Skills:

  • Strong proficiency in SQL, Python, PySpark, and data manipulation.
  • Hands-on experience with Microsoft Fabric components:
    • Data Factory (Pipelines, Dataflows Gen2)
    • OneLake & Lakehouses
    • Warehouses (SQL Endpoint)
    • Real-Time Analytics (KQL DBs)
    • Power BI & Semantic Models
  • Experience with Azure DevOps/GitHub for CI/CD and environment management.
  • Familiarity with Microsoft Purview for governance, lineage, and cataloging.
  • Experience building and optimizing semantic data models for Power BI.
  • Knowledge of data warehousing, medallion architecture, and data quality frameworks.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.
  • Experience in data migration and system integration (Oracle/ERP/CRM to Fabric) is a plus.

Sutherland

Business Process Outsourcing (BPO)

Denver
