Posted: 2 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Job Responsibilities

Design and Develop Data Pipelines (Microsoft Fabric; Fabric Data Platform Implementation)

Data Pipeline Optimisation

Monitor and improve performance of data pipelines and notebooks in Microsoft Fabric. Apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains.
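One common tuning strategy behind this kind of work is incremental (watermark-based) loading, which avoids re-reading the entire source on every run. As a minimal, hypothetical sketch in plain Python (the same idea applies inside a Fabric pipeline or notebook; the function and field names are illustrative, not a Fabric API):

```python
def incremental_load(source_rows, last_watermark):
    """Return only rows newer than the stored watermark, plus the new watermark.

    Reprocessing only new data reduces compute cost and run time; in a real
    pipeline the watermark would be persisted between runs. Timestamps here
    are ISO-8601 strings, which compare correctly as plain strings.
    """
    new_rows = [r for r in source_rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in new_rows),
                        default=last_watermark)
    return new_rows, new_watermark

# Example run: only the row updated after the last watermark is picked up.
rows = [
    {"id": 1, "updated_at": "2024-05-01T10:00:00"},
    {"id": 2, "updated_at": "2024-05-02T09:30:00"},
]
picked, mark = incremental_load(rows, "2024-05-01T12:00:00")
```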

Collaboration With Cross-functional Teams (self-service BI)

Documentation And Reusability

Document pipeline logic, lakehouse architecture, and semantic layers clearly. Follow development standards and contribute to internal best practices for Microsoft Fabric-based solutions.

Microsoft Fabric Platform Execution (Lakehouses)

Required Skills And Qualifications

  • 5+ years of experience in data engineering within the Azure ecosystem, with relevant hands-on experience in Microsoft Fabric, including Lakehouse, Dataflows Gen2, and Data Pipelines.
  • Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2.
  • Solid experience with data ingestion, ELT/ETL development, and data transformation across structured and semi-structured sources.
  • Strong understanding of OneLake architecture and modern data lakehouse patterns.
  • Strong command of SQL, PySpark, and Python applied to both data integration and analytical workloads.
  • Ability to collaborate with cross-functional teams and translate data requirements into scalable engineering solutions.
  • Experience in optimising pipelines and managing compute resources for cost-effective data processing in Azure/Fabric.
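To illustrate the "data transformation across structured and semi-structured sources" requirement above, here is a minimal plain-Python sketch (a hypothetical example, not tied to any specific Fabric API) that flattens nested order records into tabular rows:

```python
def flatten_orders(records):
    """Flatten semi-structured order records into flat rows (ELT-style).

    Nested customer and item structures are projected into one row per
    line item, a shape suitable for loading into a lakehouse table.
    """
    rows = []
    for rec in records:
        customer = rec.get("customer", {})
        for item in rec.get("items", []):
            rows.append({
                "order_id": rec["order_id"],
                "customer_name": customer.get("name"),
                "sku": item["sku"],
                "quantity": item.get("quantity", 1),  # default when omitted
            })
    return rows

# Example: one nested record becomes two flat rows, one per line item.
flat = flatten_orders([
    {"order_id": 1,
     "customer": {"name": "Asha"},
     "items": [{"sku": "A1", "quantity": 2}, {"sku": "B2"}]},
])
```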


Preferred Skills

  • Experience working in the Microsoft Fabric ecosystem, including Direct Lake, BI integration, and Fabric-native orchestration features.
  • Familiarity with OneLake, Delta Lake, and Lakehouse principles in the context of Microsoft’s modern data platform.
  • Expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks.
  • Understanding of Microsoft Purview or Unity Catalog, or Fabric-native tools for metadata, lineage, and access control.
  • Exposure to DevOps practices for Fabric and Power BI, including Git integration, deployment pipelines, and workspace governance.
  • Knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines is a plus.
