Fabric Data Engineer

Work Mode: On-site

Job Type: Part Time

Job Description

At Quest Global, it’s not just what we do but how and why we do it that makes us different. With over 25 years as an engineering services provider, we believe in the power of doing things differently to make the impossible possible. Our people are driven by the desire to make the world a better place—to make a positive difference that contributes to a brighter future. We bring together technologies and industries, alongside the contributions of diverse individuals who are empowered by an intentional workplace culture, to solve problems better and faster.

Role Overview

We are seeking an experienced Fabric Data Engineer with strong hands-on skills in Fabric pipelines, ADLS Gen2, Lakehouse/Warehouse, and PySpark. The ideal candidate should also have the ability to understand and translate complex Oracle PL/SQL logic (procedures, packages, functions, views) into scalable and optimized modern data solutions. The role requires strong SQL/PySpark expertise, performance tuning capability, and the ability to build reusable, production-grade data pipelines.
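For candidates gauging what this translation work looks like in practice, the short sketch below shows one way a piece of PL/SQL conditional logic might map onto a set-based PySpark expression. It is illustrative only, not part of the role; the table and column names (customer_orders, order_amount, tier) are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical PL/SQL logic being translated:
    #   IF order_amount >= 10000 THEN tier := 'GOLD';
    #   ELSIF order_amount >= 1000 THEN tier := 'SILVER';
    #   ELSE tier := 'BRONZE';
    #   END IF;

    orders = spark.read.table("customer_orders")  # assumed Lakehouse table

    tiered = orders.withColumn(
        "tier",
        F.when(F.col("order_amount") >= 10000, "GOLD")
         .when(F.col("order_amount") >= 1000, "SILVER")
         .otherwise("BRONZE"),
    )

The same pattern extends to procedures and packages: row-by-row PL/SQL loops typically become set-based DataFrame transformations.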

Key Responsibilities

  • Understand and interpret complex Oracle PL/SQL procedures, packages, functions, and view definitions, and translate the business logic into Fabric-based pipelines and PySpark notebooks.
  • Build scalable, reusable, and modular data pipelines using Microsoft Fabric (Pipelines, Dataflows, ADLS Gen2, Lakehouse, Warehouse).
  • Apply Medallion Architecture (Bronze–Silver–Gold) principles to design clean, maintainable, and traceable data layers.
  • Use Lakehouse and Warehouse efficiently to design incremental, full, and merge-based data loads.
  • Write dynamic PySpark code for ingestion, transformation, Delta Lake operations (MERGE, audit logging), and performance tuning; see the merge sketch after this list.
  • Optimize SQL and PySpark transformations for performance, cost efficiency, and minimal run time.
  • Troubleshoot performance bottlenecks in queries, Delta operations, Fabric notebooks, and pipelines.
  • Collaborate with cross-functional teams to understand data requirements and convert them into end-to-end data solutions.
  • Manage metadata-driven pipelines and reusable frameworks for ingestion, validation, error handling, and loading.
  • Design and implement incremental and full-load strategies using Lakehouse/Warehouse tables, ensuring efficient data refreshes and minimal compute usage.
  • Implement best practices around version control, CI/CD, job scheduling, and code modularity.
  • Ensure data quality, governance, security, and compliance across the platform.
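As a hedged illustration of the merge-based loading and audit logging mentioned above, the sketch below upserts an incremental batch into a Delta table with the delta-spark API. The paths, key column, and audit table (Tables/silver_customers, customer_id, etl_audit_log) are assumptions for the example, not prescribed by the role.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Incremental batch already landed in the Bronze layer (assumed path and schema).
    updates = spark.read.format("delta").load("Tables/bronze_customers_increment")

    target = DeltaTable.forPath(spark, "Tables/silver_customers")

    # Upsert: update matching keys, insert new ones.
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

    # Simple audit log entry for the run (hypothetical audit table).
    spark.createDataFrame(
        [("silver_customers", updates.count())],
        ["table_name", "rows_processed"],
    ).withColumn("run_ts", F.current_timestamp()) \
     .write.format("delta").mode("append").save("Tables/etl_audit_log")

In a metadata-driven framework, the path, key column, and merge condition would be parameters rather than literals, which is what makes the pipeline reusable across tables.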

We are known for our extraordinary people who make the impossible possible every day. Questians are driven by hunger, humility, and aspiration. We believe that our company culture is the key to our ability to make a true difference in every industry we reach. Our teams regularly invest time and dedicated effort into internal culture work, ensuring that all voices are heard.


We wholeheartedly believe in the diversity of thought that comes with fostering a culture rooted in respect, where everyone belongs, is valued, and feels inspired to share their ideas. We know embracing our unique differences makes us better, and that solving the world's hardest engineering problems requires diverse ideas, perspectives, and backgrounds. We shine the brightest when we tap into the many dimensions that thrive across over 21,000 difference-makers in our workplace.


Required Skills & Experience

  • Intermediate-level hands-on experience with Microsoft Fabric:
    Pipelines, Dataflows Gen2, ADLS Gen2, Notebooks, Lakehouse, Warehouse, Delta tables.
  • Strong understanding of Oracle PL/SQL including packages, procedures, functions, and application logic.
  • Expertise in SQL (complex transformations, joins across modules, optimization, indexing strategies).
  • Strong hands-on experience in PySpark, Delta Lake operations, and scalable distributed processing.
  • Ability to build scalable and optimized data pipelines following best engineering practices.
  • Experience with performance tuning, query optimization, and pipeline runtime improvement.
  • Strong understanding of incremental load vs. full load mechanisms, including change detection, Delta merge operations, surrogate key handling, and data reconciliation; see the watermark sketch after this list.
  • Experience working with Fabric notebooks, merging/joining large datasets, and handling incremental vs. full loads.
  • Good communication skills and the ability to understand the business logic behind complex legacy systems.
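To make the incremental-vs-full-load distinction concrete, here is a minimal watermark-based change-detection sketch. The table names and the last_modified column are hypothetical, chosen only for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # High-water mark: the latest change timestamp already loaded into the target.
    watermark = (
        spark.read.table("silver_orders")
        .agg(F.max("last_modified").alias("wm"))
        .collect()[0]["wm"]
    )

    source = spark.read.table("bronze_orders")

    # Incremental load: only rows changed since the watermark.
    # A full load would simply take `source` as-is.
    increment = (
        source if watermark is None
        else source.where(F.col("last_modified") > F.lit(watermark))
    )

    increment.write.format("delta").mode("append").saveAsTable("silver_orders_staging")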

Quest Global

Engineering Services

Beachwood
