Data Engineer (Microsoft Azure, Microsoft Fabric), 5+ Years - Remote

5 - 10 years

16 - 27 Lacs

Posted: 20 hours ago | Platform: Naukri

Work Mode

Remote

Job Type

Full Time

Job Description

Job Profile:

The Data Engineer builds and maintains the data pipelines, transformations, and integrations that power AI-ready analytics on Microsoft Fabric and the Azure data ecosystem. This is a hands-on, delivery-focused role responsible for implementing data architectures, optimizing performance, and enabling analytics and AI use cases through well-designed, secure, and scalable data pipelines. You will work across the Bronze/Silver/Gold layers of Fabric to build ingestion, transformation, and curation pipelines that support business intelligence and AI-driven workloads.

Key Responsibilities:

  • Own and deliver Fabric pipelines: Build, optimize, and maintain ingestion, transformation, and curation processes across Fabric workspaces.
  • Implement data replication and CDC: Develop efficient data replication, synchronization, and change-data-capture (CDC) mechanisms across Fabric and connected systems.

  • Optimize data lake performance: Improve performance through partitioning, file format tuning (for example, Delta or Parquet), and cost-efficient storage strategies.
  • Data integration: Build robust connections to diverse sources and design efficient pipelines for batch and streaming ingestion.
  • Lakehouse and semantic alignment: Collaborate with Data Architects and analysts to align lakehouse structures and semantic models with analytical and AI workloads.
  • AI data enablement: Prepare structured and semi-structured data for AI workloads such as retrieval-augmented generation (RAG), semantic search, and Copilot integration, implementing metadata tagging and vector embedding pipelines where applicable.

  • Automation and orchestration: Develop orchestration and monitoring solutions using Azure Data Factory, Synapse, or equivalent Azure tools.
  • CI/CD for data: Build and maintain automated deployment pipelines using GitHub Actions or Azure DevOps for reliable, auditable data delivery.
  • Security and compliance: Apply enterprise security and compliance practices across all pipelines, including encryption at rest and in transit, role-based access (RBAC), and data masking. Ensure alignment with NIST, ISO 27000, and SOC 2 standards while maintaining traceable lineage through Microsoft Purview.
  • Testing and observability: Implement data validation, logging, and monitoring frameworks using Azure Monitor or Fabric Data Activator to ensure data quality and reliability.
  • Infrastructure as Code (IaC): Support reproducible infrastructure deployment through Terraform, Bicep, or ARM templates.

  • Continuous improvement: Experiment with new Fabric and Azure capabilities, and leverage AI-assisted tools to accelerate delivery and improve efficiency.
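To make the CDC responsibility above concrete, here is a minimal plain-Python sketch of the merge logic a Fabric or Delta pipeline would typically express with `MERGE INTO`. The event shape (`op`/`id`/`data` keys) is an assumption for illustration, not a Fabric API.

```python
# Hypothetical sketch: applying change-data-capture (CDC) events to a
# current-state table. Real pipelines would run this as a MERGE in
# Fabric/Delta; the event schema here is assumed for illustration.

def apply_cdc(state: dict, events: list) -> dict:
    """Apply ordered CDC events (insert/update/delete) keyed by 'id'."""
    for event in events:
        op, key = event["op"], event["id"]
        if op in ("insert", "update"):
            # Upsert: the newest event wins for a given key.
            state[key] = event["data"]
        elif op == "delete":
            state.pop(key, None)
    return state

current = {1: {"name": "alice"}}
changes = [
    {"op": "update", "id": 1, "data": {"name": "alicia"}},
    {"op": "insert", "id": 2, "data": {"name": "bob"}},
    {"op": "delete", "id": 1},
]
print(apply_cdc(current, changes))  # {2: {'name': 'bob'}}
```

Because events are applied in order, a later delete correctly removes a row that an earlier update touched, which is the core ordering guarantee a CDC pipeline must preserve.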

Must-Have Skills and Experience:

  • 4 to 7 years of experience in data engineering or data platform development, ideally within consulting or enterprise delivery environments.

  • Strong hands-on experience with Microsoft Fabric, Azure Data Factory, Synapse, and Power BI.
  • Proficiency in SQL, Python, and data pipeline automation frameworks.
  • Experience implementing CDC, data lake optimization, and ELT/ETL processes at scale.
  • Proven ability to deliver CI/CD pipelines using Git-based workflows (GitHub Actions, Azure DevOps, etc.).

  • Familiarity with data lakehouse architecture (bronze/silver/gold) and schema evolution.
  • Understanding of AI data patterns, including embeddings, semantic search, and retrieval-augmented generation (RAG).

  • Experience applying security and compliance controls in Azure data environments, including encryption, RBAC, and data masking.

  • Hands-on experience with Microsoft Purview for data lineage and governance tracking.
  • Strong collaboration skills and ownership mindset, comfortable working in AI-ready, iterative environments.
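The embeddings/semantic-search expectation above boils down to ranking documents by vector similarity. A toy sketch of that retrieval step, using hand-made vectors in place of model-generated embeddings (a real pipeline would use an embedding model and a vector index such as Azure AI Search):

```python
# Hypothetical sketch of the retrieval step behind semantic search / RAG:
# rank documents by cosine similarity to a query vector. Vectors are
# hand-made here; production systems use model-generated embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, top_k=1):
    """Return the top_k (doc_id, score) pairs most similar to the query."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in docs.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

docs = {"sales_report": [0.9, 0.1], "hr_policy": [0.1, 0.9]}
print(retrieve([1.0, 0.0], docs))  # "sales_report" ranks first
```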

Tools and Technologies

Microsoft Fabric, Azure Data Factory, Synapse, Power BI, Azure SQL, Databricks, Data Lake Storage, Microsoft Purview, Python, SQL, Git, Terraform, Bicep, Azure Monitor.

Nice to Have

  • Experience with Azure AI Search, Fabric Data Activator, or vector database integration.
  • Exposure to data product thinking, domain-driven design, or data mesh principles.
  • Familiarity with data quality or observability tools such as Monte Carlo.
  • Experience optimizing performance and costs in OneLake environments.

Novizco Infotech

Information Technology

Tech City
