Lead Data Engineer

5 - 10 years

5 - 10 Lacs

Posted: 9 hours ago | Platform: Foundit


Work Mode: On-site

Job Type: Full Time

Job Description

  • Design and Develop Data Pipelines: Hands-on development and optimisation of scalable, reusable data pipelines in Microsoft Fabric Synapse Data Engineering on Azure, leveraging both batch and real-time processing techniques. Ensure smooth integration with Azure Data Factory for orchestration and workflow management (a minimal sketch of such a pipeline follows this list).
  • Cloud Data Architecture: Collaborate with the Data Architecture team to design and implement robust data architectures in the Azure environment, ensuring they align with business needs while optimising performance, scalability, and cost-efficiency.
  • Pipeline Optimisation: Continuously monitor and optimise the performance, cost, and reliability of data pipelines, ensuring efficient processing, storage, and management of large datasets.
  • Cross-functional Collaboration: Work closely with data engineering teams, analysts, and business stakeholders to understand data requirements, developing solutions that enable self-service analytics and support decision-making.
  • Documentation and Knowledge Sharing: Contribute to internal documentation, fostering a culture of knowledge-sharing. Provide mentorship and guidance to junior engineers, helping to elevate team skills and improve overall team performance.
  • Microsoft Fabric Experience: Apply your knowledge of the Azure data engineering stack (or your willingness to learn) to Fabric-based development, managing end-to-end data orchestration, governance, and security across cloud and on-premises systems and ensuring seamless data movement and integration across hybrid environments.
  • Data Modelling Expertise: Leverage your deep expertise in Azure to design and implement data models, create processing pipelines, and integrate with other Azure services such as Data Lake and Synapse to support data storage and analytics needs.
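
The sketch below is illustrative only: a minimal batch pipeline of the kind described in the first responsibility above, written in PySpark as it might appear in a Microsoft Fabric / Synapse Data Engineering notebook. The lakehouse path, table names, and columns are hypothetical, not taken from this role's actual systems.

```python
# Minimal batch-pipeline sketch for a Fabric/Synapse Data Engineering notebook.
# Paths, tables, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read raw files landed in the lakehouse (illustrative path).
raw = spark.read.json("Files/raw/orders/")

# Basic cleansing and enrichment before publishing to the curated layer.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("ingested_at", F.current_timestamp())
)

# Write as a Delta table so downstream Synapse or Power BI workloads can query it.
(curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("curated.orders"))
```

In practice such a notebook would be scheduled and orchestrated by Azure Data Factory or a Fabric pipeline rather than run ad hoc.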

Required Skills and Qualifications:

  • Experience with Azure Ecosystem (Preferably Synapse): 5+ years of hands-on experience with the Azure ecosystem, including Synapse, Spark, OneLake, and other Fabric tools. Expertise in optimising Fabric notebooks and efficiently managing large-scale data workloads.
  • Proficiency in Azure Data Factory: Strong experience designing and orchestrating complex data pipelines using Azure Data Factory, with an emphasis on seamless data flow integration across various Azure services.
  • Familiarity with Microsoft Fabric: A working knowledge of, or eagerness to learn, Microsoft Fabric, focusing on cross-platform data orchestration, governance, and security.
  • Advanced Data Engineering Skills: Extensive experience in data engineering, including the design and implementation of ETL processes and working with large datasets. Proven expertise in data quality, monitoring, and testing practices.
  • Cloud Architecture Design Expertise: Experience designing and implementing data architectures in the Azure ecosystem, including tools such as Data Lake, Synapse, and Azure Storage.
  • SQL and Data Modelling Expertise: Strong skills in SQL and data modelling, with the ability to design optimised data structures, tables, and views. Knowledge of both transactional and analytical data modelling (see the modelling sketch after this list).
  • Collaboration and Communication Skills: Strong ability to work cross-functionally with teams from various domains. Ability to communicate complex technical concepts to both technical and non-technical stakeholders.
  • Cost Optimisation: Proven experience optimising data engineering processes and Azure resources for both performance and cost, particularly in large-scale cloud environments.
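
As a rough illustration of the SQL and data modelling expectation above, the following PySpark snippet defines a small star-schema style dimension, fact, and analytical view through Spark SQL. All table, view, and column names are hypothetical and chosen only for the example.

```python
# Illustrative star-schema modelling via Spark SQL; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sales_model_example").getOrCreate()

# Dimension table describing customers.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING,
        country      STRING
    ) USING DELTA
""")

# Fact table of sales, partitioned by date for analytical scans.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        order_id     STRING,
        customer_key BIGINT,
        order_date   DATE,
        amount       DECIMAL(18, 2)
    ) USING DELTA
    PARTITIONED BY (order_date)
""")

# An analytical view that pre-joins the fact to its dimension for self-service use.
spark.sql("""
    CREATE OR REPLACE VIEW vw_sales_by_segment AS
    SELECT c.segment, f.order_date, SUM(f.amount) AS total_amount
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY c.segment, f.order_date
""")
```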

Preferred Skills:

  • Data Lakehouse Experience: Familiarity with Data Lakehouse architectures, particularly with tools such as Delta Lake and OneLake (a small Delta Lake upsert example follows this list).
  • Azure Ecosystem Familiarity: Knowledge of Azure's full ecosystem for end-to-end data integration and ETL processes.
  • Proficiency in PySpark and Python: Expertise in PySpark for data processing tasks, with a solid foundation in Python.
  • Fabric Integration: Familiarity with Fabric and how it integrates with other services within the Azure ecosystem.
  • Databricks Experience: Experience with Databricks is a plus.
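
One possible flavour of the lakehouse and PySpark skills listed above: the sketch below upserts incremental changes into a Delta table using the Delta Lake merge API. The table names and merge key are assumptions for illustration, and the snippet presumes the delta-spark package is available.

```python
# Sketch of a common lakehouse pattern: upserting incremental changes into a
# Delta table with PySpark. Table names and the merge key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_upsert_example").getOrCreate()

# Incremental change set staged by an upstream pipeline (illustrative table).
updates = spark.table("staging.orders_changes")

# Target curated Delta table to be kept in sync.
target = DeltaTable.forName(spark, "curated.orders")

(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```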

Srijan

Information Technology and Services

New Delhi
