Senior Data Engineer (Azure Databricks, ADF, PySpark)

Experience: 8 years

Posted: 2 weeks ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

About Company

Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

About The Role

We are seeking a Senior Data Engineer to join our growing cloud data team. In this role, you will design and implement scalable data pipelines and ETL processes using Azure Databricks, Azure Data Factory, PySpark, and Spark SQL. You’ll work with cross-functional teams to develop high-quality, secure, and efficient data solutions in a data lakehouse architecture on Azure.

Key Responsibilities

  • Design, develop, and optimize scalable data pipelines using Databricks, ADF, PySpark, Spark SQL, and Python
  • Build robust ETL workflows to transform and load data into a lakehouse architecture on Azure
  • Ensure data quality, security, and compliance with data governance and privacy standards
  • Collaborate with stakeholders to gather business requirements and deliver technical data solutions
  • Create and maintain technical documentation for workflows, architecture, and data models
  • Work within an Agile environment and track tasks using tools like Azure DevOps

Required Skills & Experience

  • 8+ years of experience in data engineering and enterprise data platform development
  • Proven expertise in Azure Databricks, Azure Data Factory, PySpark, and Spark SQL
  • Strong understanding of Data Warehouses, Data Marts, and Operational Data Stores
  • Proficient in writing complex SQL and PL/SQL queries, with a solid understanding of data models and data lineage
  • Knowledge of data management best practices: data quality, lineage, metadata, reference/master data
  • Experience working in Agile teams with tools like Azure DevOps
  • Strong problem-solving skills, attention to detail, and the ability to multi-task effectively
  • Excellent communication skills for interacting with both technical and business teams

Benefits And Perks

  • Opportunity to work with leading global clients
  • Exposure to modern technology stacks and tools
  • Supportive and collaborative team environment
  • Continuous learning and career development opportunities

Skills: PySpark, Spark SQL, Azure Databricks, Azure Data Factory (ADF), Python, SQL, PL/SQL, data modeling, data lineage, metadata, data governance, data marts, data warehouses
