Posted: 1 day ago | On-site | Full-time
Title: Data Engineer – ADF | PySpark | Databricks | Medallion Architecture
Experience: 5+ years
Location: WFO – Hyderabad / Gurgaon (HYD / GGN)
Type: Full-time
Role Overview
We’re looking for a hands-on Data Engineer with deep practical experience in Azure Data Factory (ADF), Databricks, and PySpark, and a strong understanding of Medallion data architecture and production data migration pipelines. The engineer should be comfortable owning complex ETL/ELT workflows end-to-end, from ingestion to gold-layer delivery, with minimal hand-holding, and able to operate both as an individual contributor and in collaborative team settings.
Key Responsibilities
Design, build, and maintain data pipelines using ADF, Databricks, and PySpark for large-scale structured and semi-structured datasets.
Implement and optimize Medallion architecture (Bronze–Silver–Gold layers) in Delta Lake or Lakehouse environments.
Develop and tune PySpark jobs for ingestion, transformation, aggregation, and cleansing at scale.
Perform data migration and synchronization between heterogeneous sources (PostgreSQL, MS SQL Server, Cosmos DB, etc.) and cloud targets.
Design ADF pipelines with parameterization, triggers, linked services, and data flow optimizations.
Work closely with architects to ensure solutions follow data governance, lineage, and security best practices.
Troubleshoot pipeline failures, cluster performance issues, and optimize data processing workloads.
Contribute to code reviews, CI/CD automation, and documentation for production data workflows.
Technical Skills
Core Expertise
Azure Data Factory (ADF): Data pipelines, triggers, parameterized datasets, data flow activities.
Databricks: Cluster configuration, notebook orchestration, Delta tables, workspace management.
PySpark: Transformation logic, distributed computing, performance tuning, UDFs, job parallelization.
SQL: Advanced query writing, stored procedures, optimization (especially on PostgreSQL and MS SQL Server).
Medallion Architecture: Design and implementation of bronze/silver/gold layer models.
Data Migration: Working with production-scale datasets, incremental load, and data validation frameworks.
Nice to Have
Exposure to Azure Cosmos DB (SQL API or Mongo API).
Knowledge of Azure Synapse Analytics, Delta Lake, or Unity Catalog.
Familiarity with Git, CI/CD pipelines, and Infrastructure-as-Code (IaC) for data projects.
Understanding of cloud cost optimization and cluster auto-scaling strategies.
Soft Skills
Strong analytical thinking and problem-solving attitude.
Ability to work independently with minimal oversight.
Clear and structured written and verbal communication for cross-team collaboration.
Ownership mindset — able to drive deliverables from design to production release.
Preferred Background
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related technical field.
5+ years of hands-on experience building data pipelines in Azure Cloud.
Prior experience in production-grade data migration or modern data platform implementations.
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Work Location: In person
Mobile Programming India Pvt Ltd