Senior Data Engineer – Azure | ADF | Databricks | PySpark | AWS

Experience: 5 years

Salary: ₹15 - 24 Lacs

Posted: 1 month ago | Platform: GlassDoor

Work Mode: On-site

Job Type: Full Time

Job Description

Job Title: Senior Data Engineer – Azure | ADF | Databricks | PySpark | AWS

Location: Bangalore, Hyderabad, Chennai (Hybrid Mode)
Experience Required: 5+ Years
Notice Period: Immediate

We are looking for a Senior Data Engineer who is passionate about designing and developing scalable data pipelines, optimizing data architecture, and working with advanced big data tools and cloud platforms. This is a great opportunity to be a key player in transforming data into meaningful insights by leveraging modern data engineering practices on Azure, AWS, and Databricks.

You will work with cross-functional teams, including data scientists, analysts, and software engineers, to deliver robust data solutions. The ideal candidate is technically strong in Azure Data Factory, PySpark, Databricks, and AWS services, with experience building end-to-end ETL workflows and driving business impact through data.

Key Responsibilities

  • Design, build, and maintain scalable and reliable data pipelines and ETL workflows
  • Implement data ingestion and transformation using Azure Data Factory (ADF) and Azure Databricks (PySpark); see the PySpark sketch after this list
  • Work across multiple data platforms including Azure, AWS, Snowflake, and Redshift
  • Collaborate with data scientists and business teams to understand data needs and deliver solutions
  • Optimize data storage, processing, and retrieval for performance and cost-effectiveness
  • Develop data quality checks and monitoring frameworks for pipeline health
  • Ensure data governance, security, and compliance with industry standards
  • Lead code reviews, set data engineering standards, and mentor junior team members
  • Propose and evaluate new tools and technologies for continuous improvement
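
As a rough illustration of the ADF/Databricks work described above, here is a minimal PySpark sketch of an ingest-clean-write batch job. All paths, column names, and the Delta output are illustrative assumptions, not details from this posting:

from pyspark.sql import SparkSession, functions as F

# Hypothetical batch job: read raw JSON from a lake, apply basic
# data-quality rules, and write a partitioned curated Delta table.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

curated = (
    raw
    .dropDuplicates(["order_id"])                     # simple data-quality check
    .filter(F.col("order_ts").isNotNull())            # drop rows missing a timestamp
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .withColumn("amount", F.col("amount").cast("double"))
)

(curated.write
    .format("delta")                                  # assumes Delta Lake is available
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))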

Must-Have Skills

  • Strong programming skills in Python, SQL, or Scala
  • Azure Data Factory, Azure Databricks, Synapse Analytics
  • Hands-on with PySpark, Spark, Hadoop, Hive
  • Experience with cloud platforms (Azure preferred; AWS/GCP acceptable)
  • Data Warehousing: Snowflake, Redshift, BigQuery
  • Strong ETL/ELT pipeline development experience
  • Workflow orchestration tools such as Airflow, Prefect, or Luigi (a minimal Airflow sketch follows this list)
  • Excellent problem-solving, debugging, and communication skills
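
For the orchestration skill above, a minimal Airflow sketch of a daily extract-transform-load pipeline, assuming Airflow 2.4+; the DAG name and callables are placeholders, not details from this posting:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for real ingestion/transformation logic.
def extract():
    print("pulling source data")

def transform():
    print("applying business rules")

def load():
    print("writing to the warehouse")

with DAG(
    dag_id="daily_orders_pipeline",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # "schedule" requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t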

Nice to Have

  • Experience with real-time streaming tools (Kafka, Flink, Spark Streaming); see the streaming sketch after this list
  • Exposure to data governance tools and regulations (GDPR, HIPAA)
  • Familiarity with ML model integration into data pipelines
  • Containerization and CI/CD exposure: Docker, Git, Kubernetes (basic)
  • Experience with Vector databases and unstructured data handling
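
For the streaming item above, a minimal Spark Structured Streaming sketch that consumes a Kafka topic and appends to a Delta table; the broker address, topic name, and paths are hypothetical:

from pyspark.sql import SparkSession, functions as F

# Hypothetical stream: Kafka "orders" topic -> string payload -> Delta table.
# Requires the spark-sql-kafka connector package on the cluster.
spark = SparkSession.builder.appName("orders-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "orders")                     # assumed topic name
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .start("/tmp/tables/orders")
)
query.awaitTermination()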

Technical Environment

  • Programming: Python, Scala, SQL
  • Big Data Tools: Spark, Hadoop, Hive
  • Cloud Platforms: Azure (ADF, Databricks, Synapse), AWS (S3, Glue, Lambda), GCP
  • Data Warehousing: Snowflake, Redshift, BigQuery
  • Databases: PostgreSQL, MySQL, MongoDB, Cassandra
  • Orchestration: Apache Airflow, Prefect, Luigi
  • Tools: Git, Docker, Azure DevOps, CI/CD pipelines

Soft Skills

  • Strong analytical thinking and problem-solving abilities
  • Excellent verbal and written communication
  • Collaborative team player with leadership qualities
  • Self-motivated, organized, and able to manage multiple projects

Education & Certifications

  • Bachelor’s or Master’s Degree in Computer Science, IT, Engineering, or equivalent
  • Cloud certifications (e.g., Microsoft Azure Data Engineer, AWS Big Data) are a plus

Key Result Areas (KRAs)

  • Timely delivery of high-performance data pipelines
  • Quality of data integration and governance compliance
  • Business team satisfaction and data readiness
  • Proactive optimization of data processing workloads

Key Performance Indicators (KPIs)

  • Pipeline uptime and performance metrics
  • Reduction in overall data latency
  • Zero critical issues in production post-release
  • Stakeholder satisfaction score
  • Number of successful integrations and migrations

Job Types: Full-time, Permanent

Pay: ₹15,59,695 - ₹24,41,151 per year

Benefits:

  • Provident Fund

Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus

Application Question(s):

  • What is your notice period in days?

Experience:

  • Azure Data Factory, Azure Databricks, Synapse Analytics: 5 years (Required)
  • Python, SQL, or Scala: 5 years (Required)

Work Location: In person
