Data Engineer - Azure / AWS Databricks

Experience: 7 years

Posted: 5 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

This role is for one of Weekday's clients.

Min Experience: 7 years
Location: Gurugram
Job Type: Full-time

Requirements

We are looking for an experienced Data Engineer with deep expertise in Azure and/or AWS Databricks to join our growing data engineering team. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing data pipelines, enabling seamless data integration and real-time analytics. This role is ideal for professionals who have hands-on experience with cloud-based data platforms and big data processing frameworks, along with a strong understanding of data modeling, pipeline orchestration, and performance tuning.

You will work closely with data scientists, analysts, and business stakeholders to deliver scalable and reliable data infrastructure that supports high-impact decision-making across the organization.

Key Responsibilities:

  • Design and Development of Data Pipelines:
    • Design, develop, and maintain scalable and efficient data pipelines using Databricks on Azure or AWS.
    • Integrate data from multiple sources including structured, semi-structured, and unstructured datasets.
    • Implement ETL/ELT pipelines for both batch and real-time processing.
  • Cloud Data Platform Expertise:
    • Use Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, or similar services to build robust and secure data workflows.
    • Optimize storage, compute, and processing costs using appropriate services within the cloud environment.
  • Data Modeling & Governance:
    • Build and maintain enterprise-grade data models, schemas, and lakehouse architecture.
    • Ensure adherence to data governance, security, and privacy standards, including data lineage and cataloging.
  • Performance Tuning & Monitoring:
    • Optimize data pipelines and query performance through partitioning, caching, indexing, and memory management.
    • Implement monitoring tools and alerts to proactively address pipeline failures or performance degradation.
  • Collaboration & Documentation:
    • Work closely with data analysts, BI developers, and data scientists to understand data requirements.
    • Document all processes, pipelines, and data flows for transparency, maintainability, and knowledge sharing.
  • Automation and CI/CD:
    • Develop and maintain CI/CD pipelines for automated deployment of data pipelines and infrastructure using tools like GitHub Actions, Azure DevOps, or Jenkins.
    • Implement data quality checks and unit tests as part of the data development lifecycle (a brief pipeline sketch follows this list).
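
For illustration only, here is a minimal sketch of the kind of batch ETL step and data quality check referenced in the responsibilities above, written in PySpark against Delta Lake on Databricks. The table name, storage path, and 5% threshold are hypothetical placeholders, not requirements from this posting.

```python
# Minimal PySpark/Delta Lake sketch: batch ETL with a simple data quality gate.
# Table and path names ("raw_orders", "/mnt/lake/silver/orders") are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Extract: read the raw (bronze) orders table
raw = spark.read.table("raw_orders")
raw_count = raw.count()

# Transform: deduplicate, enforce types, drop invalid rows
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Data quality check: fail the run if more than 5% of rows were dropped
dropped = raw_count - clean.count()
if raw_count > 0 and dropped / raw_count > 0.05:
    raise ValueError(f"Data quality check failed: {dropped} of {raw_count} rows dropped")

# Load: write a curated (silver) Delta table, partitioned by date for query performance
(
    clean.withColumn("order_date", F.to_date("order_ts"))
         .write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .save("/mnt/lake/silver/orders")
)
```

In practice a step like this would be scheduled and orchestrated with Azure Data Factory, AWS Glue, or Databricks Workflows, as the responsibilities above describe.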

Skills & Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 7+ years of experience in data engineering roles with hands-on experience in Azure or AWS ecosystems.
  • Strong expertise in Databricks (including notebooks, Delta Lake, and MLflow integration); a brief Delta Lake example follows this list.
  • Proficiency in Python and SQL; experience with PySpark or Spark strongly preferred.
  • Experience with data lake architectures, data warehouse platforms (like Snowflake, Redshift, Synapse), and lakehouse principles.
  • Familiarity with infrastructure as code (Terraform, ARM templates) is a plus.
  • Strong analytical and problem-solving skills with attention to detail.
  • Excellent verbal and written communication skills.
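
As a small, hypothetical illustration of the Databricks and Delta Lake expertise listed above, the sketch below shows an incremental upsert (MERGE) into a curated Delta table using the delta-spark Python API. The table names and join key are placeholders, not part of this job description.

```python
# Hypothetical sketch of a Delta Lake upsert (MERGE), a common lakehouse pattern on Databricks.
# Table names ("staging_customers", "silver_customers") are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("staging_customers")          # new/changed rows to apply
target = DeltaTable.forName(spark, "silver_customers")   # existing curated Delta table

(
    target.alias("t")
          .merge(updates.alias("s"), "t.customer_id = s.customer_id")
          .whenMatchedUpdateAll()     # update rows that already exist
          .whenNotMatchedInsertAll()  # insert rows that are new
          .execute()
)
```

Re-running the same merge with the same staging data leaves the target unchanged, which makes this pattern a good fit for retry-based orchestration and the CI/CD practices mentioned in the responsibilities.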
