Azure Data Engineer (SDE)

10 years


Posted: 1 week ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Role Overview: SDE/DE

We are seeking technically strong Senior Data Engineers / Leads who can take ownership of designing and developing cutting-edge data and AI platforms using big data and cloud technologies on Microsoft Azure. You will play a critical role in building scalable data pipelines, modern data architectures, and intelligent analytics solutions.

Key Responsibilities

  • Design and implement scalable, metadata-driven frameworks for data ingestion, quality, and transformation across both batch and streaming datasets.
  • Develop and optimize end-to-end data pipelines to process structured and unstructured data, enabling the creation of analytical data products.
  • Build robust exception handling, logging, and monitoring mechanisms for better observability and operational support.
  • Take ownership of complex modules and lead the development of critical data workflows and components.
  • Provide guidance to data engineers and peers on best practices, conduct code reviews, and enforce standards.
  • Collaborate with cross-functional teams—including business consultants, data architects, scientists, and application developers—to deliver impactful analytics solutions.
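The framework responsibilities above (metadata-driven ingestion with exception handling and logging for observability) can be sketched in plain Python. All table names and the loader callback here are hypothetical; in a Databricks deployment the loader would typically be a PySpark read and the metadata would live in a control table.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingestion")

# Hypothetical metadata entries: each one drives a single ingestion step.
PIPELINE_METADATA = [
    {"source": "sales_raw", "target": "sales_bronze", "format": "parquet"},
    {"source": "orders_raw", "target": "orders_bronze", "format": "json"},
]

def ingest(entry, loader):
    """Run one metadata-driven ingestion step, logging success or failure."""
    try:
        rows = loader(entry["source"], entry["format"])
        log.info("Loaded %d rows from %s into %s",
                 rows, entry["source"], entry["target"])
        return {"target": entry["target"], "status": "ok", "rows": rows}
    except Exception as exc:
        # Capture the failure for monitoring instead of aborting the whole run.
        log.error("Failed to ingest %s: %s", entry["source"], exc)
        return {"target": entry["target"], "status": "failed", "error": str(exc)}

def run_pipeline(metadata, loader):
    """Drive every ingestion step from metadata; new sources need no code change."""
    return [ingest(entry, loader) for entry in metadata]
```

The point of the metadata-driven design is that onboarding a new source is a metadata change, not a code change; exception results are returned rather than raised, so a monitoring job can report partial failures.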

External Skills And Expertise

  • 5–10 years of total IT experience, including at least 3 years in big data engineering on Microsoft Azure.
  • Strong SQL expertise with experience in writing, optimizing, and troubleshooting complex queries on Azure SQL, Synapse, or similar cloud databases.
  • Solid understanding of Spark architecture and core APIs: RDD, DataFrame, Dataset.
  • Strong understanding of the Databricks ecosystem, including Notebooks, Workflows, Unity Catalog, SQL Warehouse, Serverless compute, and the latest Databricks features.
  • Proven expertise in designing and developing scalable, high-performance data pipelines and automated workflows in Azure Databricks, leveraging PySpark and Spark SQL.
  • Experience in designing, building, and orchestrating complex, parameterized pipelines in Azure Data Factory.
  • Familiarity with both batch and streaming data processing.
  • Solid understanding of data-modelling techniques (dimensional, 3NF) and data warehousing concepts.
  • Experience delivering at least one end-to-end Data Lakehouse solution in Azure using the Medallion Architecture.
  • Knowledge of different file formats such as Delta Lake, Avro, Parquet, JSON, and CSV.
  • Advanced programming, unit-testing, and debugging skills in Python, PySpark, and SQL.
  • Knowledge of data security and lifecycle-management policies across Azure environments.
  • Familiarity with DevOps practices (CI/CD, Git, automated deployments).
  • Collaborative mindset with enthusiasm for working with stakeholders across the organization and taking ownership of deliverables.
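As an illustration of the Medallion (bronze/silver/gold) layering listed above, here is a minimal sketch using plain Python lists of dicts in place of PySpark DataFrames. The layer names are the standard Medallion terms, but the fields and transforms are hypothetical; in Databricks each function would be a PySpark transformation writing a Delta table.

```python
def to_bronze(raw_records):
    """Bronze: land raw data as-is, tagged with a lineage marker."""
    return [{**r, "_layer": "bronze"} for r in raw_records]

def to_silver(bronze):
    """Silver: cleanse and conform - drop records missing the key, cast amounts."""
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in bronze
        if r.get("order_id") is not None
    ]

def to_gold(silver):
    """Gold: aggregate into an analytics-ready product (total amount per order)."""
    totals = {}
    for r in silver:
        totals[r["order_id"]] = totals.get(r["order_id"], 0.0) + r["amount"]
    return totals
```

The design choice the architecture encodes is separation of concerns: raw landing, cleansing, and business aggregation each live in their own layer, so a bad transform can be replayed from the layer below it.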

Good To Have

  • Exposure to developing LLM/Generative AI-powered applications.
  • Knowledge of NoSQL databases.
  • Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
  • Relevant certifications in Microsoft Azure or Databricks are a valuable addition.

Tiger Analytics

Business Consulting and Services

Santa Clara, CA
