Sr. Data Engineer

Experience: 0 years

Salary: 13 - 16 Lacs

Posted: 1 day ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

About The Role

We’re looking for a highly skilled Data Engineer with deep expertise in Azure, Databricks, and relational database systems. The ideal candidate is passionate about designing, building, and optimizing large-scale data pipelines, and thrives in an environment that values problem-solving, collaboration, and continuous improvement.

Key Responsibilities

  • Design, develop, and optimize scalable data pipelines and ETL workflows using Databricks (PySpark/Python/SQL).
  • Build and orchestrate complex data workflows using Azure Data Factory (ADF) to integrate data from diverse sources.
  • Perform data transformations and data integrations using REST APIs, Kafka, and other streaming or batch mechanisms.
  • Develop unit tests and ensure high-quality, reliable, and maintainable code.
  • Manage data ingestion from multiple sources — including Azure Storage, HDFS, Kafka, Hive, and structured/unstructured files.
  • Implement version control and deployment automation using Git, GitHub, and Azure DevOps.
  • Collaborate cross-functionally with analysts, engineers, and business stakeholders to deliver robust, production-grade data solutions.
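For candidates preparing, a minimal pure-Python sketch of the transform step in the kind of ETL pipeline described above — the schema and field names are hypothetical, and in this role the equivalent logic would typically run as PySpark on Databricks:

```python
# Sketch of an ETL-style transform step (hypothetical schema).
# Plain Python is used here to keep the example self-contained.

def transform(records):
    """Normalize raw event records: drop incomplete rows, cast amounts."""
    cleaned = []
    for rec in records:
        if rec.get("user_id") is None or rec.get("amount") is None:
            continue  # skip incomplete records
        cleaned.append({
            "user_id": str(rec["user_id"]),
            "amount": float(rec["amount"]),
            "source": rec.get("source", "unknown"),
        })
    return cleaned

raw = [
    {"user_id": 1, "amount": "9.99", "source": "kafka"},
    {"user_id": None, "amount": "1.00"},  # dropped: missing user_id
    {"user_id": 2, "amount": 5},          # source defaults to "unknown"
]
print(transform(raw))
```

The same shape (filter invalid rows, cast types, default missing columns) maps directly onto a PySpark `DataFrame` with `filter`, `withColumn`, and `fillna`.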

Core Skills & Expertise

  • Databricks: Proficient in building and maintaining scalable data pipelines and notebooks using PySpark, Python, and SQL.
  • Azure Ecosystem: Hands-on experience with Azure Data Factory (ADF), Azure Databricks (ADB), and Azure Data Lake Storage (ADLS).
  • Programming: Strong proficiency in Python and PySpark for data engineering and transformation tasks.
  • Data Integration: Experience integrating data using REST APIs, Kafka, and event-driven architectures.
  • ETL & Data Architecture: Expertise in designing and maintaining ETL workflows and data ingestion frameworks.
  • Version Control & CI/CD: Solid understanding of Git, GitHub, and Azure DevOps for collaborative development and deployment.
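As a sketch of the REST-based data integration mentioned above — the endpoint and cursor-pagination scheme are hypothetical, and the page fetcher is injected so the paging logic stays testable without a network:

```python
# Sketch of cursor-paginated REST ingestion (endpoint/paging are hypothetical).
# fetch_page(cursor) returns one page of results as a dict.

def ingest_all(fetch_page):
    """Collect items across pages until the API stops returning a cursor."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:
            return items

# Fake two-page API for demonstration:
pages = {None: {"items": [1, 2], "next_cursor": "p2"},
         "p2": {"items": [3]}}
print(ingest_all(pages.__getitem__))  # → [1, 2, 3]
```

In production the injected function would wrap an HTTP client call; keeping it injectable is what makes the ingestion loop unit-testable, per the testing responsibility above.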

Nice to Have

  • Experience with real-time data streaming using Kafka or REST-based event processing.
  • Familiarity with Splunk or Grafana for pipeline monitoring and observability.
  • Exposure to data quality frameworks and data governance practices.
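A minimal sketch of the kind of row-level validation a data-quality framework automates — the rule names and predicates here are hypothetical:

```python
# Tiny data-quality gate (hypothetical rules): each rule maps a column
# to a predicate; rows failing any rule are reported rather than loaded.

def run_checks(rows, rules):
    """Return indices of rows that violate at least one rule."""
    bad = []
    for i, row in enumerate(rows):
        for column, predicate in rules.items():
            if not predicate(row.get(column)):
                bad.append(i)
                break  # one failure is enough to flag the row
    return bad

rules = {"amount": lambda v: isinstance(v, (int, float)) and v >= 0,
         "user_id": lambda v: v is not None}
rows = [{"user_id": "a", "amount": 3.0},
        {"user_id": None, "amount": 1.0},
        {"user_id": "b", "amount": -5}]
print(run_checks(rows, rules))  # → [1, 2]
```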

Behavioral & Soft Skills

  • Self-driven and takes ownership of outcomes.
  • Quick learner with the ability to adapt to new tools and technologies.
  • Collaborative and thrives in a team-oriented environment.
  • Strong attention to detail with a focus on reliability and quality.
  • Excellent communication and interpersonal skills.
Skills: dbt, Spark, Kafka, Snowflake, Airflow, Databricks, Git, AWS, DevOps, Python, PySpark, REST API, Azure, SQL
