Senior Data Engineering Analyst

6 - 11 years

15 - 30 Lacs

Posted: 4 days ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

We are building a scalable data ingestion and streaming platform that ingests change data capture (CDC) events from diverse source systems (databases and applications), processes them in real time, and lands curated data in our analytics lake. The platform uses Confluent connectors (Debezium/Oracle CDC) to emit Parquet files into cloud storage and leverages Databricks Auto Loader to incrementally ingest, deduplicate, and write this data into Bronze Delta Lake tables. To ensure broad applicability, the job description below emphasizes generic streaming and data engineering skills while highlighting the core technologies used in our solution.
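
For orientation, the Bronze ingestion step might look roughly like the following PySpark sketch. This is a minimal illustration, not the actual pipeline: the storage paths, checkpoint locations, and table name are hypothetical placeholders, and the cloudFiles (Auto Loader) source assumes a Databricks runtime.

```python
# Minimal Auto Loader sketch: incrementally pick up CDC Parquet files emitted
# by the Confluent connectors and append them to a Bronze Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

bronze_stream = (
    spark.readStream
    .format("cloudFiles")                            # Auto Loader source
    .option("cloudFiles.format", "parquet")          # connector output format
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/cdc_schema")
    .load("/mnt/landing/cdc_events/")                # hypothetical landing path
)

(
    bronze_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/cdc_bronze")
    .outputMode("append")
    .toTable("bronze.cdc_events")                    # hypothetical Bronze table
)
```

Auto Loader tracks already-processed files through the checkpoint, which is what makes the ingestion incremental rather than a full re-read of the landing path.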

Mandatory Skills:

Azure Data Engineering, Azure Databricks, Azure Data Factory, Delta Live Tables, ADLS, CDC (Change Data Capture), Data Ingestion, RDDs/DataFrames, Data Streaming.

Key Responsibilities and Requirements:

  • 5-10 years of experience designing and building data pipelines using Apache Spark, Databricks, or equivalent big data frameworks.
  • Hands-on expertise with streaming and messaging systems such as Apache Kafka (publish/subscribe architecture), Confluent Cloud, RabbitMQ, or Azure Event Hubs. Experience creating producers, consumers, and topics and integrating them into downstream processing.
  • Deep understanding of relational databases and CDC. Proficiency in SQL Server, Oracle, or other RDBMSs; experience capturing change events using Debezium or native CDC tools and transforming them for downstream consumption.
  • Implement CDC and deduplication logic. Capture change events from source databases using Debezium, the built-in CDC features of SQL Server/Oracle, or other connectors. Apply watermarking and drop-duplicates strategies based on primary keys and event timestamps (see the sketch after this list).
  • Proficiency in programming languages such as Python, Scala or Java and solid knowledge of SQL for data manipulation and transformation.
  • Cloud platform expertise. Experience with Azure or AWS services for data storage, compute, and orchestration (e.g., ADLS, S3, Azure Data Factory, AWS Glue, Airflow, DBX, DLT).
  • Data modelling and warehousing. Knowledge of data lakehouse architectures, Delta Lake, partitioning strategies, and performance optimisation.
  • Version control and DevOps. Familiarity with Git and CI/CD pipelines; ability to automate deployment and manage infrastructure as code.
  • Strong problem-solving and communication skills. Ability to work with cross-functional teams and articulate complex technical concepts to non-technical stakeholders.
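
The watermark-plus-drop-duplicates strategy mentioned above might be sketched as follows in PySpark Structured Streaming. The column names (pk_id, event_ts) and table names here are assumptions for illustration, not the actual schema:

```python
# Hedged sketch of streaming deduplication: keep one row per primary key and
# event timestamp, using a watermark to bound state for late-arriving events.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

deduped = (
    spark.readStream
    .table("bronze.cdc_events")                      # hypothetical Bronze table
    .withColumn("event_ts", F.col("event_ts").cast("timestamp"))
    .withWatermark("event_ts", "10 minutes")         # tolerate 10 min of lateness
    .dropDuplicates(["pk_id", "event_ts"])           # assumed primary-key column
)

(
    deduped.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/cdc_dedup")
    .outputMode("append")
    .toTable("silver.cdc_events_deduped")            # hypothetical curated table
)
```

Including the watermarked timestamp in the dropDuplicates key lets Spark expire old deduplication state instead of holding it indefinitely.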

Company: Optimum Solutions
Industry: Information Technology
Location: Tech City
