Senior Data Engineer/Architect || Immediate Joiners || Remote

9 - 14 years

27 - 40 Lacs

Posted: 4 hours ago | Platform: Naukri


Work Mode: Remote

Job Type: Full Time

Job Description

Experience Required: 10+ years

Mode of work: Remote

Skills Required: Azure Databricks, Kafka, Architecture, Azure Data Factory, PySpark, Python, SQL, Spark, Databricks Lakehouse Platform

Notice Period: Immediate joiners only; permanent or contract role (must be able to join by 29 September 2025)

Key Responsibilities

  • Translate business rules into technical specifications and implement scalable data solutions.
  • Manage a team of Data Engineers and oversee deliverables across multiple markets.
  • Apply performance optimization techniques in Databricks to handle large-scale datasets.
  • Collaborate with the Data Science team to prepare datasets for AI/ML model training.
  • Partner with the BI team to understand reporting expectations and deliver high-quality datasets.
  • Perform hands-on data modeling, including schema changes and accommodating new data attributes.
  • Implement data quality checks before and after data transformations to ensure reliability.
  • Troubleshoot and debug data issues, collaborating with source system/data teams for resolution.
  • Contribute across project phases: requirement analysis, development, code review, SIT, UAT, and production deployment.
  • Use Git for version control and manage CI/CD pipelines for seamless deployment across environments.
  • Adapt to dynamic business requirements and ensure timely delivery of solutions.

Required Skills & Experience:

  • 10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions.
  • Strong expertise in Azure Databricks, PySpark, and SQL.
  • Proven experience in data engineering leadership and handling cross-market deliverables.
  • Solid understanding of data modeling and ETL/ELT pipelines.
  • Hands-on experience with performance optimization in big data processing.
  • Proficiency in Git, CI/CD pipelines, and cloud-based deployment practices.
  • Strong problem-solving and debugging skills with large, complex datasets.
  • Excellent communication skills and ability to collaborate with cross-functional teams.

Nice-to-Have Skills

  • Exposure to AI/ML workflows and data preparation for model training.
  • Knowledge of BI tools and reporting structures.
  • Familiarity with Agile methodologies.

Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.

Enable Data Incorporated

Information Technology

