Senior Principal Engineer - Data Engineering

8 - 12 years

17 - 22 Lacs

Posted: 1 hour ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

What can you expect?
As a Data Engineer, you will be responsible for designing and implementing scalable data pipelines and AI-based solutions using Databricks. You will handle end-to-end ETL/ELT processes, manage large datasets, and work with tools like Python, PySpark, and AWS S3 to ensure data is transformed and optimized for analytical use.
You'll work on cutting-edge cloud and hybrid data projects, transforming raw data into meaningful insights and AI analytics. You'll be hands-on from day one, collaborating closely with architects and business stakeholders.
What is in it for you?
  • Hybrid way of working
  • Diversify your experience and learn new skills
  • Opportunity to work with stakeholders globally to learn and grow
We will count on you to:
  • Develop and maintain data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers).
  • Design AI-based solutions using Databricks Genie with end-to-end integration.
  • Expose and consume Databricks features via APIs using cloud-native tools or other applications.
  • Write data transformation scripts using Python and PySpark.
  • Store and manage real-time data in AWS S3 and integrate with other cloud-based services.
  • Use SQL to query, clean, and manipulate large datasets.
  • Collaborate with cross-functional teams to ensure data is accessible for business intelligence and analytics.
  • Monitor and troubleshoot data pipelines for performance and reliability.
  • Document data processes and follow best practices for scalability and maintainability.
  • Ingest and process structured and unstructured data across batch and streaming sources.
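The Medallion Architecture mentioned above can be sketched in miniature. Below is a plain-Python illustration (not actual Databricks/PySpark code) of the Bronze → Silver → Gold layering; the record fields and cleaning rules are hypothetical, chosen only to show the pattern.

```python
# Minimal sketch of the Medallion architecture:
# Bronze = raw ingested records, Silver = cleaned/deduplicated,
# Gold = aggregated, analytics-ready. In Databricks these layers would be
# Delta tables transformed with PySpark; field names here are hypothetical.

from collections import defaultdict

# Bronze layer: raw records as ingested (duplicates, nulls, mixed casing)
bronze = [
    {"order_id": 1, "region": "EMEA", "amount": "120.50"},
    {"order_id": 1, "region": "EMEA", "amount": "120.50"},  # duplicate
    {"order_id": 2, "region": "apac", "amount": "75.00"},
    {"order_id": 3, "region": "APAC", "amount": None},      # unusable record
]

def to_silver(records):
    """Clean and deduplicate: drop null amounts, normalise types and casing."""
    seen, silver = set(), []
    for r in records:
        if r["amount"] is None:
            continue  # drop records that cannot be used downstream
        if r["order_id"] in seen:
            continue  # deduplicate on the business key
        seen.add(r["order_id"])
        silver.append({
            "order_id": r["order_id"],
            "region": r["region"].upper(),
            "amount": float(r["amount"]),
        })
    return silver

def to_gold(records):
    """Aggregate to an analytics-ready view: revenue per region."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EMEA': 120.5, 'APAC': 75.0}
```

In a real pipeline, each layer would be persisted as its own Delta table and the transformations would run as scheduled or event-based Databricks jobs.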
What you need to have:
  • Experience with Databricks components such as pipelines, scheduled/event-based jobs, Genie, Unity Catalog, and SQL warehouses.
  • Proficiency in Python, PySpark, and SQL for data processing and transformation using AWS S3 data.
  • Experience with data governance, data access security, and configuring job compute for different jobs in Databricks.
  • Familiarity with version control using Git.
  • Understanding of the Databricks API and its integration with different tools and applications.
  • Understanding of bulk data processing and real-time data streaming.
  • Experience with Delta Lake and other Databricks technologies. Knowledge of additional AWS services (e.g., Athena, Glue, Lambda, S3, DMS).
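As a concrete example of working with the Databricks API, the sketch below builds (but does not send) an authenticated request to the REST Jobs API's `run-now` endpoint using only the Python standard library. The workspace URL, token, and job ID are placeholders, not real values.

```python
# Sketch of triggering a Databricks job via the REST Jobs API
# (POST /api/2.1/jobs/run-now) using only the standard library.
# DATABRICKS_HOST, TOKEN, and job_id below are placeholders.

import json
import urllib.request

DATABRICKS_HOST = "https://example.cloud.databricks.com"  # placeholder workspace URL
TOKEN = "dapiXXXXXXXX"                                    # placeholder access token

def build_run_now_request(job_id: int) -> urllib.request.Request:
    """Build (but do not send) an authenticated run-now request."""
    body = json.dumps({"job_id": job_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_run_now_request(job_id=123)
print(req.full_url)      # https://example.cloud.databricks.com/api/2.1/jobs/run-now
print(req.get_method())  # POST
```

In practice you would send the request with `urllib.request.urlopen(req)` against a real workspace, or use the official `databricks-sdk` package instead of hand-building HTTP calls.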

Mercer

Consulting

New York
