8 years

15 - 25 Lacs

Posted: 5 hours ago | Platform: GlassDoor


Work Mode

On-site

Job Type

Full Time

Job Description

Experience: 8+ Years (Minimum 5 years of relevant experience in Databricks)

Location: Hyderabad

Work Mode: Onsite (5 Days a Week)

Notice Period: Immediate

Position Overview

We are seeking a highly skilled Data Engineer to design and implement scalable data platforms leveraging Databricks. The ideal candidate will have deep expertise in data architecture, pipeline development, and integration of diverse data sources, including SQL Server, MongoDB, and InfluxDB. The role requires proficiency in both real-time and batch data processing, with a strong foundation in cloud data solutions (preferably Azure). This position offers the opportunity to work on advanced analytics, machine learning enablement, and enterprise-scale data solutions that drive business insights and innovation.

Key Responsibilities

• Design, build, and maintain a robust data platform on Databricks.

• Develop scalable ETL/ELT pipelines to ingest data from multiple sources (SQL Server, MongoDB, InfluxDB) into Databricks Delta Lake.

• Implement both real-time and batch data ingestion strategies using Kafka, Azure Event Hubs, or equivalent tools.

• Optimize data storage and processing for performance, scalability, and cost efficiency.

• Build and maintain data models supporting BI, analytics, and machine learning use cases.

• Collaborate closely with Data Scientists, Analysts, and Product Teams to define and deliver data requirements.

• Ensure data quality, security, and governance across all pipelines and data repositories.

• Conduct performance tuning, monitoring, and troubleshooting to ensure the reliability of data workflows.
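The ETL/ELT responsibilities above can be sketched in miniature. The following plain-Python example is a rough illustration only (the function names and sample records are hypothetical; a production pipeline would run on PySpark against Delta Lake): it shows extract, transform, and load stages, with a data-quality filter in the transform step and a keyed upsert in the load step, the same upsert semantics a Delta Lake MERGE provides.

```python
def extract(source_rows):
    """Pull raw records from a source system (stubbed as an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Basic data-quality step: drop records missing a primary key, normalise names."""
    return [
        {**row, "name": row["name"].strip().lower()}
        for row in rows
        if row.get("id") is not None
    ]

def load(rows, target):
    """Upsert by primary key into the target store (a dict standing in for a Delta table)."""
    for row in rows:
        target[row["id"]] = row
    return target
```

For example, loading `[{"id": 1, "name": " Alice "}, {"id": None, "name": "ghost"}]` into an empty target drops the unkeyed record and stores the other as `{"id": 1, "name": "alice"}`.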

Required Skills & Qualifications

• Proven hands-on experience in Databricks, including Delta Lake, Spark, PySpark, and SQL.

• Strong understanding of data integration from heterogeneous systems: SQL Server, MongoDB, and InfluxDB.

• Expertise in ETL/ELT pipeline development and workflow orchestration using tools like Apache Airflow, Azure Data Factory, or similar.

• Proficiency in data modeling, data warehousing, and performance optimization techniques.

• Experience in real-time data streaming using Kafka, Azure Event Hubs, or related technologies.

• Advanced programming skills in Python and SQL.

• Working knowledge of Azure Cloud and its data services.

• Experience with Change Data Capture (CDC) techniques for incremental data processing.

• Excellent problem-solving, debugging, and analytical skills.
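Change Data Capture, one of the skills listed above, boils down to applying a stream of insert/update/delete events incrementally to a target table instead of reloading it wholesale. A minimal sketch in plain Python (the event format here is an assumption for illustration; on Databricks this would typically be done with Delta Lake's MERGE INTO or Change Data Feed):

```python
def apply_cdc(target, changes):
    """Apply a batch of CDC events to a keyed target table.

    Each event is a (op, record) pair: 'I' insert, 'U' update, 'D' delete.
    Inserts and updates both upsert by key, mirroring MERGE semantics.
    """
    for op, record in changes:
        key = record["id"]
        if op == "D":
            target.pop(key, None)
        else:
            target[key] = record
    return target
```

Starting from `{1: {"id": 1, "v": "a"}}`, the events update key 1, insert key 2, then delete key 1, leaving only the newly inserted record.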

Job Type: Full-time

Pay: ₹1,500,000.00 - ₹2,500,000.00 per year

Work Location: In person
