Posted: 5 hours ago
Experience: 8+ Years (Minimum 5 years of relevant experience in Databricks)
Location: Hyderabad
Work Mode: Onsite (5 Days a Week)
Notice Period: Immediate

Position Overview
We are seeking a highly skilled Data Engineer to design and implement scalable data platforms leveraging Databricks. The ideal candidate will have deep expertise in data architecture, pipeline development, and integration of diverse data sources including SQL Server, MongoDB, and InfluxDB. The role requires proficiency in both real-time and batch data processing, with a strong foundation in cloud data solutions (preferably Azure).

This position offers the opportunity to work on advanced analytics, machine learning enablement, and enterprise-scale data solutions that drive business insights and innovation.
Key Responsibilities
Design, build, and maintain a robust data platform on Databricks.
Develop scalable ETL/ELT pipelines to ingest data from multiple sources (SQL Server, MongoDB, InfluxDB) into Databricks Delta Lake.
Implement both real-time and batch data ingestion strategies using Kafka, Azure Event Hubs, or equivalent tools.
Optimize data storage and processing for performance, scalability, and cost efficiency.
Build and maintain data models supporting BI, analytics, and machine learning use cases.
Collaborate closely with Data Scientists, Analysts, and Product Teams to define and deliver data requirements.
Ensure data quality, security, and governance across all pipelines and data repositories.
Conduct performance tuning, monitoring, and troubleshooting to ensure reliability of data workflows.
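As a miniature illustration of the pipeline responsibilities above, the essential extract/transform/load flow can be sketched in plain Python (all source, field, and function names here are hypothetical; a production implementation on Databricks would use PySpark jobs writing to Delta Lake tables):

```python
# Minimal batch ETL sketch: extract rows from several "sources",
# normalize them, and load them into a single target table.
# Source names stand in for SQL Server, MongoDB, and InfluxDB.

def extract(sources):
    """Yield raw records from each source, tagged with its origin."""
    for name, rows in sources.items():
        for row in rows:
            yield {**row, "source": name}

def transform(record):
    """Normalize field names and types before loading."""
    return {
        "source": record["source"],
        "sensor_id": str(record["id"]),
        "value": float(record["value"]),
    }

def load(records, target):
    """Append transformed records to the target (stand-in for a Delta table)."""
    target.extend(records)
    return len(target)

sources = {
    "sqlserver": [{"id": 1, "value": "10.5"}],
    "influxdb": [{"id": 2, "value": "0.25"}],
}
target = []
count = load((transform(r) for r in extract(sources)), target)
print(count)  # 2
```

The same three-stage shape carries over to Spark: `extract` becomes a set of source reads, `transform` a chain of DataFrame operations, and `load` a write to a Delta table.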
Required Skills & Qualifications
Proven hands-on experience in Databricks, including Delta Lake, Spark, PySpark, and SQL.
Strong understanding of data integration from heterogeneous systems: SQL Server, MongoDB, and InfluxDB.
Expertise in ETL/ELT pipeline development and workflow orchestration using tools like Apache Airflow, Azure Data Factory, or similar.
Proficiency in data modeling, data warehousing, and performance optimization techniques.
Experience in real-time data streaming using Kafka, Azure Event Hubs, or related technologies.
Advanced programming skills in Python and SQL.
Working knowledge of Azure Cloud and its data services.
Experience with Change Data Capture (CDC) techniques for incremental data processing.
Excellent problem-solving, debugging, and analytical skills.
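To make the CDC requirement above concrete: the core of incremental processing is applying insert/update/delete change events to a target table keyed by primary key. A pure-Python sketch (the event shape is hypothetical, chosen only for illustration):

```python
# Apply a batch of CDC events to a keyed target table.
# Illustrative event shape: {"op": "insert"|"update"|"delete", "key": ..., "row": ...}

def apply_cdc(target, events):
    """Mutate `target` (a dict keyed by primary key) with incremental changes."""
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            target[key] = ev["row"]   # upsert semantics
        elif op == "delete":
            target.pop(key, None)     # idempotent delete
    return target

table = {1: {"name": "pump-A", "status": "ok"}}
events = [
    {"op": "update", "key": 1, "row": {"name": "pump-A", "status": "fault"}},
    {"op": "insert", "key": 2, "row": {"name": "pump-B", "status": "ok"}},
    {"op": "delete", "key": 1, "row": None},
]
apply_cdc(table, events)
print(sorted(table))  # [2]
```

On Databricks this logic typically maps onto a Delta Lake `MERGE INTO` statement, with `WHEN MATCHED` and `WHEN NOT MATCHED` clauses driven by a CDC feed from the source systems.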
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Work Location: In person
Princenton software services pvt ltd