Data Engineer - Databricks

Experience: 10 years

Salary: 10 - 20 Lacs

Posted: 6 days ago | Platform: GlassDoor


Work Mode

On-site

Job Type

Full Time

Job Description

#Connections #Hiring #Fulltime #Contract #Experience #DataEngineer

Hi Connections,

We are hiring...

Job Description: Data Engineer – Databricks Integration

Job Type: Full-Time / Contract

About the Role

We are seeking a highly skilled Data Engineer to design, develop, and maintain data pipelines that extract data from Oracle Symphony via APIs, process and store it in the Databricks Lakehouse platform, and then integrate it into Oracle EPM (Enterprise Performance Management). This role requires deep expertise in data integration, ETL/ELT, APIs, and Databricks. The candidate will work closely with business stakeholders, architects, and analysts to ensure seamless data flow, transformation, and availability for financial planning, reporting, and analytics.
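For context, a minimal sketch of the kind of Symphony-to-Lakehouse pipeline described above. This is illustrative only: the endpoint URL, credentials, and table/column names are hypothetical placeholders, not details of the actual systems.

```python
# Illustrative sketch only: endpoint, token, and table/column names are placeholders.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("symphony_to_lakehouse").getOrCreate()

# 1. Extract: pull JSON records from a (hypothetical) Oracle Symphony REST API.
response = requests.get(
    "https://symphony.example.com/api/v1/transactions",  # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},         # placeholder credentials
    timeout=60,
)
response.raise_for_status()
records = response.json()

# 2. Transform: load into a Spark DataFrame and apply basic cleansing.
df = spark.createDataFrame(records)
clean_df = df.dropDuplicates().na.drop(subset=["transaction_id"])  # example column

# 3. Load: store as a Delta table in the Lakehouse for downstream EPM integration.
(clean_df.write
    .format("delta")
    .mode("append")
    .saveAsTable("finance.symphony_transactions"))  # placeholder table name
```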

Key Responsibilities

Design and implement end-to-end pipelines from Oracle Symphony (API extraction) into Databricks Lakehouse.

Develop efficient ETL/ELT processes in Databricks (PySpark, Delta Lake) to transform, cleanse, and enrich data.

Build and maintain data flows from Databricks into Oracle EPM to support reporting, forecasting, and planning.

Ensure data quality, consistency, and governance across Symphony, Databricks, and EPM.

Optimize pipeline performance, scalability, and reliability.

Collaborate with data architects, finance teams, and Oracle specialists to meet business needs.

Troubleshoot pipeline issues and provide production support for data integration processes.

Document architecture, pipeline logic, and integration workflows.

Stay current on Databricks, Oracle, and API integration best practices.

Required Skills & Qualifications

Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

10+ years of experience in data engineering, ETL/ELT, and data pipeline development.

Hands-on experience with Databricks (PySpark, Delta Lake, MLflow).

Strong experience with APIs (REST, SOAP, JSON, XML) for data extraction and integration.

Proficiency in SQL, Python, and Spark for data processing.

Experience with cloud platforms (Azure, AWS, or GCP) for hosting Databricks and related services.

Knowledge of data modeling, data governance, and performance tuning.

Strong problem-solving skills and ability to work in cross-functional teams.

Interested candidates, kindly send your updated profile to pavani@sandvcapitals.com or reach us at 7995292089.

Thank you.

Job Type: Full-time

Pay: ₹1,000,000.00 - ₹2,000,000.00 per year

Work Location: In person
