Posted: 6 days ago
Hi Connections,
We are hiring...
Job Description: Data Engineer – Databricks Integration
Job Type:
Full-Time / Contract
About the Role
We are seeking a highly skilled Data Engineer to design, develop, and maintain data pipelines that extract data from Oracle Symphony via APIs, process and store it in the Databricks Lakehouse platform, and then integrate it into Oracle EPM (Enterprise Performance Management). This role requires deep expertise in data integration, ETL/ELT, APIs, and Databricks. The candidate will work closely with business stakeholders, architects, and analysts to ensure seamless data flow, transformation, and availability for financial planning, reporting, and analytics.
Key Responsibilities
Design and implement end-to-end pipelines from Oracle Symphony (API extraction) into Databricks Lakehouse.
Develop efficient ETL/ELT processes in Databricks (PySpark, Delta Lake) to transform, cleanse, and enrich data.
Build and maintain data flows from Databricks into Oracle EPM to support reporting, forecasting, and planning.
Ensure data quality, consistency, and governance across Symphony, Databricks, and EPM.
Optimize pipeline performance, scalability, and reliability.
Collaborate with data architects, finance teams, and Oracle specialists to meet business needs.
Troubleshoot pipeline issues and provide production support for data integration processes.
Document architecture, pipeline logic, and integration workflows.
Stay current on Databricks, Oracle, and API integration best practices.
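To illustrate the kind of pipeline described above, here is a minimal sketch of the Symphony → Databricks → EPM flow. All endpoint, field, and function names are hypothetical; a production pipeline would call the actual Oracle Symphony REST APIs and use PySpark with Delta Lake inside Databricks rather than plain Python.

```python
# Hedged sketch of the extract -> transform -> load flow; field names are
# illustrative, not the real Oracle Symphony or EPM schemas.
import json

def extract_symphony_payload(raw: str) -> list[dict]:
    """Parse a (hypothetical) Oracle Symphony API JSON response into records."""
    payload = json.loads(raw)
    return payload.get("items", [])

def transform(records: list[dict]) -> list[dict]:
    """Cleanse and enrich: drop records missing a key, normalise amounts."""
    out = []
    for r in records:
        if r.get("id") is None:
            continue  # data-quality rule: skip records without an identifier
        out.append({"id": r["id"], "amount": round(float(r.get("amount", 0)), 2)})
    return out

def load_to_epm(records: list[dict]) -> int:
    """Placeholder for the Oracle EPM integration step; returns rows loaded."""
    return len(records)

raw = '{"items": [{"id": 1, "amount": "10.456"}, {"amount": "5"}]}'
rows = transform(extract_symphony_payload(raw))
print(rows)  # → [{'id': 1, 'amount': 10.46}]
print(load_to_epm(rows))  # → 1
```

In Databricks, the `transform` step would typically become a PySpark DataFrame job writing to a Delta Lake table, with the data-quality rules expressed as filter and constraint logic.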
Required Skills & Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
10+ years of experience in data engineering, ETL/ELT, and data pipeline development.
Hands-on experience with Databricks (PySpark, Delta Lake, MLflow).
Strong experience with APIs (REST, SOAP, JSON, XML) for data extraction and integration.
Proficiency in SQL, Python, and Spark for data processing.
Experience with cloud platforms (Azure, AWS, or GCP) for hosting Databricks and related services.
Knowledge of data modeling, data governance, and performance tuning.
Strong problem-solving skills and ability to work in cross-functional teams.
Interested candidates: please share your updated profile at pavani@sandvcapitals.com or reach us at 7995292089.
Thank you.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹2,000,000.00 per year
Work Location: In person
SandVcapitals Private Limited