Senior Data Engineer (Flexcube)

5 - 8 years

25 - 30 Lacs

Bengaluru

Posted: 9 hours ago | Platform: Naukri


Skills Required

Computer science, Data analysis, Coding, Consulting, Information technology, Unix shell scripting, Monitoring, SQL, Python, Core banking

Work Mode

Work from Office

Job Type

Full Time

Job Description

We are seeking a skilled Data Engineer with extensive experience in the Hadoop ecosystem and a strong background in integrating and managing data from FLEXCUBE core banking systems. The ideal candidate will play a key role in designing, implementing, and optimizing our data pipelines, ensuring seamless data flow and analysis.

Key Responsibilities:

  • Data Integration: Lead the integration of data from FLEXCUBE core banking systems into our Hadoop-based data infrastructure; develop and maintain efficient data ingestion processes (an illustrative sketch follows after this description).
  • Hadoop Ecosystem: Design, build, and optimize data pipelines within the Hadoop ecosystem, including HDFS, Sqoop, Unix shell scripting, Python, and Spark.
  • Data Modelling: Create and maintain data models, schemas, and structures to support data analysis and reporting requirements.
  • ETL Processes: Develop Extract, Transform, Load (ETL) processes to cleanse, enrich, and transform data for downstream consumption.
  • Data Quality: Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and consistency of data.
  • Performance Optimization: Optimize data processing and query performance within the Hadoop ecosystem.
  • Data Security: Ensure data security and compliance with data privacy regulations during data handling and processing.
  • Documentation: Maintain thorough documentation of data pipelines, transformations, and data flow processes.
  • Collaboration: Collaborate with cross-functional teams, including FLEXCUBE consultants, data scientists, analysts, and business stakeholders, to understand data requirements and deliver actionable insights.
  • Mastery of the FLEXCUBE 14.7 backend tables and data model is essential.

Qualifications:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 5 to 8 years of proven experience designing and implementing data solutions within the Hadoop ecosystem.
  • Strong expertise in Hadoop components such as HDFS, Sqoop, Unix shell scripting, Python, and Spark.
  • Experience with FLEXCUBE integration and data extraction.
  • Proficiency in SQL and database systems.
  • Knowledge of data modelling and ETL processes.
  • Strong problem-solving and analytical skills.
  • Excellent communication and teamwork abilities.
  • Banking or financial industry experience is a plus.
  • Certifications in Hadoop or related technologies are beneficial.

Additional Information:

This role offers an exciting opportunity to work on cutting-edge data projects and contribute to data-driven decision-making in the financial sector. The candidate should be prepared to work in a dynamic and collaborative environment. Candidates with a strong background in the Hadoop ecosystem and experience with FLEXCUBE integration are encouraged to apply. We are committed to fostering professional growth and providing opportunities for skill development.
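For illustration only, the kind of pipeline described above might look like the following minimal PySpark sketch. It assumes FLEXCUBE account data has already been landed in HDFS (for example via a Sqoop extract from the Oracle backend); the HDFS paths, the STTM_CUST_ACCOUNT table name, and the column names are hypothetical examples, not part of this posting.

# Minimal PySpark sketch: read a raw FLEXCUBE extract from HDFS, apply a basic
# cleansing/enrichment step, run simple data-quality checks, and write a curated output.
# Paths, table, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("flexcube_etl_sketch").getOrCreate()

# Raw extract (e.g. ingested from the FLEXCUBE Oracle backend via Sqoop).
raw = spark.read.parquet("hdfs:///data/raw/flexcube/sttm_cust_account")

# Transform: trim account keys, normalise currency codes, add a load date.
curated = (
    raw.withColumn("cust_ac_no", F.trim(F.col("cust_ac_no")))
       .withColumn("ccy", F.upper(F.col("ccy")))
       .withColumn("load_date", F.current_date())
)

# Data-quality checks: fail the job on null or duplicated account numbers.
if curated.filter(F.col("cust_ac_no").isNull()).count() > 0:
    raise ValueError("Null account numbers found in FLEXCUBE extract")
if curated.count() != curated.select("cust_ac_no").distinct().count():
    raise ValueError("Duplicate account numbers found in FLEXCUBE extract")

curated.write.mode("overwrite").parquet("hdfs:///data/curated/flexcube/accounts")

In practice the quality checks, schema handling, and scheduling would be more extensive; this is only meant to show the ingest-transform-validate flow the role covers.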

Oracle

Information Technology

Redwood City

135,000 Employees


Key People

  • Safra Catz, CEO
  • Larry Ellison, Co-Founder & CTO
