Senior Data Engineer (Flexcube)

3 years

Posted: 1 day ago | Platform: LinkedIn

Work Mode

On-site

Job Type

Full Time

Job Description

About the Company/Team

We are a global IT services provider specializing in the Banking and Financial Services domain. Our team delivers cutting-edge solutions to clients worldwide, enhancing and extending their existing platforms to meet evolving industry demands. As a trusted partner, we offer core banking surround services that help financial institutions stay ahead of the curve. Our team takes pride in its expertise in big data engineering and data-driven solutions. We work closely with clients to design and develop robust data infrastructure, ensuring their systems are future-proof and capable of handling massive transaction volumes.

Job Summary

We are seeking a talented Data Engineer to join our dynamic team, focusing on big data solutions for the banking industry. In this role, you will lead the design and development of Big Data Warehouse and Data Lake systems, playing a crucial part in our clients' digital transformation journeys. Your expertise in data engineering will empower financial institutions to harness the power of their data, enabling better decision-making and enhanced customer experiences.

Key Responsibilities

  • Big Data Warehouse and Data Lake Development: Lead the design and implementation of Big Data Warehouse and Data Lake systems, utilizing the Hadoop ecosystem and cloud technologies.
  • Data Pipeline Architecture: Architect efficient data pipelines to ingest, transform, and load data from various sources into the data warehouse and data lake.
  • Hadoop Ecosystem Expertise: Employ tools like Spark, Hadoop, Hive, and Sqoop to process and analyze large-scale data, ensuring optimal performance and security.
  • Cloud Integration: Leverage cloud technologies such as Microsoft Azure Data Factory, Azure Databricks, and ADLS Gen2 to build scalable and cost-effective data solutions.
  • Data Modeling: Collaborate with the functional team to understand requirements and design canonical data models for transaction processing.
  • Code Development: Write high-quality code using Python, Scala, and SQL, adhering to best practices and coding standards.
  • Peer Reviews: Conduct code reviews to ensure code quality and early defect detection.
  • Troubleshooting: Troubleshoot and resolve issues, following escalation procedures as needed.
  • Team Collaboration: Work as a senior member of the development team, providing technical guidance and participating in project discussions.
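To illustrate the ingest-transform-load pattern described in the pipeline responsibilities above, here is a minimal, framework-free Python sketch. All names and the schema are hypothetical (this is not the FLEXCUBE data model); in practice these steps would run on Spark or Azure Data Factory rather than plain Python.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative canonical transaction record (hypothetical schema,
# NOT the actual Oracle FLEXCUBE backend model).
@dataclass
class Transaction:
    account_id: str
    amount: float
    currency: str
    booking_date: date

def ingest(raw_rows):
    """Parse raw source rows (e.g. CSV extracts) into canonical records."""
    return [
        Transaction(r["acct"], float(r["amt"]), r["ccy"],
                    date.fromisoformat(r["dt"]))
        for r in raw_rows
    ]

def transform(txns, min_amount=0.0):
    """Filter and enrich: drop non-positive amounts, flag high-value rows."""
    return [
        {**t.__dict__, "high_value": t.amount >= 10_000}
        for t in txns if t.amount > min_amount
    ]

def load(rows, sink):
    """Append transformed rows to the target store (a list standing in
    for a Data Lake partition)."""
    sink.extend(rows)
    return len(rows)

raw = [
    {"acct": "ACC1", "amt": "12500.00", "ccy": "USD", "dt": "2024-01-15"},
    {"acct": "ACC2", "amt": "-40.00",   "ccy": "EUR", "dt": "2024-01-15"},
]
lake = []
loaded = load(transform(ingest(raw)), lake)
```

The same three stages map directly onto a Spark job (DataFrame reads, `filter`/`withColumn` transforms, partitioned writes to ADLS), which is the scale at which this role operates.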

Qualifications & Skills

Mandatory:

  • A Computer Science-related degree (BE/BTech/MCA) is required.
  • 3-7 years of experience in Data Engineering, with a strong focus on big data technologies.
  • 3-7 years of hands-on experience with Hadoop ecosystem tools, particularly Spark (PySpark or Spark with Scala).
  • Proficiency in programming languages: Python, Scala, and SQL.
  • Understanding of distributed computing, data structures, and algorithms.
  • Experience in the banking domain, preferably with knowledge of core banking solutions like Oracle FLEXCUBE.
  • Strong analytical and communication skills.

Good-to-Have:

  • Prior involvement in at least two production implementations of Big Data Warehouse/Data Lake projects.
  • Knowledge of Oracle FLEXCUBE backend data model.
  • Familiarity with SDLC and Agile methodologies.
  • Flexibility and adaptability to changing project priorities.
  • Excellent written and verbal communication skills.

Self-Assessment Questions:

  • Describe a successful Big Data Warehouse or Data Lake project you worked on. What were your key contributions, and how did you ensure its success?
  • How do you approach designing a data pipeline for a large-scale transaction processing system? Elaborate on the tools and techniques you would choose and why.
  • Share your experience with Spark and Hadoop. How have you optimized data processing and analysis using these technologies?
  • In your experience, what are the critical considerations when working with cloud data solutions like Azure Data Factory and Azure Databricks?
  • How do you stay updated with the latest trends and technologies in the data engineering field, and how do you apply this knowledge to your work?

Filtration and Screening Criteria:

  • 3-7 years of experience in Data Engineering, specifically in the Big Data domain.
  • Hands-on expertise with Hadoop ecosystem tools, including Spark and Kafka.
  • Deep knowledge of distributed computing and storage systems.
  • Proficiency in SQL and PL/SQL programming and performance tuning.
  • Understanding of Oracle FLEXCUBE backend data model.
  • Strong analytical and communication skills.

Qualifications

Career Level - IC3

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity.

We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency, and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Oracle

Information Technology

Redwood City
