Data Engineer - Databricks/Snowflake/Open Table Formats

Experience: 3 - 4 years

Salary: 10 - 15 Lacs

Posted: 1 week ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

About Oracle FSGIU - Finergy:

The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Job Summary:

We are seeking a skilled Data Engineer experienced in modern data platforms and open table formats. The ideal candidate will have direct, hands-on experience with Databricks or Snowflake, be proficient with open table formats such as Apache Iceberg, and be familiar with catalog solutions such as Unity Catalog. Experience with AWS clusters and traditional RDBMS, along with knowledge of both SQL and NoSQL approaches, will be highly valued.
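For context on what "open table formats and catalogs" means in practice (this is an illustrative sketch, not part of the role description), wiring an Iceberg catalog into a Spark cluster is typically a matter of configuration; the catalog name `demo_catalog` and warehouse path below are hypothetical placeholders:

```properties
# Hypothetical Spark configuration registering an Apache Iceberg catalog.
spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.demo_catalog=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.demo_catalog.type=hadoop
spark.sql.catalog.demo_catalog.warehouse=s3://example-bucket/warehouse
```

With a configuration like this in place, tables created under `demo_catalog` are stored in Iceberg format and become queryable across engines that support the format.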

Job Responsibilities

  • Design, develop, and maintain data pipelines and solutions leveraging Databricks, Snowflake, or open table formats.
  • Work with open table formats such as Apache Iceberg and open catalogs such as Unity Catalog for data governance and management.
  • Deploy, configure, and utilize AWS clusters for data processing and storage workloads.
  • Develop and optimize SQL code: create and modify tables, views, procedures, and queries in traditional RDBMS systems (PostgreSQL, Oracle, Sybase).
  • Apply NoSQL capabilities: work with JSON and array datatypes, and implement views in PostgreSQL.
  • Collaborate with cross-functional teams to deliver efficient and scalable data architectures.
  • Participate in schema design discussions; exposure to Data Vault and/or Star schema design is a plus.
  • Write clear technical documentation and adhere to data governance and compliance standards.
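As a minimal sketch of the SQL and semi-structured-data work the bullets above describe (creating tables and views, querying JSON documents), here is a self-contained example. PostgreSQL offers native `jsonb` and array types; the stdlib `sqlite3` module is used here only so the sketch runs anywhere, and the table and column names are hypothetical:

```python
import json
import sqlite3

# In-memory database standing in for a traditional RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id INTEGER PRIMARY KEY,
        holder TEXT NOT NULL,
        attributes TEXT  -- JSON document, e.g. {"tier": "gold"}
    )
""")
conn.execute(
    "INSERT INTO accounts (holder, attributes) VALUES (?, ?)",
    ("Asha", json.dumps({"tier": "gold", "branches": ["MUM", "BLR"]})),
)

# A view narrowing the table to the columns downstream consumers need.
conn.execute("CREATE VIEW account_summary AS SELECT id, holder FROM accounts")

rows = conn.execute("SELECT holder FROM account_summary").fetchall()
# Pull a field out of the semi-structured JSON column.
tier = json.loads(
    conn.execute(
        "SELECT attributes FROM accounts WHERE holder = ?", ("Asha",)
    ).fetchone()[0]
)["tier"]
```

In PostgreSQL the same query would use the `jsonb` type and operators such as `->>` rather than round-tripping through application-side JSON parsing.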

Qualifications & Skills:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related discipline.
  • 5+ years of experience with Databricks, Snowflake, or open table formats (such as Apache Iceberg).
  • Proficient understanding of open table formats like Iceberg and open catalog solutions like Unity Catalog.
  • Hands-on experience with AWS clusters.
  • Strong SQL skills for RDBMS (PostgreSQL, Oracle, Sybase), including views, procedures, and tables.
  • Some knowledge of NoSQL queries and implementing views in PostgreSQL.
  • Excellent communication and problem-solving skills.

Nice to Have:

  • PySpark coding skills.
  • Exposure to Data Vault and/or Star schema design.
  • Familiarity with data pipeline orchestration and ETL tools.
  • Experience with data governance and cataloging tools.

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.
Career Level - IC1


Oracle

Information Technology

Redwood City
