
3 Iceberg Tables Jobs

JobPe aggregates results for easy application access, but applications are submitted directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Project Engineer at one of the top Australian banks, you will play a key role in modernizing the data and analytics platform. Working closely with IT and business stakeholders in the Data and Platform team, you will contribute to implementing the bank's data strategy to position it as the best AI bank worldwide. Your responsibilities will include creating technology blueprints and engineering roadmaps for multi-year data transformation programs.

You should have a strong understanding of architectural patterns, cloud-native solutions on AWS, microservices architecture, solutions integration, and containerization. Extensive knowledge of AWS services is essential, especially those related to data storage and processing such as S3, RDS, Redshift, DataZone, and Glue. As an expert in the data development lifecycle, you will focus on data ingestion processes, data transformation pipelines, data integration, and visualization. Excellent stakeholder management skills and a "can-do" attitude will be crucial for success in this role.

**Skills:**

**Must have:**
- 8+ years of hands-on experience as an AWS Platform Engineer
- Hands-on experience with AWS compute and services, including EC2, and with building a data technology stack such as Hadoop/EMR Serverless, Glue, Redshift, Airflow, and Aurora PostgreSQL
- Experience in event-driven architecture using Apache Kafka, AWS Kinesis, or similar technologies
- Infrastructure as code using CloudFormation or Terraform
- Proficiency in Python and SQL
- Building CI/CD tooling with GitHub/Jenkins

**Nice to have:**
- AWS Solutions Architect certification
- Knowledge of containerization (Docker, Kubernetes)
- Experience with data visualization tools and integration with Tableau and Power BI
- Familiarity with Alation
- Exposure to observability tools such as Observe, Splunk, or Prometheus/Grafana
- Experience with Ab Initio or dbt tooling
- Proficiency with the Parquet file format and Iceberg tables
- Understanding of the Glue Data Catalog and AWS DataZone
- Markets domain knowledge
- Experience with complex hierarchical datasets such as those used in financial products like Murex/MRE/Wallstreet

**Other:**
- **Languages:** English (C2 Proficient)
- **Seniority:** Senior
- **Location:** Bengaluru, India
- **Industry:** BCM Industry
- **Closing Date:** 30/07/2025
- **Job Reference:** VR-116201

This is an exciting opportunity for a skilled professional with a passion for data engineering and AWS technologies to contribute to the transformation of a leading bank's data platform.
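To ground the Iceberg requirement above, here is a minimal, purely illustrative sketch of registering an Iceberg table in the Glue Data Catalog through Athena with boto3; the region, database, bucket, and table names are hypothetical placeholders, not details from this listing.

```python
import boto3

# Hypothetical region, database, bucket, and table names.
athena = boto3.client("athena", region_name="ap-south-1")

ddl = """
CREATE TABLE analytics_db.trades (
    trade_id   string,
    notional   double,
    trade_date date
)
PARTITIONED BY (trade_date)
LOCATION 's3://example-data-lake/trades/'
TBLPROPERTIES ('table_type' = 'ICEBERG')
"""

# Submit the DDL; Athena registers the Iceberg table in the Glue Data Catalog.
response = athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("QueryExecutionId:", response["QueryExecutionId"])
```

Athena keeps Iceberg table metadata in the Glue Data Catalog, which is why the listing pairs Glue and Iceberg as related skills.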

Posted 2 days ago


4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a highly motivated and experienced Data Engineer, you will be responsible for designing, developing, and implementing solutions that enable seamless data integration across multiple cloud platforms. Your expertise in data lake architecture, Iceberg tables, and cloud compute engines like Snowflake, BigQuery, and Athena will ensure efficient and reliable data access for various downstream applications.

Your key responsibilities will include collaborating with stakeholders to understand data needs and define schemas, and designing and implementing data pipelines for ingesting, transforming, and storing data. You will also develop data transformation logic to make Iceberg tables compatible with the data access requirements of Snowflake, BigQuery, and Athena, and design and implement solutions for seamless data transfer and synchronization across different cloud platforms. Ensuring data consistency and quality across the data lake and target cloud environments will be crucial in your role. Additionally, you will analyze data patterns and identify performance bottlenecks in data pipelines, implement data optimization techniques to improve query performance and reduce data storage costs, and monitor data lake health to proactively address potential issues. Collaboration and communication with architects, leads, and other stakeholders to ensure data quality meets specific requirements will also be an essential part of your role.

To be successful in this position, you should have a minimum of 4 years of experience as a Data Engineer, strong hands-on experience with data lake architectures and technologies, proficiency in SQL and scripting languages, and experience with data governance and security best practices. Excellent problem-solving and analytical skills, strong communication and collaboration skills, and familiarity with cloud-native data tools and services are also required. Certifications in relevant cloud technologies will be beneficial.

In return, GlobalLogic offers exciting projects in industries like high-tech, communication, media, healthcare, retail, and telecom. You will have the opportunity to collaborate with a diverse team of highly talented individuals in an open, laid-back environment. Work-life balance is prioritized with flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional development opportunities include communication skills training, stress management programs, professional certifications, and technical and soft-skill trainings. GlobalLogic provides competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Fun perks such as sports events, cultural activities, subsidized food, corporate parties, dedicated GL Zones, rooftop decks, and discounts at popular stores and restaurants are also part of the vibrant office culture at GlobalLogic.

About GlobalLogic: GlobalLogic is a leader in digital engineering, helping brands design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise, GlobalLogic helps clients accelerate their transition into tomorrow's digital businesses. Operating under Hitachi, Ltd., GlobalLogic contributes to driving innovation through data and technology for a sustainable society with a higher quality of life.
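As an illustrative sketch of the Iceberg-table work this role describes (not part of the listing), the snippet below loads a Glue-catalogued Iceberg table with the pyiceberg library and runs a quick consistency check before downstream engines such as Snowflake, BigQuery, or Athena read it; the catalog, database, and table names are hypothetical, and AWS credentials are assumed to be configured.

```python
from pyiceberg.catalog import load_catalog

# Hypothetical catalog and table names.
catalog = load_catalog("glue_catalog", **{"type": "glue"})
table = catalog.load_table("lakehouse_db.orders")

# Inspect the current snapshot before exposing the table to
# cross-platform readers.
snapshot = table.current_snapshot()
print(snapshot.snapshot_id, snapshot.summary)

# Pull a filtered slice into Arrow for a quick data-quality check.
arrow_table = table.scan(row_filter="order_date >= '2024-01-01'").to_arrow()
print("rows:", arrow_table.num_rows)
```

Because Iceberg keeps table state in snapshot metadata, each query engine sees the same committed snapshot, which is the property the listing's cross-platform synchronization work relies on.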

Posted 1 month ago


2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations. We are looking for a Data Engineer (AWS, Confluent & SnapLogic).

- **Data Integration:** Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching.
- **Data Processing:** Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading.
- **Data Storage:** Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval.
- **Data Transformation:** Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- **Data Products:** Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making.
- **Workflow Management:** Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing (see the sketch after this listing).
- **Real-time Data Streaming:** Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing.
- **ETL Processes:** Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading.
- **Monitoring and Logging:** Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.

You'd describe yourself as:

- **Experience:** 3+ relevant years of experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow.
- **Technical Skills:** Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging.
- **Programming Skills:** Proficiency in Python, SQL, and other relevant programming languages.
- **Data Modeling:** Experience with data modeling and database design.
- **Problem-Solving:** Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.

Preferred qualities:

- **Attention to Detail:** Meticulous attention to detail, ensuring data accuracy and quality.
- **Communication Skills:** Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
- **Adaptability:** Ability to adapt to changing technologies and work in a fast-paced environment.
- **Team Player:** Strong team player with a collaborative mindset.
- **Continuous Learning:** Eagerness to learn and stay updated with the latest trends and technologies in data engineering.
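For a flavor of the workflow-management duty above, here is a minimal, hypothetical Airflow DAG that triggers an AWS Glue job daily via the Amazon provider package; the DAG id, Glue job name, and region are invented for illustration, not taken from the listing.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# "load_raw_to_iceberg" is a hypothetical Glue job name.
with DAG(
    dag_id="daily_lake_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_to_iceberg = GlueJobOperator(
        task_id="load_to_iceberg",
        job_name="load_raw_to_iceberg",
        region_name="eu-central-1",
        wait_for_completion=True,
    )
```

In a real pipeline of the kind described, upstream SnapLogic or Kafka ingestion tasks would precede this Glue step and Splunk would monitor the run logs.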

Posted 2 months ago
