
8 Iceberg Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

0 - 0 Lacs

Chennai

Work from Office


Job Description for Senior Data Engineer at Fynxt

Experience Level: 8+ years
Job Title: Senior Data Engineer
Location: Chennai
Job Type: Full Time

Job Description: FYNXT is a Singapore-based software product development company that provides a Software-as-a-Service (SaaS) platform to digitally transform leading brokerage firms and fund management companies and help them grow their market share. Our industry-leading digital front-office platform has helped several leading financial institutions in the Forex industry go fully digital to optimize their operations, cut costs and become more profitable. For more, visit: www.fynxt.com

Key Responsibilities:
• Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures (Apache Iceberg, Delta Lake) to unify data lakes and warehouses.
• Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink to process structured/unstructured data with low latency.
• High-Performance Applications: Leverage Java to build scalable, high-throughput data applications and services.
• Modern Data Infrastructure: Leverage modern data warehouses and query engines (Trino, Spark) for sub-second operations and analytics on real-time data.
• Database Expertise: Work with RDBMS (PostgreSQL, MySQL, SQL Server) and NoSQL (Cassandra, MongoDB) systems to manage diverse data workloads.
• Data Governance: Ensure data integrity, security, and compliance across multi-tenant systems.
• Cost & Performance Optimization: Manage production infrastructure for reliability, scalability, and cost efficiency.
• Innovation: Stay ahead of trends in the data ecosystem (e.g., open table formats, stream processing) to drive technical excellence.
• API Development (Optional): Build and maintain Web APIs (REST/GraphQL) to expose data services internally and externally.

Qualifications:
• 8+ years of data engineering experience with large-scale (petabyte-level) systems.
• Expert proficiency in Java for data-intensive applications.
• Hands-on experience with lakehouse architectures, stream processing (Flink), and event streaming (Kafka/Pulsar).
• Strong SQL skills and familiarity with RDBMS/NoSQL databases.
• Proven track record in optimizing query engines (e.g., Spark, Presto) and data pipelines.
• Knowledge of data governance, security frameworks, and multi-tenant systems.
• Experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform).

What we offer:
• Unique experience in the fintech industry with a leading, fast-growing company.
• Good atmosphere at work and a comfortable working environment.
• Group Health Insurance with OPD coverage for self + family (spouse and up to 2 children).
• Attractive leave benefits: maternity and paternity benefit, vacation leave, and leave encashment.
• Reward & recognition: monthly, quarterly, half-yearly, and yearly.
• Loyalty benefits.
• Employee referral program.

Simptra Technologies Pvt. Ltd. | hr@fynxt.com | www.fynxt.com
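The streaming-pipeline work this role describes centers on low-latency aggregation over event streams. As an illustration only (not Fynxt's code, and with hypothetical names throughout), here is a minimal pure-Python sketch of a tumbling-window count — the basic pattern that engines like Flink implement at scale:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Group (timestamp, key) events into fixed, non-overlapping time
    windows and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to exactly one window, anchored at a
        # multiple of the window size.
        window_start = (ts // window_size_s) * window_size_s
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical trade ticks over ~15 seconds, aggregated in 10s windows.
events = [(0, "EURUSD"), (3, "EURUSD"), (7, "GBPUSD"), (12, "EURUSD")]
print(tumbling_window_counts(events, 10))
# {0: {'EURUSD': 2, 'GBPUSD': 1}, 10: {'EURUSD': 1}}
```

A real Flink job adds distribution, state backends, and watermark handling for late events, but the windowing logic is the same idea.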

Posted 2 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote


About the Role

The Search platform currently powers Rider and Driver Maps, Uber Eats, Groceries, Fulfilment, Freight, Customer Obsession, and many such products and systems across Uber. We are building a unified platform for all of Uber's search use cases on OpenSearch, and we already support in-house search infrastructure built on top of Apache Lucene. Our mission is to build a fully managed search platform while delivering a delightful user experience through low-code data and control APIs.

We are looking for an Engineering Manager with strong technical expertise to define a holistic vision and help build a highly scalable, reliable and secure platform for Uber's core business use cases. Come join our team to build search functionality at Uber scale for some of the most exciting areas in the marketplace economy today. The ideal candidate will work closely with a highly cross-functional team, including product management, engineering, tech strategy, and leadership, to drive our vision and build a strong team. A successful candidate will need to demonstrate strong technical skills in system architecture and design. Experience with open-source systems and distributed systems is a big plus for this role. The EM2 role will require building a team of software engineers while directly contributing on the technical side too.

What the Candidate Will Do:
• Provide technical leadership; influence and partner with fellow engineers to architect, design and build infrastructure that can stand the test of scale and availability, while reducing operational overhead.
• Lead, manage and grow a team of software engineers. Mentor and guide the professional and technical development of engineers on your team, and continuously improve software engineering practices.
• Own the craftsmanship, reliability, and scalability of your solutions.
• Encourage innovation, implementation of groundbreaking technologies, outside-of-the-box thinking, teamwork, and self-organization.
• Hire top-performing engineering talent and maintain our dedication to diversity and inclusion.
• Collaborate with platform, product and security engineering teams; enable successful use of infrastructure and foundational services; and manage upstream and downstream dependencies.

Basic Qualifications:
• Bachelor's degree (or higher) in Computer Science or a related field.
• 10+ years of software engineering industry experience.
• 8+ years of experience as an IC building large-scale distributed software systems.
• Outstanding technical skills in backend: Uber managers can lead from the front when the situation calls for it.
• 1+ years of frontline managing a diverse set of engineers.

Preferred Qualifications:
• Prior experience with search or big data systems (OpenSearch, Lucene, Pinot, Druid, Spark, Hive, HUDI, Iceberg, Presto, Flink, HDFS, YARN, etc.) preferred.

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.

Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .

Posted 2 weeks ago

Apply

3.0 - 5.0 years

50 - 60 Lacs

Bengaluru

Work from Office


Staff Data Engineer

Experience: 3 - 5 Years
Salary: INR 50-60 Lacs per annum
Preferred Notice Period: Within 30 Days
Shift: 4:00 PM to 1:00 AM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: ClickHouse, DuckDB, AWS, Python, SQL
Good-to-have skills: DBT, Iceberg, Kestra, Parquet, SQLGlot

Rill Data (one of Uplers' clients) is looking for a Staff Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview

Rill is the world's fastest BI tool, designed from the ground up for real-time databases like DuckDB and ClickHouse. Our platform combines last-mile ETL, an in-memory database, and interactive dashboards into a full-stack solution that's easy to deploy and manage. With a BI-as-code approach, Rill empowers developers to define and collaborate on metrics using SQL and YAML. Trusted by leading companies in e-commerce, digital marketing, and financial services, Rill provides the speed and scalability needed for operational analytics and partner-facing reporting.

Job Summary

Rill is looking for a Staff Data Engineer to join our Field Engineering team. In this role, you will work closely with enterprise customers to design and optimize high-performance data pipelines powered by DuckDB and ClickHouse. You will also collaborate with our platform engineering team to evolve our incremental ingestion architectures and support proof-of-concept sales engagements. The ideal candidate has strong SQL fluency, experience with orchestration frameworks (e.g., Kestra, dbt, SQLGlot), familiarity with data lake table formats (e.g., Iceberg, Parquet), and an understanding of cloud databases (e.g., Snowflake, BigQuery). Most importantly, you should have a passion for solving real-world data engineering challenges at scale.

Key Responsibilities:
• Collaborate with enterprise customers to optimize data models for performance and cost efficiency.
• Work with the platform engineering team to enhance and refine our incremental ingestion architectures.
• Partner with account executives and solution architects to rapidly prototype solutions for proof-of-concept sales engagements.

Qualifications (required):
• Fluency in SQL and competency in Python.
• Bachelor's degree in a STEM discipline or equivalent industry experience.
• 3+ years of experience in a data engineering or related role.
• Familiarity with major cloud environments (AWS, Google Cloud, Azure).

Benefits: Competitive salary, health insurance, flexible vacation policy.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Rill is an operational BI tool that provides fast dashboards that your team will actually use. Data teams build fewer, more flexible dashboards for business users, while business users make faster decisions and perform root-cause analysis with fewer ad hoc requests. Rill's unique architecture combines a last-mile ETL service, an in-memory database, and operational dashboards, all in a single solution. Our customers are leading media & advertising platforms, including Comcast's Freewheel, tvScientific, AT&T's DishTV, and more.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
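The "incremental ingestion architectures" this posting mentions usually revolve around a watermark: remember the newest record you have loaded, and on each run load only what is newer. As a hedged sketch (not Rill's implementation; table and column names are made up), using the stdlib's sqlite3 as a stand-in target:

```python
import sqlite3

def incremental_ingest(conn, rows, state):
    """Insert only rows newer than the stored watermark, then advance
    the watermark. Re-delivered rows are skipped, making runs idempotent."""
    high = state.get("watermark", 0)
    fresh = [r for r in rows if r[0] > high]
    conn.executemany("INSERT INTO events VALUES (?, ?)", fresh)
    if fresh:
        state["watermark"] = max(r[0] for r in fresh)
    return len(fresh)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts INTEGER, payload TEXT)")
state = {}
print(incremental_ingest(conn, [(1, "a"), (2, "b")], state))  # 2
# Second batch re-delivers ts=2 and adds ts=3: only ts=3 is ingested.
print(incremental_ingest(conn, [(2, "b"), (3, "c")], state))  # 1
```

Production systems persist the watermark durably and handle late-arriving data, but the skip-below-watermark idea is the core of the pattern.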

Posted 2 weeks ago

Apply

3 - 5 years

25 - 35 Lacs

Bengaluru

Remote


Data Engineer

Experience: 3 - 5 Years
Salary: Up to INR 35 Lacs per annum
Preferred Notice Period: Within 30 Days
Shift: 10:30 AM to 7:30 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Apache Airflow, Spark, AWS, Kafka, SQL
Good-to-have skills: Apache Hudi, Flink, Iceberg, Azure, GCP

Nomupay (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview:
• Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently.
• Utilize Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions.
• Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
• Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
• Enhance query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
• Manage cloud-based storage solutions (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
• Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Required Skills:
• Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
• Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
• Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
• Strong understanding of data warehousing concepts and data modeling principles.
• Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
• Proficiency in working with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
• Expertise in Git for version control and collaborative coding.
• Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.

About NomuPay: NomuPay is a newly established company that, through its subsidiaries, provides state-of-the-art unified payment solutions to help its clients accelerate growth in large, high-growth countries in Asia, Turkey, and the Middle East region. NomuPay is funded by Finch Capital, a leading European and Southeast Asian financial technology investor. NomuPay acquired Wirecard Turkey on Apr 21, 2021 for an undisclosed amount.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload an updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: At NomuPay, we're all about making global payments simple. Since 2021, we've been on a mission to remove complexity and help businesses expand without limits.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
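Behind the "distributed data processing and real-time streaming" requirement sits one small but important mechanism: stable hash partitioning, which Kafka and Spark both use to route every record with the same key to the same partition so per-key ordering holds. A minimal stdlib-only sketch (illustrative, with hypothetical keys; real systems use murmur-style hashes, but the routing idea is identical):

```python
import hashlib

def partition_for(key, num_partitions):
    """Stable hash partitioning: the same key always maps to the same
    partition, preserving per-key ordering across workers."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    # Take the first 4 bytes as an integer, then bucket by modulo.
    return int.from_bytes(digest[:4], "big") % num_partitions

keys = ["txn-eu", "txn-us", "txn-eu"]
parts = [partition_for(k, 4) for k in keys]
print(parts)  # both "txn-eu" events land on the same partition
```

Note that Python's built-in `hash()` is randomized per process, which is why a content-based hash is needed for placement that must be stable across machines and restarts.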

Posted 1 month ago

Apply

4 - 9 years

4 - 9 Lacs

Pune

Work from Office


TCS FACE-TO-FACE INTERVIEW (WALK-IN DRIVE): 8TH APRIL 2025, PUNE

Job Description:
• True hands-on developer in programming languages like Java or Scala.
• Expertise in Apache Spark.
• Database modelling and working with any SQL or NoSQL database is a must.
• Working knowledge of scripting languages like shell/Python.
• Experience working with Cloudera is preferred.
• Orchestration tools like Airflow or Oozie would be a value addition.
• Knowledge of table formats like Delta or Iceberg is a plus.
• Working experience with version control (Git) and build tools (Maven) is recommended.
• Software development experience alongside data engineering experience is good to have.

Posted 2 months ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office


We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
• Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
• Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
• Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
• Optimize PySpark jobs with performance tuning, partitioning, and caching strategies.
• Design and implement real-time and batch data processing solutions.
• Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
• Ensure data security, governance, and compliance with industry best practices.
• Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
• Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
• Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
• 6+ years of experience in big data processing, ETL, and data engineering.
• Strong hands-on experience with PySpark (Apache Spark with Python).
• Expertise in SQL, the DataFrame API, and RDD transformations.
• Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
• Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
• Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
• Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
• Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
• Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
• Excellent problem-solving, debugging, and performance-optimization skills.
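The "partitioning ... for performance tuning" skill listed above usually means laying data out so queries touch only relevant files, i.e. partition pruning over hive-style paths like `year=2024/month=02`. A stdlib-only sketch of the idea (illustrative; paths are hypothetical, and Spark does this pruning automatically from a filter predicate):

```python
def prune_partitions(paths, year, month):
    """Keep only files whose hive-style path segment matches the query
    predicate -- the essence of partition pruning."""
    wanted = f"year={year}/month={month:02d}/"
    return [p for p in paths if wanted in p]

paths = [
    "s3://lake/sales/year=2024/month=01/part-0.parquet",
    "s3://lake/sales/year=2024/month=02/part-0.parquet",
    "s3://lake/sales/year=2023/month=02/part-0.parquet",
]
print(prune_partitions(paths, 2024, 2))
# ['s3://lake/sales/year=2024/month=02/part-0.parquet']
```

Choosing the partition column well matters: partitioning by a high-cardinality column produces millions of tiny files, while a date column typically gives a small, prunable layout.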

Posted 3 months ago

Apply

9 - 14 years

7 - 11 Lacs

Pune

Work from Office


Data Engineering: Strong SQL, Apache Spark, Python, Hive, Iceberg, big data ecosystem, Starburst/Trino, Airflow, Kafka
Agile: CI/CD pipelines, JIRA
AWS services related to data and analytics for implementing data lakehouse solutions: S3, EC2, EMR, Glue Catalog, SQS, Elastic Kubernetes Service (EKS), Lambda functions, CloudWatch, IAM
CI/CD tools: JIRA/GitHub/Jenkins/Tekton/Harness
Knowledge of Terraform, CyberArk, and HashiCorp Vault is a plus.

Posted 3 months ago

Apply

4 - 9 years

6 - 15 Lacs

Bengaluru

Work from Office


Job Purpose and Impact

As a Data Engineer at Cargill, you work across the full stack to design, develop and operate high-performance, data-centric solutions using our comprehensive and modern data capabilities and platforms. You will play a critical role in enabling analytical insights and process efficiencies for Cargill's diverse and complex business environments. You will work in a small team that shares your passion for building innovative, resilient, and high-quality solutions while sharing, learning and growing together.

Key Accountabilities:
• Collaborate with business stakeholders, product owners and across your team on product or solution designs.
• Develop robust, scalable and sustainable data products or solutions utilizing cloud-based technologies.
• Provide moderately complex technical support through all phases of the product or solution life cycle.
• Perform data analysis, handle data modeling, and configure and develop data pipelines to move and optimize data assets.
• Build moderately complex prototypes to test new concepts; provide ideas on reusable frameworks, components and data products or solutions; and help promote adoption of new technologies.
• Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
• Other duties as assigned.

Qualifications

MINIMUM QUALIFICATIONS:
• Bachelor's degree in a related field or equivalent experience.
• Minimum of two years of related work experience.
• Other minimum qualifications may apply.

PREFERRED QUALIFICATIONS:
• Experience developing modern data architectures, including data warehouses, data lakes, data meshes, hubs and associated capabilities including ingestion, governance, modeling, observability and more.
• Experience with data collection and ingestion capabilities, including AWS Glue, Kafka Connect and others.
• Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging, with tools such as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu or others.
• Experience with transformation and modeling tools, including SQL-based transformation frameworks, orchestration and quality frameworks, including dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie and others.
• Experience working in big data environments including tools such as Hadoop and Spark.
• Experience working in cloud platforms including AWS, GCP or Azure.
• Experience with streaming and stream integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis.
• Strong programming knowledge of SQL, Python, R, Java, Scala or equivalent.
• Proficiency in engineering tooling including Docker, Git, and container orchestration services.
• Strong experience working in DevOps models with a demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies.
• Experience and knowledge of data governance considerations, including quality, privacy, and security implications for data product development and consumption.
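The quality frameworks named in this posting (e.g., Great Expectations) boil down to declarative checks that report how much of a dataset violates an expectation rather than failing on the first bad row. A hedged stdlib-only sketch of that style of check (not the Great Expectations API; function and field names are made up):

```python
def check_range_expectation(rows, column, min_value, max_value):
    """Expectation-style range check: report whether all values in
    `column` fall within [min_value, max_value], and how many do not."""
    bad = [r for r in rows if not (min_value <= r[column] <= max_value)]
    return {"success": not bad, "unexpected_count": len(bad)}

rows = [{"qty": 5}, {"qty": 120}, {"qty": 7}]
print(check_range_expectation(rows, "qty", 0, 100))
# {'success': False, 'unexpected_count': 1}
```

Returning a result object instead of raising lets a pipeline log, quarantine, or alert on partial failures — the behavior governance tooling typically needs.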

Posted 3 months ago

Apply