121 Avro Jobs - Page 4

JobPe aggregates results for easy access, but you apply directly on the job portal itself.

5.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. It was named Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT: Innovative thin...

Posted 3 months ago


2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

You should have 5+ years of experience in core Java and the Spring Framework. Additionally, you must have at least 2 years of experience in cloud technologies such as GCP, AWS, or Azure, with a preference for GCP. Experience in big data processing on a distributed system is required, as is experience working with databases, including RDBMS, NoSQL, and cloud-native databases. You should also have expertise in handling various data formats such as flat files, JSON, Avro, and XML, including defining schemas and contracts. Furthermore, you should have experience in implementing data pipelines (ETL) using Dataflow (Apache Beam) and in working with microservices and integration patterns of APIs with...
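"Defining schemas and contracts" for formats like JSON and Avro can be sketched with a small, hand-rolled validator. This is an illustrative toy, not the API of any real Avro library; the `ORDER_SCHEMA` record, its field names, and the `validate()` helper are all assumptions made for the example.

```python
import json

# Avro-style record schema, written as a plain dict for illustration.
ORDER_SCHEMA = {
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "items", "type": "int"},
    ],
}

# Map schema type names onto the Python types they should deserialize to.
_PYTHON_TYPES = {"string": str, "double": float, "int": int}

def validate(record: dict, schema: dict) -> list:
    """Return a list of contract violations (empty list means valid)."""
    errors = []
    for field in schema["fields"]:
        name, expected = field["name"], _PYTHON_TYPES[field["type"]]
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], expected):
            errors.append(f"{name}: expected {field['type']}")
    return errors

good = json.loads('{"order_id": "A1", "amount": 9.5, "items": 2}')
bad = json.loads('{"order_id": "A2", "amount": "9.5"}')
print(validate(good, ORDER_SCHEMA))  # no violations
print(validate(bad, ORDER_SCHEMA))   # wrong type for amount, items missing
```

In production this role of contract enforcement would typically fall to the Avro library itself (or a schema registry), with the schema shared between producer and consumer.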

Posted 3 months ago


2.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

Tiger Analytics is a global AI and analytics consulting firm with a team of over 2800 professionals focused on using data and technology to solve complex problems that impact millions of lives worldwide. Our culture is centered around expertise, respect, and a team-first mindset. Headquartered in Silicon Valley, we have delivery centers globally and offices in various cities across India, the US, UK, Canada, and Singapore, along with a significant remote workforce. At Tiger Analytics, we are certified as a Great Place to Work. Joining our team means being at the forefront of the AI revolution, working with innovative teams that push boundaries and create inspiring solutions. We are currently...

Posted 3 months ago


5.0 - 10.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Summary: We are seeking a highly skilled and experienced Snowflake Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the administration, management, and optimization of our Snowflake data platform. The role requires strong expertise in database design, performance tuning, security, and data governance within the Snowflake environment. Key Responsibilities: Administer and manage Snowflake cloud data warehouse environments, including provisioning, configuration, monitoring, and maintenance. Implement security policies, compliance, and access controls. Manage Snowflake accounts and databases in a multi-tenant environment. Monitor the systems and provide ...

Posted 3 months ago


5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Platform Engineer Lead at Barclays, your role is crucial in building and maintaining systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes. Your responsibility includes ensuring the accuracy, accessibility, and security of all data. To excel in this role, you should have hands-on coding experience in Java or Python and a strong understanding of AWS development, encompassing various services such as Lambda, Glue, Step Functions, IAM roles, and more. Proficiency in building efficient data pipelines using Apache Spark and AWS services is essential. You are expected to possess strong technical acumen, troubleshoot complex syst...

Posted 3 months ago


2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives. Responsibilities: Develop and maintain data pipelines implementing ETL processes. Take responsibility for Hadoop development and implementation. Work closely with a data science team implementing data analytic pipelines. Help define data governance policies and support data versioning processes. Maintain security and data privacy, working closely with the Data Protection Officer internally. Analyse a vast number of data stores and uncover insights. Skillset Required: Ability...

Posted 3 months ago


6.0 - 10.0 years

0 Lacs

chennai, tamil nadu

On-site

As an organization with over 26 years of experience in delivering Software Product Development, Quality Engineering, and Digital Transformation Consulting Services to Global SMEs & Large Enterprises, CES has established long-term relationships with leading Fortune 500 Companies across various industries such as Automotive, AgTech, Bio Science, EdTech, FinTech, Manufacturing, Online Retailers, and Investment Banks. These relationships, spanning over a decade, are built on our commitment to timely delivery of quality services, investments in technology innovations, and fostering a true partnership mindset with our customers. In our current phase of exponential growth, we maintain a consistent ...

Posted 3 months ago


4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Engineer at our organization, you will have the opportunity to work on building smart, automated testing solutions. We are seeking individuals who are passionate about data engineering and eager to contribute to our growing team. Ideally, you should hold a Bachelor's or Master's degree in Computer Science, IT, or equivalent field, with a minimum of 4 to 8 years of experience in building and deploying complex data pipelines and data solutions. For junior profiles, a similar educational background is preferred. Your responsibilities will include deploying data pipelines using technologies like Databricks, as well as demonstrating hands-on experience with Java and Databricks. Addition...

Posted 3 months ago


3.0 - 8.0 years

11 - 16 Lacs

Noida, Hyderabad, Ahmedabad

Work from Office

About the Role: Grade Level (for internal use): 11. The Team: As a member of the Data Transformation team, you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged toward thoughtful risk-taking and self-initiative. The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3...

Posted 3 months ago


5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer. Experience: 5-10 Years. Location: Bangalore. Technical Skills: 5+ years of experience as an AWS Data Engineer with AWS S3, Glue Catalog, Glue Crawler, Glue ETL, and Athena. Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3. Execute Glue crawlers to catalog S3 files, creating a catalog of S3 files for easier querying. Create SQL queries in Athena. Define data lifecycle management for S3 files. Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio. Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3. Proficiency...
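The Parquet-conversion step described above can also be expressed as an Athena CTAS (CREATE TABLE AS SELECT) statement. The sketch below only builds the SQL string; the database, table, and bucket names are hypothetical, and in a real job the string would be submitted to Athena (for example via boto3's `start_query_execution`) rather than printed.

```python
def parquet_ctas(source_db: str, source_table: str, target_table: str,
                 s3_location: str) -> str:
    """Build a CTAS statement that rewrites a cataloged table as Parquet.

    All identifiers are caller-supplied; this helper is an illustration of
    the pattern, not a library API.
    """
    return (
        f"CREATE TABLE {target_table} "
        f"WITH (format = 'PARQUET', external_location = '{s3_location}') "
        f"AS SELECT * FROM {source_db}.{source_table}"
    )

# Hypothetical names for a crawled RDS table landing in a curated S3 prefix.
sql = parquet_ctas("rds_catalog", "orders", "orders_parquet",
                   "s3://example-bucket/curated/orders/")
print(sql)
```

CTAS keeps the conversion inside Athena itself; a Glue ETL job with PySpark is the alternative when transformations go beyond what a single SELECT can express.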

Posted 3 months ago


6.0 - 10.0 years

0 Lacs

maharashtra

On-site

NTT DATA is looking for a Data Ingest Engineer to join the team in Pune, Maharashtra (IN-MH), India (IN). As a Data Ingest Engineer, you will be part of the Ingestion team of the DRIFT data ecosystem, focusing on ingesting data in a timely, complete, and comprehensive manner using the latest technology available to Citi. Your role will involve leveraging new and creative methods for repeatable data ingestion from various sources while ensuring the highest quality data is provided to downstream partners. Responsibilities include partnering with management teams to integrate functions effectively, identifying necessary system enhancements for new products and process improvements, and resolving ...

Posted 3 months ago


2.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

Tiger Analytics is a global AI and analytics consulting firm that is at the forefront of solving complex problems using data and technology. With a team of over 2800 experts spread across the globe, we are dedicated to making a positive impact on the lives of millions worldwide. Our culture is built on expertise, respect, and collaboration, with a focus on teamwork. While our headquarters are in Silicon Valley, we have delivery centers and offices in various cities in India, the US, UK, Canada, and Singapore, as well as a significant remote workforce. As an Azure Big Data Engineer at Tiger Analytics, you will be part of a dynamic team that is driving an AI revolution. Your typical day will i...

Posted 3 months ago


0.0 - 5.0 years

4 - 9 Lacs

Chennai

Remote

Coordinating with development teams to determine application requirements. Writing scalable code using the Python programming language. Testing and debugging applications. Developing back-end components. Required candidate profile: Knowledge of Python and related frameworks, including Django and Flask. A deep understanding of multi-process architecture and the threading limitations of Python. Perks and benefits: Flexible work arrangements.
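The "threading limitations of Python" point refers to CPython's Global Interpreter Lock (GIL): threads produce correct results for CPU-bound work but do not execute it in parallel, which is why multi-process architectures are preferred for such workloads. A minimal sketch (the prime-counting workload and chunk sizes are arbitrary illustrations):

```python
import threading

def count_primes(limit: int) -> int:
    """CPU-bound work: count primes below `limit` by trial division."""
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

chunks = [5000, 5000]
results = [0] * len(chunks)

def worker(i: int) -> None:
    results[i] = count_primes(chunks[i])

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(len(chunks))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The totals match a sequential run, but under the GIL the CPU work was
# serialized; a multiprocessing.Pool or ProcessPoolExecutor would be the
# route to real parallel speedup for this kind of workload.
assert results == [count_primes(c) for c in chunks]
print(results)
```

Threads remain the right tool for I/O-bound concurrency (network calls, file waits), where the GIL is released while blocking.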

Posted 3 months ago


8.0 - 12.0 years

0 Lacs

pune, maharashtra

On-site

As an online travel booking platform, Agoda is committed to connecting travelers with a vast network of accommodations, flights, and more. With cutting-edge technology and a global presence, Agoda strives to enhance the travel experience for customers worldwide. As part of Booking Holdings and headquartered in Asia, Agoda boasts a diverse team of over 7,100 employees from 95+ nationalities across 27 markets. The work environment at Agoda is characterized by diversity, creativity, and collaboration, fostering innovation through a culture of experimentation and ownership. The core purpose of Agoda is to bridge the world through travel, believing that travel enriches lives, facilitates learning...

Posted 3 months ago


7.0 - 11.0 years

0 Lacs

haryana

On-site

About Prospecta Founded in 2002 in Sydney, Australia, with additional offices in India, North America, Canada, and a local presence in Europe, the UK, and Southeast Asia, Prospecta is dedicated to providing top-tier data management and automation software for enterprise clients. Our journey began with a mission to offer innovative solutions, leading us to become a prominent data management software company over the years. Our flagship product, MDO (Master Data Online), is an enterprise Master Data Management (MDM) platform designed to streamline data management processes, ensuring accurate, compliant, and relevant master data creation, as well as efficient data disposal. With a strong presen...

Posted 3 months ago


4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are an experienced Senior QA Specialist being sought to join a dynamic team for a critical AWS to GCP migration project. Your primary responsibility will involve the rigorous testing of data pipelines and data integrity in GCP cloud to ensure seamless reporting and analytics capabilities. Your key responsibilities will include designing and executing test plans to validate data pipelines re-engineered from AWS to GCP, ensuring data integrity and accuracy. You will work closely with data engineering teams to understand AVRO, ORC, and Parquet file structures in AWS S3, and analyze the data in external tables created in Athena used for reporting. It will be essential to ensure that schema a...
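One validation from the test plan above, checking column-level schema parity between a source table (such as an Athena external table over S3) and its re-engineered GCP counterpart, can be sketched as a plain dict comparison. The column names and types here are hypothetical; a real check would fetch schemas from each catalog's API.

```python
def schema_diff(aws_schema: dict, gcp_schema: dict) -> list:
    """Return human-readable parity violations between two schemas,
    each given as a mapping of column name -> type name."""
    issues = []
    for col, col_type in aws_schema.items():
        if col not in gcp_schema:
            issues.append(f"missing in GCP: {col}")
        elif gcp_schema[col] != col_type:
            issues.append(f"type drift on {col}: {col_type} -> {gcp_schema[col]}")
    # Columns that appeared only on the GCP side are also a parity failure.
    for col in gcp_schema.keys() - aws_schema.keys():
        issues.append(f"unexpected in GCP: {col}")
    return sorted(issues)

aws = {"id": "bigint", "ts": "timestamp", "payload": "string"}
gcp = {"id": "bigint", "ts": "string", "extra": "string"}
print(schema_diff(aws, gcp))  # one missing, one drifted, one unexpected
```

Schema parity is the cheap first gate; row counts and value-level reconciliation queries would follow it in a full migration test plan.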

Posted 3 months ago


5.0 - 10.0 years

16 - 31 Lacs

Pune

Hybrid

Software Engineer - Lead/Senior Engineer. Bachelor's in Computer Science, Engineering, or equivalent experience. 7+ years of experience in core Java and the Spring Framework (required). 2 years of cloud experience (GCP, AWS, or Azure; GCP preferred) (required). Experience in big data processing on a distributed system (required). Experience with databases: RDBMS, NoSQL, and cloud-native databases (required). Experience in handling various data formats such as flat files, JSON, Avro, and XML, including defining the schemas and the contracts (required). Experience in implementing data pipelines (ETL) using Dataflow (Apache Beam). Experience with microservices and integration patterns of APIs with data processing. Experience...

Posted 3 months ago


4.0 - 9.0 years

10 - 14 Lacs

Pune

Work from Office

Job Title: Strategic Data Archive Onboarding Engineer, AS. Location: Pune, India. Role Description: Strategic Data Archive is an internal service which enables applications to implement records management for regulatory requirements, application decommissioning, and application optimization. You will work closely with other teams, providing hands-on onboarding support by helping them define record content and metadata, configuring archiving, supporting testing, and creating defensible documentation that archiving was complete. You will need to both support and manage the expectations of demanding internal clients. What we'll offer you: 100% reimbursement under the childcare assistance benefit (gender ne...

Posted 3 months ago


5.0 - 8.0 years

22 - 32 Lacs

Bengaluru

Work from Office

Work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components. Develop new application features and enhance existing ones. Develop relevant documentation and diagrams. Required candidate profile: Minimum 5+ years of experience in Python development, with a focus on data-intensive applications. Experience with Apache Spark and PySpark for large-scale data processing. Understanding of SQL and experience working with relational databases.

Posted 4 months ago


8.0 - 12.0 years

22 - 27 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 12. The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You'll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What's in it for you: Be part of a global company and build solutions at enterpri...

Posted 4 months ago


8.0 - 11.0 years

45 - 50 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate, We are hiring a Scala Developer to work on scalable data pipelines, distributed systems, and backend services. This role is perfect for candidates passionate about functional programming and big data. Key Responsibilities: Develop data-intensive applications using Scala . Work with frameworks like Akka, Play, or Spark . Design and maintain scalable microservices and ETL jobs. Collaborate with data engineers and platform teams. Write clean, testable, and well-documented code. Required Skills & Qualifications: Strong in Scala, Functional Programming, and JVM internals Experience with Apache Spark, Kafka, or Cassandra Familiar with SBT, Cats, or Scalaz Knowledge of CI/CD, Docker...

Posted 4 months ago


4.0 - 9.0 years

6 - 16 Lacs

Hyderabad

Work from Office

Notice Period: Immediate to 15 days. Skills: Java 8/11, Spring Boot, Microservices, JUnit/Mockito, Karate/Gherkin, MySQL, Kafka/Avro, Git, Pivotal Cloud Foundry, Jenkins.

Posted 4 months ago


5.0 - 7.0 years

1 - 1 Lacs

Lucknow

Hybrid

Technical Experience: 5-7 years of hands-on experience in data pipeline development and ETL processes. 3+ years of deep AWS experience, specifically with Kinesis, Glue, Lambda, S3, and Step Functions. Strong proficiency in NodeJS/JavaScript and Java for serverless and containerized applications. Production experience with Apache Spark, Apache Flink, or similar big data processing frameworks. Data Engineering Expertise: Proven experience with real-time streaming architectures and event-driven systems. Hands-on experience with Parquet, Avro, Delta Lake, and columnar storage optimization. Experience implementing data quality frameworks such as Great Expectations or similar tools. Knowledge of star sche...
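The data quality frameworks mentioned above work by declaring expectations against a batch of records and reporting which ones fail. The sketch below is a toy analogue of that idea in plain Python, not the actual Great Expectations API; the batch, column names, and rules are all invented for illustration.

```python
def expect(records, column, predicate, description):
    """Evaluate one declared expectation against a batch of records and
    return a small result dict, in the spirit of a data quality framework."""
    failures = [r for r in records if not predicate(r.get(column))]
    return {"expectation": description,
            "passed": not failures,
            "failing_rows": len(failures)}

# Hypothetical micro-batch of event records.
batch = [
    {"event_id": 1, "latency_ms": 120},
    {"event_id": 2, "latency_ms": 95},
    {"event_id": 3, "latency_ms": None},
]

report = [
    expect(batch, "event_id", lambda v: v is not None,
           "event_id is not null"),
    expect(batch, "latency_ms", lambda v: v is not None and v < 1000,
           "latency_ms is present and under 1000"),
]
for row in report:
    print(row)
```

Real frameworks add the pieces this sketch omits: persisted expectation suites, profiling, data docs, and integration points inside pipeline steps so a failing batch can halt or quarantine downstream loads.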

Posted 4 months ago


7.0 - 12.0 years

18 - 33 Lacs

Hyderabad, Bengaluru

Hybrid

Bachelor's in Computer Science, Engineering, or equivalent experience. 7+ years of experience in core Java and the Spring Framework (required). 2 years of cloud experience (GCP, AWS, or Azure; GCP preferred) (required). Experience in big data processing on a distributed system (required). Experience with databases: RDBMS, NoSQL, and cloud-native databases (required). Experience in handling various data formats such as flat files, JSON, Avro, and XML, including defining the schemas and the contracts (required). Experience in implementing data pipelines (ETL) using Dataflow (Apache Beam). Experience with microservices and integration patterns of APIs with data processing. Experience in data structure, defining and desi...

Posted 4 months ago


8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

About the Position: Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost effective. Key Responsibilities: Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency. Mentor junior data engineers within the organization. Design, develop, and maintain data...

Posted 4 months ago
