25 Schema Registry Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

12.0 - 16.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Software Architect at our organization, you will serve as a key leader in the architecture team. Your main responsibility will be to define and evolve the architectural blueprint for complex distributed systems built using Java, Spring Boot, Apache Kafka, and cloud-native technologies. Key responsibilities include:
- Own and evolve the overall system architecture for Java-based microservices and data-intensive applications.
- Define and enforce architecture best practices, including clean code principles, DDD, event-driven design, and cloud-native patterns.
- Lead technical design sessions, architecture reviews, and design walkt...

Posted 13 hours ago

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities:
- Design and implement real-time streaming solutions using Kafka, Confluent Platform, and Confluent Cloud
- Develop stream processing applications using Apache Flink, ksqlDB, and Kafka Streams
- Build data governance frameworks using Confluent's Stream Governance suite
- Implement data lineage tracking and stream catalog management
- Deploy and manage KRaft-based clusters (ZooKeeper-free Kafka architecture)
- Configure client-side field-level encryption for sensitive data protection
- Integrate Kafka with enterprise systems (databases, APIs, microservices)
- Manage Kafka clusters, topics, schemas, and connectors
- Optimize performance, scalability, and reliability of Kafka solutions
- Configu...
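
By way of illustration only, here is a minimal sketch of the kind of Kafka Streams application this listing describes; the topic names (orders, orders-enriched), the broker address, and the filtering/enrichment logic are hypothetical, not taken from the posting.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class OrderStreamApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher");    // app id / state prefix
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> orders = builder.stream("orders");            // hypothetical input topic
            orders.filter((key, value) -> value != null && !value.isEmpty())      // drop empty payloads
                  .mapValues(String::toUpperCase)                                 // stand-in for real enrichment
                  .to("orders-enriched");                                         // hypothetical output topic

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));     // clean shutdown
        }
    }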

Posted 1 week ago

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: We are seeking a skilled Kafka Integration Engineer with hands-on experience in Confluent Kafka to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining scalable real-time data pipelines and integrations using Kafka and Confluent components. Key Responsibilities:
- Design and implement real-time streaming solutions using Kafka, Confluent Platform, and Confluent Cloud
- Develop stream processing applications using Apache Flink, ksqlDB, and Kafka Streams
- Build data governance frameworks using Confluent's Stream Governance suite
- Implement data lineage tracking and stream catalog management
- Deploy and manage KRaft-based clusters Z...

Posted 1 week ago

9.0 - 14.0 years

25 - 37 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & Responsibilities:
- Build producers, consumers, and stream-processing applications.
- Develop Kafka Connect pipelines to integrate with external data systems (databases, cloud storage, APIs).
- Implement schema evolution strategies with Confluent Schema Registry (Avro, Protobuf, JSON).
Operations & Monitoring:
- Manage topic creation, partitions, replication, retention policies.
- Monitor cluster health, throughput, and latency using tools like Confluent Control Center, Prometheus, Grafana, etc.
- Troubleshoot performance bottlenecks, consumer lag, and broker issues.
Security & Governance:
- Implement authentication and authorization (SASL, Kerberos, SSL, RBAC).
- Enforce data governance, compliance, an...
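
To make the Schema Registry point concrete, below is a minimal sketch of a Java producer serializing Avro records through Confluent Schema Registry; the payments topic, the Payment schema, and the registry/broker URLs are illustrative assumptions rather than details from the posting.

    import java.util.Properties;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import io.confluent.kafka.serializers.KafkaAvroSerializer;

    public class AvroPaymentProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
            props.put("schema.registry.url", "http://localhost:8081");                 // placeholder registry

            // Hypothetical Avro schema; in practice it is registered and evolved via the registry.
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Payment\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"amount\",\"type\":\"double\"}]}");
            GenericRecord payment = new GenericData.Record(schema);
            payment.put("id", "pay-1001");
            payment.put("amount", 42.50);

            try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("payments", "pay-1001", payment));   // hypothetical topic
                producer.flush();
            }
        }
    }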

Posted 2 weeks ago

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities:
- Deploy, configure, and manage Kafka clusters across development, staging, and production environments.
- Ensure platform reliability, scalability, and performance through proactive monitoring, tuning, and capacity planning.
- Manage and maintain Kafka components such as Kafka Connect, Schema Registry, and REST Proxy.
- Implement platform-level security including access controls, TLS encryption, and audit logging.
- Monitor platform health using observability tools and respond to alerts and incidents.
- Collaborate with application and data engineering teams to support integration and usage of Kafka.
- Maintain documentation for deployment procedures, operational runbooks, and platform...
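
As one concrete flavour of the monitoring work above, this sketch uses the Kafka AdminClient to estimate consumer lag for a group; the group id (orders-service) and broker address are assumed purely for illustration.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListOffsetsResult;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class ConsumerLagCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
            try (Admin admin = Admin.create(props)) {
                // Committed offsets for a hypothetical consumer group.
                Map<TopicPartition, OffsetAndMetadata> committed = admin
                    .listConsumerGroupOffsets("orders-service")
                    .partitionsToOffsetAndMetadata().get();

                // Latest (end) offsets for the same partitions.
                Map<TopicPartition, OffsetSpec> request = new HashMap<>();
                committed.keySet().forEach(tp -> request.put(tp, OffsetSpec.latest()));
                Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(request).all().get();

                // Lag = log end offset minus committed offset, per partition.
                committed.forEach((tp, meta) ->
                    System.out.printf("%s lag=%d%n", tp, latest.get(tp).offset() - meta.offset()));
            }
        }
    }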

Posted 2 weeks ago

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: We are seeking a Platform Engineer with 3-4 years of experience working on Kafka in production environments. This role focuses on the deployment, configuration, monitoring, and lifecycle management of Kafka-based data platforms, ensuring they are secure, scalable, and highly available. The ideal candidate will have hands-on experience across various components of Kafka, including Kafka Connect, Schema Registry, and REST Proxy. Key Responsibilities:
- Deploy, configure, and manage Kafka clusters across development, staging, and production environments
- Ensure platform reliability, scalability, and performance through proactive monitoring, tuning, and capacity planning
- Manage and maintain Kafka comp...

Posted 2 weeks ago

3.0 - 5.0 years

12 - 15 Lacs

Mumbai, Navi Mumbai

Work from Office

We’re seeking a skilled Confluent Kafka Administrator/Engineer with 3+ years of experience managing Kafka clusters, ensuring high availability, implementing security & automation, supporting cloud-native data integration and production environments.

Posted 3 weeks ago

4.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a GCP Data Engineer, your role will involve the following key responsibilities and qualifications: Role Overview:
- You should have experience in Java and/or Kotlin
- Hands-on experience with GCP is required
- Proficiency in PostgreSQL for database management and Dataflow for data processing
- Familiarity with GraphQL for API development
- Experience with GKE, Kubernetes, and Docker for managing runtime environments
- Knowledge of Confluent Kafka and Schema Registry
- Preferably, experience with Apache Beam
- Previous exposure to Data Engineering and the Retail industry
- Ability to work on data pipelines, processing, design, and development
- A DevOps mindset would be considered a plu...

Posted 3 weeks ago

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Role Overview: As a candidate for this role, you will be responsible for setting up, maintaining, and upgrading the Confluent Kafka platform. Your hands-on experience with Kafka Brokers, Schema Registry, ksqlDB, and Kafka connectors will be essential to understanding their underlying implementations and functions. You should have a proficient understanding of Kafka Producer and Consumer client behavior, along with experience in deploying Kafka on Azure Kubernetes Service and working in the Azure cloud environment. Key Responsibilities:
- Design and implement technical solutions for the Kafka on-premises environment, with experience in Confluent Cloud being a plus.
- Monitor environment a...

Posted 3 weeks ago

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As a Kafka Admin with 4 to 6 years of relevant experience, your main responsibilities will include:
- Performing Kafka Cluster builds, which involves designing, infrastructure planning, and ensuring High Availability and Disaster Recovery.
- Implementing wire encryption using SSL, authentication using SASL/LDAP, and authorization using Kafka ACLs across components such as ZooKeeper, Broker/Client, Connect cluster/connectors, Schema Registry, REST API, Producers/Consumers, and ksqlDB.
- Carrying out high-level, day-to-day administration and support functions.
- Managing upgrades for the Kafka Cluster landscape across Development, Test, Staging, and Production/DR systems.
- Creating key performa...
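
As a rough illustration of the wire-encryption and authentication duties above, here is a sketch of client-side properties for a SASL_SSL listener; the hostname, truststore path, credentials, and the choice of SCRAM as the SASL mechanism are placeholders, and a Kerberos- or LDAP-backed setup would differ.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.config.SslConfigs;

    public class SecureClientConfig {
        public static Properties build() {
            Properties props = new Properties();
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker-1.example.com:9093"); // placeholder
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");                  // TLS + SASL auth
            props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/secrets/client.truststore.jks");
            props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");                     // placeholder secret
            props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");                               // one common choice
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"app-user\" password=\"app-secret\";");                              // placeholders
            return props;
        }

        public static void main(String[] args) {
            build().forEach((k, v) -> System.out.println(k + "=" + v));                           // quick inspection
        }
    }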

Posted 3 weeks ago

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a highly motivated and skilled Kafka Developer, your role will involve designing, developing, and maintaining robust and scalable applications that interact with the Apache Kafka streaming platform. You will be required to have a strong understanding of Kafka's core concepts, client APIs, and best practices for building reliable data pipelines and stream-processing solutions. Key Responsibilities:
- Design, develop, and maintain high-performance and fault-tolerant Kafka producers and consumers using Java, Scala, Python, Go, or other relevant programming languages.
- Implement data streaming solutions, including real-time data ingestion, processing, and distribution.
- Develop and opti...
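
To illustrate the consumer side of such a role, here is a minimal Java poll loop with manual offset commits; the topic name, group id, and broker address are invented for the example. Disabling auto-commit and calling commitSync() only after processing gives at-least-once semantics.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class EventConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-processor");           // hypothetical group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");           // commit only after processing

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("events"));                              // hypothetical topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> rec : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                            rec.partition(), rec.offset(), rec.value());
                    }
                    consumer.commitSync();                                          // at-least-once processing
                }
            }
        }
    }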

Posted 1 month ago

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

Role Overview: As a Kafka Platform Engineer at Sarvaha, you will be responsible for deploying and managing scalable Kafka clusters on Kubernetes platforms. You will work with cutting-edge technologies like Strimzi, Helm, Terraform, and StatefulSets to ensure the performance, reliability, and cost-efficiency of Kafka infrastructure. Your role will involve implementing Kafka security measures, automating deployments across cloud platforms, setting up monitoring and alerting systems, and integrating various Kafka ecosystem components. Additionally, you will be defining autoscaling strategies, resource limits, and network policies for Kubernetes workloads while maintaining CI/CD pipelines and co...

Posted 1 month ago

5.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

Role Overview: As a Senior Production Support Lead at Deutsche Bank in Pune, India, you will take full ownership of L1 and L2 support operations across strategic enterprise platforms including API Management Platform, Kafka-as-a-Service (KaaS), and Data Platform. Your role as a senior leader will involve ensuring 24X7 operational excellence, efficient incident resolution, and strong collaboration with engineering and business stakeholders. You will be responsible for driving process optimization and operational reliability, focusing on leadership and shaping the future support model with best-in-class practices such as automation, shift-left, and service reliability engineering. Key Responsi...

Posted 1 month ago

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Java Backend Developer working with Kafka, you will be responsible for demonstrating strong proficiency in Core Java, Spring Boot, and Microservices architecture. Your role will involve hands-on experience with Apache Kafka, including Kafka Streams, Connect, Schema Registry, and Confluent. You will also work with REST APIs, JSON, and event-driven systems. In this position, you should have knowledge of SQL databases such as MySQL and PostgreSQL, as well as NoSQL databases like MongoDB, Cassandra, and Redis. Familiarity with Docker, Kubernetes, and CI/CD pipelines is essential for success in this role. Experience in multi-threading, concurrency, and distributed systems will be beneficial. An understandin...
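
Since the role pairs Schema Registry with event-driven consumption, the following sketch shows consumer settings that deserialize Avro values via Confluent's KafkaAvroDeserializer; the registry URL, group id, and topic are assumed values, not taken from the posting.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import io.confluent.kafka.serializers.KafkaAvroDeserializer;

    public class AvroEventConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "backend-service");              // hypothetical group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
            props.put("schema.registry.url", "http://localhost:8081");                 // placeholder registry

            try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("payments"));                               // hypothetical topic
                consumer.poll(Duration.ofSeconds(1))
                        .forEach(rec -> System.out.println(rec.value()));              // GenericRecord payload
            }
        }
    }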

Posted 1 month ago

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities:
- Deploy and upgrade production-grade Kafka clusters, ensuring high availability and fault tolerance.
- Automate the installation and configuration of Kafka clusters using tools such as Ansible and Chef.
- Troubleshoot critical production issues, optimizing Kafka performance and minimizing downtime.
- Configure MirrorMaker for data replication between two data centers, ensuring effective disaster recovery.
- Deploy and manage Provectus Kafka UI on a Kubernetes cluster for real-time monitoring and management.
- Integrate Schema Registry and deploy multiple Kafka Connectors to facilitate seamless data flow.
- Onboard new users by managing ACL permissions, creating topics, and implementing e...
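
A rough sketch of the onboarding tasks mentioned above (creating a topic and granting read access to a new principal) using the Kafka AdminClient; the topic name, principal, partition/replication settings, and retention are invented for the example. A consuming application would typically also need a GROUP-level READ ACL for its consumer group.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;

    public class OnboardTeam {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
            try (Admin admin = Admin.create(props)) {
                // Hypothetical topic: 6 partitions, replication factor 3, 7-day retention.
                NewTopic topic = new NewTopic("team-a.events", 6, (short) 3)
                    .configs(Map.of("retention.ms", "604800000"));
                admin.createTopics(List.of(topic)).all().get();

                // Allow a hypothetical principal to read the new topic.
                AclBinding readAcl = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "team-a.events", PatternType.LITERAL),
                    new AccessControlEntry("User:team-a-app", "*", AclOperation.READ, AclPermissionType.ALLOW));
                admin.createAcls(List.of(readAcl)).all().get();
            }
        }
    }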

Posted 1 month ago

12.0 - 16.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a key leader in the architecture team, you will define and evolve the architectural blueprint for complex distributed systems built using Java, Spring Boot, Apache Kafka, and cloud-native technologies. You will ensure that system designs align with enterprise architecture principles, business objectives, and performance/scalability requirements. Collaborating closely with engineering leads, DevOps, data engineering, product managers, and customer-facing teams, you will drive architectural decisions, mentor technical teams, and foster a culture of technical excellence and innovation. Your key responsibilities will include owning and evolving the overall system architecture for Java-based m...

Posted 2 months ago

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be responsible for designing, implementing, and maintaining scalable event-streaming architectures that support real-time data. Your duties will include designing, building, and managing Kafka clusters using Confluent Platform and Kafka Cloud services (AWS MSK, Confluent Cloud). You will also be involved in developing and maintaining Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines. Monitoring and ensuring the reliability, scalability, and security of Kafka infrastructure will be crucial aspects of your role. Collaboration with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, E...

Posted 2 months ago

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have at least 10 years of experience in the field, with a strong background in Kafka Streams / KSQL architecture and the associated clustering model. Your expertise should include solid programming skills in Java, along with best practices in development, automation testing, and streaming APIs. Practical experience in scaling Kafka, KStreams, and Connector infrastructures is required, as well as the ability to optimize the Kafka ecosystem based on specific use-cases and workloads. As a developer, you should have hands-on experience in building producer and consumer applications using the Kafka API, and proficiency in implementing KStreams components. Additionally, you should have d...

Posted 3 months ago

12.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description Global Technology Partners is a premier partner for digital transformation, with a diverse team of software engineering experts in the US and India. They combine strategic thinking, innovative design, and robust engineering to deliver exceptional results for their clients. Job Summary We are seeking a highly experienced and visionary Principal/Lead Java Architect to play a pivotal role in designing and evolving our next-generation, high-performance, and scalable event-driven platforms. This role demands deep expertise in Java, extensive experience with Kafka as a core component of event streaming architectures, and a proven track record of leading architectural design and...

Posted 3 months ago

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Key Skills: Confluent Kafka, Kafka Connect, Schema Registry, Kafka Brokers, KSQL, KStreams, Java/J2EE, Troubleshooting, RCA, Production Support. Roles & Responsibilities: Design and develop Kafka Pipelines. Perform unit testing of the code and prepare test plans as required. Analyze, design, and develop programs in a development environment. Support applications and jobs in the production environment for issues or failures. Develop operational documents for applications, including DFD, ICD, HLD, etc. Troubleshoot production issues and provide solutions within defined SLA. Prepare RCA (Root Cause Analysis) document for production issues. Provide permanent fixes to production issues. Experienc...

Posted 3 months ago

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a results-driven Data Project Manager (PM) responsible for leading data initiatives within a regulated banking environment, focusing on leveraging Databricks and Confluent Kafka. Your role involves overseeing the successful end-to-end delivery of complex data transformation projects aligned with business and regulatory requirements. In this position, you will be required to lead the planning, execution, and delivery of enterprise data projects using Databricks and Confluent. This includes developing detailed project plans, delivery roadmaps, and work breakdown structures, as well as ensuring resource allocation, budgeting, and adherence to timelines and quality standards. Collaborati...

Posted 3 months ago

7.0 - 12.0 years

12 - 18 Lacs

Pune, Chennai

Work from Office

Key Responsibilities: Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems. Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance. Integrate CDC pipelines with core banking applications, databases, and enterprise systems. Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS). Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing. Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate. Collaborate with Infra...

Posted 4 months ago

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Role: Data Engineer Exp: 5 to 8 Years Location: Bangalore, Noida, and Hyderabad (Hybrid, 2 days per week in office) NP: Immediate to 15 Days (immediate joiners only) Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks; candidates with experience only in PySpark and not in Python will not be considered. Job Title: SSE Kafka, Python, and Azure Databricks (Healthcare Data Project) Experience: 5 to 8 years Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-ba...

Posted 4 months ago

5 - 8 years

9 - 10 Lacs

Bengaluru

Work from Office

Experienced in Kafka cluster maintenance, HA/DR setup, SSL/SASL/LDAP auth, ACLs, Kafka components (ZK, Connect, Schema Registry, etc.), upgrades, monitoring, capacity planning, and DB optimization. Mail: kowsalya.k@srsinfoway.com

Posted 5 months ago
