7.0 - 11.0 years
0 Lacs
Karnataka
On-site
As a Java Backend Engineer with Kafka, you will demonstrate strong proficiency in Core Java, Spring Boot, and microservices architecture. The role involves hands-on experience with Apache Kafka, including Kafka Streams, Kafka Connect, Schema Registry, and the Confluent platform, as well as work with REST APIs, JSON, and event-driven systems. You should know SQL databases such as MySQL and PostgreSQL, and NoSQL databases such as MongoDB, Cassandra, and Redis. Familiarity with Docker, Kubernetes, and CI/CD pipelines is essential for success in this role. Experience with multi-threading, concurrency, and distributed systems is beneficial, and an understanding of cloud platforms such as AWS, Azure, or GCP is desired. Strong problem-solving and debugging skills are needed to address complex technical challenges effectively. Join our team in Bangalore (WFO) and contribute your expertise to our dynamic projects.
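The event-driven systems this role centers on are built around Kafka's core abstraction: an append-only topic log that decoupled consumers read at their own pace by tracking offsets. A minimal in-memory sketch of that idea, in Python (no broker involved; all class, topic, and group names here are illustrative, not from the posting):

```python
from collections import defaultdict

class TopicLog:
    """Toy append-only log per topic; each consumer group tracks its own offset."""

    def __init__(self):
        self._logs = defaultdict(list)    # topic -> list of records
        self._offsets = defaultdict(int)  # (group, topic) -> next offset to read

    def produce(self, topic, record):
        self._logs[topic].append(record)

    def consume(self, group, topic, max_records=10):
        """Return unread records for this group and advance its offset."""
        start = self._offsets[(group, topic)]
        batch = self._logs[topic][start:start + max_records]
        self._offsets[(group, topic)] += len(batch)
        return batch

broker = TopicLog()
broker.produce("orders", {"id": 1, "status": "created"})
broker.produce("orders", {"id": 2, "status": "created"})

first = broker.consume("billing", "orders")   # both records, first read
second = broker.consume("billing", "orders")  # nothing new yet
```

A second group (say, "audit") reading the same topic would start from offset 0 and see both records again — the decoupling that makes event-driven integration attractive.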
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities:
- Deploy and upgrade production-grade Kafka clusters, ensuring high availability and fault tolerance.
- Automate the installation and configuration of Kafka clusters using tools such as Ansible and Chef.
- Troubleshoot critical production issues, optimizing Kafka performance and minimizing downtime.
- Configure MirrorMaker for data replication between two data centers, ensuring effective disaster recovery.
- Deploy and manage Provectus Kafka UI on a Kubernetes cluster for real-time monitoring and management.
- Integrate Schema Registry and deploy multiple Kafka Connectors to facilitate seamless data flow.
- Onboard new users by managing ACL permissions, creating topics, and implementing end-to-end security with SSL/TLS.
- Design and manage Kafka infrastructure to enhance message-processing efficiency.
- Conduct performance testing to optimize partitioning, replication, and retention policies.
- Integrate Kafka with the ELK Stack and Solr for scalable data storage and analysis.

Qualifications: BE, BTech, MCA, and MCS only.
Required Skills: Kafka clusters, MirrorMaker, Kafka Streams, Schema Registry, Kubernetes, Docker, Ansible, Terraform, monitoring and logging, programming and scripting, shell scripting.
Experience: 3+ years of experience.
Location: Bangalore
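As one hedged illustration of the topic creation, partitioning, replication, and retention tuning mentioned above, a topic might be created with Kafka's bundled CLI roughly as follows (the broker address, topic name, and every value shown are placeholders for illustration, not recommendations from the posting):

```shell
kafka-topics.sh --bootstrap-server broker:9092 --create \
  --topic payments.events \
  --partitions 6 \
  --replication-factor 3 \
  --config retention.ms=604800000 \
  --config min.insync.replicas=2
```

Here `retention.ms=604800000` keeps messages for seven days, and `min.insync.replicas=2` combined with a replication factor of 3 lets producers using `acks=all` survive one broker failure without losing acknowledged writes.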
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a key leader in the architecture team, you will define and evolve the architectural blueprint for complex distributed systems built using Java, Spring Boot, Apache Kafka, and cloud-native technologies. You will ensure that system designs align with enterprise architecture principles, business objectives, and performance/scalability requirements. Collaborating closely with engineering leads, DevOps, data engineering, product managers, and customer-facing teams, you will drive architectural decisions, mentor technical teams, and foster a culture of technical excellence and innovation. Your key responsibilities will include owning and evolving the overall system architecture for Java-based microservices and data-intensive applications. You will define and enforce architecture best practices, lead technical design sessions, and design solutions focusing on performance, scalability, security, and reliability in high-volume, multi-tenant environments. Additionally, you will collaborate with product and engineering teams to convert business requirements into scalable technical architectures and drive the use of DevSecOps, automated testing, and CI/CD to improve development velocity and code quality. Basic qualifications for this role include 12-15 years of hands-on experience in Java-based enterprise application development, with at least 4-5 years in an architectural leadership role. Deep expertise in microservices architecture, Spring Boot, RESTful services, and API design is required, along with a strong understanding of distributed systems design, event-driven architecture, and domain-driven design. Proficiency in technologies such as Kafka, Spark, Kubernetes, Docker, AWS ecosystem, MongoDB, SQL databases, and multithreaded programming is essential. 
Preferred qualifications include exposure to system-architecture and diagramming tools, experience leading architectural transformations, knowledge of Data Mesh, Data Governance, or Master Data Management concepts, and certification in AWS, Kubernetes, or software architecture. Experience in regulated, compliance-driven environments is a plus. Infor, a global leader in business cloud software products, focuses on industry-specific markets. With a commitment to Principle Based Management and eight Guiding Principles, Infor aims to create a culture that fosters innovation, improvement, and transformation while delivering long-term value to clients and supporters. To learn more about Infor, visit www.infor.com.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing, implementing, and maintaining scalable event-streaming architectures that support real-time data. Your duties will include designing, building, and managing Kafka clusters using the Confluent Platform and Kafka cloud services (AWS MSK, Confluent Cloud), and developing and maintaining Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines. Monitoring and ensuring the reliability, scalability, and security of the Kafka infrastructure will be crucial aspects of your role. Collaboration with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, EC2, Redshift) is essential. Additionally, you will implement and manage Kafka Connect, Kafka Streams, and ksqlDB where applicable, and optimize Kafka performance, troubleshoot issues, and manage incidents.

To be successful in this role, you should have at least 3-5 years of experience working with Apache Kafka and Confluent Kafka. Strong knowledge of Kafka internals such as brokers, ZooKeeper, partitions, replication, and offsets is required, as is experience with Kafka Connect, Schema Registry, REST Proxy, and Kafka security. Hands-on experience with AWS services such as EC2, IAM, CloudWatch, S3, Lambda, VPC, and load balancers is necessary. Proficiency in scripting and automation using tools like Terraform or Ansible is preferred, and familiarity with DevOps practices and tools such as CI/CD pipelines and monitoring tools (Prometheus/Grafana, Splunk, Datadog, etc.) is beneficial. Experience with containerization using Docker and Kubernetes is an advantage. A Confluent Certified Developer or Administrator certification, AWS certification, and experience with CI/CD tools such as AWS CodePipeline and Harness will be considered additional assets for this role.
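Schema evolution through the Schema Registry is governed by compatibility rules. A deliberately simplified sketch of one such rule — backward compatibility, where consumers on the new schema must still read data written with the old one — might look like this in Python (real registries enforce much richer Avro/Protobuf semantics; the schema shapes and field names below are invented for illustration):

```python
def is_backward_compatible(old_schema, new_schema):
    """Simplified check: every field added in the new schema must carry a
    default, so records written with the old schema remain readable."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for field in new_schema["fields"]:
        if field["name"] not in old_fields and "default" not in field:
            return False
    return True

old = {"fields": [{"name": "id"}, {"name": "amount"}]}
ok  = {"fields": [{"name": "id"}, {"name": "amount"},
                  {"name": "currency", "default": "USD"}]}
bad = {"fields": [{"name": "id"}, {"name": "amount"},
                  {"name": "currency"}]}  # new field, no default -> breaks old data
```

With this rule, `ok` evolves `old` safely while `bad` does not — the same intuition behind the registry's BACKWARD compatibility mode.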
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have at least 10 years of experience in the field, with a strong background in Kafka Streams/KSQL architecture and the associated clustering model. Your expertise should include solid Java programming skills, along with best practices in development, automation testing, and streaming APIs. Practical experience in scaling Kafka, KStreams, and connector infrastructures is required, as is the ability to optimize the Kafka ecosystem for specific use cases and workloads.

As a developer, you should have hands-on experience building producer and consumer applications with the Kafka API and proficiency in implementing KStreams components, and you should have developed KStreams pipelines and deployed KStreams clusters. Experience developing KSQL queries and an understanding of when to use KSQL versus KStreams are essential. Strong knowledge of the Kafka Connect framework is necessary, including experience with connector types such as HTTP REST proxy, JMS, file, SFTP, JDBC, Splunk, and Salesforce, and the ability to support wire-format translations. Familiarity with connectors available from Confluent and the community, along with hands-on experience designing, writing, and operationalizing new Kafka connectors using the framework, is a plus, as is knowledge of Schema Registry.

Nice-to-have qualities include providing thought leadership for the team, excellent verbal and written communication skills, being a good team player, and willingness to go the extra mile to support the team. A four-year college degree in Science, Engineering, Technology, Business, or Humanities is required; candidates with a Master's degree and/or certifications in the relevant technologies are preferred. The working mode for this position is hybrid, full-time (3 days working from the office), and the notice period is a maximum of 30 days.
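The KStreams pipelines described here are often introduced with the canonical word-count topology (flatMap lines into words, group by word, count). A broker-free Python analogue of that transformation, purely for illustration — the real Kafka Streams API is a Java library:

```python
from collections import Counter

def word_count(stream_of_lines):
    """Mimic the classic Kafka Streams word-count: split each incoming
    line into words, group by word, and keep a running count table."""
    counts = Counter()
    for line in stream_of_lines:
        counts.update(line.lower().split())
    return dict(counts)

table = word_count(["kafka streams", "kafka connect"])
# table["kafka"] == 2
```

In the Java API the same shape is expressed with `flatMapValues`, `groupBy`, and `count` over a `KStream`, with the resulting `KTable` materialized to a changelog topic.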
Posted 1 month ago
12.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description: Global Technology Partners is a premier partner for digital transformation, with a diverse team of software engineering experts in the US and India. They combine strategic thinking, innovative design, and robust engineering to deliver exceptional results for their clients.

Job Summary: We are seeking a highly experienced and visionary Principal/Lead Java Architect to play a pivotal role in designing and evolving our next-generation, high-performance, and scalable event-driven platforms. This role demands deep expertise in Java, extensive experience with Kafka as a core component of event-streaming architectures, and a proven track record of leading architectural design and implementation across complex enterprise systems. You will be instrumental in defining technical strategy, establishing best practices, and mentoring engineering teams to deliver robust and resilient solutions.

Key Responsibilities:

Architectural Leadership: Lead the design, development, and evolution of highly scalable, resilient, and performant event-driven architectures using Java and Kafka. Define architectural patterns, principles, and standards for event sourcing, CQRS, stream processing, and microservices integration with Kafka. Drive the technical vision and strategy for our core platforms, ensuring alignment with business objectives and the long-term technology roadmap. Conduct architectural reviews, identify technical debt, and propose solutions for continuous improvement. Stay abreast of emerging technologies and industry trends, evaluating their applicability and recommending adoption where appropriate.

Design & Development: Design and implement robust, high-throughput Kafka topics, consumers, producers, and streams (Kafka Streams/KSQL). Architect and design Java-based microservices that integrate effectively with Kafka for event communication and data synchronization. Lead the selection and integration of appropriate technologies and frameworks for event processing, data serialization, and API development. Develop proof-of-concepts (POCs) and prototypes to validate architectural choices and demonstrate technical feasibility. Contribute hands-on to critical-path development when necessary, demonstrating coding excellence and leading by example.

Kafka Ecosystem Expertise: Deep understanding of Kafka internals, distributed-systems concepts, and high-availability configurations. Experience with Kafka Connect for data integration, Schema Registry for data governance, and KSQL/Kafka Streams for real-time stream processing. Proficiency in monitoring, optimizing, and troubleshooting Kafka clusters and related applications. Knowledge of Kafka security best practices (authentication, authorization, encryption).

Technical Governance & Mentorship: Establish and enforce architectural governance, ensuring adherence to design principles and coding standards. Mentor and guide engineering teams on best practices for event-driven architecture, Kafka usage, and Java development. Foster a culture of technical excellence, collaboration, and continuous learning within the engineering organization. Communicate complex technical concepts effectively to both technical and non-technical stakeholders.

Performance, Scalability & Reliability: Design for high availability, fault tolerance, and disaster recovery. Define and implement strategies for performance optimization, monitoring, and alerting across the event-driven ecosystem. Ensure solutions are scalable enough to handle significant data volumes and transaction rates.

Required Skills & Experience: 12+ years of progressive experience in software development, with at least 5 years in an architect role designing and implementing large-scale enterprise solutions. Expert-level proficiency in Java (Java 8+, Spring Boot, Spring Framework). Deep and extensive experience with Apache Kafka: designing and implementing Kafka topics, producers, and consumers; hands-on experience with the Kafka Streams API or KSQL for real-time stream processing; familiarity with Kafka Connect, Schema Registry, and Avro/Protobuf; and an understanding of Kafka cluster operations, tuning, and monitoring. Strong understanding and practical experience with Event-Driven Architecture (EDA) principles and patterns: event sourcing, CQRS, Saga, choreography vs. orchestration. Extensive experience with microservices architecture principles and patterns. Proficiency in designing RESTful APIs and asynchronous communication mechanisms. Experience with relational and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra). Solid understanding of cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes). Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Strong problem-solving skills, analytical thinking, and attention to detail. Excellent communication, presentation, and interpersonal skills.
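Among the EDA patterns this posting lists (event sourcing, CQRS, Saga), event sourcing in particular rebuilds current state by replaying an ordered event log rather than storing state directly. A minimal Python sketch of that fold, with invented event names and a toy account balance as the state:

```python
def apply_event(balance, event):
    """Fold a single event into the running account state."""
    if event["type"] == "Deposited":
        return balance + event["amount"]
    if event["type"] == "Withdrawn":
        return balance - event["amount"]
    return balance  # unrecognized event types leave state unchanged

def replay(events):
    """Rebuild current state purely from the ordered event history."""
    balance = 0
    for event in events:
        balance = apply_event(balance, event)
    return balance

history = [{"type": "Deposited", "amount": 100},
           {"type": "Withdrawn", "amount": 30}]
# replay(history) == 70
```

In a Kafka-backed system the `history` list would be a compacted or retained topic, and the replay would seed a service's local state store on startup.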
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Key Skills: Confluent Kafka, Kafka Connect, Schema Registry, Kafka brokers, KSQL, KStreams, Java/J2EE, troubleshooting, RCA, production support.

Roles & Responsibilities: Design and develop Kafka pipelines. Perform unit testing of the code and prepare test plans as required. Analyze, design, and develop programs in a development environment. Support applications and jobs in the production environment for issues or failures. Develop operational documents for applications, including DFD, ICD, HLD, etc. Troubleshoot production issues and provide solutions within the defined SLA. Prepare RCA (Root Cause Analysis) documents for production issues and provide permanent fixes.

Experience Requirement: 5-10 years of experience working with Confluent Kafka. Hands-on experience with Kafka Connect using Schema Registry. Strong knowledge of Kafka brokers and KSQL. Familiarity with Kafka Control Center, ZooKeeper, and KStreams is good to have. Experience with Java/J2EE is a plus.

Education: B.E., B.Tech.
Posted 2 months ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a results-driven Data Project Manager (PM) responsible for leading data initiatives within a regulated banking environment, focusing on leveraging Databricks and Confluent Kafka. Your role involves overseeing the successful end-to-end delivery of complex data transformation projects aligned with business and regulatory requirements. In this position, you will be required to lead the planning, execution, and delivery of enterprise data projects using Databricks and Confluent. This includes developing detailed project plans, delivery roadmaps, and work breakdown structures, as well as ensuring resource allocation, budgeting, and adherence to timelines and quality standards. Collaboration with data engineers, architects, business analysts, and platform teams is essential to align on project goals. You will act as the primary liaison between business units, technology teams, and vendors, facilitating regular updates, steering committee meetings, and issue/risk escalations. Your technical oversight responsibilities include managing solution delivery on Databricks for data processing, ML pipelines, and analytics, as well as overseeing real-time data streaming pipelines via Confluent Kafka. Ensuring alignment with data governance, security, and regulatory frameworks such as GDPR, CBUAE, and BCBS 239 is crucial. Risk and compliance management are key aspects of your role, involving ensuring regulatory reporting data flows comply with local and international financial standards and managing controls and audit requirements in collaboration with Compliance and Risk teams. The required skills and experience for this role include 7+ years of Project Management experience within the banking or financial services sector, proven experience in leading data platform projects, a strong understanding of data architecture, pipelines, and streaming technologies, experience in managing cross-functional teams, and proficiency in Agile/Scrum and Waterfall methodologies. 
Technical exposure to Databricks (Delta Lake, MLflow, Spark), Confluent Kafka (Kafka Connect, ksqlDB, Schema Registry), Azure or AWS cloud platforms, integration tools, CI/CD pipelines, and Oracle ERP implementation is expected. Preferred qualifications include PMP/PRINCE2/Scrum Master certification, familiarity with regulatory frameworks, and a strong understanding of data governance principles. The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. Key performance indicators for this role include on-time, on-budget delivery of data initiatives, uptime and SLAs of data pipelines, user satisfaction, and compliance with regulatory milestones.
Posted 2 months ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai
Work from Office
Key Responsibilities: Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems. Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance. Integrate CDC pipelines with core banking applications, databases, and enterprise systems. Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS). Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing. Optimize data replication, transformation, and enrichment using CDC tools such as Debezium, GoldenGate, or Qlik Replicate. Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data-streaming capabilities with business objectives. Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures. Stay updated with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications: Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions. Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry. Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate. Hands-on experience with IBM Analytics. Solid understanding of core banking systems, transactional databases, and financial data flows. Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud). Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations. Strong experience in event-driven architectures, microservices, and API integrations. Familiarity with security protocols, compliance, and data governance in banking environments. Excellent problem-solving, leadership, and stakeholder communication skills.
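The Debezium-style CDC events referenced above arrive as change envelopes carrying `before`/`after` row images and an `op` code. A hedged Python sketch of consuming one such envelope (the envelope fields follow Debezium's documented shape, but the table, key, and payload values here are invented):

```python
import json

def summarize_change(envelope_json):
    """Flatten a Debezium-style change envelope into a summary row."""
    event = json.loads(envelope_json)
    op = {"c": "insert", "u": "update",
          "d": "delete", "r": "snapshot"}.get(event["op"], "unknown")
    # for deletes, "after" is null and the row image lives in "before"
    row = event["after"] if event["after"] is not None else event["before"]
    return {"operation": op, "key": row["account_id"]}

raw = json.dumps({
    "before": None,
    "after": {"account_id": 42, "balance": 100},
    "op": "c",
    "ts_ms": 1700000000000,
})
summary = summarize_change(raw)
# summary == {"operation": "insert", "key": 42}
```

A real pipeline would receive these envelopes from a Kafka topic populated by the Debezium connector and apply them idempotently downstream.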
Posted 2 months ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in the office required)
Notice Period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. Candidates with experience only in PySpark and not in Python will not be considered.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities: Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks. Architect scalable data streaming and processing solutions to support healthcare data workflows. Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data. Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.). Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions. Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows. Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering. Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications: 4+ years of experience in data engineering, with strong proficiency in Kafka and Python. Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing. Experience with Azure Databricks (or willingness to learn and adopt it quickly). Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus). Proficiency in SQL, NoSQL databases, and data modeling for big data processing. Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications. Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus. Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects. Excellent communication and stakeholder-management skills.

Email: Sam@hiresquad.in
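Real-time pipelines of the kind this posting describes typically aggregate events over time windows. A simplified tumbling-window count in pure Python, as one hedged illustration — the window size and field names are invented, and Kafka Streams, PySpark, and Databricks all provide windowing natively:

```python
from collections import Counter

def tumbling_window_counts(events, window_ms):
    """Bucket events into fixed, non-overlapping windows by timestamp
    and count the events that land in each window."""
    counts = Counter()
    for event in events:
        window_start = (event["ts"] // window_ms) * window_ms
        counts[window_start] += 1
    return dict(counts)

events = [{"ts": 1_000}, {"ts": 1_500}, {"ts": 2_200}]
windows = tumbling_window_counts(events, window_ms=1_000)
# windows == {1000: 2, 2000: 1}
```

Tumbling windows (as opposed to hopping or sliding ones) assign each event to exactly one bucket, which keeps downstream aggregates additive and easy to reconcile.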
Posted 3 months ago
5 - 8 years
9 - 10 Lacs
Bengaluru
Work from Office
Experienced in Kafka cluster maintenance, HA/DR setup, SSL/SASL/LDAP authentication, ACLs, Kafka components (ZooKeeper, Connect, Schema Registry, etc.), upgrades, monitoring, capacity planning, and database optimization. Mail: kowsalya.k@srsinfoway.com
Posted 4 months ago