
54 Kafka Streams Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 9.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

Senior Software Engineer - DevOps
Bengaluru, India

Who we are:
INVIDI Technologies Corporation is the world's leading developer of software transforming television all over the world. Our two-time Emmy Award-winning technology is widely deployed by cable, satellite, and telco operators. We provide a device-agnostic solution that delivers ads to the right household no matter what program or network you're watching, how you're watching, or whether you're in front of your TV, laptop, cell phone, or any other device. INVIDI created the multi-billion-dollar addressable television business that today is growing rapidly worldwide. INVIDI is right at the heart of the exciting, fast-paced world of commercial television; companies benefiting from our software include DirecTV and Dish Network, networks such as CBS/Viacom and A&E, advertising agencies such as Ogilvy and Publicis, and advertisers such as Chevrolet and Verizon. INVIDI's world-class technology solutions are known for their flexibility and adaptability. These traits allow INVIDI partners to transform their video content delivery networks, revamping legacy systems without significant capital or hardware investments. Our clients count on us to provide superior capabilities, excellent service, and ease of use. The goal of developing a unified video ad tech platform is a big one, and the right DevOps Engineer, like you, will flourish in INVIDI's creative, inspiring, and supportive culture. It is a demanding, high-energy, and fast-paced environment. INVIDI's developers are self-motivated quick studies: can-do individuals who embrace the challenge of solving difficult and complex problems.

About the role:
We are a modern agile product organization looking for an excellent DevOps engineer who can support and offload a remote product development team. Our platform handles tens of thousands of requests per second with sub-second response times across the globe. We serve ads for some of the biggest live events in the world, providing reports and forecasts based on billions of log rows. These are some of the complex challenges that make development and operational work at INVIDI interesting and rewarding. To accomplish this, we use the best frameworks and tools out there or, when they are not good enough, we write our own. Most of the code we write is Java or Kotlin on top of Dropwizard, but every problem is unique, and we always evaluate the best tools for the job. We work with technologies such as Kafka, Google Cloud (GKE, Pub/Sub), Bigtable, Terraform, Jsonnet, and much more. The position reports directly to the Technical Manager of Software Development and is based in our Chennai, India office.

Key responsibilities:
• Maintain, deploy, and operate backend services in Java and Kotlin that are scalable, durable, and performant.
• Proactively evolve deployment pipelines and artifact generation.
• Maintain Kubernetes clusters and supporting infrastructure.
• Troubleshoot incoming issues from support and clients, fixing and resolving what you can.
• Collaborate closely with peers and product owners in your team.
• Help other team members grow as engineers through code review, pairing, and mentoring.

Our requirements:
You are an outstanding DevOps Engineer who loves working with distributed high-volume systems. You care about the craft and cherish the opportunity to work with smart, supportive, and highly motivated colleagues. You are curious; you like to learn new things, mentor, and share knowledge with team members. Like us, you strive to handle complexity by keeping things simple and elegant. As part of the DevOps team, you will be on call for the services and clusters the team owns, for one week approximately once or twice per month. While on call, you must be reachable by telephone and able to act on alarms using your laptop.

Skills and qualifications:
• Master's degree in computer science, or equivalent
• 4+ years of experience in the software industry
• Strong development and troubleshooting skills
• Ability to support a SaaS environment to meet service objectives
• Ability to collaborate effectively and work well in an Agile environment
• Excellent oral and written communication skills in English
• Ability to quickly learn new technologies and work in a fast-paced environment

Highly preferred:
• Experience building service applications with Dropwizard/Spring Boot
• Experience with cloud services such as GCP and/or AWS
• Experience with Infrastructure as Code tools such as Terraform
• Experience in Linux environments
• Experience with technologies such as SQL, Kafka, and Kafka Streams
• Experience with Docker
• Experience with SCM and CI/CD tools such as Git and Bitbucket
• Experience with build tools such as Gradle or Maven
• Experience writing Kubernetes deployment manifests and troubleshooting cluster- and application-level issues

Physical requirements:
INVIDI is a conscious, clean, well-organized, and supportive office environment. Prolonged periods of sitting at a desk and working on a computer are normal.

Note: Final candidates must successfully pass INVIDI's background screening requirements and must be legally authorized to work in India. INVIDI has reopened its offices on a flexible hybrid model. Ready to join our team? Apply today!

Posted 5 days ago

8.0 - 13.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Source: Naukri

We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bengaluru, Karnataka (IN-KA), India (IN).

Data Engineer Lead:
• Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory)
• Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink
• Experience with software support for applications written in Python and SQL
• Administration, configuration, and maintenance of Snowflake and dbt
• Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub
• Debugging issues, root cause analysis, and applying fixes
• Management and maintenance of ETL processes (bug fixing and batch job monitoring)

Training & certification:
• Apache Kafka Administration
• Snowflake Fundamentals/Advanced Training

Experience:
• 8 years of experience in a technical role working with AWS
• At least 2 years in a leadership or management role

Posted 6 days ago

5.0 - 10.0 years

7 - 14 Lacs

Mumbai, Goregaon, Mumbai (All Areas)

Work from Office

Source: Naukri

Opening with an insurance company. **Looking for candidates with a 30-day notice period.**
Location: Mumbai (Lower Parel)

Key responsibilities:

Kafka infrastructure management:
• Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security.
• Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, ZooKeeper, and related components.
• Perform capacity planning and scaling of Kafka clusters based on application needs and growth.

Data pipeline development:
• Develop and optimize Kafka data pipelines to support real-time data streaming and processing.
• Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources.
• Implement and maintain a schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf).

Security and compliance:
• Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL).

Documentation and support:
• Create and maintain documentation for Kafka setup, configurations, and operational procedures.

Collaboration:
• Provide technical support and guidance to application development teams regarding Kafka usage and best practices.
• Collaborate with stakeholders to ensure alignment with business objectives.

Interested candidates should share their resume at snehal@topgearconsultants.com.
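For context on the topic and partition design work this listing describes, cluster administration of this kind is typically done with the stock Kafka CLI tools shipped in the Kafka distribution's bin directory. A minimal sketch; the broker address and topic name are placeholders, not from the listing:

```shell
# Create a topic sized for high availability: 6 partitions,
# 3 replicas, and at least 2 in-sync replicas required per write.
kafka-topics.sh --bootstrap-server broker1:9092 \
  --create --topic policy-events \
  --partitions 6 --replication-factor 3 \
  --config min.insync.replicas=2

# Inspect the resulting partition and replica layout.
kafka-topics.sh --bootstrap-server broker1:9092 \
  --describe --topic policy-events
```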

Posted 6 days ago

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Source: Naukri

Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not looking for candidates who have experience only in PySpark and not in Python.

Job title: SSE - Kafka, Python, and Azure Databricks (healthcare data project)

Role overview:
We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key responsibilities:
• Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
• Architect scalable data streaming and processing solutions to support healthcare data workflows.
• Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
• Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
• Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
• Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
• Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
• Stay current with the latest cloud technologies, big data frameworks, and industry trends.

Required skills & qualifications:
• 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
• Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
• Experience with Azure Databricks (or willingness to learn and adopt it quickly).
• Hands-on experience with cloud platforms (Azure preferred; AWS or GCP a plus).
• Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
• Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
• Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
• Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
• Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in

Posted 6 days ago

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Source: Naukri

EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune, and Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work mode: Hybrid (2-3 days in office per week)

Job description:
• 5-14 years of experience in Big Data and data-related technologies
• Expert-level understanding of distributed computing principles
• Expert-level knowledge of and experience with Apache Spark
• Hands-on programming with Python
• Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
• Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
• Good understanding of Big Data querying tools such as Hive and Impala
• Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
• Good understanding of SQL queries, joins, stored procedures, and relational schemas
• Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
• Knowledge of ETL techniques and frameworks
• Performance tuning of Spark jobs
• Experience with native AWS cloud data services
• Ability to lead a team efficiently
• Experience designing and implementing Big Data solutions
• Practitioner of Agile methodology

We offer:
• Opportunity to work on technical challenges that may have impact across geographies
• Vast opportunities for self-development: an online university, global knowledge sharing, and learning through external certifications
• Opportunity to share your ideas on international platforms
• Sponsored tech talks and hackathons
• Possibility to relocate to any EPAM office for short- and long-term projects
• Focused individual development
• Benefits package: health and medical benefits, retirement benefits, paid time off, flexible benefits
• Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 1 week ago

4.0 - 9.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Source: Naukri

About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview of TII:
At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team, with more than 4,000 team members supporting the company's global strategy and operations.

Pyramid overview:
A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization, or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security, and Personalization rely on.

Position overview:
As a Senior Engineer on the Search Team, you serve as a specialist in the engineering team that supports the product. You help develop, and gain insight into, the application architecture. You can distill an abstract architecture into a concrete design and influence the implementation. You show expertise in applying the appropriate software engineering patterns to build robust and scalable systems. You are an expert in programming and apply your skills to developing the product. You have the skills to design and implement the architecture on your own, but choose to influence your fellow engineers by proposing software designs and providing feedback on designs and implementations. You leverage data science in solving complex business problems and make decisions based on data. You show good problem-solving skills, help the team triage operational issues, and leverage your expertise to eliminate repeat occurrences.

About you:
• 4-year degree in a quantitative discipline (science, technology, engineering, mathematics) or equivalent experience
• Experience with search engines like Solr and Elasticsearch
• Strong hands-on programming skills in Java, Kotlin, Micronaut, and Python
• Experience with PySpark, SQL, and Hadoop/Hive is an added advantage
• Experience with streaming systems like Kafka; experience with Kafka Streams is an added advantage
• Experience in MLOps is an added advantage
• Experience in data engineering is an added advantage
• Strong analytical thinking skills with the ability to creatively solve business problems, innovating new approaches where required
• Able to produce clear documents and narratives suggesting actionable insights
• Self-driven and results-oriented
• Strong team player with the ability to collaborate effectively across geographies and time zones

Know more about us here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging

Posted 1 week ago

3.0 - 6.0 years

10 - 17 Lacs

Pune

Remote

Source: Naukri

Kafka/MSK and Linux:
• In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors
• Understanding of Kafka topic design and creation
• Good knowledge of replication and high availability for Kafka systems
• ElasticSearch/OpenSearch
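The replication and high-availability knowledge this listing asks for largely comes down to a few broker- and topic-level settings, adjustable with the stock Kafka CLI tools. A minimal sketch; the broker address and topic name are placeholders:

```shell
# Broker-level HA defaults are set in server.properties, e.g.:
#   default.replication.factor=3
#   min.insync.replicas=2
#   unclean.leader.election.enable=false

# Raise min.insync.replicas on an existing topic so that writes
# with acks=all require two in-sync replicas:
kafka-configs.sh --bootstrap-server broker1:9092 \
  --alter --entity-type topics --entity-name orders \
  --add-config min.insync.replicas=2

# List partitions whose replicas have fallen out of sync:
kafka-topics.sh --bootstrap-server broker1:9092 \
  --describe --under-replicated-partitions
```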

Posted 1 week ago

6.0 - 11.0 years

12 - 30 Lacs

Hyderabad

Work from Office

Source: Naukri

Proficient in Java 8 and Kafka. Must have experience writing JUnit test cases. Strong in Spring Boot, microservices, SQL, ActiveMQ, and RESTful APIs.

Posted 1 week ago

4.0 - 8.0 years

27 - 42 Lacs

Hyderabad

Work from Office

Source: Naukri

Job summary:
We are looking for an experienced Infra Dev Specialist with 4 to 8 years of experience to join our team. The ideal candidate will have expertise in KSQL, Kafka Schema Registry, Kafka Connect, and Kafka. This role involves working in a hybrid model with day shifts and does not require travel. The candidate will play a crucial role in developing and maintaining our infrastructure to ensure seamless data flow and integration.

Responsibilities:
• Develop and maintain infrastructure solutions using KSQL, Kafka Schema Registry, Kafka Connect, and Kafka.
• Oversee the implementation of data streaming and integration solutions to ensure high availability and performance.
• Provide technical support and troubleshooting for Kafka-related issues to minimize downtime and ensure data integrity.
• Collaborate with cross-functional teams to design and implement scalable and reliable data pipelines.
• Monitor and optimize the performance of Kafka clusters to meet the demands of the business.
• Ensure compliance with security and data governance policies while managing Kafka infrastructure.
• Implement best practices for data streaming and integration to enhance system efficiency.
• Conduct regular reviews and updates of the infrastructure to align with evolving business needs.
• Provide training and support to team members on Kafka-related technologies and best practices.
• Develop and maintain documentation for infrastructure processes and configurations.
• Participate in code reviews and contribute to the continuous improvement of the development process.
• Stay updated on the latest trends and advancements in Kafka and related technologies.
• Contribute to the overall success of the team by delivering high-quality infrastructure solutions.

Qualifications:
• Strong experience with KSQL, Kafka Schema Registry, Kafka Connect, and Kafka.
• Solid understanding of data streaming and integration concepts.
• Proven track record of troubleshooting and resolving Kafka-related issues.
• Expertise in designing and implementing scalable data pipelines.
• Knowledge of security and data governance practices for managing Kafka infrastructure.
• Proficiency in monitoring and optimizing Kafka cluster performance.
• Experience providing technical support and training to team members.
• Skilled in developing and maintaining infrastructure documentation.
• Excellent communication and collaboration skills.
• Proactive approach to problem-solving and continuous improvement.
• Ability to work effectively in a hybrid work model.

Certification required: Certified Apache Kafka Developer

Posted 2 weeks ago

4.0 - 7.0 years

8 - 13 Lacs

Pune

Work from Office

Source: Naukri

Responsibilities:
* Monitor Kafka clusters for performance and availability
* Manage Kafka broker instances and replication strategies
* Collaborate with dev teams on data pipeline design and implementation

Benefits: food allowance, health insurance, provident fund, annual bonus
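Day-to-day cluster monitoring of the kind listed above is often scripted over the stock Kafka CLI tools before (or alongside) a dashboarding stack. A minimal sketch; the broker address and consumer-group name are placeholders:

```shell
# Consumer lag for one application's group: the LAG column shows
# how far each partition's consumer trails the log end offset.
kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
  --describe --group pipeline-app

# Quick broker liveness check: ask one broker for its API versions.
kafka-broker-api-versions.sh --bootstrap-server broker1:9092
```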

Posted 2 weeks ago

5.0 - 8.0 years

16 - 25 Lacs

Bengaluru

Hybrid

Source: Naukri

Senior Software Engineer (Backend - Java), India

SRS is unlocking the possibilities of the new electric mobility future by delivering innovative software and services that empower utilities, cities, communities, and automakers to deploy EV charging infrastructure at scale. Our technology connects people to their destinations in a safer, cleaner, and smarter way. Headquartered in Los Angeles, CA, the company's global footprint spans three continents, with deployments in 13 countries. At SRS, we are looking for candidates who want to be a part of something bigger than themselves.

What you will do:
The ideal candidate is an integral part of a fast-paced development team that builds an integrated product suite of enterprise applications in the EV charging network domain. The candidate will participate in the technical design and implementation of one or more components of the product, working closely with the rest of the cross-functional team to produce design documents, implement product features, and develop and execute unit tests.
• Design, develop, and deliver web- and microservice-API-based applications.
• Develop consumer-facing features and architectural components to meet company demands.
• Collaborate with cross-functional teams, including our global engineering teams, in an Agile development environment.
• Apply proven experience optimizing applications for scalability.
• Use problem-solving skills to implement creative solutions to tough problems.
• Advocate for best-in-class technology solutions for large-scale enterprise applications.

Who we're looking for:
If you find passion in the company's mission and your qualifications and interests align with the expectations below, we would love to talk to you about this position.
• Bachelor's degree in Computer Science/Engineering or equivalent experience required.
• 5-8 years of software development experience.
• 5+ years of Java server-side design and development experience.

Must have:
• Excellent knowledge of RESTful APIs
• High proficiency in J2EE, Spring, Spring Boot, and Hibernate
• Experience with data modeling, SQL, and NoSQL
• Experience with AWS, RDS, Docker, and Kubernetes
• Distributed caching (Redis), queuing technologies (ActiveMQ, Kafka), and Elasticsearch
• Excellent knowledge of microservices architecture and implementation
• Experience working in a small team alongside an offshore development team
• Strong verbal and written communication skills; proven ability to lead both vertically and horizontally to achieve results; thrives in a dynamic, fast-paced environment and does what it takes to deliver results

Good to have:
• Experience with an APM tool like Stackify or New Relic
• Experience in electric grid management solutions
• Experience in Angular or similar JavaScript frameworks
• Experience with GitHub/Bitbucket, Jira, Scrum, SonarCloud, and CI/CD processes
• Working knowledge of Linux
• Experience with software-as-a-service (SaaS), large-scale distributed systems, and relational/NoSQL databases

Posted 2 weeks ago

10.0 - 15.0 years

12 - 16 Lacs

Pune, Bengaluru

Work from Office

Source: Naukri

We are seeking a talented and experienced Kafka Architect with experience migrating to Google Cloud Platform (GCP) to join our team. As a Kafka Architect, you will be responsible for designing, implementing, and managing our Kafka infrastructure to support our data processing and messaging needs, while also leading the migration of our Kafka ecosystem to GCP. You will work closely with our engineering and data teams to ensure seamless integration and optimal performance of Kafka on GCP.

Responsibilities:
• Lead discovery, analysis, planning, design, and implementation of Kafka deployments on GKE, with a specific focus on migrating Kafka from AWS to GCP.
• Design, architect, and implement scalable, high-performance Kafka architectures and clusters to meet our data processing and messaging requirements.
• Conduct thorough discovery and analysis of existing Kafka deployments on AWS, and develop a comprehensive strategy for migrating them to GCP.
• Develop and implement best practices for Kafka deployment, configuration, and monitoring on GCP.
• Collaborate with engineering and data teams to integrate Kafka into our existing systems and applications on GCP.
• Optimize Kafka performance and scalability on GCP to handle large volumes of data and high throughput.
• Plan and execute the migration, ensuring minimal downtime and data integrity, then test and validate the migrated Kafka environment against performance and reliability standards.
• Ensure Kafka security on GCP by implementing authentication, authorization, and encryption mechanisms.
• Troubleshoot and resolve issues related to Kafka infrastructure and applications on GCP, ensuring seamless data flow between Kafka and other data sources and sinks.
• Implement monitoring and alerting mechanisms to ensure the health and performance of Kafka clusters.
• Stay up to date with Kafka developments and GCP services to recommend and implement new features and improvements.

Requirements:
• Bachelor's degree in computer science, engineering, or a related field (master's degree preferred).
• Proven experience as a Kafka Architect or in a similar role, with a minimum of [5] years of experience.
• Deep knowledge of Kafka internals and the Kafka ecosystem, including Kafka Connect, Kafka Streams, and KSQL.
• Proficiency in scripting and automation for Kafka management and migration.
• Hands-on experience with Kafka administration, including cluster setup, configuration, and tuning.
• Proficiency in the Kafka APIs: Producer, Consumer, Streams, and Connect.
• Strong programming skills in Java, Scala, or Python.
• Experience with Kafka monitoring and management tools such as Confluent Control Center, Kafka Manager, or similar.
• Solid understanding of distributed systems, data pipelines, and stream processing.
• Experience leading migration projects to GCP, including migrating Kafka workloads.
• Familiarity with GCP services such as Google Kubernetes Engine (GKE), Cloud Storage, Cloud Pub/Sub, and BigQuery.
• Excellent communication and collaboration skills.
• Ability to work independently and manage multiple tasks in a fast-paced environment.
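The listing does not name a migration tool, but one common approach for a cross-cloud move like the AWS-to-GCP migration described here is Kafka's built-in MirrorMaker 2, which replicates topics between clusters so consumers can be cut over gradually. A minimal sketch; the cluster aliases and bootstrap addresses are placeholders:

```shell
# mm2.properties: mirror all topics from the AWS cluster ("src")
# to the GCP cluster ("dst").
cat > mm2.properties <<'EOF'
clusters = src, dst
src.bootstrap.servers = kafka-aws:9092
dst.bootstrap.servers = kafka-gcp:9092
src->dst.enabled = true
src->dst.topics = .*
replication.factor = 3
EOF

# Run the MirrorMaker 2 driver shipped with the Kafka distribution:
connect-mirror-maker.sh mm2.properties
```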

Posted 2 weeks ago

5.0 - 7.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Foundit logo

• 5+ years of working experience with industry-standard messaging systems: Apache Kafka, Apache Pulsar, RabbitMQ.
• Experience configuring Kafka for large-scale deployments is desirable.
• Hands-on experience building reactive microservices using any popular Java stack.
• Experience building applications using Java 11 best practices and functional interfaces.
• Understanding of Kubernetes custom operators is desirable.
• Experience building stateful streaming applications using Kafka Streams or Apache Flink is a plus.
• Experience with OpenTelemetry/tracing/Jaeger is a plus.

The role requires proven experience managing Kafka in large deployments and distributed architectures, and extensive knowledge of configuring Kafka using various industry-driven architectural patterns. You must be passionate about building distributed messaging cloud services running on Oracle Cloud Infrastructure. Experience with pub/sub architectures using Kafka or Pulsar, or with point-to-point messaging using queues, is desirable, as is experience building distributed systems with traceability in a high-volume messaging environment. Each team owns its service deployment pipeline to production.

Career Level - IC3

Posted 2 weeks ago

6.0 - 10.0 years

15 - 30 Lacs

Pune

Work from Office

Source: Naukri

Role & responsibilities

Mandatory skills: Kafka Streams

Mandatory skills description:
• Strong, in-depth, hands-on knowledge and understanding of core and advanced Java concepts.
• Good knowledge of and hands-on experience with Spring/Spring Boot/Camel, JUnit, and Hibernate for building microservices.
• Experience with shell scripts, Unix, PL/SQL, databases (Oracle/Postgres), IBM MQ, and JMS.
• Good knowledge of and working experience with cloud components.
• Hands-on experience building distributed systems, Java messaging technologies (Kafka), and databases.
• Strong object-oriented analysis and design skills.
• Good domain knowledge of investment banking: trade settlement systems and payments.
• Good communication and presentation skills.
• Hands-on experience with Agile methodologies and metrics such as velocity, burndown charts, and story points.
• Strong organizational and quality assurance skills.
• Exposure to tools like GitLab and DevOps tooling such as TeamCity, Nexus, and Maven/Gradle.

Preferred candidate profile

Posted 2 weeks ago

5.0 - 10.0 years

10 - 18 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Source: Naukri

Role & responsibilities:
• Administer and maintain Apache Kafka clusters, including installation, upgrades, configuration, and performance tuning.
• Design and implement Kafka topics, partitions, replication, and consumer groups.
• Ensure high availability and scalability of Kafka infrastructure in production environments.
• Monitor Kafka health and performance using tools like Prometheus, Grafana, and Confluent Control Center.
• Implement and manage security configurations such as SSL/TLS, authentication (Kerberos/SASL), and access control.
• Collaborate with development teams to design and configure Kafka-based integrations and data pipelines.
• Perform root cause analysis of production issues and ensure timely resolution.
• Create and maintain documentation for Kafka infrastructure and configurations.

Required skills:
• Strong expertise in Kafka administration, including hands-on experience with open-source and/or Confluent Kafka.
• Experience with Kafka ecosystem tools (Kafka Connect, Kafka Streams, Schema Registry).
• Proficiency in Linux-based environments and scripting (Bash, Python).
• Experience with monitoring/logging tools and Kafka performance optimization.
• Ability to work independently and proactively manage Kafka environments.
• Familiarity with DevOps tools and CI/CD pipelines (e.g., Jenkins, Git, Ansible).

Preferred skills:
• Experience with managed Kafka services on cloud platforms (AWS, GCP, or Azure).
• Knowledge of messaging alternatives such as RabbitMQ, Pulsar, or ActiveMQ.
• Working knowledge of Docker and Kubernetes for Kafka deployment.
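The access-control work this listing mentions is typically managed with Kafka's ACL CLI. A minimal sketch; the principal, topic, group, and broker address are placeholders (a real secured cluster would also pass credentials via --command-config):

```shell
# Allow an application principal to read one topic via its consumer group.
kafka-acls.sh --bootstrap-server broker1:9092 \
  --add --allow-principal User:claims-app \
  --operation Read --topic claims-events --group claims-consumers

# Review the ACLs now attached to the topic.
kafka-acls.sh --bootstrap-server broker1:9092 \
  --list --topic claims-events
```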

Posted 2 weeks ago

10.0 - 14.0 years

15 - 20 Lacs

Chennai

Work from Office

Source: Naukri

Role Summary: We are looking for a seasoned Senior Development Lead with over 10 years of experience in leading development teams and delivering high-quality technical solutions. This role involves not only technical leadership and stakeholder communication but also hands-on development. The ideal candidate will be an SME in modern cloud-native development, proficient in event-driven architecture and integration design, and capable of mentoring developers while enforcing coding standards and best practices. Key Responsibilities: Team Leadership and Stakeholder Engagement: Lead and mentor development teams across multiple projects; communicate effectively with senior stakeholders, providing updates and risk assessments; foster a collaborative and productive development environment. Code Quality and Best Practices: Define and implement coding standards and quality assurance processes; conduct detailed code reviews to ensure high performance, security, and maintainability standards; encourage continuous learning and knowledge sharing among team members. Architecture and Design: Design scalable and secure APIs, events, integrations, and system adapters; collaborate with architects and product owners to refine technical requirements and solutions in an Agile setup; ensure alignment of design decisions with enterprise architecture and strategy. Cross-Functional Collaboration: Work with cross-functional teams to manage dependencies and coordinate deliverables; facilitate integration with multiple systems and services in an agile development environment. Hands-on Development: Act as a Subject Matter Expert (SME) in software development, providing technical guidance; write and review code in event-driven architecture to ensure best practices are followed; lead by example through active participation in key development activities. Qualifications: Experience: 10+ years of software development experience, with at least 3 years in a team leadership role; proven ability to lead distributed development teams and deliver enterprise-grade software processing high throughput and volumes. Technical Skills: Strong hands-on development experience with AWS cloud, Kafka Streams, Java, Spring and Spring Boot frameworks for distributed systems; proficiency in API design, JSON, and Avro schemas; familiarity with event-driven architecture, microservices integration, and scaling; experience with DevOps practices and CI/CD pipelines; exposure to containerization technologies (Docker, Kubernetes, ECS, EKS); experience in arriving at cost-optimal cloud solution architectures/designs that decrease cloud spend; experience in Node.js and AngularJS or a modern UI framework is an added plus. Soft Skills: Strong problem-solving and decision-making skills; excellent communication and leadership abilities; ability to manage time, priorities, and multiple tasks efficiently.
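The event-driven, Kafka Streams work this role describes centres on keyed stream aggregation. Kafka Streams itself is a Java DSL; as a rough, dependency-free illustration of the idea only, here is a toy Python model of a `groupByKey().count()` pass over an event stream (the event data and names are hypothetical):

```python
from collections import defaultdict

def count_by_key(events):
    """Toy model of Kafka Streams' groupByKey().count(): consume
    (key, value) events, keep a running count per key, and emit the
    updated count after each event (i.e. a changelog stream)."""
    counts = defaultdict(int)
    changelog = []
    for key, _value in events:
        counts[key] += 1
        changelog.append((key, counts[key]))
    return dict(counts), changelog

# Hypothetical ad-impression events keyed by campaign id
events = [("camp-1", "imp"), ("camp-2", "imp"), ("camp-1", "imp")]
totals, changelog = count_by_key(events)
print(totals)  # {'camp-1': 2, 'camp-2': 1}
```

In the real Java DSL the changelog would be a `KTable` backed by a state store and a compacted topic; the sketch only shows the per-key aggregation shape.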

Posted 2 weeks ago

Apply

3.0 - 8.0 years

15 - 30 Lacs

Bengaluru

Work from Office

We are hiring skilled Backend Developers to join our technology team supporting a top-tier client in the Retirement Pension Planning and Insurance domain. You'll work on large-scale enterprise data warehouse systems and develop robust, scalable data pipelines across real-time and batch environments. Roles & Responsibilities: Design, develop, and maintain scalable backend data pipelines using AWS Glue, PySpark, Lambda, and Kinesis. Implement both batch and real-time data ingestion and transformation flows using Alteryx. Collaborate with solution architects, analysts, and business stakeholders for data modeling and integration. Optimize data workflow performance, storage, and processing across multiple datasets. Troubleshoot data pipeline issues, maintain documentation, and ensure adherence to best practices. Work in agile teams and participate in sprint planning and code reviews. Technical Skills Required Must-Have: 3+ years of experience with AWS Glue, PySpark, and AWS Lambda Hands-on experience with AWS Kinesis or Amazon MSK Proficiency in scripting using Python Experience working with data warehouses and ETL frameworks Knowledge of batch and real-time data processing with Alteryx Good-to-Have: Understanding of data lake architectures and S3-based pipelines Familiarity with CI/CD tools for cloud deployment Basic knowledge of Data Governance tools or BI platforms (Tableau/Snowflake)
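The batch flow this listing describes (ingest, validate, transform, load) can be sketched generically. A real AWS Glue job would operate on PySpark DataFrames; this stdlib-only Python sketch, with hypothetical field names and a hypothetical unit-conversion rule, only shows the shape of such a transform:

```python
def run_batch_pipeline(records):
    """Toy ETL pass mirroring a Glue/PySpark batch job:
    filter invalid rows, normalise a field, then aggregate."""
    # Extract/validate: drop rows missing the amount field
    valid = [r for r in records if r.get("amount") is not None]
    # Transform: normalise currency units (paise -> rupees, hypothetical rule)
    transformed = [{**r, "amount": r["amount"] / 100} for r in valid]
    # Load/aggregate: total per policy id
    totals = {}
    for r in transformed:
        totals[r["policy_id"]] = totals.get(r["policy_id"], 0) + r["amount"]
    return totals

rows = [
    {"policy_id": "P1", "amount": 1000},
    {"policy_id": "P1", "amount": None},   # dropped by validation
    {"policy_id": "P2", "amount": 250},
]
print(run_batch_pipeline(rows))  # {'P1': 10.0, 'P2': 2.5}
```

In Glue the same stages would typically be a filter, a mapped column expression, and a groupBy aggregation over a DynamicFrame/DataFrame rather than plain dicts.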

Posted 2 weeks ago

Apply

7.0 - 11.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Position Overview: The End-to-End Testing Lead will be responsible for overseeing the entire testing process for Surest and USP integrations. This role involves managing the testing strategy, planning, execution, and reporting to ensure seamless integration and functionality across all project components. The ideal candidate will have extensive experience in end-to-end testing, particularly with Surest and USP systems, and will be adept at coordinating with various teams to achieve project goals.
Primary Responsibilities Testing Strategy and Planning: Develop and implement comprehensive end-to-end testing strategies for Surest and USP integrations Collaborate with project managers, business leads, and IT teams to define testing scope, objectives, and timelines Ensure alignment of testing activities with project requirements and timelines Test Execution and Management: Coordinate with cross-functional teams to ensure all interfaces and integrations are thoroughly tested Issue Resolution and Reporting: Identify, document, and track defects and issues throughout the testing lifecycle Provide regular updates on testing progress, issues, and risks to stakeholders Work with development teams to ensure timely resolution of defects and retesting Team Leadership and Coordination: Lead and mentor a team of testers, providing guidance and support to ensure high-quality testing Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Undergraduate degree or equivalent experience Experience with Kafka streaming, API and integration testing Experience in end-to-end testing, particularly with Surest and USP systems, would be a plus Solid background in managing testing projects and leading testing teams Familiarity with claims adjudication and payment processes Knowledge of various testing phases, including case install, enrollment, billing, and commissions Effective communication skills to share project progress and risks in project meetings and status reports Preferred Qualifications Experience with test automation and shift-left testing approaches At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Roles & Responsibilities: Design, build, and manage Kafka clusters using Confluent Platform and Kafka cloud services (AWS MSK, Confluent Cloud). Develop and maintain Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines. Monitor and ensure the reliability, scalability, and security of Kafka infrastructure. Collaborate with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, EC2, Redshift). Implement and manage Kafka Connect, Kafka Streams, and ksqlDB where applicable. Optimize Kafka performance, troubleshoot issues, and manage incident response. Preferred candidate profile: 4-6 years of experience working with Apache Kafka and Confluent Kafka. Strong knowledge of Kafka internals (brokers, ZooKeeper, partitions, replication, offsets). Experience with Kafka Connect, Schema Registry, REST Proxy, and Kafka security. Hands-on experience with AWS (EC2, IAM, CloudWatch, S3, Lambda, VPC, load balancers). Proficiency in scripting and automation using Terraform, Ansible, or similar tools. Familiarity with DevOps practices and tools (CI/CD pipelines; monitoring tools like Prometheus/Grafana, Splunk, Datadog, etc.). Experience with containerization (Docker, Kubernetes) is a plus.
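A point worth internalising for the topic/partition design work above: Kafka routes a keyed record to a partition by hashing its key, which is what preserves per-key ordering. Kafka's default partitioner uses murmur2; the sketch below substitutes CRC32 purely to illustrate the contract, so the exact partition numbers are not Kafka's:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Model of Kafka's keyed-partitioning contract: the same key always
    lands on the same partition, preserving per-key ordering.
    (Kafka's default partitioner uses murmur2; CRC32 is a stand-in here.)"""
    return zlib.crc32(key) % num_partitions

p = partition_for(b"household-42", 6)
assert partition_for(b"household-42", 6) == p  # deterministic: same key, same partition
# Changing the partition count remaps keys to different partitions,
# which is why expanding an existing keyed topic breaks per-key ordering.
```

The operational consequence is that partition counts should be sized up front for keyed topics, since repartitioning later changes key-to-partition assignments.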

Posted 2 weeks ago

Apply

6.0 - 11.0 years

4 - 9 Lacs

Bengaluru

Work from Office

SUMMARY Job Role: Apache Kafka Admin Experience: 6+ years Location: Pune (Preferred), Bangalore, Mumbai Must-Have: The candidate should have 6 years of relevant experience in Apache Kafka Job Description: We are seeking a highly skilled and experienced Senior Kafka Administrator to join our team. The ideal candidate will have 6-9 years of hands-on experience in managing and optimizing Apache Kafka environments. As a Senior Kafka Administrator, you will play a critical role in designing, implementing, and maintaining Kafka clusters to support our organization's real-time data streaming and event-driven architecture initiatives. Responsibilities: Design, deploy, and manage Apache Kafka clusters, including installation, configuration, and optimization of Kafka brokers, topics, and partitions. Monitor Kafka cluster health, performance, and throughput metrics and implement proactive measures to ensure optimal performance and reliability. Troubleshoot and resolve issues related to Kafka message delivery, replication, and data consistency. Implement and manage Kafka security mechanisms, including SSL/TLS encryption, authentication, authorization, and ACLs. Configure and manage Kafka Connect connectors for integrating Kafka with various data sources and sinks. Collaborate with development teams to design and implement Kafka producers and consumers for building real-time data pipelines and streaming applications. Develop and maintain automation scripts and tools for Kafka cluster provisioning, deployment, and management. Implement backup, recovery, and disaster recovery strategies for Kafka clusters to ensure data durability and availability. Stay up-to-date with the latest Kafka features, best practices, and industry trends and provide recommendations for optimizing our Kafka infrastructure. Requirements: 6-9 years of experience as a Kafka Administrator or similar role, with a proven track record of managing Apache Kafka clusters in production environments. 
In-depth knowledge of Kafka architecture, components, and concepts, including brokers, topics, partitions, replication, and consumer groups. Hands-on experience with Kafka administration tasks, such as cluster setup, configuration, performance tuning, and monitoring. Experience with Kafka ecosystem tools and technologies, such as Kafka Connect, Kafka Streams, and Confluent Platform. Proficiency in scripting languages such as Python, Bash, or Java. Strong understanding of distributed systems, networking, and Linux operating systems. Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues. Strong communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders.
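One concept behind the consumer-group and offset items above is the at-least-once delivery contract: a consumer commits its offset only after processing, so a crash between processing and commit causes redelivery on restart. A toy model, with no real Kafka client involved:

```python
def consume(records, committed_offset, crash_before_commit_at=None):
    """Toy model of at-least-once consumption: process, then commit.
    A crash after processing but before the commit means the records
    since the last committed offset are redelivered on restart."""
    processed = []
    offset = committed_offset
    for i, rec in enumerate(records[committed_offset:], start=committed_offset):
        processed.append(rec)
        if crash_before_commit_at == i:
            return processed, offset  # crash: this record's commit never happened
        offset = i + 1                # commit only after successful processing
    return processed, offset

records = ["r0", "r1", "r2"]
done, offset = consume(records, 0, crash_before_commit_at=1)
# "r1" was processed but its offset was not committed...
redelivered, offset = consume(records, offset)
# ...so the restarted consumer re-reads from offset 1 and sees "r1" twice.
```

This is why consumers in such systems are usually written to be idempotent, or use Kafka's transactional APIs when exactly-once processing is required.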

Posted 3 weeks ago

Apply

5.0 - 10.0 years

16 - 19 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Detailed job description - Skill Set: Proven experience as a Kafka Developer Knowledge of Kafka schemas and use of the Schema Registry Strong knowledge of Kafka and other big data technologies Best practices to optimize the Kafka ecosystem based on use case and workload Knowledge of Kafka clustering and its fault-tolerance model supporting high availability Strong fundamentals in Kafka client configuration and troubleshooting Designing and implementing data pipelines using Apache Kafka Develop and maintain Kafka-based data pipelines Monitor and optimize Kafka clusters Troubleshoot and resolve issues related to Kafka and data processing Ensure data security and compliance with industry standards Create and maintain documentation for Kafka configurations and processes Implement best practices for Kafka architecture and operations Mandatory Skills (only 2 or 3): Kafka Developer
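The Schema Registry knowledge called for above largely comes down to compatibility rules; for example, a BACKWARD-compatible change lets consumers on the new schema read data written with the old one, which in Avro terms means any newly added field must carry a default. The check below is a deliberate, stdlib-only simplification (real Avro schema resolution also covers type promotion, aliases, and removals):

```python
def backward_compatible(old_schema, new_schema):
    """Simplified model of Schema Registry's BACKWARD check for record
    schemas: a consumer using new_schema can read data written with
    old_schema as long as every field added in new_schema has a default."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for f in new_schema["fields"]:
        if f["name"] not in old_fields and "default" not in f:
            return False
    return True

old = {"fields": [{"name": "id"}]}
ok  = {"fields": [{"name": "id"}, {"name": "region", "default": "IN"}]}
bad = {"fields": [{"name": "id"}, {"name": "region"}]}
assert backward_compatible(old, ok)       # added field has a default
assert not backward_compatible(old, bad)  # added field without a default
```

In practice the registry enforces this server-side on schema registration, rejecting incompatible versions before producers can use them.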

Posted 3 weeks ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists. Do: Oversee and support the process by reviewing daily transactions on performance parameters Review the performance dashboard and the scores for the team Support the team in improving performance parameters by providing technical support and process guidance Record, track, and document all queries received, problem-solving steps taken and total successful and unsuccessful resolutions Ensure standard processes and procedures are followed to resolve all client queries Resolve client queries as per the SLAs defined in the contract Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting Document and analyze call logs to spot the most frequent trends to prevent future problems Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution Ensure all product information and disclosures are given to clients before and after the call/email requests Avoid legal challenges by monitoring compliance with service agreements Handle technical escalations through effective diagnosis and troubleshooting of client queries Manage and resolve technical roadblocks/escalations as per SLA and quality requirements If unable to resolve the issues, escalate the issues to TA & SES in a timely manner Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions Troubleshoot all client queries in a user-friendly, courteous and professional manner Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business Organize ideas and effectively communicate oral messages appropriate to listeners and situations Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client Mentor and guide Production Specialists on improving technical knowledge Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists Develop and conduct trainings (triages) within products for Production Specialists as per target Inform the client about the triages being conducted Undertake product trainings to stay current with product features, changes and updates Enroll in product-specific and any other trainings per client requirements/recommendations Identify and document the most common problems and recommend appropriate resolutions to the team Update job knowledge by participating in self-learning opportunities and maintaining personal networks Deliver: Performance parameters and measures: 1. Process: no. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT. 2. Team Management: productivity, efficiency, absenteeism. 3. Capability development: triages completed, Technical Test performance. Mandatory Skills: Kafka Integration. Experience: 5-8 Years.

Posted 3 weeks ago

Apply

10.0 - 13.0 years

35 - 50 Lacs

Chennai

Work from Office

Cognizant Hiring Payments BA!!! Location: Chennai, Bangalore, Hyderabad JD: Job Summary At least 10 years of experience in the BA role, including a couple of years in a BA lead role; good domain knowledge of SWIFT/ISO 20022 payments and stakeholder management; Java microservices and Spring Boot. Technical Knowledge: Java / Spring Boot, Kafka Streams, REST, JSON, Netflix microservices suite (Zuul, Eureka, Hystrix, etc.), 12-Factor Apps, Oracle, PostgreSQL, Cassandra & ELK. Ability to work with geographically dispersed and highly varied stakeholders. Responsibilities Strategy Develop the strategic direction and roadmap for our flagship payments platform, aligning with business strategy, tech and ops strategy, and investment priorities Tap into the latest industry trends and innovative products & solutions to deliver effective and faster product capabilities Support Cash Management Operations, leveraging technology to streamline processes, enhance productivity, reduce risk and improve controls Business Work hand in hand with the Payments business, taking product programs from investment decisions into design specifications, solutioning, development, implementation and hand-over to operations, securing support and collaboration from other teams Ensure delivery to business meeting time, cost and high-quality constraints Support respective businesses in growing return on investment, commercialization of capabilities, bid teams, monitoring of usage, improving client experience, enhancing operations and addressing defects & continuous improvement of systems Foster an ecosystem of innovation and enable business through technology Processes Responsible for the end-to-end deliveries of the technology portfolio comprising key business product areas such as Payments, Clearing, etc.
Own technology delivery of projects and programs across global markets that (a) develop/enhance core product capabilities, (b) ensure compliance with regulatory mandates, (c) support operational improvements, process efficiencies and the zero-touch agenda, and (d) build the payments platform to align with the latest technology and architecture trends for improved stability and scale. Interface with business & technology leaders of other systems for collaborative delivery.

Posted 3 weeks ago

Apply

6 - 10 years

3 - 8 Lacs

Chennai

Work from Office

Must-Have hands-on skills: Primary: Java 13, Spring Boot microservices, reactive REST API development, TDD (JUnit & Mockito), WebFlux DB: PostgreSQL, Couchbase Containerization: Docker, Kubernetes Build: Maven / Gradle Good-to-have (knowledge level is OK): Cloud: VMware private cloud OS: Linux experience, shell scripting CI/CD: Azure Pipelines Other skills: Splunk/Kafka integration, Ansible, New Relic Total Experience Expected: 8-10 years

Posted 2 months ago

Apply