29 Confluent Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview: We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities:
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration

Preferred Qualifications:
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Responsibilities:
- Design and implement scalable data architectures using AWS services, Confluent, and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
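For candidates gauging the hands-on side of a role like this, here is a minimal sketch of a Kafka producer with a delivery callback, assuming the confluent-kafka Python client; the broker address, topic, and payload are placeholders, not this employer's actual setup.

```python
from confluent_kafka import Producer

# Placeholder broker; a Confluent Cloud setup would add SASL_SSL credentials.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")

producer.produce("events.raw", key=b"order-1", value=b'{"amount": 42}',
                 callback=delivery_report)
producer.flush()  # block until all queued messages are delivered
```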

Posted 6 days ago

Apply

5.0 - 10.0 years

7 - 14 Lacs

Mumbai, Goregaon, Mumbai (All Areas)

Work from Office

Naukri logo

Opening for an insurance company. **Looking for someone with a 30-day notice period.** Location: Mumbai (Lower Parel)

Key Responsibilities:

Kafka Infrastructure Management: Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security. Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, ZooKeeper, and related components. Perform capacity planning and scaling of Kafka clusters based on application needs and growth.

Data Pipeline Development: Develop and optimize Kafka data pipelines to support real-time data streaming and processing. Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources. Implement and maintain the schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf).

Security and Compliance: Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL).

Documentation and Support: Create and maintain documentation for Kafka setup, configurations, and operational procedures.

Collaboration: Provide technical support and guidance to application development teams regarding Kafka usage and best practices. Collaborate with stakeholders to ensure alignment with business objectives.

Interested candidates, share your resume at snehal@topgearconsultants.com.
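As a rough illustration of the schema-registry work mentioned above, here is a sketch that serializes a record with Avro before producing, assuming the confluent-kafka Python client and a reachable Schema Registry; the endpoints, topic, and field names are hypothetical.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

schema_str = """
{"type": "record", "name": "Policy",
 "fields": [{"name": "policy_id", "type": "string"},
            {"name": "premium", "type": "double"}]}
"""

# Hypothetical endpoints; real deployments would also carry auth settings.
registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
serialize = AvroSerializer(registry, schema_str)
producer = Producer({"bootstrap.servers": "broker:9092"})

value = serialize({"policy_id": "P-100", "premium": 1250.0},
                  SerializationContext("policies", MessageField.VALUE))
producer.produce("policies", value=value)
producer.flush()
```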

Posted 6 days ago

Apply

8.0 - 13.0 years

25 - 35 Lacs

Chennai

Hybrid

Naukri logo

1. Objective

We are seeking a highly experienced and visionary Expert Platform Lead with 10+ years of expertise in Confluent Kafka administration, cloud-native infrastructure, and enterprise-scale streaming architecture. This role involves overseeing Kafka platform strategy, optimizing infrastructure through automation, ensuring cost-effective scalability, and working closely with cross-functional teams to enable high-performance data streaming solutions. The ideal candidate will drive innovation, establish best practices, and mentor teams to enhance platform reliability and efficiency.

2. Main Tasks

Kafka Platform Management: Define and execute platform strategy for Confluent Kafka, ensuring security, high availability, and scalability. Lead architecture design reviews, influencing decisions related to Kafka infrastructure and cloud deployment models. Oversee and maintain the Kafka platform in a 24/7 operational setting, ensuring high availability and fault tolerance. Establish monitoring frameworks, proactively identifying and addressing platform inefficiencies.

Leadership, Collaboration, and Support: Act as the primary technical authority on Kafka for enterprise-wide streaming architecture. Collaborate closely with application teams, architects, and vendors to align platform capabilities with business needs. Provide technical mentorship to engineers and architects, guiding best practices in Kafka integration and platform usage.

Infrastructure Automation and Optimization: Spearhead Infrastructure as Code (IaC) initiatives using Terraform for Kafka, AWS, and cloud resources. Drive automation across provisioning, deployment workflows, and maintenance operations, ensuring efficiency and resilience. Implement advanced observability measures to optimize costs and resource allocation while maintaining peak performance.

Governance, Documentation, and Compliance: Maintain detailed platform documentation, including configuration, security policies, and compliance standards. Track and analyze usage trends, ensuring cost-efficient resource utilization across streaming ecosystems. Establish governance frameworks, ensuring compliance with enterprise security policies and industry standards.

3. Technical Expertise

Education: Bachelor's or Master's degree (minimum four years) in Computer Science, Engineering, or a related field.

Required expertise for the function: 10+ years of experience in platform engineering, cloud infrastructure, and data streaming architectures. Extensive expertise in Kafka administration (preferably Confluent Kafka), leading enterprise-wide streaming initiatives. Proven track record of leading critical incident response and ensuring system uptime in a 24/7 environment.

Languages: English (mandatory).

Technical Skills: Expert knowledge of Kafka (Confluent), event-driven architectures, and high-scale distributed systems. Mastery of Terraform for infrastructure automation across AWS, Kubernetes, and cloud-native ecosystems. Strong proficiency in AWS services, networking principles, and security best practices. Advanced experience with CI/CD pipelines, version control (Git), and scripting (Bash, Python).

Soft Skills: Strategic problem-solving mindset, capable of leading large-scale technical decisions. Strong leadership and mentorship skills, able to guide teams toward technical excellence. Excellent communication, stakeholder management, and cross-functional collaboration abilities.

Preferred Skills: Kafka or Confluent certification, demonstrating deep platform expertise. AWS Solutions Architect certification or equivalent cloud specialization. Experience with monitoring tools (Prometheus, Grafana) and proactive alert management for 24/7 operations.
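One concrete form the monitoring work above can take is a consumer-lag probe; the sketch below uses the confluent-kafka Python client, with the group ID set to the application group being observed (broker, topic, and group names are placeholders).

```python
from confluent_kafka import Consumer, TopicPartition

# group.id must be the application group whose committed offsets we inspect.
consumer = Consumer({"bootstrap.servers": "broker:9092",
                     "group.id": "payments-app",
                     "enable.auto.commit": False})

tp = TopicPartition("payments", 0)
low, high = consumer.get_watermark_offsets(tp, timeout=10)
committed = consumer.committed([tp], timeout=10)[0].offset
# No commit yet means the whole retained backlog is outstanding.
lag = high - committed if committed >= 0 else high - low
print(f"payments[0] lag: {lag}")
consumer.close()
```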

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 17 Lacs

Pune

Remote

Naukri logo

Kafka/MSK on Linux: In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors. Understanding of Kafka topic design and creation. Good knowledge of replication and high availability for Kafka systems. ElasticSearch/OpenSearch.
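For context on the topic design and replication points above, a minimal sketch of creating a topic with explicit partition, replication, and durability settings via the confluent-kafka AdminClient; the names and values are illustrative only.

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker:9092"})

# 6 partitions, 3 replicas, and min.insync.replicas=2 for durable acks=all writes.
topic = NewTopic("orders.v1", num_partitions=6, replication_factor=3,
                 config={"min.insync.replicas": "2",
                         "retention.ms": "604800000"})  # 7 days

for name, future in admin.create_topics([topic]).items():
    future.result()  # raises if the broker rejected the request
    print(f"created {name}")
```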

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Naukri logo

Role & responsibilities: Looking for 5+ years of experience as a Confluent Kafka Administrator (Technology Lead).

Kafka Administrator Required Skills & Experience:
- Hands-on experience in Kafka cluster management
- Proficiency with Kafka Connect
- Knowledge of Cluster Linking and MirrorMaker
- Experience setting up Kafka clusters from scratch
- Experience with Terraform/Ansible scripts
- Ability to install and configure the Confluent Platform
- Understanding of rebalancing, Schema Registry, and REST Proxies
- Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists)

Interested candidates, share your updated resume at recruiter.wtr26@walkingtree.in.

Posted 1 week ago

Apply

10.0 - 20.0 years

30 - 45 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Work from Office

Naukri logo

Seeking a Kafka Platform Architect with expertise in Confluent Kafka, multi-cloud deployment (AWS/Azure/GCP), CI/CD, security, and governance to lead scalable, secure streaming architecture initiatives. Responsibilities include the design and implementation of Kafka infrastructure.

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Pune, Delhi / NCR, Mumbai (All Areas)

Hybrid

Naukri logo

Job Title: Data Engineer - Ingestion, Storage & Streaming (Confluent Kafka)

Job Summary: As a Data Engineer specializing in Ingestion, Storage, and Streaming, you will design, implement, and maintain robust, scalable, and high-performance data pipelines for the efficient flow of data through our systems. You will work with Confluent Kafka to build real-time data streaming platforms, ensuring high availability and fault tolerance. You will also ensure that data is ingested, stored, and processed efficiently and in real time to provide immediate insights.

Key Responsibilities:

Kafka-Based Streaming Solutions: Design, implement, and manage scalable and fault-tolerant data streaming platforms using Confluent Kafka. Develop real-time data streaming applications to support business-critical processes. Implement Kafka producers and consumers for ingesting data from various sources. Handle message brokering, processing, and event streaming within the platform.

Ingestion & Data Integration: Build efficient data ingestion pipelines to bring real-time and batch data from various data sources into Kafka. Ensure smooth data integration across Kafka topics and handle multi-source data feeds. Develop and optimize connectors for data ingestion from diverse systems (e.g., databases, external APIs, cloud storage).

Data Storage and Management: Manage and optimize data storage solutions in conjunction with Kafka, including topics, partitions, retention policies, and data compression. Work with distributed storage technologies to store large volumes of structured and unstructured data, ensuring accessibility and compliance. Implement strategies for schema management, data versioning, and data governance.

Data Streaming & Processing: Leverage Kafka Streams and other stream processing frameworks (e.g., Apache Flink, ksqlDB) to process real-time data and provide immediate analytics. Build and optimize data processing pipelines to transform, filter, aggregate, and enrich streaming data.

Monitoring, Optimization, and Security: Set up and manage monitoring tools to track the performance of Kafka clusters, ingestion, and streaming pipelines. Troubleshoot and resolve issues related to data flows, latency, and failures. Ensure data security and compliance by enforcing appropriate data access policies and encryption techniques.

Collaboration and Documentation: Collaborate with data scientists, analysts, and other engineers to align data systems with business objectives. Document streaming architecture, pipeline workflows, and data governance processes to ensure system reliability and scalability. Provide regular updates on streaming and data ingestion pipeline performance and improvements to stakeholders.

Required Skills & Qualifications:

Experience: 3+ years of experience in data engineering, with a strong focus on Kafka, data streaming, ingestion, and storage solutions. Hands-on experience with Confluent Kafka, Kafka Streams, and related Kafka ecosystem tools. Experience with stream processing and real-time analytics frameworks (e.g., ksqlDB, Apache Flink).

Technical Skills: Expertise in Kafka Connect, Kafka Streams, and Kafka producer/consumer APIs. Proficiency in data ingestion and integration techniques from diverse sources (databases, APIs, etc.). Strong knowledge of cloud data storage and distributed systems. Experience with programming languages like Java, Scala, or Python for Kafka integration and stream processing. Familiarity with tools such as Apache Spark, Flink, Hadoop, or other data processing frameworks. Experience with containerization and orchestration tools such as Docker and Kubernetes.
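To make the producer/consumer responsibilities concrete, here is a minimal consumer-loop sketch using the confluent-kafka Python client; the topic, group ID, and process() handler are placeholders.

```python
from confluent_kafka import Consumer

consumer = Consumer({"bootstrap.servers": "broker:9092",
                     "group.id": "ingest-pipeline",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["ingest.events"])

def process(payload: bytes) -> None:
    print(payload)  # stand-in for real transformation/loading logic

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        process(msg.value())
finally:
    consumer.close()  # commit final offsets and leave the group cleanly
```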

Posted 2 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview: We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities:
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration

Preferred Qualifications:
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Responsibilities:
- Design and implement scalable data architectures using AWS services, Confluent, and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
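As a rough sketch of the Lambda-plus-Kafka pattern this posting describes, the handler below forwards SNS-triggered records to a Kafka topic; the environment variable, topic name, and event shape assume a standard SNS trigger, and everything here is illustrative rather than the employer's actual design.

```python
import os
from confluent_kafka import Producer

# Reused across warm Lambda invocations; BOOTSTRAP_SERVERS is a placeholder env var.
producer = Producer({"bootstrap.servers": os.environ["BOOTSTRAP_SERVERS"]})

def handler(event, context):
    # Standard SNS trigger shape: Records[].Sns.Message
    for record in event.get("Records", []):
        producer.produce("ingest.events",
                         value=record["Sns"]["Message"].encode("utf-8"))
    producer.flush()
    return {"statusCode": 200}
```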

Posted 2 weeks ago

Apply

5.0 - 10.0 years

6 - 11 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Naukri logo

Role & responsibilities: Kafka Administrator

Required Skills & Experience:
- 5+ years of relevant experience
- Hands-on experience in Kafka cluster management
- Proficiency with Kafka Connect
- Knowledge of Cluster Linking and MirrorMaker
- Experience setting up Kafka clusters from scratch
- Experience with Terraform/Ansible scripts
- Ability to install and configure the Confluent Platform
- Understanding of rebalancing, Schema Registry, and REST Proxies
- Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists)

Share your updated resume at recruiter.wtr26@walkingtree.in.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Naukri logo

Role & responsibilities:
- Administer and maintain Apache Kafka clusters, including installation, upgrades, configuration, and performance tuning.
- Design and implement Kafka topics, partitions, replication, and consumer groups.
- Ensure high availability and scalability of Kafka infrastructure in production environments.
- Monitor Kafka health and performance using tools like Prometheus, Grafana, Confluent Control Center, etc.
- Implement and manage security configurations such as SSL/TLS, authentication (Kerberos/SASL), and access control.
- Collaborate with development teams to design and configure Kafka-based integrations and data pipelines.
- Perform root cause analysis of production issues and ensure timely resolution.
- Create and maintain documentation for Kafka infrastructure and configurations.

Required Skills:
- Strong expertise in Kafka administration, including hands-on experience with open-source and/or Confluent Kafka.
- Experience with Kafka ecosystem tools (Kafka Connect, Kafka Streams, Schema Registry).
- Proficiency in Linux-based environments and scripting (Bash, Python).
- Experience with monitoring/logging tools and Kafka performance optimization.
- Ability to work independently and proactively manage Kafka environments.
- Familiarity with DevOps tools and CI/CD pipelines (e.g., Jenkins, Git, Ansible).

Preferred Skills:
- Experience with managed Kafka services on cloud platforms (AWS, GCP, or Azure).
- Knowledge of messaging alternatives like RabbitMQ, Pulsar, or ActiveMQ.
- Working knowledge of Docker and Kubernetes for Kafka deployment.
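For the security items above, client-side hardening often reduces to a handful of librdkafka settings; here is a sketch for a SASL_SSL listener with SCRAM authentication, with placeholder credentials and certificate paths.

```python
from confluent_kafka import Producer

# Assumes the broker exposes a SASL_SSL listener with SCRAM-SHA-512 enabled.
conf = {
    "bootstrap.servers": "broker1:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "SCRAM-SHA-512",
    "sasl.username": "svc-pipeline",                  # placeholder service account
    "sasl.password": "********",
    "ssl.ca.location": "/etc/kafka/secrets/ca.pem",   # CA that signed the broker certs
}
producer = Producer(conf)
```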

Posted 2 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Introduction: A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your role and responsibilities: As a Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and maintain systems for the IBM client business. In this role, you will lead the problem resolution process for our clients, from analysis and troubleshooting to deploying the latest software updates and fixes. Your primary responsibilities include:
- 24x7 Observability: Be part of a worldwide team that monitors the health of production systems and services around the clock, ensuring continuous reliability and optimal customer experience.
- Cross-Functional Troubleshooting: Collaborate with engineering teams to provide initial assessments and possible workarounds for production issues. Troubleshoot and resolve production issues effectively.
- Deployment and Configuration: Leverage Continuous Delivery (CI/CD) tools to deploy services and configuration changes at enterprise scale.
- Security and Compliance Implementation: Implement security measures that meet or exceed industry standards for regulations such as GDPR, SOC2, ISO 27001, PCI, HIPAA, and FBA.
- Maintenance and Support: Apply Couchbase security patches and upgrades, support Cassandra and Mongo in the pager duty rotation, and collaborate with Couchbase product support for issue resolution.

Required education: Bachelor's degree

Required technical and professional expertise:
- 10+ years working in a high-performance engineering team
- Experience in cloud server management and troubleshooting, networking, Windows server management, AWS cloud and automation, cloud monitoring, GitHub, Kubernetes, and Linux
- 10+ years of working knowledge of one or more operating systems: RHEL, CentOS Linux, and Windows Server
- Working knowledge of ServiceNow, JIRA, Confluent, and GitHub

Preferred technical and professional experience:
- In-depth understanding of and working knowledge with server technologies
- Working knowledge of how virtualization, network, and storage technologies work in data center and cloud environments
- ITIL Foundation V4 certification is a plus
- Excellent verbal and written communication skills
- Highly responsible, motivated, and able to work with little direction
- Ability to troubleshoot complex problems and customer issues

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Chennai

Work from Office

Naukri logo

Role Summary: We are seeking a highly strategic Functional Analyst with 5 to 10 years of experience in gathering requirements and designing events and processes for event-driven architecture (EDA) projects. This role serves as a critical link between business stakeholders and technical teams, ensuring seamless collaboration, accurate requirement translation, and the efficient development of event-driven systems. The ideal candidate will possess strong analytical skills, a technical background in Java or integration, and extensive experience in leading requirement gathering, documentation, and testing efforts within large-scale EDA implementations.

Key Responsibilities:

Stakeholder Engagement and Requirement Analysis: Lead discussions between business stakeholders, architects, and producer/consumer application teams to define EDA-based solutions. Facilitate alignment on business objectives, ensuring technical feasibility and an optimal integration strategy. Guide requirement-gathering sessions by leveraging deep knowledge of EDA, event streams, and system design.

Functional Documentation and Translation: Develop and refine high-level functional specifications, event definitions, and system workflows to support business needs. Create and maintain comprehensive data models, process flows, and integration design documents, ensuring scalability and efficiency. Drive improvements in documentation quality and standardization across functional teams.

Build and Development Support: Work closely with architects and developers to ensure EDA design principles are accurately implemented. Provide advanced functional support in system integration, event sourcing, and business workflow mapping. Identify and address bottlenecks in functional design, recommending optimal solutions.

Testing and Validation: Lead the User Acceptance Testing (UAT) strategy, focusing on validating event interactions and data flow consistency. Develop and refine UAT test cases, execution plans, and validation frameworks for event-driven applications. Work alongside QA and business teams to track defects, troubleshoot issues, and optimize workflows before production rollout.

Qualifications:

Experience: 5 to 10 years of experience as a Functional Analyst or Business Analyst in EDA, microservices, or enterprise integration projects. Experience with logistics and related processes. Proven expertise in leading requirement workshops and translating business needs into process, event, and functional designs. Experience coordinating between cross-functional technical and business teams in high-scale environments.

Technical Skills: Deep understanding of event-driven architecture principles, including event sourcing, pub/sub models, and streaming technologies. Strong experience with functional documentation (use cases, process flows, event modeling, and API integration). Proficiency with diagramming and documentation tools (e.g., Visio, Lucidchart, Confluence). Hands-on knowledge of integration patterns and event-driven workflows, ensuring optimal data flow and system performance.

Soft Skills: Exceptional communication and stakeholder management skills, with a strategic mindset for bridging business and technical perspectives. Strong analytical and problem-solving abilities, particularly in complex integration and workflow scenarios. Highly organized, detail-oriented, and adaptable to dynamic business needs.

Preferred Skills: Experience working in Agile environments, leading functional discussions within Scrum and Kanban teams. Exposure to EDA platforms and tools such as Kafka, Confluent, and cloud-based event management solutions. Experience with event streaming in Kafka. Familiarity with Java is a plus.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

Roles & Responsibilities:
- Design, build, and manage Kafka clusters using Confluent Platform and Kafka cloud services (AWS MSK, Confluent Cloud).
- Develop and maintain Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines.
- Monitor and ensure the reliability, scalability, and security of Kafka infrastructure.
- Collaborate with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, EC2, Redshift).
- Implement and manage Kafka Connect, Kafka Streams, and ksqlDB where applicable.
- Optimize Kafka performance, troubleshoot issues, and manage incident response.

Preferred candidate profile:
- 4-6 years of experience working with Apache Kafka and Confluent Kafka.
- Strong knowledge of Kafka internals (brokers, ZooKeeper, partitions, replication, offsets).
- Experience with Kafka Connect, Schema Registry, REST Proxy, and Kafka security.
- Hands-on experience with AWS (EC2, IAM, CloudWatch, S3, Lambda, VPC, load balancers).
- Proficiency in scripting and automation using Terraform, Ansible, or similar tools.
- Familiarity with DevOps practices and tools (CI/CD pipelines; monitoring tools like Prometheus/Grafana, Splunk, Datadog, etc.).
- Experience with containerization (Docker, Kubernetes) is a plus.
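The connector work above is typically driven through the Kafka Connect REST API; below is a hedged sketch registering an S3 sink connector, with a hypothetical Connect URL, bucket, and topic.

```python
import requests

connector = {
    "name": "s3-sink-orders",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "topics": "orders",
        "s3.bucket.name": "example-data-lake",   # placeholder bucket
        "s3.region": "ap-south-1",
        "flush.size": "1000",
        "tasks.max": "1",
    },
}

resp = requests.post("http://connect.internal:8083/connectors",
                     json=connector, timeout=30)
resp.raise_for_status()
print(resp.json()["name"], "registered")
```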

Posted 3 weeks ago

Apply

6.0 - 11.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

SUMMARY

Job Role: Apache Kafka Admin
Experience: 6+ years
Location: Pune (preferred), Bangalore, Mumbai
Must-Have: The candidate should have 6 years of relevant experience in Apache Kafka

Job Description: We are seeking a highly skilled and experienced Senior Kafka Administrator to join our team. The ideal candidate will have 6-9 years of hands-on experience in managing and optimizing Apache Kafka environments. As a Senior Kafka Administrator, you will play a critical role in designing, implementing, and maintaining Kafka clusters to support our organization's real-time data streaming and event-driven architecture initiatives.

Responsibilities:
- Design, deploy, and manage Apache Kafka clusters, including installation, configuration, and optimization of Kafka brokers, topics, and partitions.
- Monitor Kafka cluster health, performance, and throughput metrics, and implement proactive measures to ensure optimal performance and reliability.
- Troubleshoot and resolve issues related to Kafka message delivery, replication, and data consistency.
- Implement and manage Kafka security mechanisms, including SSL/TLS encryption, authentication, authorization, and ACLs.
- Configure and manage Kafka Connect connectors for integrating Kafka with various data sources and sinks.
- Collaborate with development teams to design and implement Kafka producers and consumers for building real-time data pipelines and streaming applications.
- Develop and maintain automation scripts and tools for Kafka cluster provisioning, deployment, and management.
- Implement backup, recovery, and disaster recovery strategies for Kafka clusters to ensure data durability and availability.
- Stay up to date with the latest Kafka features, best practices, and industry trends, and provide recommendations for optimizing our Kafka infrastructure.

Requirements:
- 6-9 years of experience as a Kafka Administrator or in a similar role, with a proven track record of managing Apache Kafka clusters in production environments.
- In-depth knowledge of Kafka architecture, components, and concepts, including brokers, topics, partitions, replication, and consumer groups.
- Hands-on experience with Kafka administration tasks such as cluster setup, configuration, performance tuning, and monitoring.
- Experience with Kafka ecosystem tools and technologies such as Kafka Connect, Kafka Streams, and Confluent Platform.
- Proficiency in scripting languages such as Python, Bash, or Java.
- Strong understanding of distributed systems, networking, and Linux operating systems.
- Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues.
- Strong communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders.
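A small example of the health-monitoring side of this role: walking cluster metadata with the confluent-kafka AdminClient to spot topics whose replication factor drifts from the standard; the broker address is a placeholder.

```python
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "broker:9092"})
metadata = admin.list_topics(timeout=10)

print(f"brokers online: {len(metadata.brokers)}")
for name, topic in sorted(metadata.topics.items()):
    # Set of replica counts across partitions; healthy topics show one uniform value.
    replica_counts = {len(p.replicas) for p in topic.partitions.values()}
    print(f"{name}: {len(topic.partitions)} partitions, replicas {sorted(replica_counts)}")
```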

Posted 3 weeks ago

Apply

2 - 7 years

6 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions, and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies, such as Business Process Management (BPM) and Robotics, as enablers for efficient and effective implementations.

We are looking for a Data Engineer (AWS, Confluent & SnapLogic).

- Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching.
- Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading.
- Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval.
- Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making.
- Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing.
- Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing.
- ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading.
- Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.

You'd describe yourself as having:
- Experience: 3+ years of relevant experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow.
- Technical Skills: Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging.
- Programming Skills: Proficiency in Python, SQL, and other relevant programming languages.
- Data Modeling: Experience with data modeling and database design.
- Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.

Preferred Qualities:
- Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality.
- Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
- Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment.
- Team Player: Strong team player with a collaborative mindset.
- Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
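Since the role pairs Kafka with Airflow orchestration, here is a minimal DAG sketch; the DAG ID, schedule, and task bodies are placeholders, not Siemens' actual pipeline.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # e.g., pull a batch from a source system

def load():
    pass  # e.g., publish to Kafka or write to the lake

with DAG(dag_id="example_ingest",          # hypothetical DAG ID
         start_date=datetime(2024, 1, 1),
         schedule_interval="@hourly",
         catchup=False) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```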

Posted 1 month ago

Apply

8 - 12 years

22 - 32 Lacs

Bangalore Rural, Bengaluru

Hybrid

Naukri logo

Confluent Kafka Admin

Responsibilities:
- Senior Confluent Kafka admin with 10 years of total IT experience and a strong engineering background.
- Experience setting up and managing Confluent Kafka clusters and monitoring their performance and distribution.
- Design, configure, and manage RBAC and multi-tenancy.
- Experience with Confluent Kafka on-prem as well as Confluent Kafka Cloud.
- Manage all Kafka configurations via Ansible.
- Good experience with setting up DR for Confluent Kafka instances.
- Coordinate with different development teams and manage their connectivity and usage of the Kafka cluster.
- Document, maintain, and present best engineering practice strategies to ensure near-term changes are aligned with long-term release objectives.
- Collaborate with infrastructure and other backend teams on software upgrades, disaster recovery, and other efforts.
- Good experience using Confluent Kafka, with at least 5 years in administration (i.e., cluster management and client integration).
- Knowledge of and experience with Git and Ansible.

Notice period: Immediate to 30 days.

Posted 1 month ago

Apply

2 - 6 years

8 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Lead Data Architect (Streaming)

Required Skills and Qualifications:
- Overall 10+ years of IT experience, of which 7+ years are in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration
- Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications:
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Key Responsibilities:
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Evaluate and recommend new technologies to improve data architecture

Position Overview: We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 1 month ago

Apply

6 - 11 years

12 - 16 Lacs

Karnataka

Work from Office

Naukri logo

Job Title: Cloud DevOps Automation Engineer

We are looking for DevOps Automation Engineers with back-end web application and systems-level experience to join our Fabric Development Automation team. Our passion for innovation and winning in the cloud marketplace is infectious, and we hope you will feel it with us. The Fabric Development team is dedicated to ensuring that the IBM Cloud is at the forefront of cloud technology, from API design to application architecture to flexible infrastructure services. We are running IBM's current-generation cloud platform to deliver performance and predictability for our customers' most demanding workloads, at global scale and with leadership efficiency, resiliency, and security. It is an exciting time, and as a team we are driven by this incredible opportunity to thrill our clients. The Development Automation Team sits at the center of our larger development effort. Team members work in areas that are used by the larger development organization and are required to work with developers and stakeholders in other teams to help solve problems.

Roles & Responsibilities:
- Implement and automate infrastructure solutions that support IBM Cloud products and infrastructure
- Build and set up test automation and pipeline frameworks
- Administer automated CI/CD systems and tools for development and test teams
- Support the compliance and security integrity of the environment
- Partner with other teams, managers, and program managers to develop alerting and monitoring for mission-critical services
- Support development of new capabilities, and enhance existing ones, for our compute infrastructure services
- Provide technical escalation support for other Infrastructure Operations teams

Required technical and professional expertise:
- 5+ years as an infrastructure engineer with a proven record of delivering high-quality, large-scale solutions
- 5+ years of working knowledge of one or more operating systems: RHEL, CentOS Linux, and Windows Server
- Working knowledge of one or more virtualization technologies: Citrix Hypervisor, VMware vSphere, Ubuntu KVM, etc.
- Working knowledge of one or more programming tools: Bash, PowerShell, Python, Ruby, and Go
- Working knowledge of one or more key infrastructure tools/products: Active Directory, Ansible, Chef, etc.
- Working knowledge of container technologies: Kubernetes, Docker, etc.
- Working knowledge of monitoring technologies: Zabbix, Splunk, etc.
- Working knowledge of network and storage technologies
- Working knowledge of ServiceNow, JIRA, Confluent, and GitHub

Desired additional qualifications and skills (nice to have):
- Experience with message queues, PostgreSQL/MySQL databases, and NoSQL databases
- Experience with technologies enabling reliable data processing pipelines, such as Kafka, Elasticsearch, and Splunk; database and data visualization technologies for operations, such as SQL databases, InfluxDB, Grafana, and Kibana
- Experience with event monitoring/management ecosystems like Zabbix, Nagios, Sysdig, LogDNA, and ServiceNow

Languages required: English

Posted 2 months ago

Apply

7 - 10 years

9 - 12 Lacs

Chennai, Hyderabad

Work from Office

Naukri logo

The Impact You Will Have in This Role: The Systems Engineering team is responsible for the entire technical effort to evolve and verify solutions that satisfy client needs. The main focus is centered on reducing risk and improving the efficiency, performance, stability, security, and quality of all systems and platforms. The Platform Hosting & Engineering role specializes in the development, implementation, and support of all distributed, mainframe, and network hosting services across the firm. It is responsible for the installation, configuration, programming, and support of operating systems, complex networks, and distributed environments, deploying solutions to increase overall system reliability across the firm.

What You'll Do:
- Design, develop, and deploy applications using Chef, Ansible, Jenkins, and GitLab within a robust Continuous Integration and Continuous Deployment (CI/CD) pipeline.
- Orchestrate the seamless installation and configuration of middleware software systems for smooth integration and operation.
- Leverage OpenShift to containerize applications, enabling easy movement across various environments and efficient scaling as needed.
- Use AWS cloud services to efficiently handle and support various aspects of the organization's infrastructure.
- Manage the lifecycle of SSL certificates: request, renewal, retirement, and revocation.
- Ensure the application of necessary security patches to the servers to safeguard against potential vulnerabilities.
- Collaborate with various teams on projects, propose solutions, and address technical issues.
- Define performance standards for the middleware systems and continuously monitor their performance to ensure they meet established benchmarks.
- Automate installation and configuration of middleware software; write scripts and utilities for repeatable, reusable tasks and operations.
- Modify existing software to correct errors, adapt it to new hardware, or upgrade interfaces and improve performance.
- Monitor functioning of equipment to ensure the system operates in conformance with specifications.
- Store, retrieve, and manipulate data for analysis of system capabilities and requirements.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.

Talents Needed for Success:
- A minimum of 7+ years of proven relevant experience.
- CI/CD tools: Atlassian Bitbucket, Maven, GitHub, Jenkins, Jira.
- Containerization and orchestration: Docker, Kubernetes, Red Hat OpenShift.
- Application servers: Confluent, Apache Tomcat, Red Hat JBoss Web Server, Liberty, and IBM WebSphere.
- Configuration management and Infrastructure as Code (IaC): Chef, Terraform, Puppet, Ansible.
- Cloud computing platforms: Amazon Web Services (AWS).
- Build and dependency management: Jenkins, Maven, and Gradle.
- Streaming platform: Confluent is a plus.
- Operating systems and scripting: Red Hat Enterprise Linux, Bash shell script, JavaScript, and Python.
- Installation, configuration, and patching/upgrade of application server software such as Tomcat, Apache web server, etc.
- Good experience in SSL, including one-way SSL and mutual authentication.
- Strong troubleshooting skills.
- Monitoring: Dynatrace, Prometheus, Grafana, Splunk, AWS CloudWatch.
- Software development methodologies: DevOps, Agile, Scrum, Kanban, and Waterfall.

Nice to have: Certification in Red Hat OpenShift Container Platform.
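The certificate lifecycle duties above often start with knowing what is about to expire; here is a small sketch using only the Python standard library (the host is a placeholder).

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return days until the server's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

print(cert_days_remaining("example.com"))
```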

Posted 2 months ago

Apply

10 - 12 years

32 - 37 Lacs

Bengaluru

Work from Office

Naukri logo

Are you passionate about working with innovative technologies? Do you have experience in DevOps engineering and a strong focus on Azure DevOps and Terraform? We are looking for a talented DevOps Engineer to join our SAP Development & Integration CoE team at Novo Nordisk. If you are ready for the next step in your career and want to be part of a global healthcare company that is making a difference in the lives of millions, read on and apply today for a life-changing career. Apply now!

The position: As a Senior IT Developer I at Novo Nordisk, you will develop and maintain integrations as well as APIs using Confluent Kafka as a message broker. You will have the opportunity to:
- Design, build, and maintain CI/CD pipelines using Azure DevOps.
- Develop and manage infrastructure as code (IaC) using Terraform.
- Implement software development best practices to ensure high-quality, scalable, and secure solutions.
- Design, develop, and maintain Kafka-based solutions.
- Work with cross-functional teams to understand business requirements and translate them into technical requirements.
- Develop and maintain documentation for Kafka-based solutions.
- Troubleshoot and debug issues in Kafka-based solutions.
- Optimize Kafka-based solutions for performance, scalability, and availability.
- Collaborate with global teams to define and deliver projects.

Qualifications: To be successful in this role, you should have:
- A master's or bachelor's degree in computer science, IT, or a related field, with a total of 10+ years of experience.
- 6+ years of relevant experience as a DevOps Engineer, with a strong focus on Azure DevOps and Terraform.
- A solid understanding of software development best practices.
- Basic knowledge of Confluent Kafka, AWS, and integrations in general.
- Knowledge of Kafka architecture, Kafka connectors, and Kafka APIs.
- Experience implementing Kafka-based solutions in a cloud environment (AWS, Azure, Google Cloud, etc.).
- Ability to troubleshoot and debug complex issues in Kafka-based solutions.
- Experience with platform technologies such as Confluent Cloud Kafka, AWS EKS, Kafka Connect, KSQL, and Schema Registry, and with security.
- Experience with CI/CD tools such as Azure DevOps, Azure Pipelines, Terraform, and Helm charts.
- Experience using Agile, Scrum, and iterative development practices.
- Good communication skills and the ability to work with global teams to define and deliver on projects.
- A self-driven, fast-learning attitude with a high sense of ownership.

Posted 2 months ago

Apply

7 - 12 years

5 - 9 Lacs

Jaipur

Work from Office

Naukri logo

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Apache Kafka
Good-to-have skills: NA
Minimum 7.5 years of experience is required
Educational Qualification: Minimum 15 years of full-time education

Key Responsibilities:
A. Strong experience as an administrator/platform engineer for Kafka
B. Expertise in Confluent Kafka administration
C. Experience implementing Kafka on Confluent Cloud
D. Hands-on experience with Kafka clusters hosted on cloud and on-prem platforms
E. Design, build, assemble, and configure application or technical architecture components using business requirements
F. A plus to have AWS expertise and familiarity with CI/CD DevOps, in addition to skills in Spring Boot, Microservices, and Angular

Technical Experience:
A. Token-based auth, OAuth, basic auth, keypair concepts, OpenSSL library
B. Manage Kafka clusters in on-prem and cloud environments
C. Confluent Cloud backup and restore for data
D. Kafka load balancing and autoscaling on the basis of load
E. Confluent Control Center and KSQL knowledge is a must

Professional Attributes:
A. Interpersonal skills, along with the ability to work in a team
B. Good presentation skills

Qualifications: Minimum 15 years of full-time education

Posted 3 months ago

Apply

6 - 8 years

10 - 15 Lacs

Bengaluru, Hyderabad, Gurgaon

Work from Office

Naukri logo

Application Integration Engineer
Experience Level: 6-8 years
Skills: Python, AWS S3, AWS MWAA (Airflow), Confluent Kafka, API development

- Experienced Python developer with very good experience with Confluent Kafka and Airflow; API development experience using Python; good experience with AWS cloud services; very good experience with DevOps processes and CI/CD tools like Git, Jenkins, AWS ECR/ECS, AWS EKS, etc.
- Performs requirements analysis of FRs/NFRs and prepares technical designs based on requirements.
- Builds code based on the technical design.
- Can independently resolve technical issues and also help other team members with technical issue resolution.
- Helps with testing and efficiently fixes bugs.
- Follows the DevOps CI/CD processes and change management processes for any code deployment.

Posted 3 months ago

Apply

5 - 10 years

15 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

5+ years of experience: Confluent Kafka, Kafka Connect, Kafka clusters, ZooKeeper.

Posted 3 months ago

Apply

3 - 5 years

10 - 11 Lacs

Mohali

Work from Office

Naukri logo

Responsibilities:

Data Pipeline Development: Design, develop, and maintain scalable and reliable data pipelines using Python, Spark, and other relevant technologies. Extract, transform, and load (ETL) data from various sources into data lakes and data warehouses. Implement data quality checks and monitoring to ensure data accuracy and consistency.

Big Data Technologies: Work with Hadoop, Spark, Confluent Kafka, and other big data technologies to process and analyze large datasets. Build and maintain data lakes for storing structured and unstructured data. Optimize data processing workflows for performance and efficiency.

Cloud Platform Integration: Develop and deploy data solutions on cloud platforms such as Azure or Google Cloud. Utilize cloud-based data services like Azure Data Factory, Redshift, Snowflake, or BigQuery. Implement cloud-based data storage and processing solutions.

Database Management: Design and implement database schemas in PostgreSQL and other SQL/NoSQL databases. Write complex SQL queries for data extraction and analysis. Optimize database performance and ensure data integrity.

Data Streaming: Implement data streaming solutions using Confluent Kafka. Build real-time data pipelines for data ingestion and processing.

Data Warehousing: Design and implement data warehousing solutions. Work with data models and dimensional modeling.

Key skills: Hadoop, Spark, Confluent Kafka, data lakes, PostgreSQL, Azure Data Factory, etc.; Python, Scala, or Java; Confluent; Azure; Google Cloud; SQL and NoSQL databases; Redshift; Snowflake; BigQuery.
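To ground the Spark-plus-Kafka streaming item, here is a minimal PySpark Structured Streaming sketch reading from Kafka and landing files in a data lake; the broker, topic, and paths are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

# Continuous read from a Kafka topic (requires the spark-sql-kafka package on the classpath).
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka values arrive as bytes; cast to string before writing out.
query = (stream.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream.format("parquet")
         .option("path", "s3a://example-lake/events/")             # placeholder sink
         .option("checkpointLocation", "s3a://example-lake/_chk/")  # placeholder checkpoint
         .start())

query.awaitTermination()
```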

Posted 3 months ago

Apply