4.0 - 8.0 years
6 - 10 Lacs
Mumbai
Work from Office
Technical Skills:
- Administration of the Confluent Kafka platform on-premises and in the cloud
- Knowledge of Confluent Kafka operations
- Administration of topics, partitions, consumer groups, and ksqlDB queries to maintain optimal performance (see the sketch after this posting)
- Knowledge of the Kafka ecosystem, including Kafka brokers, ZooKeeper/KRaft, ksqlDB, connectors, Schema Registry, Control Center, and platform interoperability
- Knowledge of Kafka Cluster Linking and replication
- Experience administering multi-regional Confluent clusters

System Performance:
- Knowledge of performance tuning of messaging systems to meet application requirements

Performance Tools:
- Prometheus and Grafana metrics for Kafka implementations

Development Tools/Languages:
- Familiarity with tools such as CP-Ansible, Java, and shell scripting

Operating Systems:
- Red Hat Linux

Soft Skills:
- Communication: ability to articulate complex technical scenarios in a straightforward manner to stakeholders at different levels
- Critical thinking: evaluation of design decisions, trade-offs, and potential future challenges
- Attention to detail: especially crucial when analyzing system design documentation, error messages, and complex message flows
- Teamwork: ability to collaborate with architects, developers, system administrators, and other roles involved in system administration and implementation
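As a rough illustration of the topic and partition administration this role describes, here is a minimal sketch using the confluent-kafka Python AdminClient; the broker address and topic name are hypothetical, not taken from the posting.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Connect to a (hypothetical) broker.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Inspect existing topics and their partition counts.
metadata = admin.list_topics(timeout=10)
for name, topic in metadata.topics.items():
    print(f"{name}: {len(topic.partitions)} partitions")

# Create a topic with explicit partitioning and replication.
futures = admin.create_topics(
    [NewTopic("orders.events", num_partitions=6, replication_factor=3)]
)
for topic_name, future in futures.items():
    try:
        future.result()  # Raises if creation failed (e.g., topic already exists).
        print(f"Created {topic_name}")
    except Exception as exc:
        print(f"Could not create {topic_name}: {exc}")
```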
Posted 1 month ago
8.0 - 13.0 years
35 - 70 Lacs
Bengaluru
Work from Office
Job Title: Senior Kafka Engineer
Location: Bangalore (Work from Office)
Shift: Evening Shift (ends by 11 PM IST)
Experience: 8+ years
Interview: Virtual drive, Saturday 11 AM to 5 PM

About the Role
We're seeking a highly experienced Senior Kafka Engineer to join our fast-paced engineering team in Bangalore. This role requires deep expertise in Apache Kafka, the Confluent Platform, and cloud-native tooling to support our real-time data streaming infrastructure. If you are passionate about building scalable, fault-tolerant data pipelines and mentoring others, we'd love to talk to you!

Key Responsibilities
- Manage and enhance the existing Apache Kafka and Confluent Platform deployment on AWS.
- Review existing implementations and recommend architectural or performance improvements.
- Collaborate with engineering and product teams to integrate new use cases and define scalable streaming patterns.
- Implement and maintain Kafka producers/consumers, Kafka connectors, and Kafka Streams applications (a minimal producer sketch follows this posting).
- Enforce governance policies around topic design, schema evolution, partitioning strategies, and data retention.
- Monitor, troubleshoot, and optimize Kafka clusters using Confluent Control Center, Prometheus, and Grafana.
- Use Kubernetes and Terraform to automate infrastructure provisioning, deployment, and scaling.
- Ensure high availability, security, and disaster recovery for Kafka environments.
- Mentor junior engineers and lead Kafka-related initiatives across teams.

Required Skills & Qualifications
- 8+ years of hands-on experience in backend/data engineering, with at least 4 years focused on Apache Kafka.
- Deep understanding of Kafka internals: brokers, ZooKeeper, producers, consumers, and message delivery semantics.
- Experience with the Confluent Platform and schema management via Schema Registry.
- Strong background in cloud platforms (preferably AWS).
- Proficient with Kubernetes, Terraform, and CI/CD pipelines.
- Working knowledge of observability stacks such as Grafana, Prometheus, and ELK.
- Solid understanding of distributed systems, fault tolerance, and data streaming patterns.

Work Arrangement
- Location: Bangalore, Work from Office
- Shift Timing: Ends by 11 PM IST (evening shift)
- Employment Type: Full-Time, Permanent

Apply now by reaching out to sasidhar.m@technogenindia.com with your resume. Please share your availability for tomorrow.
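To make the producer/consumer work mentioned above concrete, here is a hedged, minimal sketch of an idempotent Kafka producer with a delivery callback in Python; the broker address, topic, and payload are illustrative assumptions.

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "acks": "all",               # Wait for all in-sync replicas to acknowledge.
    "enable.idempotence": True,  # Avoid duplicates when retries occur.
})

def on_delivery(err, msg):
    # Invoked from poll()/flush() once the broker reports the outcome.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

producer.produce("payments.events", key="order-42",
                 value=b'{"amount": 100}', callback=on_delivery)
producer.flush()  # Block until outstanding messages are delivered.
```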
Posted 1 month ago
10.0 - 12.0 years
30 - 37 Lacs
Bengaluru
Work from Office
We need immediate joiners or candidates serving notice who can join within the next 10-15 days. Candidates on the bench or with an official notice period of 2-3 months will not be considered.

- Strong working experience in the design and development of RESTful APIs using Java, Spring Boot, and Spring Cloud.
- Hands-on technical experience supporting development, automated testing, infrastructure, and operations.
- Fluency with relational databases or, alternatively, NoSQL databases.
- Excellent pull-request review skills and attention to detail.
- Experience with streaming platforms handling real-time data at massive scale, such as Confluent Kafka.
- Working experience with AWS services such as EC2, ECS, RDS, and S3.
- Understanding of DevOps and experience with CI/CD pipelines.
- Industry experience in the retail domain is a plus.
- Exposure to Agile methodology and project tools: Jira, Confluence, SharePoint.
- Working knowledge of Docker containers/Kubernetes.
- Excellent team player, with the ability to work independently and as part of a team.
- Experience mentoring junior developers and providing technical leadership.
- Familiarity with monitoring and reporting tools (Prometheus, Grafana, PagerDuty, etc.).
- Ability to learn, understand, and work quickly with new and emerging technologies, methodologies, and solutions in the cloud/IT technology space.
- Knowledge of a front-end framework such as React or Angular, and of other languages such as JavaScript/TypeScript or Python, is a plus.
Posted 1 month ago
10.0 - 15.0 years
35 - 50 Lacs
Hyderabad, Bengaluru
Work from Office
Job Title: Senior Kafka Engineer
Location: Hyderabad / Bangalore
Work Mode: Work from Office | 24/7 rotational shifts
Type: Full-Time
Experience: 8+ years

About the Role:
We're hiring a Senior Kafka Engineer to manage and enhance our Kafka infrastructure on AWS and the Confluent Platform. You'll lead efforts in building secure, scalable, and reliable data streaming solutions for high-impact FinTech systems.

Key Responsibilities:
- Manage and optimize Kafka and Confluent deployments on AWS
- Design and maintain Kafka producers, consumers, streams, and connectors
- Define schema, partitioning, and retention policies (a small sketch of encoding such policies at topic creation follows this posting)
- Monitor performance using Prometheus, Grafana, and Confluent tools
- Automate infrastructure using Terraform, Helm, and Kubernetes (EKS)
- Ensure high availability, security, and disaster recovery
- Collaborate with teams and share Kafka best practices

Required Skills:
- 8+ years in platform engineering, 5+ with Kafka and Confluent
- Strong Java or Python Kafka client development
- Hands-on with Schema Registry, Control Center, and ksqlDB
- Kafka deployment on AWS (MSK or EC2)
- Kafka Connect, Kafka Streams, and schema tools
- Kubernetes (EKS), Terraform, Prometheus, Grafana

Nice to Have:
- FinTech or regulated-industry experience
- Knowledge of TLS, SASL/OAuth, RBAC
- Experience with Flink or Spark Streaming
- Kafka governance and multi-tenancy
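One way the "schema, partitioning, and retention policies" responsibility shows up in practice is encoding those decisions at topic creation. A minimal sketch with the confluent-kafka Python AdminClient follows; the topic name and values are illustrative, not prescriptive.

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

topic = NewTopic(
    "trades.fills",
    num_partitions=12,        # Sized for the expected consumer parallelism.
    replication_factor=3,
    config={
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # Keep data for 7 days.
        "cleanup.policy": "delete",                    # Time-based expiry, not compaction.
        "min.insync.replicas": "2",                    # Pairs with acks=all producers.
    },
)
for name, future in admin.create_topics([topic]).items():
    future.result()  # Raises if the broker rejects the creation.
```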
Posted 1 month ago
7.0 - 12.0 years
13 - 18 Lacs
Bengaluru
Work from Office
We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem (a hedged Lambda-to-Kafka sketch follows this posting)
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and the ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Responsibilities
- Design and implement scalable data architectures using AWS services, Confluent, and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
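As a hedged sketch of the Lambda-plus-Kafka pattern this posting names, here is one possible shape of a handler that forwards event records into a Kafka topic. The environment variable, topic name, and event shape are assumptions, and the confluent-kafka package would need to be bundled in the Lambda deployment artifact.

```python
import json
import os
from confluent_kafka import Producer

# Created once per Lambda container and reused across warm invocations.
producer = Producer({"bootstrap.servers": os.environ["KAFKA_BOOTSTRAP"]})

def handler(event, context):
    # Forward each incoming record (e.g., from SNS or S3 notifications) to Kafka.
    records = event.get("Records", [])
    for record in records:
        producer.produce("ingest.raw-events", value=json.dumps(record))
    producer.flush()  # Ensure delivery before the invocation ends.
    return {"forwarded": len(records)}
```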
Posted 1 month ago
5.0 - 10.0 years
7 - 14 Lacs
Mumbai, Goregaon, Mumbai (All Areas)
Work from Office
Opening with an insurance company. **Looking for someone with a 30-day notice period.**
Location: Mumbai (Lower Parel)

Key Responsibilities:

Kafka Infrastructure Management:
- Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security.
- Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, ZooKeeper, and related components.
- Perform capacity planning and scaling of Kafka clusters based on application needs and growth.

Data Pipeline Development:
- Develop and optimize Kafka data pipelines to support real-time data streaming and processing.
- Collaborate with internal application development teams and data engineers to integrate Kafka with various HDFC Life data sources.
- Implement and maintain the schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf); a short Avro/Schema Registry sketch follows this posting.

Security and Compliance:
- Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL).

Documentation and Support:
- Create and maintain documentation for Kafka setup, configurations, and operational procedures.

Collaboration:
- Provide technical support and guidance to application development teams regarding Kafka usage and best practices.
- Collaborate with stakeholders to ensure alignment with business objectives.

Interested candidates, share your resume at snehal@topgearconsultants.com.
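For the Schema Registry and Avro serialization item above, here is a minimal sketch of producing an Avro-encoded record with the confluent-kafka Python client; the registry URL, topic, and schema are illustrative assumptions.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

# Hypothetical Avro schema for a policy event.
schema_str = """
{"type": "record", "name": "Policy",
 "fields": [{"name": "policy_id", "type": "string"},
            {"name": "premium", "type": "double"}]}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serialize = AvroSerializer(registry, schema_str)

producer = Producer({"bootstrap.servers": "localhost:9092"})

# Serialize against the registry, then publish the encoded bytes.
value = serialize({"policy_id": "P-1001", "premium": 4200.0},
                  SerializationContext("policy.events", MessageField.VALUE))
producer.produce("policy.events", value=value)
producer.flush()
```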
Posted 1 month ago
8.0 - 13.0 years
25 - 35 Lacs
Chennai
Hybrid
1. Objective
We are seeking a highly experienced and visionary Expert Platform Lead with 10+ years of expertise in Confluent Kafka administration, cloud-native infrastructure, and enterprise-scale streaming architecture. This role involves overseeing Kafka platform strategy, optimizing infrastructure through automation, ensuring cost-effective scalability, and working closely with cross-functional teams to enable high-performance data streaming solutions. The ideal candidate will drive innovation, establish best practices, and mentor teams to enhance platform reliability and efficiency.

2. Main Tasks
Kafka Platform Management
- Define and execute the platform strategy for Confluent Kafka, ensuring security, high availability, and scalability.
- Lead architecture design reviews, influencing decisions related to Kafka infrastructure and cloud deployment models.
- Oversee and maintain the Kafka platform in a 24/7 operational setting, ensuring high availability and fault tolerance.
- Establish monitoring frameworks, proactively identifying and addressing platform inefficiencies.

Leadership, Collaboration and Support
- Act as the primary technical authority on Kafka for enterprise-wide streaming architecture.
- Collaborate closely with application teams, architects, and vendors to align platform capabilities with business needs.
- Provide technical mentorship to engineers and architects, guiding best practices in Kafka integration and platform usage.

Infrastructure Automation and Optimization
- Spearhead Infrastructure as Code (IaC) initiatives using Terraform for Kafka, AWS, and cloud resources.
- Drive automation across provisioning, deployment workflows, and maintenance operations, ensuring efficiency and resilience.
- Implement advanced observability measures to optimize costs and resource allocation while maintaining peak performance.

Governance, Documentation and Compliance
- Maintain detailed platform documentation, including configuration, security policies, and compliance standards.
- Track and analyze usage trends, ensuring cost-efficient resource utilization across streaming ecosystems.
- Establish governance frameworks, ensuring compliance with enterprise security policies and industry standards.

3. Technical Expertise
Education level: Minimum four-year Bachelor's or Master's degree in Computer Science, engineering, or a related field.

Required expertise for the function:
- 10+ years of experience in platform engineering, cloud infrastructure, and data streaming architectures.
- Extensive expertise in Kafka administration (preferably Confluent Kafka), leading enterprise-wide streaming initiatives.
- Proven track record of leading critical incident response and ensuring system uptime in a 24/7 environment.

Knowledge of languages (depending on the office): English (mandatory).

Technical knowledge required to perform the function:
Technical Skills:
- Expert knowledge of Kafka (Confluent), event-driven architectures, and high-scale distributed systems.
- Mastery of Terraform for infrastructure automation across AWS, Kubernetes, and cloud-native ecosystems.
- Strong proficiency in AWS services, networking principles, and security best practices.
- Advanced experience with CI/CD pipelines, version control (Git), and scripting (Bash, Python).

Soft Skills:
- Strategic problem-solving mindset, capable of leading large-scale technical decisions.
- Strong leadership and mentorship skills, able to guide teams toward technical excellence.
- Excellent communication, stakeholder management, and cross-functional collaboration abilities.
Preferred Skills:
- Kafka or Confluent certification, demonstrating deep platform expertise.
- AWS Solutions Architect certification or equivalent cloud specialization.
- Experience with monitoring tools (Prometheus, Grafana) and proactive alert management for 24/7 operations.
Posted 2 months ago
3.0 - 6.0 years
10 - 17 Lacs
Pune
Remote
- Kafka/MSK
- Linux
- In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors
- Understanding of Kafka topic design and creation (an illustrative cluster-inspection sketch follows this posting)
- Good knowledge of replication and high availability for Kafka systems
- Elasticsearch/OpenSearch
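As a small sketch of the broker and replication knowledge this listing asks for, the confluent-kafka AdminClient can surface cluster metadata, including per-partition leaders and in-sync replicas; the broker address is a placeholder.

```python
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
md = admin.list_topics(timeout=10)

print(f"Cluster has {len(md.brokers)} brokers:")
for broker in md.brokers.values():
    print(f"  id={broker.id} {broker.host}:{broker.port}")

for name, topic in md.topics.items():
    for pid, p in topic.partitions.items():
        # An ISR set smaller than the replica set signals an availability risk.
        print(f"{name}[{pid}] leader={p.leader} replicas={p.replicas} isr={p.isrs}")
```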
Posted 2 months ago
5.0 - 10.0 years
5 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities
Looking for 5+ years of experience as a Confluent Kafka Administrator / Technology Lead.

Kafka Administrator Required Skills & Experience:
- Hands-on experience in Kafka cluster management
- Proficiency with Kafka Connect
- Knowledge of Cluster Linking and MirrorMaker
- Experience setting up Kafka clusters from scratch
- Experience with Terraform/Ansible scripts
- Ability to install and configure the Confluent Platform
- Understanding of rebalancing, Schema Registry, and REST Proxies
- Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists)

Interested candidates, share your updated resume at recruiter.wtr26@walkingtree.in.
Posted 2 months ago
10.0 - 20.0 years
30 - 45 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Work from Office
Seeking a Kafka Platform Architect with expertise in Confluent Kafka, multi-cloud deployment (AWS/Azure/GCP), CI/CD, and security and governance to lead scalable, secure streaming-architecture initiatives. Responsibilities include the design and implementation of Kafka infrastructure.
Posted 2 months ago
4.0 - 8.0 years
10 - 20 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job Title: Data Engineer - Ingestion, Storage & Streaming (Confluent Kafka)

Job Summary:
As a Data Engineer specializing in ingestion, storage, and streaming, you will design, implement, and maintain robust, scalable, and high-performance data pipelines for the efficient flow of data through our systems. You will work with Confluent Kafka to build real-time data streaming platforms, ensuring high availability and fault tolerance. You will also ensure that data is ingested, stored, and processed efficiently and in real time to provide immediate insights.

Key Responsibilities:

Kafka-Based Streaming Solutions:
- Design, implement, and manage scalable and fault-tolerant data streaming platforms using Confluent Kafka.
- Develop real-time data streaming applications to support business-critical processes.
- Implement Kafka producers and consumers for ingesting data from various sources (a minimal consumer-loop sketch follows this posting).
- Handle message brokering, processing, and event streaming within the platform.

Ingestion & Data Integration:
- Build efficient data ingestion pipelines to bring real-time and batch data from various data sources into Kafka.
- Ensure smooth data integration across Kafka topics and handle multi-source data feeds.
- Develop and optimize connectors for data ingestion from diverse systems (e.g., databases, external APIs, cloud storage).

Data Storage and Management:
- Manage and optimize data storage solutions in conjunction with Kafka, including topics, partitions, retention policies, and data compression.
- Work with distributed storage technologies to store large volumes of structured and unstructured data, ensuring accessibility and compliance.
- Implement strategies for schema management, data versioning, and data governance.

Data Streaming & Processing:
- Leverage Kafka Streams and other stream processing frameworks (e.g., Apache Flink, ksqlDB) to process real-time data and provide immediate analytics.
- Build and optimize data processing pipelines to transform, filter, aggregate, and enrich streaming data.

Monitoring, Optimization, and Security:
- Set up and manage monitoring tools to track the performance of Kafka clusters, ingestion, and streaming pipelines.
- Troubleshoot and resolve issues related to data flows, latency, and failures.
- Ensure data security and compliance by enforcing appropriate data access policies and encryption techniques.

Collaboration and Documentation:
- Collaborate with data scientists, analysts, and other engineers to align data systems with business objectives.
- Document streaming architecture, pipeline workflows, and data governance processes to ensure system reliability and scalability.
- Provide regular updates on streaming and data ingestion pipeline performance and improvements to stakeholders.

Required Skills & Qualifications:

Experience:
- 3+ years of experience in data engineering, with a strong focus on Kafka, data streaming, ingestion, and storage solutions.
- Hands-on experience with Confluent Kafka, Kafka Streams, and related Kafka ecosystem tools.
- Experience with stream processing and real-time analytics frameworks (e.g., ksqlDB, Apache Flink).

Technical Skills:
- Expertise in Kafka Connect, Kafka Streams, and the Kafka producer/consumer APIs.
- Proficiency in data ingestion and integration techniques from diverse sources (databases, APIs, etc.).
- Strong knowledge of cloud data storage and distributed systems.
- Experience with programming languages such as Java, Scala, or Python for Kafka integration and stream processing.
- Familiarity with tools such as Apache Spark, Flink, Hadoop, or other data processing frameworks.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
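A minimal sketch of the kind of consumer loop used in such ingestion pipelines follows, with manual offset commits for at-least-once delivery; the group id, topic, and sink function are hypothetical.

```python
from confluent_kafka import Consumer

def write_to_storage(payload: bytes) -> None:
    # Hypothetical downstream sink; replace with a real storage write.
    print(f"Persisting {len(payload)} bytes")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "ingestion-pipeline",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,  # Commit only after the sink write succeeds.
})
consumer.subscribe(["ingest.raw-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        write_to_storage(msg.value())
        consumer.commit(message=msg, asynchronous=False)  # At-least-once.
finally:
    consumer.close()
```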
Posted 2 months ago
7.0 - 9.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview:
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and the ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Responsibilities
- Design and implement scalable data architectures using AWS services, Confluent, and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 2 months ago
5.0 - 10.0 years
6 - 11 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities
Kafka Administrator

Required Skills & Experience:
- 5+ years of relevant experience
- Hands-on experience in Kafka cluster management
- Proficiency with Kafka Connect
- Knowledge of Cluster Linking and MirrorMaker
- Experience setting up Kafka clusters from scratch
- Experience with Terraform/Ansible scripts
- Ability to install and configure the Confluent Platform
- Understanding of rebalancing, Schema Registry, and REST Proxies
- Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists)

Share your updated resume at recruiter.wtr26@walkingtree.in.
Posted 2 months ago
5.0 - 10.0 years
10 - 18 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & Responsibilities
- Administer and maintain Apache Kafka clusters, including installation, upgrades, configuration, and performance tuning.
- Design and implement Kafka topics, partitions, replication, and consumer groups.
- Ensure high availability and scalability of Kafka infrastructure in production environments.
- Monitor Kafka health and performance using tools like Prometheus, Grafana, Confluent Control Center, etc.
- Implement and manage security configurations such as SSL/TLS, authentication (Kerberos/SASL), and access control (a client-side security-config sketch follows this posting).
- Collaborate with development teams to design and configure Kafka-based integrations and data pipelines.
- Perform root cause analysis of production issues and ensure timely resolution.
- Create and maintain documentation for Kafka infrastructure and configurations.

Required Skills:
- Strong expertise in Kafka administration, including hands-on experience with open-source and/or Confluent Kafka.
- Experience with Kafka ecosystem tools (Kafka Connect, Kafka Streams, Schema Registry).
- Proficiency in Linux-based environments and scripting (Bash, Python).
- Experience with monitoring/logging tools and Kafka performance optimization.
- Ability to work independently and proactively manage Kafka environments.
- Familiarity with DevOps tools and CI/CD pipelines (e.g., Jenkins, Git, Ansible).

Preferred Skills:
- Experience with cloud platform (AWS, GCP, or Azure) Kafka services.
- Knowledge of messaging alternatives such as RabbitMQ, Pulsar, or ActiveMQ.
- Working knowledge of Docker and Kubernetes for Kafka deployment.
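For the SSL/TLS and SASL item above, here is the client-side half of such a setup as a hedged sketch; the endpoint, mechanism, certificate path, and credentials are placeholders, and in practice secrets would come from a secret store rather than being inlined.

```python
from confluent_kafka import Consumer

secure_config = {
    "bootstrap.servers": "broker1.internal:9093",
    "security.protocol": "SASL_SSL",      # TLS encryption plus SASL authentication.
    "sasl.mechanisms": "SCRAM-SHA-512",
    "sasl.username": "svc-pipeline",
    "sasl.password": "********",          # Placeholder; load from a secret store.
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",  # CA used to verify the brokers.
    "group.id": "secured-app",
}

consumer = Consumer(secure_config)
consumer.subscribe(["restricted.topic"])
```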
Posted 2 months ago
10.0 - 12.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your Role and Responsibilities
As a Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and maintain systems for the IBM client business. In this role, you will lead the problem resolution process for our clients, from analysis and troubleshooting to deploying the latest software updates and fixes.

Your primary responsibilities include:
- 24x7 Observability: Be part of a worldwide team that monitors the health of production systems and services around the clock, ensuring continuous reliability and an optimal customer experience.
- Cross-Functional Troubleshooting: Collaborate with engineering teams to provide initial assessments and possible workarounds for production issues. Troubleshoot and resolve production issues effectively.
- Deployment and Configuration: Leverage continuous delivery (CI/CD) tools to deploy services and configuration changes at enterprise scale.
- Security and Compliance Implementation: Implement security measures that meet or exceed industry standards for regulations such as GDPR, SOC 2, ISO 27001, PCI, HIPAA, and FBA.
- Maintenance and Support: Apply Couchbase security patches and upgrades, support Cassandra and MongoDB on the pager-duty rotation, and collaborate with Couchbase product support on issue resolution.

Required Education
Bachelor's degree

Required Technical and Professional Expertise
- 10+ years working in a high-performance engineering team
- Experience in cloud server management and troubleshooting, networking, Windows server management, AWS cloud and automation, cloud monitoring, GitHub, Kubernetes, and Linux
- 10+ years of working knowledge of one or more operating systems: RHEL, CentOS Linux, and Windows Server
- Working knowledge of ServiceNow, JIRA, Confluence, and GitHub

Preferred Technical and Professional Experience
- In-depth understanding and working knowledge of server technologies
- Working knowledge of how virtualization, network, and storage technologies work in data center and cloud environments
- Working knowledge of ServiceNow, JIRA, Confluence, and GitHub
- ITIL Foundation V4 certification is a plus
- Excellent verbal and written communication skills
- Highly responsible, motivated, and able to work with little direction
- Ability to troubleshoot complex problems and customer issues
Posted 2 months ago
5.0 - 10.0 years
8 - 13 Lacs
Chennai
Work from Office
Role Summary:
We are seeking a highly strategic Functional Analyst with 5 to 10 years of experience in gathering requirements and designing events and processes for event-driven architecture (EDA) projects. This role serves as a critical link between business stakeholders and technical teams, ensuring seamless collaboration, accurate requirement translation, and the efficient development of event-driven systems. The ideal candidate will possess strong analytical skills, have a technical background in Java or integration, and bring extensive experience leading requirement gathering, documentation, and testing efforts within large-scale EDA implementations.

Key Responsibilities:

Stakeholder Engagement and Requirement Analysis:
- Lead discussions between business stakeholders, architects, and producer/consumer application teams to define EDA-based solutions.
- Facilitate alignment on business objectives, ensuring technical feasibility and an optimal integration strategy.
- Guide requirement-gathering sessions by leveraging deep knowledge of EDA, event streams, and system design.

Functional Documentation and Translation:
- Develop and refine high-level functional specifications, event definitions, and system workflows to support business needs.
- Create and maintain comprehensive data models, process flows, and integration design documents, ensuring scalability and efficiency.
- Drive improvements in documentation quality and standardization across functional teams.

Build and Development Support:
- Work closely with architects and developers to ensure EDA design principles are accurately implemented.
- Provide advanced functional support in system integration, event sourcing, and business workflow mapping.
- Identify and address bottlenecks in functional design, recommending optimal solutions.

Testing and Validation:
- Lead the User Acceptance Testing (UAT) strategy, focusing on validating event interactions and data flow consistency.
- Develop and refine UAT test cases, execution plans, and validation frameworks for event-driven applications.
- Work alongside QA and business teams to track defects, troubleshoot issues, and optimize workflows before production rollout.

Qualifications:

Experience:
- 5 to 10 years of experience as a Functional Analyst or Business Analyst on EDA, microservices, or enterprise integration projects.
- Experience with logistics and related processes.
- Proven expertise in leading requirement workshops and translating business needs into process, event, and functional designs.
- Experience coordinating between cross-functional technical and business teams in high-scale environments.

Technical Skills:
- Deep understanding of event-driven architecture principles, including event sourcing, pub/sub models, and streaming technologies.
- Strong experience with functional documentation (use cases, process flows, event modeling, and API integration).
- Proficiency with diagramming and documentation tools (e.g., Visio, Lucidchart, Confluence).
- Hands-on knowledge of integration patterns and event-driven workflows, ensuring optimal data flow and system performance.

Soft Skills:
- Exceptional communication and stakeholder management skills, with a strategic mindset for bridging business and technical perspectives.
- Strong analytical and problem-solving abilities, particularly in complex integration and workflow scenarios.
- Highly organized, detail-oriented, and adaptable to dynamic business needs.

Preferred Skills:
- Experience working in Agile environments, leading functional discussions within Scrum and Kanban teams.
- Exposure to EDA platforms and tools such as Kafka, Confluent, and cloud-based event management solutions.
- Experience with event streaming in Kafka.
- Familiarity with Java is a plus.
Posted 2 months ago
6.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Roles & Responsibilities:
- Design, build, and manage Kafka clusters using the Confluent Platform and Kafka cloud services (AWS MSK, Confluent Cloud).
- Develop and maintain Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines.
- Monitor and ensure the reliability, scalability, and security of Kafka infrastructure.
- Collaborate with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, EC2, Redshift).
- Implement and manage Kafka Connect, Kafka Streams, and ksqlDB where applicable (a small ksqlDB REST sketch follows this posting).
- Optimize Kafka performance, troubleshoot issues, and manage incident response.

Preferred Candidate Profile:
- 4-6 years of experience working with Apache Kafka and Confluent Kafka.
- Strong knowledge of Kafka internals (brokers, ZooKeeper, partitions, replication, offsets).
- Experience with Kafka Connect, Schema Registry, REST Proxy, and Kafka security.
- Hands-on experience with AWS (EC2, IAM, CloudWatch, S3, Lambda, VPC, load balancers).
- Proficiency in scripting and automation using Terraform, Ansible, or similar tools.
- Familiarity with DevOps practices and tools (CI/CD pipelines; monitoring tools such as Prometheus/Grafana, Splunk, Datadog, etc.).
- Experience with containerization (Docker, Kubernetes) is a plus.
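One common way to work with ksqlDB, mentioned above, is its REST API. Here is a hedged sketch that submits a stream definition using only the Python standard library; the server URL, topic, and statement are illustrative.

```python
import json
import urllib.request

# A hypothetical persistent stream over an existing "clicks" topic.
statement = {
    "ksql": "CREATE STREAM clicks (user VARCHAR, url VARCHAR) "
            "WITH (KAFKA_TOPIC='clicks', VALUE_FORMAT='JSON');",
    "streamsProperties": {},
}

req = urllib.request.Request(
    "http://localhost:8088/ksql",  # Default ksqlDB server port.
    data=json.dumps(statement).encode(),
    headers={"Content-Type": "application/vnd.ksql.v1+json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # Server echoes the statement status.
```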
Posted 2 months ago
6.0 - 11.0 years
4 - 9 Lacs
Bengaluru
Work from Office
SUMMARY
Job Role: Apache Kafka Admin
Experience: 6+ years
Location: Pune (preferred), Bangalore, Mumbai
Must-Have: The candidate should have 6 years of relevant experience with Apache Kafka.

Job Description:
We are seeking a highly skilled and experienced Senior Kafka Administrator to join our team. The ideal candidate will have 6-9 years of hands-on experience in managing and optimizing Apache Kafka environments. As a Senior Kafka Administrator, you will play a critical role in designing, implementing, and maintaining Kafka clusters to support our organization's real-time data streaming and event-driven architecture initiatives.

Responsibilities:
- Design, deploy, and manage Apache Kafka clusters, including installation, configuration, and optimization of Kafka brokers, topics, and partitions.
- Monitor Kafka cluster health, performance, and throughput metrics, and implement proactive measures to ensure optimal performance and reliability.
- Troubleshoot and resolve issues related to Kafka message delivery, replication, and data consistency.
- Implement and manage Kafka security mechanisms, including SSL/TLS encryption, authentication, authorization, and ACLs (a hedged ACL sketch follows this posting).
- Configure and manage Kafka Connect connectors for integrating Kafka with various data sources and sinks.
- Collaborate with development teams to design and implement Kafka producers and consumers for building real-time data pipelines and streaming applications.
- Develop and maintain automation scripts and tools for Kafka cluster provisioning, deployment, and management.
- Implement backup, recovery, and disaster recovery strategies for Kafka clusters to ensure data durability and availability.
- Stay up to date with the latest Kafka features, best practices, and industry trends, and provide recommendations for optimizing our Kafka infrastructure.

Requirements:
- 6-9 years of experience as a Kafka Administrator or in a similar role, with a proven track record of managing Apache Kafka clusters in production environments.
- In-depth knowledge of Kafka architecture, components, and concepts, including brokers, topics, partitions, replication, and consumer groups.
- Hands-on experience with Kafka administration tasks such as cluster setup, configuration, performance tuning, and monitoring.
- Experience with Kafka ecosystem tools and technologies, such as Kafka Connect, Kafka Streams, and the Confluent Platform.
- Proficiency in scripting languages such as Python, Bash, or Java.
- Strong understanding of distributed systems, networking, and Linux operating systems.
- Excellent problem-solving and troubleshooting skills, with the ability to diagnose and resolve complex technical issues.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and communicate technical concepts to non-technical stakeholders.
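For the authorization and ACL item above, here is a hedged sketch of granting a consumer read access to one topic via the confluent-kafka AdminClient (this assumes a recent client version that includes the ACL admin APIs); the principal and topic names are illustrative.

```python
from confluent_kafka.admin import (
    AclBinding, AclOperation, AclPermissionType,
    AdminClient, ResourcePatternType, ResourceType,
)

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Allow the (hypothetical) analytics-app principal to read one topic
# from any host.
acl = AclBinding(
    ResourceType.TOPIC,            # Resource type being protected.
    "payments.events",             # Topic name.
    ResourcePatternType.LITERAL,   # Exact-name match, not a prefix.
    "User:analytics-app",          # Principal receiving access.
    "*",                           # Any host.
    AclOperation.READ,
    AclPermissionType.ALLOW,
)
for binding, future in admin.create_acls([acl]).items():
    future.result()  # Raises if the broker rejects the ACL.
```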
Posted 2 months ago
2 - 7 years
6 - 10 Lacs
Bengaluru
Work from Office
Hello Talented Techie!

We provide support in Project Services and Transformation, Digital Solutions, and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and robotics as enablers for efficient and effective implementations.

We are looking for a Data Engineer (AWS, Confluent & SnapLogic).

Responsibilities:
- Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching.
- Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading.
- Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval.
- Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making.
- Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing (an illustrative DAG skeleton follows this posting).
- Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing.
- ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading.
- Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.

You'd describe yourself as having:
- Experience: 3+ years of relevant experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow.
- Technical Skills: Proficiency in AWS services, particularly AWS Glue; experience with Iceberg tables and Snowflake; knowledge of Confluent Kafka for real-time data streaming; familiarity with SnapLogic for ETL processes; experience with Apache Airflow for workflow management; understanding of Splunk for monitoring and logging.
- Programming Skills: Proficiency in Python, SQL, and other relevant programming languages.
- Data Modeling: Experience with data modeling and database design.
- Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.

Preferred Qualities:
- Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality.
- Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
- Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment.
- Team Player: Strong team player with a collaborative mindset.
- Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering.

Create a better #TomorrowWithUs!

This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
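As an illustrative skeleton of the Airflow orchestration this posting describes, here is a two-task DAG in Python (assuming Airflow 2.4+); the dag id, task bodies, and schedule are assumptions, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("Pull data from source systems")  # Placeholder for a real extract step.

def transform():
    print("Apply transformations")  # Placeholder for a real transform step.

with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ parameter; older versions use schedule_interval.
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds.
```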
Posted 2 months ago
8 - 12 years
22 - 32 Lacs
Bangalore Rural, Bengaluru
Hybrid
Confluent Kafka Admin

Responsibilities:
- Senior Confluent Kafka admin with 10 years of total IT experience and a strong engineering background.
- Set up and manage Confluent Kafka clusters and monitor their performance and distribution.
- Design, configure, and manage RBAC and multi-tenancy.
- Experience with Confluent Kafka on-premises as well as Confluent Kafka Cloud.
- Manage all Kafka configurations via Ansible.
- Good experience setting up DR for Confluent Kafka instances.
- Coordinate with different development teams and manage their connectivity and usage of the Kafka cluster.
- Document, maintain, and present best engineering practice strategies to ensure near-term changes are aligned with long-term release objectives.
- Collaborate with infrastructure and other backend teams on software upgrades, disaster recovery, and other efforts.
- Good experience using Confluent Kafka, with at least 5 years in administration, i.e., cluster management and client integration.
- Knowledge of and experience with Git and Ansible.

Notice period: immediate to 30 days.
Posted 2 months ago
2 - 6 years
8 - 12 Lacs
Bengaluru
Work from Office
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Title: Lead Data Architect (Streaming)

Required Skills and Qualifications
- 10+ years of overall IT experience, of which 7+ years are in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and the ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration
- Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Evaluate and recommend new technologies to improve data architecture

Position Overview:
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 2 months ago