
108 Kafka Streams Jobs - Page 5

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 - 6.0 years

13 - 20 Lacs

Bengaluru

Work from Office

Job Title: Senior Software Engineer
Location: Bengaluru/Bangalore (On-site)
Website: https://lovelocal.in
LinkedIn: https://in.linkedin.com/company/lovelocalindia
Qualification: B.E./B.Tech. (Computer Science/IT/ECE) or MCA
Experience: 4-6 Years

About Lovelocal
At Lovelocal, we're committed to building vibrant communities through the support of local Kirana stores. Our mission is to empower small businesses by providing them with the tools and resources they need to thrive in today's competitive market. By leveraging technology and fostering connections, we aim to enhance the local shopping experience while promoting sustainable practices.

About Our Leadership Team
Our leadership team consists of experienced professionals from diverse backgrounds who are passionate about supporting local businesses and communities. With a blend of expertise in technology, business development, and community engagement, our leaders are committed to fostering a collaborative and innovative environment.

Role Overview
As the Senior Software Engineer - Backend, you will architect, build, and scale robust backend systems that power Lovelocal's platform. You will lead a team of engineers to design high-performance APIs, optimize databases, and integrate cutting-edge technologies like GenAI and real-time communication. Your work will directly influence the reliability, scalability, and innovation of our tech stack.

Key Responsibilities
- Own end-to-end development of high-performance backend services in Go (Gin)/Python (FastAPI).
- Architect scalable databases (SQL/NoSQL) and real-time systems (WebSockets, Kafka, Redis).
- Deploy and optimize cloud-native apps on GCP (GKE, Cloud Run) using Docker/K8s.
- Mentor engineers and enforce best practices (CI/CD, monitoring via Prometheus/Grafana).
- Evaluate emerging tech (GenAI, vector DBs, GraphQL) for strategic adoption.

Required Qualifications
- Expertise in Go/Python, cloud-native development, and distributed systems.
- Strong expertise in distributed systems, microservices, and API design (REST/gRPC).
- Hands-on with GCP (GKE, Pub/Sub, Cloud SQL) and Kubernetes.
- Proven experience with real-time systems (WebSockets, Kafka, RabbitMQ).
- Deep knowledge of SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/Firestore).

Nice to Have
- Experience with GenAI pipelines, vector databases, and graph databases (Neo4j).
- Knowledge of gRPC and GraphQL.
- Active in tech communities (open source, blogs, talks).
- Familiarity with Prometheus and Grafana.

Tech Stack
- Databases: MySQL, MongoDB
- Cloud Services: Google Cloud Platform, AWS, Microsoft Azure
- Programming Languages: Python, Go
- Web Application Backend Frameworks: Flask, Django, FastAPI, Gin
- GenAI: LangChain, LLM integrations
- Observability: Prometheus, Grafana
- ETL Tools: Databricks Workflow, Apache Airflow
- Data Warehousing: Snowflake, Google BigQuery
- Big Data Technologies: Apache Spark
- BI Tools: Tableau, Qlik Sense

Perks and Benefits
- Competitive salary and performance-based bonuses.
- Flexible work hours and hybrid work options.
- Comprehensive health and wellness programs, including mental health support.
- Generous paid time off policy and holiday schedule.

Personal Growth and Development
- Quarterly rewards and recognition programs to celebrate team achievements.
- OKR-based appraisal system to align personal goals with company objectives.
- Tailored learning roadmaps for each software engineer to support career growth.
- Participation in hackathons to foster innovation and collaboration.
- Opportunities for training and participation in webinars to enhance skills and knowledge.

Diversity and Inclusion
At Lovelocal, we celebrate diversity and are committed to creating an inclusive environment for all employees. We believe that diverse perspectives drive innovation and enrich our communities. We encourage applications from individuals of all backgrounds and experiences.

POSH (Prevention of Sexual Harassment)
Lovelocal is dedicated to providing a safe and respectful workplace for everyone. We have a strict POSH policy in place, ensuring that all employees are protected from harassment and discrimination. Training and resources are provided to foster a positive workplace culture.

Join Us!
Ready to architect the future of local commerce? Apply now and let's build scalable, AI-powered backend systems that empower millions of Kirana stores! Lovelocal is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

6.0 - 8.0 years

10 - 15 Lacs

Hyderabad

Hybrid

Key Responsibilities:
- Design, build, and optimize large-scale data processing systems using distributed computing frameworks like Hadoop, Spark, and Kafka.
- Develop and maintain data pipelines (ETL/ELT) to support analytics, reporting, and machine learning use cases.
- Integrate data from multiple sources (structured and unstructured) and ensure data quality and consistency.
- Collaborate with cross-functional teams to understand data needs and deliver data-driven solutions.
- Implement data governance, data security, and privacy best practices.
- Monitor performance and troubleshoot issues across data infrastructure.
- Stay updated with the latest trends and technologies in big data and cloud computing.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in big data engineering or a similar role.
- Proficiency in big data technologies such as Hadoop, Apache Spark, Hive, and Kafka.
- Strong programming skills in Python.
- Experience with cloud platforms like AWS (EMR, S3, Redshift), GCP (BigQuery, Dataflow), or Azure (Data Lake, Synapse).
- Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts.
- Familiarity with CI/CD tools and practices for data engineering.

Preferred Qualifications:
- Experience with orchestration tools like Apache Airflow or Prefect.
- Knowledge of real-time data processing and stream analytics.
- Exposure to containerization tools like Docker and Kubernetes.
- Certification in cloud technologies (e.g., AWS Certified Big Data – Specialty).
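
For orientation, the sketch below shows the kind of batch ETL job this role describes: read raw data, clean and project it, and write it back for analytics. It is a minimal illustration only; the bucket paths, column names, and the "events" dataset are hypothetical placeholders, not details from the posting.

```java
// Minimal Spark batch ETL sketch in Java (illustrative only).
// Paths, column names, and the "events" dataset are hypothetical placeholders.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class EventEtlJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("event-etl")
                .getOrCreate();

        // Extract: read raw JSON events from a data lake location (placeholder path).
        Dataset<Row> raw = spark.read().json("s3a://example-bucket/raw/events/");

        // Transform: keep well-formed records and project the fields downstream consumers need.
        Dataset<Row> cleaned = raw
                .filter(col("event_id").isNotNull())
                .select(col("event_id"), col("user_id"), col("event_type"), col("event_ts"));

        // Load: write partitioned Parquet for analytics/warehouse ingestion.
        cleaned.write()
                .mode("overwrite")
                .partitionBy("event_type")
                .parquet("s3a://example-bucket/curated/events/");

        spark.stop();
    }
}
```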

5.0 - 10.0 years

0 - 0 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Kafka Developer:
- Good hands-on exposure in handling Apache and Confluent Kafka.
- Managing the data pipeline.
- Troubleshooting issues in Kafka and providing resolutions.
- Exposure in handling Kafka version migrations and patch updates.
- Good understanding of disaster recovery setup.
- Implementing best practices.
- Setting up and providing administration for the Kafka platform.
- Designing Kafka clusters, ensuring data reliability, and performance tuning.
- Exposure in capacity planning.
- Knowledge of ZooKeeper, Schema Registry, Control Center, MirrorMaker 2, Cruise Control, Kafka Exporter, and Kafka Connect.
- Good knowledge of setting up monitoring dashboards such as Grafana and Prometheus.
- Experience in handling ServiceNow (SNOW), Slack, etc. for incident management and routing issues to the appropriate channels.
- Implementing automations in Kafka.
- Mentoring team members.
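
As a point of reference for the platform-administration side of this role, here is a minimal sketch of provisioning a topic with Kafka's Java AdminClient. The broker address, topic name, partition count, replication factor, and retention values are placeholder assumptions, not settings from the posting.

```java
// Kafka AdminClient sketch for platform administration tasks (illustrative).
// Broker address, topic name, partition/replication counts, and retention are placeholder values.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class TopicProvisioner {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions, replication factor 3, 7-day retention -- tune per capacity planning.
            NewTopic topic = new NewTopic("example-events", 12, (short) 3)
                    .configs(Map.of("retention.ms", "604800000", "min.insync.replicas", "2"));

            admin.createTopics(Collections.singleton(topic)).all().get();
            System.out.println("Topic created: " + topic.name());
        }
    }
}
```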

10.0 - 13.0 years

30 - 40 Lacs

Pune

Work from Office

Experience Required: 10+ years overall, with 5+ years in Kafka infrastructure management and operations. Must have successfully deployed and maintained Kafka clusters in production environments, with proven experience in securing, monitoring, and scaling Kafka for enterprise-grade data streaming.

Overview: We are seeking an experienced Kafka Administrator to lead the deployment, configuration, and operational management of Apache Kafka clusters supporting real-time data ingestion pipelines. The role involves ensuring secure, scalable, and highly available Kafka infrastructure for streaming flow records into centralized data platforms.

Role & responsibilities
- Architect and deploy Apache Kafka clusters with high availability.
- Implement Kafka MirrorMaker for cross-site replication and disaster recovery readiness.
- Integrate Kafka with upstream flow record sources using IPFIX-compatible plugins.
- Configure Kafka topics, partitions, replication, and retention policies based on data flow requirements.
- Set up TLS/SSL encryption, Kerberos authentication, and access control using Apache Ranger.
- Monitor Kafka performance using Prometheus, Grafana, or Cloudera Manager and ensure proactive alerting.
- Perform capacity planning, cluster upgrades, patching, and performance tuning.
- Ensure audit logging, compliance with enterprise security standards, and integration with SIEM tools.
- Collaborate with solution architects and Kafka developers to align infrastructure with data pipeline needs.
- Maintain operational documentation and SOPs, and support SIT/UAT and production rollout activities.

Preferred candidate profile
- Proven experience in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Strong understanding of IPFIX, nProbe Cento, and network flow data ingestion.
- Hands-on experience with Apache Spark (Structured Streaming) and modern data lake or DWH platforms.
- Familiarity with Cloudera Data Platform, HDFS, YARN, Ranger, and Knox.
- Deep knowledge of data security protocols, encryption, and governance frameworks.
- Excellent communication, documentation, and stakeholder management skills.
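
For the TLS/SSL and Kerberos requirements mentioned above, the sketch below shows the kind of client-side security properties a Kafka administrator typically hands to application teams on a SASL_SSL (GSSAPI) cluster. The truststore path, password, keytab, and principal are placeholders; real values come from the cluster's own security and Ranger/Kerberos setup.

```java
// Client-side security configuration sketch for a TLS + Kerberos (SASL_SSL/GSSAPI) Kafka cluster.
// Truststore path, password, keytab, and principal are placeholders, not values from the posting.
import java.util.Properties;

public class SecureClientConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");

        // Encrypt traffic and verify brokers via TLS.
        props.put("security.protocol", "SASL_SSL");
        props.put("ssl.truststore.location", "/etc/kafka/secrets/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");

        // Authenticate the client with Kerberos (GSSAPI).
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true keyTab=\"/etc/security/keytabs/app.keytab\" "
                + "principal=\"app@EXAMPLE.COM\";");
        return props;
    }
}
```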

10.0 - 13.0 years

30 - 40 Lacs

Pune

Work from Office

Experience Required: 10+ years overall, with 5+ years in Kafka-based data streaming development. Must have delivered production-grade Kafka pipelines integrated with real-time data sources and downstream analytics platforms.

Overview: We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence.

Role & responsibilities
- Develop Kafka producers to ingest flow records from upstream systems such as flow record exporters (e.g., IPFIX-compatible probes).
- Build Kafka consumers to stream data into Spark Structured Streaming jobs and downstream data lakes.
- Define and manage Kafka topic schemas using Avro and Schema Registry for schema evolution.
- Implement message serialization, transformation, enrichment, and validation logic within the streaming pipeline.
- Ensure exactly-once processing, checkpointing, and fault tolerance in streaming jobs.
- Integrate with downstream systems such as HDFS or Parquet-based data lakes, ensuring compatibility with ingestion standards.
- Collaborate with Kafka administrators to align topic configurations, retention policies, and security protocols.
- Participate in code reviews, unit testing, and performance tuning to ensure high-quality deliverables.
- Document pipeline architecture, data flow logic, and operational procedures for handover and support.

Preferred candidate profile
- Proven experience in developing Kafka producers and consumers for real-time data ingestion pipelines.
- Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
- Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
- Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
- Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
- Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
- Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
- Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
- Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).
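
To illustrate the "Kafka producer ingesting flow records" part of this role, here is a minimal producer sketch. It uses plain string serialization for brevity; the pipeline described above would typically swap in an Avro serializer backed by Schema Registry. The broker address, topic name, and JSON payload are placeholder assumptions.

```java
// Minimal Kafka producer sketch for ingesting flow records (illustrative).
// Broker, topic, and payload are placeholders; a production pipeline like the one described
// above would normally use Avro with Schema Registry rather than plain strings.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class FlowRecordProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Favour durability: wait for all in-sync replicas and enable idempotence.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String payload = "{\"srcIp\":\"10.0.0.1\",\"dstIp\":\"10.0.0.2\",\"bytes\":1234}";
            producer.send(new ProducerRecord<>("flow-records", "10.0.0.1", payload),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        }
                    });
            producer.flush();
        }
    }
}
```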

7.0 - 10.0 years

15 - 25 Lacs

Chennai

Remote

Job Overview: We are seeking a highly experienced Senior Backend Engineer with over 10 years of experience in backend development, specifically using Java, PostgreSQL, AWS core services, and Apache Kafka. The ideal candidate will have a strong background in designing, developing, and maintaining scalable APIs and distributed systems in a cloud-based environment.

Key Responsibilities:
- Design, develop, and maintain scalable and secure RESTful APIs using Java (Spring Boot or similar frameworks).
- Optimize and manage PostgreSQL databases, including schema design, indexing, and performance tuning.
- Build and maintain data pipelines and real-time processing systems using Apache Kafka.
- Leverage AWS core services (e.g., EC2, S3, Lambda, RDS, CloudWatch, DynamoDB) to build cloud-native applications.
- Collaborate with cross-functional teams including product managers, front-end developers, and QA to deliver high-quality software.
- Participate in code reviews and design discussions, and mentor junior team members.
- Troubleshoot production issues and ensure high system availability and performance.

Required Qualifications:
- 10+ years of hands-on experience in backend development.
- Strong proficiency in Java, with experience in frameworks like Spring Boot, Dropwizard, or similar.
- Deep understanding of PostgreSQL: SQL queries, indexing, performance optimization, and database design.
- Hands-on experience with Apache Kafka for building event-driven and real-time applications.
- Proven experience working with AWS core services in a production environment.
- Solid understanding of microservices architecture, security best practices, and REST API standards.
- Experience with version control systems like Git and CI/CD tools.
- Strong problem-solving skills and the ability to work independently or in a team.

Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration tools like Kubernetes.
- Exposure to monitoring and logging tools like Prometheus, Grafana, CloudWatch, or the ELK stack.
- Experience working in Agile/Scrum environments.
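
Since this role combines Spring Boot services with Kafka-based event pipelines, here is a hedged sketch of a typical pattern using spring-kafka: publish an event after a database write and consume it elsewhere. The topic name, consumer group, and JSON payload are hypothetical, and serializer settings are assumed to come from application properties; this is not the employer's actual design.

```java
// Spring Boot + spring-kafka sketch (illustrative). Topic name, consumer group, and the
// order JSON payload are hypothetical; serializer config is assumed to be supplied via
// application properties.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Called from a REST controller after an order has been persisted in PostgreSQL.
    public void publishOrderCreated(String orderId, String orderJson) {
        kafkaTemplate.send("orders.created", orderId, orderJson);
    }

    // Consumes order events for downstream processing (e.g. notifications, read models).
    @KafkaListener(topics = "orders.created", groupId = "order-processor")
    public void onOrderCreated(String orderJson) {
        System.out.println("Received order event: " + orderJson);
    }
}
```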

2.0 - 5.0 years

27 - 40 Lacs

Gurugram

Work from Office

Xtelify Ltd || Senior Data Engineer

Xtelify Ltd is looking for a Data Engineer to join the Data Platform team who can help develop and deploy data pipelines at a huge scale of ~5B daily events and concurrency of 500K users. The platform is built on a cloud-native modern data stack (AWS/GCP), enabling real-time reporting and deep data exploration from first principles.

Experience: 2-5 Years
Job Location: Gurgaon

Responsibilities:
• Create and maintain a robust, scalable, and optimized data pipeline.
• Handle TBs of data daily across Xtelify's music and video streaming platforms.
• Extract and consume data from live systems to analyze and operate in a 99.999% SLA Big Data environment.
• Build and execute data mining and modeling activities using agile development techniques.
• Solve problems creatively and communicate effectively across teams.
• Build infrastructure for optimal data extraction, transformation, and loading from diverse sources.
• Drive internal process improvements: automation, data delivery optimization, infrastructure redesign, etc.
• Manage both structured and unstructured datasets using SQL and NoSQL databases.
• Understand and leverage cloud computation models to enhance delivery and deployment of solutions.
• Drive innovation in areas like analytics pipelines, streaming systems, and machine learning data applications.

Desired Profile:
• B.E / B.Tech / M.E / M.Tech / M.S in Computer Science or Software Engineering from a premier institute.
• 2+ years of hands-on experience with Spark and Scala.
• Strong grasp of data structures and algorithms.
• Proficient with Hadoop, Pig, Hive, Storm, and SQL.
• Programming experience in at least one language: Java, Scala, or Python.
• Solid experience in big data infrastructure, distributed systems, dimensional modeling, query processing & optimization, and relational databases.
• Own end-to-end delivery of data modules/features.

Good to Have:
• Exposure to NoSQL solutions like Cassandra, MongoDB, CouchDB, Postgres, Redis, Elasticsearch.
• Experience with Kafka and Spark Streaming/Flink.
• Familiarity with OLAP engines like Apache Druid or Apache Pinot.
• Enthusiasm to explore and learn modern data architectures and streaming systems.
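
As context for the Kafka plus Spark Streaming items above, the sketch below shows a minimal Spark Structured Streaming job that reads a Kafka topic and decodes the payload. The broker address, "playback-events" topic, and checkpoint/output targets are placeholder assumptions, not details from the posting.

```java
// Minimal Spark Structured Streaming sketch reading a Kafka topic (illustrative).
// Broker address, topic name, and checkpoint/output locations are placeholder values.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PlaybackEventStream {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("playback-event-stream")
                .getOrCreate();

        // Subscribe to the raw event topic.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "playback-events")
                .option("startingOffsets", "latest")
                .load();

        // Kafka records arrive as binary key/value; cast the value to a string for parsing downstream.
        Dataset<Row> decoded = events.selectExpr("CAST(value AS STRING) AS json", "timestamp");

        // Console sink here for illustration; a real pipeline would write Parquet/Delta or an OLAP store.
        decoded.writeStream()
                .format("console")
                .option("checkpointLocation", "/tmp/checkpoints/playback-events")
                .outputMode("append")
                .start()
                .awaitTermination();
    }
}
```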

4.0 - 9.0 years

13 - 17 Lacs

Chennai

Work from Office

Position title: Developer - Connected Platform
Reports to: Manager
Job grade: Deputy Manager

Job Purpose
The Connected Platform Developer is part of the in-house product engineering team that has been set up to create technical differentiation and intellectual property in the domain of Software Defined Vehicle (SDV). This role will be part of the team that owns application development, communication protocols, and designing, building, and maintaining scalable data pipelines for real-time processing of connected vehicle data.

Key Responsibilities
As a Developer - Connected Platform, you will be responsible for development in collaboration with various stakeholders. Your key responsibilities will include, but are not limited to:
- Data Pipeline Development: Design and implement data pipelines using Kafka, MQTT, or HTTP to facilitate real-time data streaming and integration from various sources. Ensure pipelines are scalable and maintainable.
- Data Ingestion and Integration: Develop systems to ingest data from a variety of sources, including IoT devices, databases, APIs, and external services. Integrate data into downstream systems and data lakes.
- Real-Time Data Processing: Implement real-time data processing solutions using Kafka Streams, Kafka Connect, or similar technologies. Ensure that data processing is reliable and consistent with business requirements.
- Collaboration and Communication: Collaborate with data scientists, analysts, and software engineers to understand data requirements and deliver solutions that meet their needs. Communicate technical concepts effectively to both technical and non-technical stakeholders.
- Continuous Improvement: Stay updated with the latest developments in data engineering, Kafka, and related technologies. Drive continuous improvement in data engineering practices and processes.

Education
Bachelor's degree in Computer Science/Electronics & Communications or an equivalent degree.

Experience
- Minimum 4 years of relevant experience in developing and deploying applications in cloud settings.
- Working with the Data OS team to support internal stakeholders by developing data platform capabilities, full-stack analytics applications, and data pipelines leveraging Kafka and MQTT/HTTP.
- Experience in tools like Jira or Rally.
- Extensive experience in Microsoft Azure or GCP, data engineering, and data lifecycle management.
- Knowledge of cloud platforms (e.g., AWS, Azure, GCP) is desirable.
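
To ground the real-time processing responsibility above, here is a minimal Kafka Streams topology sketch that filters connected-vehicle telemetry and routes alert-worthy records to a separate topic. The broker address, topic names, and the "batteryLow" condition are hypothetical placeholders; an MQTT source would typically be bridged into Kafka via Kafka Connect before a topology like this runs.

```java
// Minimal Kafka Streams topology sketch for connected-vehicle telemetry (illustrative).
// Broker address, topic names, and the "batteryLow" condition are hypothetical placeholders.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class TelemetryAlertTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "telemetry-alerts");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read raw telemetry, keep alert-worthy records, and route them to a dedicated alerts topic.
        KStream<String, String> telemetry = builder.stream("vehicle-telemetry");
        telemetry
                .filter((vehicleId, payload) -> payload != null && payload.contains("\"batteryLow\":true"))
                .to("vehicle-alerts");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Shut down cleanly on JVM exit.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```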
