
11 Data Streaming Jobs

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid


Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice Period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. Candidates with experience only in PySpark and not in Python will not be considered.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview:
We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure data engineering best practices.
- Stay current with cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
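A pipeline like the one this posting describes typically pairs a Kafka consumer with a Python transform step. Below is a minimal sketch of such a transform; the record fields and masking rule are illustrative assumptions, and the Kafka consumer loop is stubbed out so the logic runs standalone.

```python
import json

# Hypothetical transform step for a healthcare streaming pipeline:
# parse a raw Kafka message, mask the patient identifier (HIPAA-style
# de-identification), and keep only the fields downstream jobs need.
# Field names are illustrative, not taken from the posting.

def transform_record(raw: bytes) -> dict:
    event = json.loads(raw)
    return {
        "patient_id": "***" + event["patient_id"][-4:],  # mask all but last 4 chars
        "event_type": event["event_type"],
        "timestamp": event["timestamp"],
    }

# In production this function would run inside a Kafka consumer loop
# (e.g. polling a raw topic and producing to a curated topic); here we
# just call it on one serialized message.
msg = json.dumps({
    "patient_id": "P1234567",
    "event_type": "lab_result",
    "timestamp": "2024-05-01T10:00:00Z",
    "ssn": "000-00-0000",  # sensitive field, dropped by the transform
}).encode()
print(transform_record(msg))
# → {'patient_id': '***4567', 'event_type': 'lab_result', 'timestamp': '2024-05-01T10:00:00Z'}
```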

Posted 6 days ago


10.0 - 13.0 years

16 - 18 Lacs

Bengaluru

Work from Office


Looking for a Cloud Data Support Streaming Engineer with 8+ years of experience in Azure Data Lake, Databricks, PySpark, and Python. Role includes monitoring, troubleshooting, and support for streaming data pipelines.

Posted 1 week ago


3.0 - 5.0 years

15 - 22 Lacs

Bengaluru

Remote


Role & responsibilities:
- Design real-time data pipelines for structured and unstructured sources.
- Collaborate with analysts and data scientists to create impactful data solutions.
- Continuously improve data infrastructure based on team feedback.
- Take full ownership of complex data problems and iterate quickly.
- Promote strong documentation and engineering best practices.
- Monitor, detect, and fix data quality issues with custom tools.

Preferred candidate profile:
- Experience with big data tools like Spark, Hadoop, Hive, and Kafka.
- Proficiency in SQL and working with relational databases.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Familiarity with workflow tools like Airflow.
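The "fix data quality issues with custom tools" responsibility above usually maps to a small rule-based checker. A minimal sketch, with the rule names and record fields as illustrative assumptions:

```python
# Hypothetical data-quality checker: validate a batch of records against
# named rules and report which record failed which rule. Rules and record
# shapes are illustrative, not from the posting.

RULES = {
    "id_present": lambda r: bool(r.get("id")),
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

def check_batch(records):
    """Return a list of (record_index, rule_name) pairs for every failure."""
    failures = []
    for i, rec in enumerate(records):
        for name, rule in RULES.items():
            if not rule(rec):
                failures.append((i, name))
    return failures

batch = [
    {"id": "a1", "amount": 10.0},
    {"id": "", "amount": -3.0},  # fails both rules
]
print(check_batch(batch))  # → [(1, 'id_present'), (1, 'amount_non_negative')]
```

In a real pipeline the failures would be emitted as metrics or routed to a dead-letter topic rather than printed.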

Posted 1 week ago


5.0 - 8.0 years

20 - 35 Lacs

Chennai

Remote


We are seeking a skilled and motivated individual to join our team as a Real-Time Data Streaming Engineer. In this role, you will be responsible for designing, developing, and maintaining real-time data streaming applications using Apache Kafka. You will collaborate with cross-functional teams to integrate Kafka-based solutions into existing systems and ensure smooth data streaming and processing. Additionally, you will monitor and optimize Kafka clusters to ensure high availability, performance, and scalability. You will implement data pipelines and streaming processes that support business analytics and operational needs, while troubleshooting and resolving any issues that arise. Ideal candidates will have a strong foundation in Apache Kafka and real-time data streaming, proficiency in Java, Scala, or Python, and a solid understanding of distributed systems and microservices architecture.

Responsibilities:
- Design, develop, and maintain real-time data streaming applications using Apache Kafka.
- Collaborate with cross-functional teams to integrate Kafka solutions into existing systems.
- Monitor and optimize Kafka clusters to ensure high availability and performance.
- Implement data pipelines and streaming processes to support business analytics and operations.
- Troubleshoot and resolve issues related to data streaming and processing.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience with Apache Kafka and real-time data streaming.
- Proficiency in programming languages such as Java, Scala, or Python.
- Familiarity with distributed systems and microservices architecture.
- Strong problem-solving skills and the ability to work collaboratively in a team environment.
- Understanding of SOA, object-oriented analysis and design, and client/server systems.
- Expert knowledge of REST, JSON, XML, SOAP, WSDL, RAML, and YAML.
- Hands-on experience in large-scale SOA design, development, and deployment.
- Experience with API management technology.
- Experience with continuous integration and continuous delivery (CI/CD) tools and processes.
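The Kafka streaming work described above often reduces to a filter → enrich → route topology. Here is a sketch of that pattern using plain Python generators so it runs without a broker; the event fields and the lookup table are illustrative assumptions, and in a real deployment each stage would be a Kafka Streams processor or a consumer/producer pair.

```python
# Filter → enrich pattern, modelled with generators. Stages compose
# lazily, mirroring how records flow through a streaming topology.

REGION_BY_STORE = {"s1": "south", "s2": "north"}  # hypothetical lookup table

def filter_valid(events):
    """Drop events with a non-positive amount (stateless filter stage)."""
    for e in events:
        if e.get("amount", 0) > 0:
            yield e

def enrich(events):
    """Join each event against a static lookup table (enrichment stage)."""
    for e in events:
        yield {**e, "region": REGION_BY_STORE.get(e["store"], "unknown")}

events = [
    {"store": "s1", "amount": 5},
    {"store": "s2", "amount": 0},   # filtered out
    {"store": "s9", "amount": 2},   # unknown store
]
print(list(enrich(filter_valid(events))))
# → [{'store': 's1', 'amount': 5, 'region': 'south'}, {'store': 's9', 'amount': 2, 'region': 'unknown'}]
```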

Posted 2 weeks ago


4.0 - 8.0 years

10 - 20 Lacs

Pune, Delhi / NCR, Mumbai (All Areas)

Hybrid


Job Title: Data Engineer - Ingestion, Storage & Streaming (Confluent Kafka)

Job Summary:
As a Data Engineer specializing in ingestion, storage, and streaming, you will design, implement, and maintain robust, scalable, and high-performance data pipelines for the efficient flow of data through our systems. You will work with Confluent Kafka to build real-time data streaming platforms, ensuring high availability and fault tolerance. You will also ensure that data is ingested, stored, and processed efficiently and in real time to provide immediate insights.

Key Responsibilities:

Kafka-Based Streaming Solutions:
- Design, implement, and manage scalable and fault-tolerant data streaming platforms using Confluent Kafka.
- Develop real-time data streaming applications to support business-critical processes.
- Implement Kafka producers and consumers for ingesting data from various sources.
- Handle message brokering, processing, and event streaming within the platform.

Ingestion & Data Integration:
- Build efficient data ingestion pipelines to bring real-time and batch data from various sources into Kafka.
- Ensure smooth data integration across Kafka topics and handle multi-source data feeds.
- Develop and optimize connectors for data ingestion from diverse systems (e.g., databases, external APIs, cloud storage).

Data Storage and Management:
- Manage and optimize data storage solutions in conjunction with Kafka, including topics, partitions, retention policies, and data compression.
- Work with distributed storage technologies to store large volumes of structured and unstructured data, ensuring accessibility and compliance.
- Implement strategies for schema management, data versioning, and data governance.

Data Streaming & Processing:
- Leverage Kafka Streams and other stream processing frameworks (e.g., Apache Flink, ksqlDB) to process real-time data and provide immediate analytics.
- Build and optimize data processing pipelines to transform, filter, aggregate, and enrich streaming data.

Monitoring, Optimization, and Security:
- Set up and manage monitoring tools to track the performance of Kafka clusters, ingestion, and streaming pipelines.
- Troubleshoot and resolve issues related to data flows, latency, and failures.
- Ensure data security and compliance by enforcing appropriate data access policies and encryption techniques.

Collaboration and Documentation:
- Collaborate with data scientists, analysts, and other engineers to align data systems with business objectives.
- Document streaming architecture, pipeline workflows, and data governance processes to ensure system reliability and scalability.
- Provide regular updates to stakeholders on streaming and data ingestion pipeline performance and improvements.

Required Skills & Qualifications:

Experience:
- 3+ years of experience in data engineering, with a strong focus on Kafka, data streaming, ingestion, and storage solutions.
- Hands-on experience with Confluent Kafka, Kafka Streams, and related Kafka ecosystem tools.
- Experience with stream processing and real-time analytics frameworks (e.g., ksqlDB, Apache Flink).

Technical Skills:
- Expertise in Kafka Connect, Kafka Streams, and Kafka producer/consumer APIs.
- Proficiency in data ingestion and integration techniques from diverse sources (databases, APIs, etc.).
- Strong knowledge of cloud data storage and distributed systems.
- Experience with programming languages like Java, Scala, or Python for Kafka integration and stream processing.
- Familiarity with tools such as Apache Spark, Flink, Hadoop, or other data processing frameworks.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
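The "transform, filter, aggregate, and enrich" processing this posting lists is, at its simplest, windowed aggregation. A minimal tumbling-window count in plain Python, so it runs without Kafka Streams or Flink; the event timestamps and window size are illustrative:

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Count events per (key, window), with windows aligned to the epoch.

    `events` is an iterable of (timestamp_seconds, key) pairs; each event
    falls into exactly one non-overlapping window of `window_seconds`.
    """
    counts = Counter()
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(key, window_start)] += 1
    return dict(counts)

# Illustrative events: (timestamp, key). With 10-second windows,
# ts 0 and 5 land in window 0; ts 12 and 13 land in window 10.
events = [(0, "a"), (5, "a"), (12, "b"), (13, "a")]
print(tumbling_window_counts(events, 10))
# → {('a', 0): 2, ('b', 10): 1, ('a', 10): 1}
```

Kafka Streams and Flink provide the same semantics as managed operators, plus state stores and late-arrival handling that this sketch omits.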

Posted 2 weeks ago


6.0 - 10.0 years

15 - 18 Lacs

Hyderabad, Bengaluru

Work from Office


Client is looking for a strong Java candidate with the following skills. Spring WebFlux and streaming knowledge is a must and is the key thing they are looking for. The overall JD for the position:

- RDBMS: at least 1 year (required)
- CI/CD: 2-5 years (required)
- Cloud Computing: 2-5 years (required)
- Core Java: 5-10 years (required)
- Kubernetes: 2-5 years (required)
- Microservices: 2-5 years (required)
- MongoDB: at least 1 year (nice to have)
- NoSQL: at least 1 year (nice to have)
- Python: at least 1 year (required)
- Spring Boot: 5-10 years (required)
- Spring Data: 2-5 years (required)
- Spring Security: 2-5 years (required)
- Spring WebFlux: at least 1 year (required)
- Stream processing: at least 1 year (required)
- Java 17: 2-5 years (required)
- Apache Kafka: at least 1 year (required)
- Apache SOLR: at least 1 year (required)

Additional expectations:
- Expertise in solution design and large-scale enterprise application development.
- In-depth knowledge of integration patterns, integration technologies, and integration platforms.
- Experience with queuing technologies like Kafka.
- Good hands-on experience designing and building cloud-ready applications.
- Good programming skills in Java, Python, etc.
- Proficiency with dev/build tools: Git, Maven, Gradle.
- Experience with modern NoSQL/graph DB/data streaming technologies is a plus.
- Good understanding of Agile software development methodology.

Location: Hyderabad, Mangalore, Bhubaneswar, Trivandrum

Posted 3 weeks ago


3.0 - 8.0 years

10 - 20 Lacs

Chennai

Remote


Position Title: Apache Flink Engineer
Open Positions: 3
Employment Type: Permanent, Full-Time
Location: Chennai
Experience: 3+ years
Skills required: Confluent Kafka or Apache Kafka; Apache Spark or Apache Flink

Posted 3 weeks ago


2 - 6 years

15 - 20 Lacs

Hyderabad

Remote


Responsibilities:
- Maintain and improve data pipeline performance.
- Build data products for batch and real-time business needs.
- Create and track metrics for analytics.
- Troubleshoot and fix data-related issues.
- Stay updated with new data engineering tools and trends.
- Optimize data pipelines for better performance and reliability.
- Ensure data quality and integrity.
- Work with cross-functional teams to integrate data solutions.
- Advise engineering teams on data practices.

Requirements:
- Hands-on experience with data streaming technologies (Kafka, Flink).
- Strong expertise in data warehousing and building scalable solutions.
- Proficiency in NoSQL (e.g., HBase) and distributed storage systems like HDFS.
- Experience with real-time and batch data processing pipelines.
- Familiarity with cloud platforms (AWS, GCP, Azure).

Perks and Benefits:
- Medical insurance
- Internet reimbursement
- Flexible working hours

Posted 1 month ago


2 - 5 years

4 - 8 Lacs

Bengaluru

Work from Office


Candidate Profile: An AWS-certified candidate with 1+ years of experience working in a production environment is the specific ask. Within AWS, the key needs are working knowledge of Kinesis (streaming data), Redshift/RDS (querying), and DynamoDB (NoSQL database).

Posted 2 months ago


6 - 10 years

0 Lacs

Chennai, Bengaluru

Work from Office


JD:
- Knowledge of and experience with Kafka for real-time data streaming.
- Must have proficiency in ANSI SQL and NoSQL databases.
- Experience in Java, Spring Boot, CSS, etc.
- Must have expertise in developing microservices using Spring Boot.
- Must have solid experience with Java 8 and Java.
- Nice to have: experience in the Devices domain.
- Must have excellent problem-solving skills and attention to detail.
- Must have strong communication and collaboration skills.
- Must be able to work in a hybrid work model and in day shifts.
- Must have a proactive attitude and the ability to work independently.
- Must be able to mentor and guide junior engineers.

Posted 2 months ago


4 - 9 years

9 - 19 Lacs

Chennai, Pune, Mumbai (All Areas)

Work from Office


Dear Candidate,

This is regarding an opportunity for Data Streaming Professionals. Please find below the job description.

- Expertise in the Python language is a MUST.
- SQL (ability to write complex SQL queries) is a MUST.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST.
- Hands-on expertise with Apache Kafka is a MUST.
- Data Lake development experience.
- Orchestration (Apache Airflow is preferred).
- Spark and Hive: optimization of Spark/PySpark and Hive applications.
- Trino/AWS Athena (good to have).
- Snowflake (good to have).
- Data Quality (good to have).
- File storage (S3 is good to have).

Total experience: 4-9 years
Skills: Data Streaming (Python, Apache Flink Streaming or Spark Streaming, Kafka)
Location: Pune / Mumbai / Chennai / Bangalore

If you possess the relevant experience, kindly share the following details and an updated resume on the below-mentioned email ID:
- Current organization
- Current CTC
- Expected CTC
- Notice period

Posted 3 months ago
