
84 Apache Storm Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 12.0 years

8 - 18 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Work from Office

7+ years of experience (3+ with Kafka — Apache, Confluent, MSK — and RabbitMQ), with strong skills in monitoring, optimization, and incident resolution. Proficient in brokers, connectors, ZooKeeper/KRaft, schema registry, and middleware performance metrics.

Posted Date not available

Apply

5.0 - 8.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Must-have skills: Azure Databricks, Python and PySpark, Spark. Expert-level understanding of distributed computing principles. Expert-level knowledge and experience in Apache Spark. Hands-on experience with Azure Databricks, Data Factory, Data Lake Store/Blob Storage, and SQL DB. Experience creating big data pipelines with Azure components. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Experience with messaging systems such as Kafka or RabbitMQ. Good understanding of big data querying tools such as Hive and Impala...

Posted Date not available

Apply

10.0 - 15.0 years

22 - 27 Lacs

Bengaluru

Work from Office

A hands-on Java Architect/Lead with a stellar track record in designing and leading enterprise-grade applications. This role calls for someone with deep architectural insight, strong system design instincts, and the ability to mentor and guide technical teams toward building resilient, distributed platforms. The ideal candidate will have strong expertise in Java, Spring Boot, and distributed streaming technologies like Kafka; experience with Apache Storm is a plus. Key Competencies: 10+ years of robust Java development experience, including 5+ years in architecture or technical leadership. Expertise in Java (JDK 17+), Spring Boot, and modular microservices architecture. Proven delivery of l...

Posted Date not available

Apply

8.0 - 12.0 years

4 - 8 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Roles & Responsibilities: 8-10 years of total working experience, including experience with big data tools like Spark, Kafka, Hadoop, etc. Design and deliver consumer-centric, high-performance systems. You will be dealing with huge volumes of data arriving through batch and streaming platforms, and will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various business demands. Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting. Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc. Design, build, test, and deploy streaming pipelines f...

Posted Date not available

Apply

3.0 - 8.0 years

9 - 13 Lacs

Gurugram, Bengaluru

Work from Office

JOB DESCRIPTION: Collaborate with customers to gather requirements and understand their business processes. Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, SQL, and Azure or AWS data technologies. Build analytics tools that utilize the data pipeline to provide actionable insigh...

Posted Date not available

Apply

4.0 - 6.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Strong problem-solving skills with a focus on product development. Domain expertise in Big Data, Data Platforms, and Distributed Systems. Proficiency in Java, Scala, or Python (hands-on experience with Apache Spark is essential). Experience with data ingestion frameworks such as Apache Storm, Flink, or Spark Streaming. Experience with streaming technologies like Kafka, Kinesis, Oplogs, Binlogs, or Debezium. Strong database skills with experience in HDFS, Delta Lake, Iceberg, or Lakehouse architectures.

Posted Date not available

Apply

6.0 - 8.0 years

11 - 16 Lacs

Noida, Uttar Pradesh

Work from Office

About the Role: This position requires someone to work on complex technical projects and collaborate closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense and specialization in Hadoop and Spark technologies. Requirements: Minimum 6-8 years of experience in Big Data technologies. The position: Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day. Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability. Make changes to our diagnosing any problems across the entire techni...

Posted Date not available

Apply

5.0 - 7.0 years

2 - 5 Lacs

Pune

Work from Office

Job Title: Data Engineer. Experience: 5-7 Years. Location: Pune. Job Description / Roles & Responsibilities: Create and maintain optimal data pipeline architecture. Build data pipelines that transform raw, unstructured data into formats that data analysts can use for analysis. Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and delivery of data from a wide variety of data sources using SQL and AWS Big Data ...

Posted Date not available

Apply

2.0 - 5.0 years

27 - 40 Lacs

Gurugram

Work from Office

Xtelify Ltd || Senior Data Engineer. Xtelify Ltd is looking for a Data Engineer to join the Data Platform team who can help develop and deploy data pipelines at a huge scale of ~5B daily events and a concurrency of 500K users. The platform is built on a cloud-native modern data stack (AWS/GCP), enabling real-time reporting and deep data exploration from first principles. Experience: 2-5 Years. Job Location: Gurgaon. Responsibilities: • Create and maintain a robust, scalable, and optimized data pipeline. • Handle TBs of data daily across Xtelify's music and video streaming platforms. • Extract and consume data from live systems to analyze and operate in a 99.999% SLA Big Data environment. • Build a...

Posted Date not available

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
