210 Spark Streaming Jobs - Page 7

JobPe aggregates results for easy access, but you apply directly on the job portal.

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

EY has an opening for a Big Data Engineer based in Pune, requiring a minimum of 4 years of experience. As a key member of the technical team, you will collaborate with Engineers, Data Scientists, and Data Users in an Agile environment. Your responsibilities will include software design, Scala & Spark development, automated testing, promoting development standards, production support, troubleshooting, and liaising with BAs to ensure requirements are correctly interpreted and implemented. You will be involved in implementing tools and processes, handling performance, scale, availability, accuracy, and monitoring. Additionally, you will participate in regular planning and sta...

Posted 3 months ago

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You should have experience in understanding and translating data, analytics requirements, and functional needs into technical requirements while collaborating with global customers. Your responsibilities will include designing cloud-native data architectures to support scalable, real-time, and batch processing. You will be required to build and maintain data pipelines for large-scale data management in alignment with the data strategy and processing standards. Additionally, you will define strategies for data modeling, data integration, and metadata management. The role also requires strong experience in database, data warehouse, and data lake design and architecture. You should be pr...

Posted 3 months ago

6.0 - 8.0 years

8 - 10 Lacs

Chennai

Work from Office

We are seeking a highly experienced Big Data Lead with strong expertise in Apache Spark, Spark SQL, and Spark Streaming. The ideal candidate should have extensive hands-on experience with the Hadoop ecosystem, a solid grasp of multiple programming languages including Java, Scala, and Python, and a proven ability to design and implement data processing pipelines in distributed environments. Roles & Responsibilities: Lead design and development of scalable data processing pipelines using Apache Spark, Spark SQL, and Spark Streaming. Work with Java, Scala, and Python to implement big data solutions. Design efficient data ingestion pipelines leveraging Sqoop, Kafka, HDFS, and MapReduce. Optimiz...
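
For context, the kind of pipeline this role describes can be illustrated with a minimal sketch: a Spark Structured Streaming job in Scala that reads a Kafka topic and writes the parsed records to storage. The broker address, topic name, and output paths below are hypothetical placeholders, not details from the posting.

```scala
// Minimal sketch of a Spark Structured Streaming ingestion job reading from Kafka.
// Broker, topic, and paths are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-ingest-sketch")
      .getOrCreate()

    // Read raw events from a Kafka topic (placeholder broker and topic).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Kafka keys/values arrive as bytes; cast to strings before downstream parsing.
    val events = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Write to durable storage with checkpointing for fault tolerance.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/events")              // placeholder output path
      .option("checkpointLocation", "/chk/events") // placeholder checkpoint path
      .start()

    query.awaitTermination()
  }
}
```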

Posted 3 months ago

5.0 - 9.0 years

25 - 32 Lacs

Pune, Chennai, Coimbatore

Work from Office

Job Description: We are seeking an experienced Data Engineer with expertise in Big Data technologies and a strong background in distributed computing. The ideal candidate will have a proven track record of designing, implementing, and optimizing scalable data solutions using tools like Apache Spark, Python, and various cloud-based platforms. Key Responsibilities: Experience: 5-12 years of hands-on experience in Big Data and related technologies. Distributed Computing Expertise: Deep understanding of distributed computing principles and their application in real-world data systems. Apache Spark Mastery: Extensive experience in leveraging Apache Spark for building large-scale data process...

Posted 3 months ago

5.0 - 10.0 years

10 - 20 Lacs

Noida, Hyderabad, Greater Noida

Work from Office

Streaming data technical skills requirements: 5+ years of experience; solid hands-on and solution architecting experience in Big Data technologies (AWS preferred); hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR; hands-on experience with a programming language such as Scala with Spark; good command and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases; hands-on working experience with any of the data engineering analytics platforms (Hortonworks, Cloudera, MapR, AWS), AWS preferred; hands-on experience with data ingestion tools such as Apache NiFi, Apache Airflow, Sqoop, and Oozie; hands-on working experience of data processing at scale with event-driven syste...

Posted 3 months ago

5.0 - 10.0 years

6 - 15 Lacs

Bengaluru

Work from Office

Greetings! If you're interested, please apply via the link below: https://bloomenergy.wd1.myworkdayjobs.com/BloomEnergyCareers/job/Bangalore-Karnataka/Staff-Engineer---Streaming-Analytics_JR-19447 Role & responsibilities: Our team at Bloom Energy embraces the unprecedented opportunity to change the way companies utilize energy. Our technology empowers businesses and communities to responsibly take charge of their energy. Our energy platform has three key value propositions: resiliency, sustainability, and predictability. We provide infrastructure that is flexible for the evolving net zero ecosystem. We have deployed more than 30,000 fuel cell modules since our first commercial shipments i...

Posted 4 months ago

1.0 - 3.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Role: Does digging deep for data and turning it into useful, impactful insights get you excited? Then you could be our next SDE II Data - Real Time Streaming. In this role, you'll oversee your entire team's work, ensuring that each individual is working towards achieving their personal goals and Meesho's organisational goals. Moreover, you'll keep an eye on all engineering projects and ensure the team is not straying from the right track. You'll also be tasked with directing programming activities, evaluating system performance, and designing new programs and features for smooth functioning. What you will do: Build a platform for ingesting and processing multiple terabytes of data daily. Curate...

Posted 4 months ago

5.0 - 6.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Responsibilities: * Design, develop, test, and deploy big data solutions using PySpark, Java, Scala, and AWS. * Implement CI/CD pipelines with Docker and Kubernetes, working with SQL and NoSQL databases.

Posted 4 months ago

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Job Summary Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives. Software Requirements Required: Apache Spark (latest stable version) Scala (version 2.12 or higher) Python (version 3.6 or higher) Big Data tools and frameworks support...

Posted 4 months ago

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

As a Senior Data Engineer at JLL Technologies, you will: Design, architect, and develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed business requirements. Develop systems that ingest, cleanse, and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data. Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development. Develop a good understanding of how data flows and is stored across an organization and across multiple applications suc...

Posted 4 months ago

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking self-starters who can work in a diverse and fast-paced environment and join our Enterprise Data team. The candidate will be responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, SparkSQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD. Responsibilities: Design, architect, and develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed b...

Posted 4 months ago

4.0 - 8.0 years

0 - 1 Lacs

Hyderabad, Bengaluru

Hybrid

Role & responsibilities: As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the necessary health of the overall solution. Your Impact: Data Ingestion, Integration and Transformation; Data Storage and Computation Frameworks, Performance Optimizations; Analytics & Visualizations; Infrastructure & Cloud Computing; Data Management Platforms. Build functionality for data...

Posted 4 months ago

8.0 - 11.0 years

45 - 50 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate, We are hiring a Scala Developer to work on scalable data pipelines, distributed systems, and backend services. This role is perfect for candidates passionate about functional programming and big data. Key Responsibilities: Develop data-intensive applications using Scala. Work with frameworks like Akka, Play, or Spark. Design and maintain scalable microservices and ETL jobs. Collaborate with data engineers and platform teams. Write clean, testable, and well-documented code. Required Skills & Qualifications: Strong in Scala, functional programming, and JVM internals. Experience with Apache Spark, Kafka, or Cassandra. Familiarity with SBT, Cats, or Scalaz. Knowledge of CI/CD, Docker...
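
As a rough illustration of the "clean, testable" Scala this role asks for, here is a minimal sketch that keeps a pure DataFrame transformation separate from Spark I/O so it can be unit-tested in isolation. The dataset, column names, and paths are hypothetical.

```scala
// Minimal sketch: a pure transformation (no I/O) plus a thin main() that wires it up.
// Column names and paths are hypothetical placeholders.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object EtlJobSketch {
  // Keeps only the most recent record per order_id; easy to unit-test with a local SparkSession.
  def dedupeLatest(orders: DataFrame): DataFrame = {
    val w = Window.partitionBy("order_id").orderBy(col("updated_at").desc)
    orders.withColumn("rn", row_number().over(w))
      .filter(col("rn") === 1)
      .drop("rn")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("etl-job-sketch").getOrCreate()
    val input = spark.read.parquet("/data/orders_raw")            // placeholder input path
    dedupeLatest(input).write.mode("overwrite").parquet("/data/orders_clean") // placeholder output
  }
}
```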

Posted 4 months ago

2.0 - 6.0 years

6 - 10 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 09. The Role: Software Engineer II. The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets. What's in it for you: Build a care...

Posted 4 months ago

3.0 - 6.0 years

6 - 9 Lacs

Hyderabad

Work from Office

"Spark & Delta Lake Understanding of Spark core concepts like RDDs, DataFrames, DataSets, SparkSQL and Spark Streaming. Experience with Spark optimization techniques. Deep knowledge of Delta Lake features like time travel, schema evolution, data partitioning. Ability to design and implement data pipelines using Spark and Delta Lake as the data storage layer. Proficiency in Python/Scala/Java for Spark development and integrate with ETL process. Knowledge of data ingestion techniques from various sources (flat files, CSV, API, database) Understanding of data quality best practices and data validation techniques. Other Skills: Understanding of data warehouse concepts, data modelling techniques....

Posted 4 months ago

7.0 - 12.0 years

6 - 9 Lacs

Hyderabad

Work from Office

Understanding of Spark core concepts like RDDs, DataFrames, Datasets, Spark SQL, and Spark Streaming. Experience with Spark optimization techniques. Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning. Ability to design and implement data pipelines using Spark with Delta Lake as the data storage layer. Proficiency in Python/Scala/Java for Spark development and integration with ETL processes. Knowledge of data ingestion techniques from various sources (flat files, CSV, API, database). Understanding of data quality best practices and data validation techniques. Other Skills: Understanding of data warehouse concepts and data modelling techniques. Expertise in Git fo...

Posted 4 months ago

6.0 - 7.0 years

11 - 14 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Location: Remote / Pan India, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune. Notice Period: Immediate. iSource Services is hiring for one of its clients for the position of Java Kafka Developer. We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across ...
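
As a small illustration of the Kafka development this role centres on, below is a minimal Scala sketch of a producer built on the standard Apache Kafka Java client, configured for idempotent, fully acknowledged sends. The broker address, topic, and payload are hypothetical placeholders.

```scala
// Minimal sketch of a Kafka producer with delivery callback and reliability settings.
// Broker, topic, and record contents are hypothetical placeholders.
import java.util.Properties
import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerConfig, ProducerRecord, RecordMetadata}
import org.apache.kafka.common.serialization.StringSerializer

object OrderEventProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092") // placeholder broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.ACKS_CONFIG, "all")                // wait for full acknowledgement
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true") // avoid duplicates on retry

    val producer = new KafkaProducer[String, String](props)
    try {
      val record = new ProducerRecord[String, String]("orders", "order-1", """{"status":"open"}""")
      // Asynchronous send with a callback that logs delivery metadata or the error.
      producer.send(record, new Callback {
        override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
          if (exception != null) exception.printStackTrace()
          else println(s"delivered to ${metadata.topic()}-${metadata.partition()}@${metadata.offset()}")
      })
    } finally {
      producer.flush()
      producer.close()
    }
  }
}
```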

Posted 4 months ago

4.0 - 6.0 years

10 - 20 Lacs

Noida

Hybrid

Designation: Senior Software Engineer / Software Engineer - Data Engineering. Location: Noida. Experience: 4-6 years. Job Summary / Your Role in a Nutshell: The ideal candidate is a skilled Data Engineer proficient in Python, Scala, or Java with a strong background in Hadoop, Spark, SQL, and various data platforms, with expertise in optimizing the performance of data applications and contributing to rapid and agile development processes. What you'll do: Review and understand business requirements, ensuring that development tasks are completed within the timeline provided and that issues are fully tested with minimal defects. Partner with a software development team to implement best practi...

Posted 4 months ago

3.0 - 8.0 years

6 - 14 Lacs

Gurugram

Work from Office

The ideal candidate will have strong expertise in Python, Apache Spark, and Databricks, along with experience in machine learning. Role: Data Engineer.

Posted 4 months ago

6.0 - 9.0 years

25 - 32 Lacs

Bangalore/Bengaluru

Work from Office

Full-time role with a top German MNC, located in Bangalore; experience with Scala is a must. Job Overview: Work on the development, monitoring, and maintenance of data pipelines across clusters. Primary responsibilities: Develop, monitor, and maintain data pipelines for various plants. Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability. Work with stakeholders, including data officers and stewards, to assist with data-related technical i...

Posted 4 months ago

5.0 - 10.0 years

8 - 16 Lacs

Bhubaneswar, Bengaluru, Delhi / NCR

Work from Office

Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Apache Spark. Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL). Minimum 5 years of experience required. Educational Qualification: 15 years of full-time education. Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed and implemented efficiently and effectively, while meeting the needs of the organization. Your typical day will inv...

Posted 4 months ago

12.0 - 15.0 years

55 - 60 Lacs

Ahmedabad, Chennai, Bengaluru

Work from Office

Dear Candidate, We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing. Key Responsibilities: Design and manage data pipelines, storage layers, and ingestion frameworks. Build platforms for batch and streaming data processing (Spark, Kafka, Flink). Optimize data systems for scalability, fault tolerance, and performance. Collaborate with data engineers, analysts, and DevOps to enable data access. Enforce data governance, access controls, and compliance standards. Required Skills & Qualifications: Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow). Strong SQL and experience wi...

Posted 4 months ago

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink. Required Candidate Profile: Data engineering experience with large-scale systems. Expert proficiency in Java for data-intensive applications. Hands-on experience with lakehouse architectures, stream processing, and event streaming.

Posted 4 months ago

1.0 - 3.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Skills Required: Kafka, Spark Streaming. Proficiency in one of the programming languages, preferably Java, Scala, or Python. Education/Qualification: Bachelor's Degree in Computer Science, Engineering, Technology, or a related field. Desirable Skills: Kafka, Spark Streaming. Proficiency in one of the programming languages, preferably Java, Scala, or Python.

Posted 4 months ago

4.0 - 8.0 years

15 - 30 Lacs

Noida, Hyderabad, India

Hybrid

Spark architecture, Spark tuning, Delta tables, medallion architecture, Databricks, Azure cloud services, Python OOP concepts, complex PySpark transformations, reading data from different file formats and sources and writing to Delta tables, data warehousing concepts, and how to process large files and handle pipeline failures in current projects. Roles and Responsibilities: the same skill areas as above.
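
A minimal Scala sketch of the ingestion pattern described above: reading sources in different file formats and landing them in a bronze Delta table (the first layer of a medallion architecture), with basic failure handling around the write. It assumes a Delta-enabled Spark session (for example, on Databricks); all paths and columns are hypothetical.

```scala
// Minimal sketch of a bronze-layer ingest: multiple file formats landed into a Delta table.
// Assumes a Delta-enabled Spark session; paths and column names are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}

object BronzeIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("bronze-ingest-sketch").getOrCreate()

    // Source files in two formats; paths are placeholders.
    val csvDf: DataFrame  = spark.read.option("header", "true").csv("/landing/orders_csv")
    val jsonDf: DataFrame = spark.read.json("/landing/orders_json")

    // Align columns by name before unioning the two sources.
    val bronze = csvDf.unionByName(jsonDf, allowMissingColumns = true)

    try {
      // Append into the bronze Delta table; downstream silver/gold layers refine it.
      bronze.write.format("delta").mode("append").save("/lake/bronze/orders")
    } catch {
      case e: Exception =>
        // A real pipeline would alert and retry; here we just surface the failure.
        System.err.println(s"Bronze load failed: ${e.getMessage}")
        throw e
    }
  }
}
```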

Posted 4 months ago
