Posted: 3 days ago
On-site | Full Time
Overview:
We are seeking a talented and motivated Data Engineer with 2-5 years of experience in stream
processing, particularly with Apache Flink and Kafka, alongside expertise in Spark and AWS technologies.
As a Data Engineer, you will play a crucial role in designing, implementing, and maintaining robust
stream processing solutions that handle large volumes of data in real time, ensuring high
performance, scalability, and reliability.
Responsibilities:
Stream Processing Development: Design, develop, and optimize stream processing pipelines using
Apache Flink and Kafka to process real-time data streams efficiently (a minimal illustrative sketch follows this list).
Data Ingestion: Implement robust data ingestion pipelines to collect, process, and distribute
streaming data from various sources into the Flink and Kafka ecosystem.
Data Transformation: Perform data transformation and enrichment operations on streaming data
using Spark Streaming and other relevant technologies to derive actionable insights.
Performance Optimization: Continuously optimize stream processing pipelines for performance,
scalability, and reliability, ensuring low-latency and high-throughput data processing.
Monitoring and Troubleshooting: Monitor stream processing jobs, troubleshoot issues, and
implement necessary optimizations to ensure smooth operation and minimal downtime.
Integration with AWS Services: Leverage AWS technologies such as Amazon Kinesis, AWS Lambda,
Amazon EMR, and others to build end-to-end stream processing solutions in the cloud environment.
Data Governance and Security: Implement data governance and security measures to ensure
compliance with regulatory requirements and protect sensitive data in streaming pipelines.
Collaboration: Collaborate closely with cross-functional teams including data scientists, software
engineers, and business stakeholders to understand requirements and deliver impactful solutions.
Documentation: Create and maintain comprehensive documentation for stream processing
pipelines, including design specifications, deployment instructions, and operational procedures.
Continuous Learning: Stay updated with the latest advancements in stream processing technologies,
tools, and best practices, and incorporate them into the development process as appropriate.
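For illustration only, and not part of the role requirements: the sketch below shows, in Java, the general shape of the Flink-on-Kafka pipeline described in the Stream Processing Development item, using the Flink DataStream API and the Kafka connector. The topic name "events", broker address "kafka:9092", and consumer group "flink-consumer" are placeholder assumptions, not details of this role.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingPipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read raw string events from a Kafka topic (topic and broker names are placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("events")
                .setGroupId("flink-consumer")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events");

        // Minimal transformation step: drop blank records and normalize case,
        // standing in for the enrichment logic a real pipeline would apply.
        events.filter(e -> !e.isBlank())
              .map(String::toUpperCase)
              .print();

        env.execute("Kafka -> Flink sketch");
    }
}

A production job would additionally configure checkpointing, watermarking, and a durable sink, but the source-transform-sink shape shown here is the same.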
Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
2-5 years of professional experience in data engineering roles with a focus on stream processing.
Strong proficiency in Apache Flink and Kafka for building real-time stream processing applications.
Hands-on experience with Spark and Spark Streaming for batch and stream processing.
Solid understanding of cloud computing platforms, particularly AWS services such as Amazon
Kinesis, AWS Lambda, Amazon EMR, etc.
Proficiency in programming languages such as Java, Scala, or Python.
Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
Excellent problem-solving skills and the ability to troubleshoot complex distributed systems.
Strong communication skills and the ability to work effectively in a collaborative team environment.