Posted: 1 week ago
Remote | Full Time
We are seeking a Kafka Developer with a strong focus on real-time data streaming and distributed systems. The role centers on developing and managing robust data pipelines and carries significant Kafka administration responsibilities. You will collaborate with cross-functional teams to integrate Kafka-based solutions into existing systems, ensure smooth data streaming and processing, and monitor and optimize Kafka clusters for high availability, performance, and scalability. You will implement data pipelines and streaming processes that support business analytics and operational needs, and troubleshoot and resolve issues as they arise. Ideal candidates have a strong foundation in Apache Kafka and real-time data streaming, proficiency in Java, Scala, or Python, and a solid understanding of distributed systems and microservices architecture.
• Develop & Optimize Data Streaming Solutions: Design, develop, and maintain high-performance data pipelines and real-time streaming applications primarily using Java with Apache Kafka, Kafka Streams, and Apache Flink. Experience with Python is a plus.
• Kafka Administration & Integration: Install, configure, monitor, and optimize Kafka clusters. This includes extensive experience with Kafka Connect and specifically the MongoDB Connector for seamless data integration.
• MongoDB Expertise: Design and optimize MongoDB schemas, develop complex queries, and manage MongoDB replication sets and sharded clusters for scalable data storage.
• Software Engineering Excellence: Apply strong principles of Object-Oriented Programming (OOP), Test-Driven Development (TDD), and proven Software Design Patterns to deliver clean, maintainable, and scalable code.
• DevOps & Monitoring: Utilize DevOps practices (CI/CD, Docker, Kubernetes) for automated deployments, and actively monitor and troubleshoot distributed systems to ensure high availability and performance.
• Kafka Streams & Event Processing Experience
• Experience with real-time stream processing frameworks such as Apache Flink, Spark, or ksqlDB.
• Experience with stateful processing (e.g., windowing, aggregations).
• Experience with Schema Evolution & Avro/Protobuf
• Data Transformation Pipelines: Experience with real-time transformations, joins, and aggregations using the Kafka Streams API or Flink/Spark.
• Design, develop, and maintain real-time data streaming applications using Apache Kafka.
• Collaborate with cross-functional teams to integrate Kafka solutions into existing systems.
• Monitor and optimize Kafka clusters to ensure high availability and performance.
• Implement data pipelines and streaming processes to support business analytics and operations.
• Troubleshoot and resolve issues related to data streaming and processing.
• Proven experience with Apache Kafka and real-time data streaming.
• Proficiency in programming languages such as Java, Scala, or Python.
• Familiarity with distributed systems and microservices architecture.
• Strong problem-solving skills and the ability to work collaboratively in a team environment.
• Experience with API management technologies.
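As context for the stateful-processing requirement above, here is a minimal sketch of a tumbling-window count aggregation, the kind of logic Kafka Streams or Flink evaluates over an event stream. It uses only the Python standard library (no Kafka client), and the event names and window size are illustrative assumptions:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=60_000):
    """Group (timestamp_ms, key) events into fixed, non-overlapping
    windows and count occurrences per key. Mimics the shape of a
    windowed count in Kafka Streams; illustrative only."""
    counts = defaultdict(int)  # (window_start_ms, key) -> count
    for ts, key in events:
        window_start = ts - (ts % window_ms)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Two clicks fall in the [0, 60000) window; the later click and the
# view fall in the [60000, 120000) window.
events = [(1_000, "click"), (59_999, "click"), (60_000, "click"), (61_500, "view")]
result = tumbling_window_counts(events)
```

In a real deployment this aggregation would run inside the stream processor (e.g., `KStream.groupByKey().windowedBy(...).count()` in Kafka Streams), with state stored in a fault-tolerant state store rather than an in-memory dict.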
Klarin Technologies
Kochi, Kerala, India
Salary: Not disclosed