Posted: 3 weeks ago
Work from Office | Full Time
Job Description: Kafka/Streaming Data Engineer
Overview
We are seeking a Kafka/Streaming Data Engineer to join our team and support a new high-volume data ingestion and fulfillment architecture. The role requires strong hands-on expertise in Kafka cluster management, monitoring, and performance tuning, along with the ability to troubleshoot issues, propose solutions, and contribute to automation efforts.

Our current architecture is based on a custom solution leveraging Ab Initio for CDC, Kafka for event streaming, and Apache Flink for downstream fulfillment into databases. The production environment is being built, and we need a Kafka expert to help fine-tune, monitor, and stabilize the ecosystem as it goes live.

Responsibilities
- Manage and fine-tune Kafka clusters to ensure high throughput, scalability, and reliability.
- Automate Kafka installation, configuration, and deployment using strong Ansible scripting skills.
- Build a production-ready, reliable, and scalable solution for replicating data from on-premise to the AWS cloud.
- Monitor end-to-end data pipelines, including Ab Initio CDC jobs, Kafka topics, and Flink consumers.
- Capture and report key metrics (e.g., records published to Kafka, records consumed by Flink, lag, latency, throughput).
- Configure, optimize, and maintain Prometheus and Grafana dashboards for observability and alerting.
- Proactively troubleshoot production issues and provide alternative solutions, working closely with fulfillment and development teams.
- Participate in UAT and production readiness activities, including synthetic load testing and volume simulations.
- Drive automation for repetitive operational tasks and cluster tuning activities.

Requirements
- Strong hands-on experience with Apache Kafka (cluster management, partitions, replication, consumer groups, retention policies, monitoring).
- Solid background in streaming data pipelines and real-time processing.
- Experience with monitoring tools (Prometheus, Grafana) and alerting integrations.
- Knowledge of Apache Flink or equivalent ETL/streaming frameworks.
- Familiarity with relational databases (Postgres) and CDC techniques.
- Proven ability to troubleshoot, optimize, and fine-tune large-scale data systems.
- Experience working in high-volume, production-grade environments (multi-billion-row datasets preferred).
- Strong problem-solving mindset: ability to propose workarounds and long-term fixes.
- Excellent communication skills for collaborating across development, UAT, and fulfillment teams.
Nexure Tech
Location: Visakhapatnam, Warangal, Hyderabad
Experience: Not specified
Salary: 25.0 - 30.0 Lacs P.A.