Job Description
Job Title: Senior Kafka & Monitoring Engineer
Experience Required: 6+ Years
Location: Remote
Employment Type: Contract / Full-Time (as applicable)
Role Overview
We are seeking a highly skilled Kafka & Monitoring Engineer to:
Architect, develop, and manage real-time data pipelines using Apache Kafka.
Lead monitoring infrastructure using Grafana and Elasticsearch.
The ideal candidate thrives in distributed, high-throughput environments and has a strong focus on:
Observability
Performance tuning
Scalable data infrastructure
Key Responsibilities
Design, build, and maintain real-time Kafka-based data pipelines to support mission-critical streaming applications.
Develop Kafka producers and consumers using .NET or Python (a minimal Python sketch follows this list).
Tune and optimize Kafka clusters for high availability, performance, and fault tolerance.
Create and maintain Grafana dashboards to monitor Kafka metrics and infrastructure health.
Integrate Grafana with Elasticsearch and other observability tools for end-to-end monitoring.
Implement alerting systems to proactively monitor failures and performance degradation.
Collaborate with DevOps, Data Engineering, and Application Teams to ensure consistent and reliable streaming operations.
Enforce best practices for security, reliability, and compliance in streaming and monitoring implementations.
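For illustration of the producer/consumer responsibility above, here is a minimal sketch using the confluent-kafka Python client; the broker address, topic name, and group id are placeholders, not this role's actual stack:

# Minimal Kafka producer/consumer sketch (assumes: pip install confluent-kafka,
# a broker at localhost:9092, and a topic named "events" -- all placeholders).
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Delivery callback: surface failures instead of silently dropping messages.
    if err is not None:
        print(f"Delivery failed: {err}")

producer.produce("events", value=b'{"status": "ok"}', callback=on_delivery)
producer.flush()  # Block until all queued messages are delivered.

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "monitoring-demo",    # Consumer group used for offset tracking.
    "auto.offset.reset": "earliest",  # Start from the beginning if no stored offset.
})
consumer.subscribe(["events"])
try:
    msg = consumer.poll(timeout=5.0)  # Returns None if nothing arrives in time.
    if msg is not None and msg.error() is None:
        print(msg.value())
finally:
    consumer.close()  # Commit offsets and leave the group cleanly.

The delivery callback and the explicit consumer group mirror the fault-tolerance and offset-management concerns listed in the qualifications below.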
Required Skills & Qualifications
6+ years of hands-on experience with Apache Kafka, Grafana, and Elasticsearch in production.
Deep knowledge of Kafka internals: topics, partitions, replication, offsets, consumer groups.
Strong programming experience in .NET or Python for Kafka client development.
Proven ability to build observability dashboards using Grafana integrated with multiple data sources.
Proficient in event-driven architecture and real-time messaging systems.
Familiar with time-series databases, Kafka monitoring tools, and logging stacks.
Experience with data serialization using JSON, Avro, or Protobuf (see the short example after this list).
Working knowledge of Azure cloud services and monitoring Kafka in cloud-native environments.
Ability to work independently and collaboratively in distributed team setups.
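As a small illustration of the serialization qualification above, a sketch using Python's standard json module; Avro or Protobuf would add a schema (typically managed via a schema registry), omitted here, and the record fields are hypothetical:

import json

# Serialize a record to bytes before producing it to Kafka; Kafka itself only
# sees opaque byte arrays, so producer and consumer must agree on the format.
record = {"order_id": 42, "amount": 19.99, "currency": "USD"}
payload = json.dumps(record).encode("utf-8")

# On the consuming side, decode the bytes back into a structured object.
decoded = json.loads(payload.decode("utf-8"))
assert decoded == record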
Preferred (Good To Have)
Hands-on experience with Docker, Kubernetes, or other container orchestration platforms.
Exposure to CI/CD pipelines and DevOps practices.
Security awareness in managing Kafka access and IAM policies.
Knowledge of OpenTelemetry, Prometheus, or other observability tools (a minimal example follows).
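For the observability tooling above, a minimal sketch using the prometheus_client Python library to expose a counter that Prometheus can scrape and Grafana can chart; the metric name and port are illustrative:

import time
from prometheus_client import Counter, start_http_server

# Expose a /metrics HTTP endpoint for Prometheus to scrape (port is illustrative).
start_http_server(8000)

# A counter tracking processed messages; Grafana can graph its rate() over time.
messages_processed = Counter(
    "pipeline_messages_processed_total",
    "Number of messages processed by the pipeline",
)

while True:
    messages_processed.inc()  # Increment once per processed message.
    time.sleep(1)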
Skills: Apache Kafka, Grafana, Elasticsearch, Python, .NET, OpenTelemetry, Prometheus, data serialization (JSON, Avro, Protobuf), Azure cloud services, Kubernetes, Docker, CI/CD pipelines, event-driven architecture, real-time messaging systems, time-series databases, infrastructure