As a Backend Systems Engineer, you will architect, design, develop, and maintain robust, scalable backend systems that deliver high availability and performance. You will work with event-driven and message-driven architectures to achieve resilient, decoupled communication between services, and you will design, implement, and manage efficient data pipelines using technologies such as Kafka, Redis, and other data streaming and storage solutions. You will apply design patterns to keep code maintainable, scalable, and well structured, and you will mentor junior team members on design principles. You will also build and maintain robust CI/CD pipelines to automate software delivery and enable efficient deployments.

You will collaborate with cross-functional teams, including Product Management, Data Science, and other Engineering teams, to deliver secure, scalable solutions that align with business requirements. You will proactively identify and implement optimizations for system performance, reliability, and scalability, and participate actively in system design discussions, contributing insight and expertise, particularly for high-throughput, low-latency systems. You will troubleshoot and resolve production issues efficiently, contribute to root cause analysis, implement preventive measures, and continuously help improve our platform architecture, development processes, and engineering best practices.

In terms of core skills, we are looking for candidates with expertise in event-driven architecture, including designing and implementing systems based on event-driven principles, and experience with message brokers such as Kafka, RabbitMQ, or similar technologies for building reliable, scalable communication. Familiarity with MQTT is a plus. Strong proficiency in designing and building data pipelines with technologies such as Kafka and Redis, as well as both SQL and NoSQL databases, is essential. You should be able to design scalable, distributed systems with attention to fault tolerance, consistency, and performance, and demonstrate strong coding and problem-solving skills backed by a solid understanding of data structures and algorithms. Practical experience applying design patterns in real-world projects to solve complex problems and improve code quality is highly valued.
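To ground the event-driven and data-pipeline expectations above, here is a minimal, illustrative sketch of the kind of pipeline stage this role involves: a consumer that reads events from Kafka and updates an aggregate in Redis. It assumes the confluent-kafka and redis-py client libraries; the broker address, topic, consumer group, and event schema are hypothetical placeholders, not a prescribed implementation.

```python
import json

import redis
from confluent_kafka import Consumer

# Hypothetical broker address, consumer group, and topic; adjust for your environment.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-metrics",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

# Redis keeps a running per-customer order count that downstream services can read.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

try:
    while True:
        msg = consumer.poll(1.0)          # block up to 1s waiting for the next event
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())   # assumed JSON payload with a customer_id field
        cache.hincrby("orders:per_customer", event["customer_id"], 1)
finally:
    consumer.close()
```

A production pipeline would add delivery guarantees, schema validation, and richer error handling, but the shape stays the same: consume an event, apply a small transformation, and write to a fast store.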
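Similarly, the emphasis on design patterns can be made concrete with a small example. The sketch below uses the strategy pattern to keep event-handling logic pluggable and testable; the event types and handler classes are hypothetical and serve only to illustrate the kind of structure we value.

```python
from abc import ABC, abstractmethod


class EventHandler(ABC):
    """Strategy interface: each concrete handler encapsulates one processing policy."""

    @abstractmethod
    def handle(self, event: dict) -> None: ...


class OrderCreatedHandler(EventHandler):
    def handle(self, event: dict) -> None:
        print(f"Persisting new order {event.get('order_id')}")


class OrderCancelledHandler(EventHandler):
    def handle(self, event: dict) -> None:
        print(f"Releasing inventory for order {event.get('order_id')}")


class EventDispatcher:
    """Context: routes each event to the strategy registered for its type."""

    def __init__(self) -> None:
        self._handlers: dict[str, EventHandler] = {}

    def register(self, event_type: str, handler: EventHandler) -> None:
        self._handlers[event_type] = handler

    def dispatch(self, event: dict) -> None:
        handler = self._handlers.get(event["type"])
        if handler is None:
            raise ValueError(f"No handler registered for {event['type']!r}")
        handler.handle(event)


dispatcher = EventDispatcher()
dispatcher.register("order_created", OrderCreatedHandler())
dispatcher.register("order_cancelled", OrderCancelledHandler())
dispatcher.dispatch({"type": "order_created", "order_id": "A-1001"})
```

New handlers can be added without touching the dispatcher, which keeps the code open to extension and easy to cover with unit tests.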