Experience: 7+ years

Must-Have Skills:
- Golang programming expertise
- AWS/Azure cloud experience
- Kubernetes proficiency

Who You Are
- A problem solver who constantly looks for ways to improve code, tools, workflows, and documentation.
- A systems thinker, curious about networking, load balancing, Linux internals, and how systems interact.
- Comfortable working in Golang and cloud-native ecosystems (Docker, Kubernetes, Prometheus, service meshes, distributed tracing).
- An effective communicator, open to learning, asking questions, and challenging assumptions.
- Bonus: experience with storage technologies is a plus.

What You'll Do
- Build and shape next-generation programmable infrastructure solutions.
- Define and implement features that enhance developer and user experiences.
- Contribute to open-source projects, with opportunities to speak at conferences and meetups.
- Work with cloud-native technologies and public clouds (AWS, Azure, GCP).
- Collaborate using GitHub and Slack, and embrace async workflows in a distributed team.

Why Join Us?
- Work on cutting-edge cloud-native infrastructure.
- Grow in an environment that values learning, ownership, and open collaboration.
- Opportunity to contribute to open source and industry events.

If this sounds exciting, let's talk!
About the project: We are building a data protection platform for SaaS companies. Experience in building or working with SaaS products and storage will be a strong advantage in this role.

Role and Responsibilities
- Design and build the SaaS product; you will help build and scale it.
- Contribute to and drive the open-source projects that will be part of the SaaS platform; some key technologies will be open-sourced.
- Collaborate with a distributed team of engineers to build core components of the platform.
- Work with a modern technology stack based on Golang, containers, public cloud services, and cloud-native technologies such as Kubernetes.

Requirements
- A strong bias toward action and direct, frequent communication.
- Expertise in developing, testing, and debugging production-quality, scalable, concurrent systems.
- Experience with distributed systems concepts and architecture.
- Strong computer science fundamentals (data structures, algorithms, and concurrency).
- Proficiency in a programming language such as Golang.
- Passion for code quality, extensibility, coding standards, testing, and automation.

Nice to have
- Experience building core features of storage, database, data protection, or security systems.
- Public cloud development experience (AWS strongly preferred).
- Golang experience.
- Open-source software contributions.
- Experience working in startup environments or on early-stage products.
Senior Kubernetes Engineer – Airgap, CI/CD, Cloud-Native (Hyderabad, Onsite)

Must-Have Skills:
- 6+ years of experience in Kubernetes deployment and management.
- Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security and compliance knowledge for regulated industries.

Preferred:
- Experience with GKE, RKE, Rook-Ceph, and certifications such as CKA or CKAD.

Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.

What You'll Do
- Architect and automate Kubernetes solutions for air-gapped and multi-region clusters.
- Optimize CI/CD pipelines and cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate and guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.

Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation and infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.

Location: Hyderabad (Onsite)

Ready to shape the future of Kubernetes infrastructure? Apply now!
Work Mode: WFO, 5 days
Experience: 7+ years

Core skills:
- Hands-on Kubernetes experience
- Linux troubleshooting skills
- Experience with on-prem servers and their management
- Helm
- Docker
- Ingress and ingress controllers
- Networking basics
- Proficient communication

Must-Have Skills:
- Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security and compliance knowledge for regulated industries.

Preferred:
- Experience with GKE, RKE, Rook-Ceph, and certifications such as CKA or CKAD.

Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.

What You'll Do
- Architect and automate Kubernetes solutions for air-gapped and multi-region clusters.
- Optimize CI/CD pipelines and cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate and guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.

Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation and infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.

Location: Hyderabad (Onsite)
This role requires 8-10 years of experience in cloud engineering and platform development. You will lead squads in delivering cloud infrastructure and platform services, building, configuring, and optimizing AWS EKS, Kafka, Cassandra, and MongoDB environments. Your responsibilities will also include implementing GitOps pipelines and Terraform-based infrastructure automation, conducting architecture and code reviews to maintain high quality standards, mentoring junior engineers, ensuring technical alignment within the team, and working closely with other squads and leadership to integrate platform services seamlessly.

To succeed in this role, you must have strong expertise in AWS, Kubernetes (EKS), GitOps, ArgoCD, and Terraform, along with production experience with Kafka, Cassandra, and MongoDB. A solid understanding of distributed system scalability and performance tuning is essential. You should also be able to lead and deliver projects on time and to a high standard.
About the project: We are developing a data protection platform designed specifically for SaaS companies. Prior experience building or working with SaaS products and storage solutions will greatly enhance your performance in this position.

Role and Responsibilities:
As part of the team, your main responsibilities will include designing and building the SaaS product, contributing to and driving the open-source projects integral to the SaaS platform, and collaborating with a distributed group of engineers to establish core components of the platform. You will use a modern technology stack centered on Golang, containers, public cloud services, and cloud-native technologies such as Kubernetes.

Requirements:
The ideal candidate takes a proactive approach and favors direct, frequent communication. Proficiency in developing, testing, and debugging scalable, concurrent systems is essential, as are familiarity with distributed systems concepts and architecture and strong computer science fundamentals (data structures, algorithms, and concurrency). Proficiency in a programming language such as Golang is crucial, and a deep passion for code quality, extensibility, coding standards, testing, and automation is highly valued.

Nice to have:
Candidates with experience developing core features for storage, database, data protection, or security systems will be preferred. Public cloud development experience (AWS preferred), proficiency in Golang, contributions to open-source software, and prior exposure to startup environments or early-stage product development are also advantageous.
As a QA leader with a data-driven approach, you will play a crucial role in ensuring the accuracy and efficiency of big-data pipelines. Your responsibilities will include designing and conducting comprehensive data integrity tests for Spark queries to identify and resolve discrepancies. You will also develop and implement automated validation scripts that integrate seamlessly with our CI/CD workflows (a minimal example of this kind of check is sketched below).

In addition, you will define and execute performance benchmarks across various Spark cluster configurations, analyzing metrics such as throughput, latency, and resource utilization. Your expertise will be instrumental in fine-tuning Spark configurations to meet SLAs for data freshness and query response times. As a mentor, you will guide and coach junior testers, lead test planning sessions, and document best practices to ensure reproducible and scalable testing processes. Collaboration with data engineering, DevOps, and product teams will be essential to embed quality gates into the development lifecycle.

This role offers the opportunity to take ownership of the quality and performance of our core big-data platform, collaborate with experienced data engineers on cutting-edge Spark deployments, and drive automation and efficiency in an innovation-focused environment. Professional growth, training opportunities, and participation in industry conferences are additional benefits of joining our team.
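To give a flavor of the data-integrity work described above, here is a minimal PySpark sketch that compares row counts and a content fingerprint between a source and a target table. The table and column names (`raw.events`, `curated.events`, the key columns) are hypothetical placeholders; a real suite would parameterize them and run as a gate inside the CI/CD pipeline.

```python
# Minimal data-integrity check: compare a source and a target Spark table.
# Table names and columns below are illustrative placeholders.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("integrity-check").getOrCreate()

source = spark.table("raw.events")      # hypothetical source table
target = spark.table("curated.events")  # hypothetical target table

# 1. Row counts must match after the pipeline runs.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"row count mismatch: {src_count} != {tgt_count}"

# 2. A cheap content fingerprint: sum of per-row hashes over the key columns.
def fingerprint(df, cols):
    return df.select(F.sum(F.xxhash64(*cols)).alias("fp")).first()["fp"]

key_cols = ["event_id", "user_id", "amount"]  # hypothetical key columns
assert fingerprint(source, key_cols) == fingerprint(target, key_cols), \
    "content drift detected between source and target"

print("integrity checks passed")
```

Order-insensitive fingerprints like this summed hash are a common cheap first line of defense; full row-by-row diffs are reserved for when the fingerprint disagrees.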
Data Engineer (3–6 Years)

We are looking for a Data Engineer who can work across modern data platforms and streaming frameworks to build scalable and reliable pipelines. If you enjoy working with Spark on Databricks, Kafka, Snowflake, and MongoDB, and want to solve real-world data integration challenges, this role is for you.

What you'll do:
- Develop ETL/ELT pipelines in Databricks (PySpark notebooks) or Snowflake (SQL/Snowpark), ingesting from sources like Confluent Kafka (see the sketch after this listing).
- Handle data storage optimizations using Delta Lake/Iceberg formats, ensuring reliability (e.g., time travel for auditing in fintech pipelines).
- Integrate with Azure ecosystems (e.g., Fabric for warehousing, Event Hubs for streaming), supporting BI/ML teams, e.g., preparing features for demand forecasting models.
- Contribute to real-world use cases, such as building dashboards for healthcare outcomes or optimizing logistics routes with aggregated IoT data.
- Write clean, maintainable code in Python or Scala.
- Collaborate with analysts, engineers, and product teams to translate data needs into scalable solutions.
- Ensure data quality, reliability, and observability across the pipelines.

What we're looking for:
- 3–6 years of hands-on experience in data engineering.
- Experience with Databricks / Apache Spark for large-scale data processing.
- Familiarity with Kafka, Kafka Connect, and streaming data use cases.
- Proficiency in Snowflake, including ELT design, performance tuning, and query optimization.
- Exposure to MongoDB and working with flexible document-based schemas.
- Strong programming skills in Python or Scala.
- Comfort with CI/CD pipelines, data testing, and monitoring tools.

Good to have:
- Experience with Airflow, dbt, or similar orchestration tools.
- Work on cloud-native stacks (AWS, GCP, or Azure).
- Contributions to data governance and access control practices.
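To make the pipeline work concrete, here is a minimal sketch, assuming a Databricks-style environment with the Kafka source and Delta Lake libraries available, of a Structured Streaming job that ingests a Kafka topic and lands it in a Delta table. The broker address, topic name, and storage paths are placeholders, not details from the posting.

```python
# Sketch: stream a Kafka topic into a Delta table with checkpointed recovery.
# Broker, topic, and storage paths are illustrative placeholders.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "orders")                        # placeholder topic
    .option("startingOffsets", "earliest")
    .load()
)

# Kafka delivers key/value as binary; decode the value and keep the event time.
events = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")  # restart safety
    .outputMode("append")
    .start("/mnt/delta/orders")
)
query.awaitTermination()
```

The checkpoint location is what gives the job restartability, and landing in Delta is what enables the time-travel auditing mentioned in the responsibilities.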
We're looking for an experienced Search Engineer with deep expertise in Elasticsearch and strong proficiency in Java to help us build and optimize high-performance search solutions. In this role, you'll design scalable indices, craft efficient queries, and tune clusters to deliver fast and relevant search experiences (a small query sketch follows this listing). Beyond core development, you'll collaborate closely with cross-functional teams in an Agile environment, contribute to clean and modular codebases, and ensure quality through robust testing and CI/CD practices. If you're passionate about search technologies, system performance, and delivering seamless user experiences, we'd love to hear from you.

Must-have:
- 4–12 years of professional software development experience.
- Deep experience with Elasticsearch: modeling data, optimizing indices, designing queries, tuning clusters, and ensuring performance.
- Strong command of Java, demonstrating clean coding, modular architecture, and DRY principles.
- Solid understanding of object-oriented design, concurrency, and efficient data structures.
- Proficiency in automated testing frameworks (e.g., JUnit, Mockito) and CI/CD pipelines.
- Skilled in collaborating in Agile settings: participating in stand-ups, retrospectives, and iterative delivery.
- Excellent communication and the ability to convey technical concepts to both technical and non-technical stakeholders.

Nice-to-have / Preferred:
- Hands-on experience with Spring, Spring Data Elasticsearch, or comparable frameworks.
- Familiarity with search use cases: autocomplete, fuzzy matching, filtering, relevance tuning, pagination.
- Knowledge of JVM internals, performance profiling, and memory management.
- Exposure to search analytics and tools like Kibana or custom dashboards (focused on search metrics rather than logs).
- Experience mentoring junior developers or leading small cross-functional projects.
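As an illustration of two of the use cases named above, fuzzy matching and pagination, here is a minimal sketch using the official Elasticsearch Python client (Python is used for all examples in this document; the role itself is Java-centric, and the same query body translates directly to the Java client). The host, index, and field names are hypothetical placeholders.

```python
# Sketch: fuzzy full-text search with pagination against an Elasticsearch index.
# Host, index, and field names are illustrative placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

resp = es.search(
    index="products",  # hypothetical index
    query={
        "match": {
            "title": {
                "query": "wireles headphnes",  # misspelled on purpose
                "fuzziness": "AUTO",           # tolerate small typos
            }
        }
    },
    from_=0,   # pagination offset
    size=10,   # page size
)

for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

Deep `from_`/`size` pagination gets expensive on large indices; `search_after` is the usual remedy, which is exactly the kind of query-design trade-off this role owns.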
Location: Pune, India
Work Model: Work from Office (Viman Nagar) is mandatory
Experience: 5-8 years

Key Responsibilities
- Develop and maintain infrastructure components using AWS, Kubernetes, and Terraform (see the sketch after this listing).
- Implement and manage Kafka pipelines and Cassandra and MongoDB clusters.
- Contribute to CI/CD pipelines and GitOps workflows for zero-downtime deployments.
- Troubleshoot, optimize, and ensure the resilience of mission-critical systems.
- Collaborate with cross-functional teams to deliver platform capabilities.

Required Skills
- 5-8 years of software engineering experience with a focus on cloud platforms.
- Hands-on experience with AWS, Kubernetes (EKS), Terraform, and GitOps (ArgoCD).
- Knowledge of Kafka, Cassandra, and MongoDB in production environments.
- Good understanding of high-availability architectures and performance optimization.
- Strong debugging and problem-solving skills.
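As a small taste of the programmatic infrastructure work this role involves, here is a hedged sketch using boto3 to sanity-check an EKS cluster before a deployment proceeds. The cluster name and region are placeholders; a real pipeline would fold a guard like this into its GitOps workflow rather than run it by hand.

```python
# Sketch: pre-deployment sanity check on an EKS cluster via the AWS API.
# Cluster name and region are illustrative placeholders.
import boto3

eks = boto3.client("eks", region_name="ap-south-1")

cluster = eks.describe_cluster(name="platform-prod")["cluster"]

# Refuse to proceed unless the control plane is healthy.
if cluster["status"] != "ACTIVE":
    raise SystemExit(f"cluster not ready: status={cluster['status']}")

print(f"EKS {cluster['version']} at {cluster['endpoint']} is ACTIVE")
```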
What are we looking for?

- You are an engineer with an eye for constant improvement. You look at improving not only the code but also the tooling, the commands you use, the user-facing documentation, and everything else that makes great and beautiful products possible.
- You can talk fluently to computers: you will be coding mostly in Golang.
- You really like to solve problems, complex engineering problems!
- You have some exposure to and understanding of systems. Out of curiosity, you have tried to understand routing and load balancing in a web server, or why the Linux filesystem is built and structured the way it is. You may not have worked on these extensively, but you have definitely dabbled with them, and you can reason about how systems interact with each other.
- You can express your ideas and opinions, and you aren't shy to say so when you don't know something. We are not hiring Wikipedia, after all, are we?
- You have working experience with cloud-native technologies such as Docker, Kubernetes, Prometheus, service meshes, and distributed tracing in some shape or form.

What you will be learning and doing

- You will be part of a team building a product to solve the next generation of problems in programmable infrastructure. You will start by defining a feature and how it will make the end user's life better, and then make it a reality.
- You will most likely be programming in Golang, so you should be proficient in writing Go code.
- Some of your work will likely be open source and worth presenting at conferences and meetups.
- You will work with cloud-native technologies such as Docker, Kubernetes, Prometheus, service meshes, and distributed tracing in some shape or form.
- You will also work with one or more public cloud platforms from among AWS, Azure, and Google Cloud Platform (you may not know some or any of these technologies yet, and that is not a deal-breaker).
- Your workflow will be driven by tools such as GitHub, Slack, and a lot of asynchronous communication with distributed teams. GitHub issues will be your new reincarnated Jira ;)

Experience: 9+ years
What are we looking for?

- You have a good understanding of and work experience with AKS, EKS, and Kubernetes in general.
- You are able to manage multi-region clusters for disaster recovery.
- You have a good understanding of the AWS stack.
- You have production-level experience with Kubernetes.
- You are comfortable coding and can do so whenever required.
- You have worked with programmable infrastructure in some way: built a CI/CD pipeline, provisioned infrastructure programmatically, or provisioned monitoring and logging infrastructure for large sets of machines (a minimal sketch of this kind of automation follows this listing).
- You love automating things, sometimes even what seems impossible to automate; one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
- You don't throw around words such as "high availability" or "resilient systems" without understanding at least their basics, because you know the words are easy to say, but there is a fair amount of work in actually building such a system.
- You love coaching people, whether about 12-factor apps or the latest tool that cut the time for some task by X times.
- You lead by example when it comes to technical work and community.
- You understand the areas you have worked on very well, yet you are curious about the many systems you have not worked on and want to fiddle with them.
- You know that understanding applications and the runtime technologies together gives you a better perspective; you have never looked at them as two different things.

What you will be learning and doing

- You will work with customers to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, service meshes, distributed tracing, and public cloud technologies or on-premise infrastructure.
- The problems and solutions in this space are continuously evolving, but fundamentally you will be solving problems with the simplest, most scalable automation.
- You will build open-source tools for problems that you think are common across customers and the industry. No one ever benefited from reinventing the wheel, did they?
- You will hack around open-source projects, understand their capabilities and limitations, and apply the right tool to the right job.
- You will educate customers, from their operations engineers to their developers, on scalable ways to build and operate applications on modern cloud-native infrastructure.

Experience: 4-8 years
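In the spirit of "provisioned monitoring for large sets of machines", here is a minimal sketch using the official Kubernetes Python client that surveys every pod in a cluster and flags the ones that are not running. The kubeconfig context name is a placeholder; the same loop can be pointed at each cluster in a multi-region setup.

```python
# Sketch: flag non-running pods across all namespaces of a cluster.
# The kubeconfig context name is an illustrative placeholder.
from kubernetes import client, config

# Load credentials from the local kubeconfig; in-cluster config also works.
config.load_kube_config(context="prod-ap-south-1")  # placeholder context

v1 = client.CoreV1Api()
pods = v1.list_pod_for_all_namespaces(watch=False)

for pod in pods.items:
    if pod.status.phase != "Running":
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```

A one-off script like this is the seed of the kind of automation the role describes; in practice it would feed an alerting or remediation pipeline rather than print to stdout.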
You will be working on building a data protection platform for SaaS companies. Experience in building or working with SaaS products and storage will greatly benefit you in this role.

**Role and Responsibilities:**

- Design and build the SaaS product, contributing to its scaling and growth.
- Contribute to and drive the open source projects that will be integrated into the SaaS platform.
- Collaborate with a distributed team of engineers to develop core components of the platform.
- Utilize a modern technology stack including Golang, Containers, public cloud services, and Cloud Native technologies like Kubernetes.

**Requirements:**

- Display a strong bias toward action and maintain direct, frequent communication.
- Demonstrate expertise in developing, testing, and debugging production-quality, scalable, concurrent systems.
- Possess experience with distributed systems concepts and architecture.
- A solid foundation in computer science fundamentals such as data structures, algorithms, and concurrency.
- Proficiency in a programming language like Golang.
- A passion for code quality, extensibility, coding standards, testing, and automation.

**Nice to have:**

- Experience building core features related to storage, database, data protection, or security systems.
- Previous experience in public cloud development, with a preference for AWS.
- Golang experience.
- Contributions to Open Source Software.
- Experience working in startup environments or on early-stage products.