
1569 GitOps Jobs - Page 10

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

10 - 15 Lacs

mumbai metropolitan region

On-site

Sr. QA Engineer
Shift: Night Shift
Location: Navi Mumbai / Bangalore / Pune / Hyderabad / Mohali / Dehradun / Panchkula

Technical Proficiency
Deep understanding of Kubernetes internals, cluster lifecycle management, Helm, service meshes (e.g., Istio or Linkerd), and network policies. Strong scripting and automation capabilities (Python, Pytest, Bash, etc.). Familiarity with observability stacks (Prometheus, Grafana, Jaeger), Kubernetes security (RBAC, secrets management), and performance benchmarking tools (e.g., K6). Solid grounding in cloud architecture (AWS, Azure, GCP), infrastructure provisioning, and containerized CI/CD. Moderate to advanced Linux knowledge and proficiency is required: Bash scripting and debugging, systemd/logs, networking/firewalling/routing, certificate/PKI management, containers (Docker/containerd), and Kubernetes tooling (kubectl/Helm with OCI registries, GitOps/Flux) to install, test, and troubleshoot multi-cluster environments.

Architecting Test Systems
Architect test frameworks and infrastructure for validating microservices and infrastructure components in multi-cluster and hybrid-cloud environments. Oversee the design of complex test scenarios simulating production-like workloads, resource scaling, failure injection, and recovery across distributed clusters.

Automation & Scalability
Spearhead the development of scalable and maintainable test automation integrated with CI/CD (Jenkins, GitHub Actions, etc.). Leverage Kubernetes APIs, Helm, and service mesh tools to build comprehensive automation coverage, including system health, failover behavior, and network resilience. Promote test infrastructure-as-code and drive IaC forward on the team, making sure the infrastructure code is repeatable, extensible, and reliable.

Skills: Kubernetes, Python, Bash, cloud, AWS, Azure, QA automation, Istio, Linkerd
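The posting pairs Pytest with kubectl and Flux tooling for validating clusters. As a hedged sketch of that style of check (the payload below is an invented sample shaped like `kubectl get nodes -o json` output, not real cluster data), a CI smoke test might parse node conditions and assert readiness:

```python
import json

def ready_nodes(nodes_json: dict) -> list[str]:
    """Return names of nodes whose Ready condition is True."""
    names = []
    for item in nodes_json.get("items", []):
        conditions = item["status"]["conditions"]
        ready = any(c["type"] == "Ready" and c["status"] == "True"
                    for c in conditions)
        if ready:
            names.append(item["metadata"]["name"])
    return names

# Invented sample payload mimicking `kubectl get nodes -o json`.
sample = {
    "items": [
        {"metadata": {"name": "node-a"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "node-b"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    ]
}

def test_all_expected_nodes_ready():
    # Pytest-style assertion a post-bootstrap CI job might run.
    assert ready_nodes(sample) == ["node-a"]
```

In a real multi-cluster suite the payload would come from a kubeconfig-scoped API call rather than an inline dict; the parsing and assertion shape stays the same.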

Posted 1 week ago

Apply


1.0 - 3.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.

About The Role
We at Innovaccer are looking for a Site Reliability Engineer-I to build the most amazing product experience. You'll get to work with other engineers to build delightful feature experiences to understand and solve our customers' pain points.

A Day in the Life
Take ownership of SRE pillars: Deployment, Reliability, Scalability, Service Availability (SLA/SLO/SLI), Performance, and Cost. Lead production rollouts of new releases and emergency patches using CI/CD pipelines while continuously improving deployment processes. Establish robust production promotion and change management processes with quality gates across Dev/QA teams. Roll out a complete observability stack across systems to proactively detect and resolve outages or degradations. Analyze production system metrics, optimize system utilization, and drive cost efficiency. Manage autoscaling of the platform during peak usage scenarios. Perform triage and RCA by leveraging observability toolchains across the platform architecture. Reduce escalations to higher-level teams through proactive reliability improvements. Participate in the 24x7 on-call production support team. Lead monthly operational reviews with executives covering KPIs such as uptime, RCA, CAP (Corrective Action Plan), PAP (Preventive Action Plan), and security/audit reports. Operate and manage production and staging cloud platforms, ensuring uptime and SLA adherence. Collaborate with Dev, QA, DevOps, and Customer Success teams to drive RCA and product improvements.
Implement security guidelines (e.g., DDoS protection, vulnerability management, patch management, security agents). Manage least-privilege RBAC for production services and toolchains. Build and execute Disaster Recovery plans and actively participate in Incident Response. Work with a cool head under pressure and avoid shortcuts during production issues. Collaborate effectively across teams with excellent verbal and written communication skills. Build strong relationships and drive results without direct reporting lines. Take ownership, be highly organized, self-motivated, and accountable for high-quality delivery.

What You Need
Experience: 1-3 years in production engineering, site reliability, or related roles. Solid hands-on experience with at least one cloud provider (AWS, Azure, GCP) with an automation focus (certifications preferred). Strong expertise in Kubernetes and Linux. Proficiency in scripting/programming (Python required). Strong understanding of observability toolchains (logs, metrics, tracing). Knowledge of CI/CD pipelines and toolchains (Jenkins, ArgoCD, GitOps). Familiarity with persistence stores (Postgres, MongoDB), data warehousing (Snowflake, Databricks), and messaging (Kafka). Exposure to monitoring/observability tools such as ElasticSearch, Prometheus, Jaeger, NewRelic, etc. Proven experience in production reliability, scalability, and performance systems. Experience in 24x7 production environments with a process focus. Familiarity with ticketing and incident management systems. Security-first mindset with knowledge of vulnerability management and compliance. Advantageous: hands-on experience with Kafka, Postgres, and Snowflake. Excellent judgment, analytical thinking, and problem-solving skills. Ability to quickly identify and drive optimal solutions within constraints.
Here's What We Offer
Generous Leaves: Enjoy generous leave benefits of up to 40 days.
Parental Leave: Leverage one of the industry's best parental leave policies to spend time with your new addition.
Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury. Extending support to the family members who matter most.
Care Program: Whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need.
Financial Assistance: Life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
About Innovaccer Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
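The SLA/SLO/SLI and error-budget ownership in the role above reduces to simple arithmetic: an availability target implies an allowed amount of downtime per window, and incidents spend that budget. A hedged sketch (function names and the 30-day window are illustrative, not Innovaccer tooling):

```python
def error_budget_minutes(slo: float, window_minutes: int = 30 * 24 * 60) -> float:
    """Allowed downtime for a given availability SLO over a window (default 30 days)."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, downtime_minutes: float,
                     window_minutes: int = 30 * 24 * 60) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is blown)."""
    budget = error_budget_minutes(slo, window_minutes)
    return 1.0 - downtime_minutes / budget

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime;
# 21.6 minutes of incidents leaves half the budget.
print(round(error_budget_minutes(0.999), 1))      # ~43.2
print(round(budget_remaining(0.999, 21.6), 2))    # ~0.5
```

Teams typically gate risky releases on `budget_remaining` staying positive, which is the mechanism behind the "error budgets with development teams" duty seen in several postings on this page.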

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

delhi, delhi

On-site

About Kuoni Tumlare
At Kuoni Tumlare, we deliver truly inspiring and innovative solutions and experiences that create value both for our Partners and Society at large. Our wide portfolio of products and solutions is built on 100+ years of destination management experience. Our solutions include series tours, technical visits, educational tours, Japan specialist travel consulting, as well as meetings, incentives, conferences, and exhibitions. Our product portfolio includes MyBus excursions at destinations as well as guaranteed departure tours devised and delivered by our Seat-in-Coach specialists, Europamundo (EMV) and MyBus Landcruise.

About the Business / Function
Proudly part of Kuoni Tumlare, TUMLARE SOFTWARE SERVICES (P) LTD. is a multinational technology support company that has served as a trusted technology partner for businesses since 1999. We also help established brands reimagine their business through digitalization.

We are looking for an experienced Senior Frontend Developer with expertise in React/JavaScript/TypeScript and knowledge of Liferay to join our growing development team. In this role, you will be responsible for designing and building high-performance, scalable, and responsive web applications within the Liferay portal framework. You will work closely with backend developers, product managers, and designers to deliver a seamless user experience.

Key Responsibilities:
Developing and maintaining the complete user interface (GUI) using JavaScript/TypeScript and the React ecosystem (React, Redux, ClayUI, and related libraries). This includes building interactive user forms, data listings, filters, and other UI components. Integrating these frontend components into the Liferay CE 7.4.x platform. Connecting the entire frontend to our services by working with REST API clients. Our APIs are well documented with the OpenAPI specification to enable seamless integration with our backend systems.
Collaborating with the backend team to ensure a smooth user experience. Write clean, maintainable, and well-documented code. Conduct code reviews.

Requirements:
5-8 years of hands-on experience in frontend development. Proficiency in JavaScript/TypeScript and the React framework. Experience with REST APIs and understanding of OpenAPI specifications. Knowledge of GraphQL is an added advantage. Working knowledge of GitOps, including managing infrastructure changes via pull requests. Daily use of Docker for local development with docker compose. Familiarity with Kubernetes from a user perspective. Ability to read and understand Jenkins pipelines. Basic understanding of OpenSearch (Elasticsearch) is a plus—you should be able to query data and troubleshoot errors via the GUI. Experience with Liferay CE 7.4.x and Java is a major advantage. Familiarity with responsive design and cross-browser compatibility. Excellent communication and interpersonal skills.

What We Offer:
Working in one of the world’s leading multinational companies. Probation period - only 3 months. Annual Bonus – as per company policy. Long Service Award. Paid leaves for Birthday and Wedding/Work Anniversary. Learning opportunity through an online learning platform with rich training courses and resources. Company-sponsored IT certification - as per company policy. The following insurance from date of joining: Group Medical Insurance with sum insured of up to 5 Lakh; Term Life Insurance - 3 times your CTC; Accidental Insurance - 3 times your CTC. Employee engagement activities: Fun Friday per week, Annual Off-Site Team Building, End Year Party, CSR programs, Global Employee Engagement Events.

If you match the requirements, are excited about what we offer, and are interested in a new challenge, we are looking forward to receiving your full application.

Job Location - Pitampura, Delhi. 5 days working.

Posted 1 week ago

Apply

0.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

General information
Country: India
State: Telangana
City: Hyderabad
Job ID: 45594
Department: Development
Experience Level: MID_SENIOR_LEVEL
Employment Status: FULL_TIME
Workplace Type: On-site

Description & Requirements
As a Senior DevOps Engineer, you will be responsible for leading the design, development, and operationalization of cloud infrastructure and CI/CD processes. You will serve as a subject matter expert (SME) for Kubernetes, AWS infrastructure, Terraform automation, and DevSecOps practices. This role also includes mentoring DevOps engineers, contributing to architecture decisions, and partnering with cross-functional engineering teams to implement best-in-class cloud and deployment solutions.

Essential Duties:
Design, architect, and automate cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform and CloudFormation. Lead and optimize Kubernetes-based deployments, including Helm chart management, autoscaling, and custom controller integrations. Implement and manage CI/CD pipelines for microservices and serverless applications using Jenkins, GitLab, or similar tools. Champion DevSecOps principles, integrating security scanning (SAST/DAST) and policy enforcement into the pipeline. Collaborate with architects and application teams to build resilient and scalable infrastructure solutions across AWS services (EC2, VPC, Lambda, EKS, S3, IAM, etc.). Establish and maintain monitoring, alerting, and logging practices using tools like Prometheus, Grafana, CloudWatch, ELK, or Datadog. Drive cost optimization, environment standardization, and governance across cloud environments. Mentor junior DevOps engineers and participate in technical reviews, playbook creation, and incident postmortems. Develop self-service infrastructure provisioning tools and contribute to internal DevOps tooling. Actively participate in architecture design reviews, cloud governance, and capacity planning efforts.
Basic Qualifications:
7–9 years of hands-on experience in DevOps, Cloud Infrastructure, or SRE roles. Strong expertise in AWS cloud architecture and automation using Terraform or similar IaC tools. Solid knowledge of Kubernetes, including experience managing EKS clusters, Helm, and custom resources. Deep experience in Linux administration, networking, and security hardening. Advanced experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.). Proficient in scripting with Bash, Groovy, or Python. Strong understanding of containerization using Docker and orchestration strategies. Experience with monitoring and logging stacks like ELK, Prometheus, and CloudWatch. Familiarity with security, identity management, and cloud compliance frameworks. Excellent troubleshooting skills and a proactive approach to system reliability and resilience. Strong interpersonal skills and ability to work cross-functionally. Bachelor’s degree in Computer Science, Information Systems, or equivalent.

Preferred Qualifications:
Experience with GitOps using ArgoCD or FluxCD. Knowledge of multi-account AWS architecture, VPC peering, and Service Mesh. Exposure to DataOps, platform engineering, or large-scale data pipelines. Familiarity with Serverless Framework, API Gateway, and event-driven designs. Certifications such as AWS DevOps Engineer – Professional, CKA/CKAD, or equivalent. Experience in regulated environments (e.g., SOC2, ISO27001, GDPR, HIPAA).

About Infor
Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation.
For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
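Several duties in this posting come back to IAM least privilege. As a hedged illustration (the action and bucket ARN are placeholders, and a real policy needs review against AWS's IAM reference), a policy document granting only what a service needs can be generated as JSON:

```python
import json

def make_policy(actions: list[str], resources: list[str]) -> str:
    """Build a single-statement Allow policy document as a JSON string."""
    doc = {
        "Version": "2012-10-17",  # current IAM policy language version
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),   # sorted for stable diffs in Git
            "Resource": resources,
        }],
    }
    return json.dumps(doc, indent=2)

# Grant read-only access to one (hypothetical) bucket's objects, nothing more.
policy = make_policy(["s3:GetObject"], ["arn:aws:s3:::example-bucket/*"])
print(policy)
```

Generating policies from code like this, rather than hand-editing them, keeps them reviewable in pull requests, which fits the Terraform/GitOps workflow the qualifications describe.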

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As a global organization committed to improving health outcomes and advancing health equity on a global scale, Optum leverages technology to connect millions of individuals with the care, pharmacy benefits, and resources they need to lead healthier lives. At Optum, you will be part of a diverse and inclusive culture that offers comprehensive benefits, career development opportunities, and a supportive environment where you can make a meaningful impact on the communities we serve.

The Optum Technology Digital team is dedicated to disrupting the healthcare industry by transforming UnitedHealth Group (UHG) into a leading Consumer brand. Our focus is on providing hyper-personalized digital solutions that empower consumers to access the right care at the right time. We are revolutionizing healthcare by delivering cutting-edge, personalized digital solutions and ensuring exceptional support for consumers throughout their healthcare journeys. By leveraging AI, cloud computing, and other disruptive technologies, we are redefining customer interactions with the healthcare system and making a positive impact on millions of lives through UnitedHealthcare & Optum.

We are looking for a dynamic individual with a strong engineering background and a passion for innovation to join our team. The ideal candidate will excel in an agile, fast-paced environment, embrace DevOps practices, and prioritize the Voice of the Customer. If you are driven by excellence, eager to innovate, and excited about shaping the future of healthcare, this opportunity is for you. Join us in pioneering modern technologies and consumer-centric strategies while upholding robust cyber-security protocols.

Primary Responsibilities:
- Build and maintain AWS and Azure resources using modern Infrastructure-as-Code tooling (Terraform, GH Actions).
- Develop and maintain pipelines and automation through GitOps (GitHub Actions, Jenkins, etc.).
- Create and manage platform-level services on Kubernetes.
- Develop automation scripts and participate in code reviews on GitHub.
- Design monitoring and alerting templates for various cloud metrics using Splunk.
- Mentor team members on standard tools, processes, automation, and DevOps best practices.

Required Qualifications:
- Undergraduate degree or equivalent experience.
- Experience with AWS, Azure.
- DevOps experience in a complex development environment.
- Proficiency in GitOps, Docker/Containerization, Terraform.

If you are a forward-thinking individual with a passion for innovation and a desire to make a tangible impact on the future of healthcare, we invite you to explore this exciting opportunity with us. Join our team in Hyderabad, Telangana, IN, and be part of shaping the future of healthcare with your unique skills and expertise. Apply now and be a part of our mission to care, connect, and grow together.
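GitOps, as referenced in the responsibilities above, treats the Git repository as the source of truth and continuously reconciles live state against it. A toy sketch of the drift-detection step (the dicts are simplified stand-ins for rendered manifests, not a real reconciler):

```python
def drift(desired: dict, live: dict) -> dict:
    """Report keys whose live value differs from, or is missing, the desired one."""
    return {k: {"desired": v, "live": live.get(k)}
            for k, v in desired.items() if live.get(k) != v}

# Desired state comes from Git; live state from the cluster/cloud API.
desired = {"replicas": 3, "image": "app:1.4.2"}
live = {"replicas": 2, "image": "app:1.4.2"}

report = drift(desired, live)
print(report)  # only the drifted replica count is flagged
```

Real GitOps controllers (Flux, ArgoCD) do this over full Kubernetes objects and then apply the desired state; the diff-then-reconcile loop is the core idea.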

Posted 1 week ago

Apply

0 years

0 Lacs

india

On-site

Job Summary:
We are looking for a skilled and motivated DevOps Engineer to join our growing team. In this role, you will be responsible for the development, deployment, monitoring, and optimization of our cloud infrastructure and CI/CD pipelines. The ideal candidate has a strong background in systems administration, automation, and cloud platforms, and thrives in a fast-paced, collaborative environment.

Key Responsibilities:
Design, implement, and maintain scalable and secure CI/CD pipelines. Manage and optimize cloud infrastructure (AWS, Azure, GCP, etc.). Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or CloudFormation. Monitor application and system performance using observability tools (e.g., Prometheus, Grafana, ELK, Datadog). Collaborate with development and QA teams to ensure smooth deployment of software releases. Implement security best practices and ensure system reliability and high availability. Troubleshoot production issues and participate in on-call rotations. Maintain and enhance infrastructure as code (IaC) practices.

Qualifications:
Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience). Strong hands-on experience with cloud platforms (AWS, Azure, or GCP). Proficiency with scripting languages (Bash, Python, etc.). Experience with containerization and orchestration tools (Docker, Kubernetes, etc.). Solid understanding of networking, Linux/Unix systems, and security principles. Familiarity with GitOps practices and version control systems like Git.

Preferred Skills:
Experience with Infrastructure as Code (Terraform, Pulumi, etc.). Knowledge of configuration management tools (Ansible, Chef, Puppet). Experience with monitoring and alerting tools (Prometheus, ELK, Datadog). Understanding of Agile/Scrum development processes. Exposure to serverless architecture and microservices.
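Troubleshooting production issues and keeping releases smooth, as described above, often rests on a small pattern: retry a flaky post-deploy health check with exponential backoff before declaring failure. A generic sketch (not any specific team's tooling; the sleep function is injectable so the example stays testable):

```python
import time

def retry(check, attempts: int = 4, base_delay: float = 0.5, sleep=time.sleep):
    """Call `check` until it returns True, doubling the wait between tries."""
    for i in range(attempts):
        if check():
            return True
        if i < attempts - 1:
            sleep(base_delay * (2 ** i))  # 0.5s, 1s, 2s, ...
    return False

# Simulated health check that succeeds on the third call; the injected
# "sleep" just records the backoff schedule instead of waiting.
calls = {"n": 0}
delays = []

def flaky():
    calls["n"] += 1
    return calls["n"] >= 3

ok = retry(flaky, sleep=delays.append)
print(ok, delays)  # True, with two backoff intervals recorded
```

The same shape appears in deployment pipelines, readiness gates, and on-call runbooks; only the `check` body changes.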

Posted 1 week ago

Apply

10.0 - 14.0 years

3 - 10 Lacs

hyderabad

On-site

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.

The Solution Architect is responsible for designing and delivering tailored, high-impact digital solutions that combine application development, platform orchestration, and emerging AI technologies. The role requires deep technical leadership and strategic client engagement to translate unique business problems into intelligent, scalable, and secure digital architectures. This individual will serve as a cross-functional enabler between product, engineering, AI/data teams, and customers, driving custom software solutions, low-code/no-code platforms, intelligent workflows, and AI/ML-enabled use cases across sectors such as Government, Healthcare, Utilities, Oil & Gas, and Smart Cities.

Key Responsibilities:
Architect end-to-end application and platform solutions, blending custom software development, middleware integration, AI-driven insights, and cloud-native infrastructure. Design scalable, API-first, modular application architectures based on microservices, containers, and event-driven models. Translate industry-specific business requirements into technology blueprints across citizen engagement apps, enterprise portals, command centers, field-force automation tools, and AI assistants. Drive the design of data-centric platforms incorporating ingestion, ETL pipelines, knowledge graphs, AI/ML layers, and data governance. Lead use case discovery and design for AI/ML-driven applications such as intelligent document processing, computer vision, NLP-based bots, and anomaly detection. Integrate bespoke solutions with external and internal platforms including CRM, ERP, billing, identity management, GIS, IoT, and national government services.
Define architecture patterns and security models for hybrid and multi-cloud platforms, including deployment via Kubernetes, Docker, Azure/AWS services, and edge nodes. Champion low-code/no-code platforms (e.g., Microsoft Power Platform, Mendix, OutSystems) where applicable for rapid enterprise digitalization. Lead the architectural integration of AI engines, MLOps pipelines, agentic AI frameworks, and pre-trained foundation models into applications and workflows. Work with data scientists and AI teams to align model design with application context, usability, and performance KPIs. Identify opportunities to apply generative AI, predictive analytics, and cognitive services to automate and augment human workflows across business verticals. Engage directly with enterprise customers to uncover business challenges, define requirements, and co-create solution journeys. Conduct solution workshops, PoCs, and design sprints to validate technical feasibility and user experience. Prepare technical documentation, including high-level architecture (HLA), low-level design (LLD), functional specs, and system integration diagrams. Collaborate with presales and bid teams on commercial proposals, effort estimation, and value articulation. Provide architectural oversight during the implementation and development phases, ensuring delivery is aligned with design, performance, and security expectations. Guide development teams (in-house and partner) on best practices, coding standards, DevSecOps, and testing strategies. Work across agile delivery squads, product owners, UX designers, QA, and DevOps teams to ensure successful solution realization.

Required Skills & Qualifications
Technical Expertise: Strong experience in custom application architecture (web/mobile/enterprise), distributed systems, and cloud-native development (microservices, containers, serverless).
Proven expertise with integration frameworks (e.g., API gateways, ESB, Kafka, GraphQL), databases (SQL, NoSQL, TimeSeries), and identity frameworks (OAuth2, SSO, IAM). Knowledge of AI/ML lifecycle, including model training, inferencing, orchestration (Kubeflow, MLflow), and APIs for GenAI/LLMs (e.g., OpenAI, Azure OpenAI, Hugging Face). Proficient in cloud platforms (Azure, AWS, GCP) and DevSecOps pipelines (CI/CD, GitOps, IaC using Terraform/ARM). Professional Experience: 10–14 years of total experience in ICT/digital services/software architecture roles, with at least 5 years in designing bespoke or custom digital solutions. Experience delivering projects involving citizen apps, operational dashboards, digital twins, AI chatbots, recommendation systems, or platform orchestration. Past delivery experience with government, smart city, or regulated enterprise customers is an advantage. Soft Skills: Strategic thinker with the ability to bridge business and technology. Exceptional communication and solution storytelling skills for both technical and executive audiences. Strong stakeholder management, team collaboration, and leadership skills. Preferred Certifications: TOGAF or equivalent enterprise architecture certification Azure/AWS Solution Architect certifications AI/ML-related certifications (e.g., TensorFlow, Azure AI Engineer, NVIDIA Deep Learning Institute)

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

gurugram, haryana, india

On-site

Purpose of the Role
We’re looking for a Platform Engineer to lead the design and development of internal self-service workflows and automation for our internal developer platform. This role will:
Build reusable workflows using Go, empowering developers to provision infrastructure, deploy applications, manage secrets, and operate at scale without needing to become Kubernetes or cloud experts.
Drive platform standardization and codification of best practices across cloud infrastructure, Kubernetes, and CI/CD.
Create developer-friendly APIs and experiences while maintaining a high bar for reliability, observability, and performance.
Design, develop, and maintain Go-based platform tooling and self-service automation that simplifies infrastructure provisioning, application deployment, and service management.
Write clean, testable code and workflows that integrate with our internal systems such as GitLab, ArgoCD, Port, AWS, and Kubernetes.
Partner with product engineering, SREs, and cloud teams to identify high-leverage platform improvements and enable adoption across brands.

Mandatory Skills
4-6 years of experience in a professional cloud computing role with Kubernetes, Docker, and Infra-as-Code. A BA/BS in Computer Science or equivalent work experience. Exposure to Cloud/DevOps/SRE/Platform Engineering roles. Proficient in Golang for backend automation and system tooling. Experience operating in Kubernetes environments and building automation for multi-tenant workloads. Deep experience with AWS (or equivalent cloud provider), infrastructure as code (e.g., Terraform), and CI/CD systems like GitLab CI. Strong understanding of containers, microservice architectures, and modern DevOps practices. Familiarity with GitOps practices using tools like ArgoCD, Helm, and Kustomize. Strong debugging and troubleshooting skills across distributed systems.
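A core property of the self-service provisioning workflows this role describes is idempotency: a retried or double-submitted request must not create a second resource. The posting targets Go; the toy sketch below uses Python only for compactness, and the idempotency-key design is an assumed pattern, not this team's actual implementation:

```python
def provision(request_id: str, spec: dict, store: dict) -> dict:
    """Provision at most once per request_id; replays return the stored result."""
    if request_id in store:
        return store[request_id]  # idempotent replay, no duplicate work
    result = {"status": "created", "spec": spec}  # stand-in for real provisioning
    store[request_id] = result
    return result

store = {}
first = provision("req-1", {"size": "small"}, store)
again = provision("req-1", {"size": "small"}, store)  # retried submission
print(first is again)  # the retry got the original result back
```

In a production workflow engine the `store` would be a durable database keyed by the client-supplied idempotency key, and "real work" would be the Terraform/ArgoCD calls the posting mentions.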

Posted 1 week ago

Apply

130.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Description
Manager, DevOps Engineer

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
Our ability to deliver features rapidly, securely, and reliably is central to our enterprise technology strategy.
The Senior DevOps Engineer will be a key enabler of that mission — architecting CI/CD pipelines, optimizing cloud infrastructure, automating deployment workflows, and ensuring platform observability across our AWS, MuleSoft, and microservices landscape. This role demands technical depth in cloud and automation along with a strategic mindset for cost optimization, system resilience, and continuous improvement. Key Responsibilities Platform Reliability & Infrastructure as Code Design and manage AWS infrastructure for microservices and MuleSoft workloads (ECS/EKS, Lambda, API Gateway, RDS/Aurora, Redshift, DynamoDB, S3, CloudFront, Databricks). Implement Infrastructure as Code (Terraform, CloudFormation) to provision and manage environments consistently. Containerization & Orchestration Build, deploy, and maintain Docker containers and orchestrate workloads using Kubernetes (EKS or self-managed). Ensure scalability, fault tolerance, and disaster recovery strategies are in place. Monitoring & Logging Set up and maintain observability stacks (CloudWatch, Prometheus, Grafana, ELK, or similar). CI/CD & Release Automation Own the enterprise CI/CD strategy using GitHub Actions for multi-stage builds, automated testing, security scans, and controlled deployments. Automate deployments for backend microservices, MuleSoft APIs, and UI applications. Introduce deployment strategies like blue-green, canary, and rolling updates. Performance & Cost Optimization Monitor and optimize system performance, reliability, and cost efficiency. Observability & Incident Response Implement and manage monitoring, logging, and tracing solutions (CloudWatch, Prometheus/Grafana, OpenTelemetry). Define SLOs, SLIs, and error budgets with development teams. Drive root cause analysis and remediation for production incidents. Security & Compliance Integrate security checks into pipelines (SAST, DAST, dependency scanning, container image scanning). Enforce IAM best practices and least-privilege policies. 
Ensure compliance with enterprise security and regulatory standards. Collaboration & Leadership Partner with development, QA, and architecture teams to embed DevOps practices early in the lifecycle. Mentor junior DevOps engineers and promote a culture of automation-first and self-service tooling. Contribute to DevOps standards, playbooks, and enterprise-wide best practices. Qualifications 3+ years' experience in DevOps, cloud engineering, or platform operations. Strong AWS expertise (ECS/EKS, Lambda, API Gateway, RDS/DynamoDB, VPC design). Proven track record in CI/CD pipeline design (GitHub Actions is a must). Proficiency with Infrastructure as Code (Terraform, CloudFormation). Strong skills in scripting/automation (Python, Bash, Groovy). Experience with containerization and orchestration (Docker, Kubernetes). Familiarity with API deployments (MuleSoft, AWS API Gateway) and microservices environments. Deep knowledge of security best practices in CI/CD and cloud. MuleSoft deployment experience is a plus. AWS or DevOps certifications preferred. Preferred Qualifications AWS Certified DevOps Engineer – Professional. Certified Kubernetes Administrator (CKA). Experience with GitOps tools (ArgoCD, Flux) and Helm charts. Familiarity with service mesh (Istio, Linkerd). Knowledge of secrets management (Vault, AWS Secrets Manager). Soft Skills Strong problem-solving and troubleshooting ability. Excellent communication and collaboration skills. Ability to work in fast-paced, agile environments. Proactive mindset with a focus on automation and continuous improvement. KPIs & Success Metrics Deployment frequency: increased release cadence without rollbacks. Mean time to recovery (MTTR): reduced for production incidents. Cost efficiency: measurable reduction in AWS spend through optimization. Pipeline success rate: high automation reliability with minimal manual intervention. Compliance: zero high-risk security vulnerabilities in production deployments. 
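The SLO, SLI, and error-budget work described above is ultimately simple arithmetic: an availability SLO over a fixed window leaves a fixed budget of allowable downtime, and incidents spend it down. A hedged sketch (the 99.9% target and 30-day window are example figures, not this employer's actual SLOs):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (floored at zero)."""
    budget = error_budget_minutes(slo, window_days)
    return max(0.0, 1 - downtime_minutes / budget)

# A 99.9% SLO over 30 days allows 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))   # → 43.2
# 10 minutes of incidents leaves roughly 77% of the budget.
print(round(budget_remaining(0.999, 10.0), 3))
```

Teams typically gate risky releases on the remaining budget: plenty left, ship freely; budget exhausted, freeze features and pay down reliability debt.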
Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation. Who We Are We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What We Look For Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. 
Employee Status Regular Relocation VISA Sponsorship Travel Requirements Flexible Work Arrangements Hybrid Shift Valid Driving License Hazardous Material(s) Required Skills Data Engineering, Data Visualization, Design Applications, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, Systems Integration, Testing Preferred Skills Job Posting End Date 09/5/2025 A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R356991

Posted 1 week ago

Apply

10.0 years

0 Lacs

hyderabad, telangana, india

On-site

Hope you are doing well. Greetings from Peoplefy Info solutions! We have an opening for Platform Lead with one of the leading product-based companies for the Hyderabad location. Please find the further details below: - Experience: - 10 to 20 Years Job Location: - Hyderabad Primary skills: - Experience in platform engineering + technical lead and technical experience + IaC + GitHub + GitHub Actions (CI/CD) + SaaS + designing or automating the platform + any programming language (Python) + cloud experience (AWS) Requirements: - Lead and manage a team of platform engineers, fostering a collaborative and high-performance environment. Programming languages mandatory – Python Define and execute the platform roadmap in alignment with the overall technology strategy and the needs of product development teams. Design, build, and maintain the core components of the internal developer platform, including infrastructure provisioning, CI/CD pipelines, monitoring and logging solutions, and security controls. Drive the adoption of self-service capabilities to empower development teams and reduce operational overhead. Implement and promote DevOps best practices, including infrastructure as code, continuous integration and continuous delivery, and automated testing. Collaborate closely with product development teams to understand their requirements and provide them with the necessary platform tools and support. Ensure the platform is secure, reliable, scalable, and cost-effective. Troubleshoot and resolve platform-related issues, working with the team to identify root causes and implement effective solutions. Stay up-to-date on the latest platform engineering technologies and trends, and evaluate their potential benefits for the organization. Build and mature the platform engineering team, including hiring, mentoring, and performance management. Create and maintain comprehensive documentation for the platform and its components. 
Qualifications: Bachelor's degree in computer science or a related field. Experience in platform engineering, DevOps, or a full-stack engineering team. 2+ years of experience managing engineering teams. Strong experience with cloud platforms (e.g., AWS, Azure, GCP). Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI/CD, CircleCI). Experience with monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack). Deep understanding of DevOps principles and practices. Strong leadership and management skills. Excellent communication, collaboration, and problem-solving skills. Nice to Have: - Experience with GitOps methodologies. If interested, kindly send your resume to mansi.sh@peoplefy.com and share the below details: Total exp.: - Relevant exp. in platform engineering: - Team handling experience: - GitHub exp.: - Experience in Terraform, Kubernetes & Docker: - Experience in Python: - Comfortable with Hyderabad location (Y/N): - Reason for relocation: - Current CTC: - Expected CTC: - Notice period: - Current location: - Note - This is a permanent position.

Posted 1 week ago

Apply

6.0 years

0 Lacs

gurugram, haryana, india

On-site

Job Description We are seeking a highly skilled and experienced Platform Engineer to manage and enhance our entire application delivery platform, from CloudFront to the underlying EKS clusters and their associated components. The ideal candidate will possess deep expertise across cloud infrastructure, networking, Kubernetes, and service mesh technologies, coupled with strong programming skills. This role involves maintaining the stability, scalability, and performance of our production environment, including day-to-day operations, upgrades, troubleshooting, and developing in-house tools. Main Responsibilities Perform regular upgrades and patching of EKS clusters and associated components & oversee the health, performance, and scalability of the EKS clusters. Manage and optimize related components such as Karpenter (cluster autoscaling) and ArgoCD (GitOps continuous delivery). Implement and manage service mesh solutions (e.g., Istio, Linkerd) for enhanced traffic management, security, and observability. Participate in an on-call rotation to provide 24/7 support for critical platform issues and monitor the platform for potential issues and implement preventative measures. Develop, maintain, and automate in-house tools and scripts using programming languages like Python or Go to improve platform operations and efficiency. Configure and manage CloudFront distributions and WAF policies for efficient & secure content delivery & routing. Develop and maintain documentation for platform architecture, processes, and troubleshooting guides. Tech Stack AWS: VPC, EC2, ECS, EKS, Lambda, CloudFront, WAF, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK Terraform, GitHub Actions, Prometheus, Grafana, Atlantis, ArgoCD, OpenTelemetry Required Skills and Experiences Proven 6+ years' experience as a Platform Engineer, Site Reliability Engineer (SRE), or similar role with a focus on end-to-end platform ownership. 
In-depth knowledge and hands-on experience of at least 4 years with Amazon EKS and Kubernetes. Strong understanding and practical experience with Karpenter, ArgoCD, and Terraform. Solid grasp of core networking concepts and extensive experience of at least 5 years with AWS networking services (VPC, Security Groups, Network ACLs, CloudFront, WAF, ALB, DNS). Demonstrable experience with SSL/TLS certificate management. Proficiency in programming languages such as Python or Go for developing and maintaining automation scripts and internal tools. Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Excellent problem-solving and debugging skills across complex distributed systems. Strong communication and collaboration abilities. Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Preferred Qualifications Prior experience working with service mesh technologies (preferably Istio) in a production environment. Experience building or contributing to Kubernetes controllers. Experience with multi-cluster Kubernetes architectures. Experience building AZ-isolated, DR architectures. Remarks *Please note that you cannot apply for PayPay (Japan-based jobs) or other positions in parallel or in duplicate. PayPay 5 senses Please refer to the PayPay 5 senses to learn what we value at work. Working Conditions Employment Status Full Time Office Location Gurugram (WeWork) ※The development center requires you to work in the Gurugram office to establish a strong core team.
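The service-mesh traffic management this role describes (Istio, Linkerd) is what makes progressive delivery practical: a canary release shifts a growing fraction of traffic to the new version and rolls back automatically if error rates spike. The decision logic can be sketched independently of any mesh; the step weights and 1% threshold below are illustrative, not from any real rollout policy:

```python
def canary_rollout(error_rates, steps=(5, 25, 50, 100), threshold=0.01):
    """Advance canary traffic weight step by step; abort and report a
    rollback if the observed error rate at any step exceeds the threshold.
    `error_rates` maps each step's traffic weight (%) to the error rate
    observed while the canary held that weight."""
    promoted = []
    for weight in steps:
        if error_rates.get(weight, 0.0) > threshold:
            return {"status": "rolled_back", "at_weight": weight,
                    "completed": promoted}
        promoted.append(weight)
    return {"status": "promoted", "completed": promoted}

# Healthy rollout: error rate stays below 1% at every step.
print(canary_rollout({5: 0.001, 25: 0.002, 50: 0.004, 100: 0.003}))
# Bad release: errors spike once the canary takes 25% of traffic.
print(canary_rollout({5: 0.002, 25: 0.08}))
```

In production, tools like Argo Rollouts or Flagger run this loop for you, driving Istio VirtualService weights from Prometheus metrics.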

Posted 1 week ago

Apply

4.0 years

0 Lacs

mohali district, india

On-site

Job Overview: We are seeking a Site Reliability Engineer (SRE) to ensure the reliability, scalability, and performance of our cloud platform. You will work on observability, automation, incident response, capacity planning, and system optimization to minimize downtime and speed up recovery. Key Responsibilities: Build and maintain monitoring, logging, and alerting solutions Lead incident response & post-mortems Implement and test disaster recovery strategies Collaborate with teams to define and enforce SLAs Automate deployment, scaling, and recovery workflows Manage infrastructure with Terraform, GitLab CI/CD, and Kubernetes Participate in on-call rotations. Skills & Experience: 4+ years in SRE/DevOps roles Proficient in Python, Bash, Shell with exposure to Chef/Ansible Strong in AWS (EC2, EKS, RDS, CloudWatch, etc.) Hands-on Kubernetes administration experience Knowledge of IaC (Terraform/CloudFormation) Expertise in Prometheus, Grafana, ELK, and tracing systems Experience with PostgreSQL & network/security best practices Familiar with CI/CD & GitOps workflows Exposure to tools like Splunk, Datadog, Dynatrace. Preferred: AWS Solutions Architect/DevOps Engineer certification Certified Kubernetes Administrator (CKA)
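The incident-response and post-mortem work above is usually measured by mean time to recovery (MTTR), one of the metrics this role is expected to drive down. Computing it is straightforward once incident records carry detection and resolution timestamps; the incidents below are invented examples:

```python
from datetime import datetime

def mttr_minutes(incidents) -> float:
    """Mean time to recovery in minutes across incidents, each given as a
    (detected, resolved) pair of ISO-8601 timestamps."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, end in incidents
    ]
    return sum(durations) / len(durations)

# Hypothetical incident log for one reporting period.
incidents = [
    ("2025-09-01T10:00:00", "2025-09-01T10:30:00"),  # 30 min
    ("2025-09-03T22:15:00", "2025-09-03T23:05:00"),  # 50 min
    ("2025-09-07T04:00:00", "2025-09-07T04:10:00"),  # 10 min
]
print(mttr_minutes(incidents))  # → 30.0
```

In practice the timestamps would come from the alerting system (e.g. PagerDuty or Prometheus Alertmanager) rather than a hand-written list, but the aggregation is the same.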

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

hyderabad, telangana, india

On-site

Strictly Hyderabad-based candidates only. Looking for immediate joiners. This requirement is for a US-based product company, which is in the process of establishing its Offshore Development Centre in Hyderabad, India. You will be part of the initial team and will be playing a crucial role in building a world-class enterprise product. If you are passionate, self-motivated, a go-getter, and want to explore new horizons, then you are the one we are looking for. Please apply. Experience & Skills Required: 4-6 years of experience. Strong DevOps and Cloud experience with deep Kubernetes skills. Expert in building and maintaining Helm charts. Strong knowledge of containers, CI/CD and GitOps principles. Good understanding of cloud technology concepts. In-depth knowledge of build/release systems, CI/CD technologies (Jenkins, ArgoCD). In-depth knowledge of operational visibility aspects and tools like Prometheus, Grafana, OpenSearch, Elastic APM, Dynatrace. Strong experience in writing Dockerfiles and building images. Working knowledge of Agile principles, Scrum, Test Driven Development and Test Automation. Responsibilities: As a DevOps Engineer you will be responsible for the effective and efficient delivery of environments in the Cloud and ensure the delivery according to defined Service Level Agreements. You will get to know various technologies used by development and operations. You will produce solutions to enhance stability of the environments as well as providing visibility through monitoring and dashboards. You will work on creating stable and reliable CI/CD pipelines for our products. You will be responsible for providing support around the complete lifecycle for Kubernetes-based applications. You will be involved in blueprinting and productization of modern technologies, e.g. containerized environments. You interact with your peers to share your insight, to identify areas of improvement and constantly learn about new technologies. 
You will work in an innovative environment with highly motivated colleagues across different organizations. Lead Design & Development of Containers/K8s based solutions, full stack applications for ensuring the offered environments are highly stable. Create required Dashboards to offer high Operational Monitoring Visibility. Adopt the latest open-source tools and technologies. Build/Enhance/Evaluate tools for build, test, deployment automation to meet business needs with respect to functionality, performance, scalability and other quality goals. Apply technical expertise to challenging programming and design problems. Be passionate about keeping up to date with latest technology and developing well architected tools and services. Those who worked in a startup environment will be advantageous.
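Dashboards that "offer high Operational Monitoring Visibility", as the role above puts it, usually start by rolling many per-component health probes up into one environment-level status. A minimal sketch of that roll-up; the component names and the choice of which are critical are illustrative assumptions:

```python
def environment_status(probes: dict) -> str:
    """Roll per-component health probes up into one environment status:
    'healthy' if everything passes, 'degraded' if only non-critical
    components fail, 'down' if any critical component fails."""
    critical = {"api", "database"}  # illustrative criticality policy
    failed = {name for name, ok in probes.items() if not ok}
    if not failed:
        return "healthy"
    if failed & critical:
        return "down"
    return "degraded"

print(environment_status({"api": True, "database": True, "cache": True}))   # → healthy
print(environment_status({"api": True, "database": True, "cache": False}))  # → degraded
print(environment_status({"api": False, "database": True, "cache": True}))  # → down
```

In a real stack the probe results would come from Kubernetes readiness probes or Prometheus up-metrics, and the rolled-up status would feed a Grafana panel or alert route.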

Posted 1 week ago

Apply

0 years

0 Lacs

gurugram, haryana, india

On-site

Job location: Gurugram (Hybrid) About the role: Sun King is looking for a self-driven infrastructure engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What you would be expected to do: Work with engineering, automation, and data teams on various infrastructure requirements. Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Manage AWS services for multiple teams. Manage custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deployment and management of Kubernetes resources. Deployment and management of custom metrics exporters, trace data, custom application metrics, and designing dashboards, querying metrics from multiple resources, as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deployment and management of critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You might be a strong candidate if you have/are: Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). Hands-on scripting and automation experience (Python, Bash) and experience debugging and troubleshooting Linux environments and cloud-native deployments. 
Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks. Good networking fundamentals — SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good to have: Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside SecOps engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of OpenTelemetry, central logging, and monitoring systems.
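Much of the "scripting and automation experience" and "debugging cloud-native deployments" above comes down to handling transient failures gracefully: automation that calls cloud APIs or flaky endpoints should retry with exponential backoff rather than fail hard. A small, testable sketch (the retry counts and delays are arbitrary example values):

```python
import time

def retry(fn, attempts=5, base_delay=0.5, backoff=2.0, sleep=time.sleep):
    """Call fn, retrying with exponential backoff on exceptions.
    `sleep` is injectable so the schedule can be verified without waiting."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # budget exhausted: surface the last error
            sleep(delay)
            delay *= backoff

# Example: a hypothetical flaky operation that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

delays = []
print(retry(flaky, sleep=delays.append))  # → ok
print(delays)  # → [0.5, 1.0]
```

Production variants usually add jitter to the delay so that many clients retrying together do not synchronize into a thundering herd.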

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

hyderabad, telangana, india

On-site

Must-Have Skills: Experience with AWS Cloud Services Experience in CI/CD, IaC, observability, GitOps (added advantage), etc. Exposure to containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability is an added advantage Ability to learn new technologies quickly. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Good-to-Have Skills: Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments. Familiarity with distributed systems, databases, and large-scale system architectures. Databricks knowledge/exposure is good to have (need to upskill if hired)

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

gurgaon

On-site

Hands-on experience with Kubernetes administration and lifecycle management. Deep knowledge of AWS EKS and associated workloads 5-8 years of work experience Proficiency in a programming language like Python or Go (preferred) Experience in CI/CD tooling and version control: CircleCI, GitHub Actions, GitHub. Experience in GitOps deployment workflows and tools, e.g., Flux and ArgoCD. Experience in monitoring, security alerting, and data analytics tools: Aqua, Splunk, Dynatrace, and others. Good working knowledge of IaC tools like Terraform. Should be able to build new TF (Terraform) patterns. Knowledge of test automation frameworks: test-kitchen, awspec, inspec, and others. Cloud networking knowledge: load balancing, network security, standard network protocols (HTTP/S, DNS, etc.) – Basic to Intermediate

Posted 2 weeks ago

Apply

0 years

4 - 6 Lacs

gurgaon

On-site

Service Reliability Engineer Job Req ID: 50337 Posting Date: 1 Sept 2025 Function: Business Operations Unit: Finance & Business Services Location: Building No 14 Sector 24 & 25A, Gurugram, India Salary: Competitive Why this job matters The Site Reliability Engineering Associate 3 assists with a range of routine activities in the service performance, reliability and availability that internal and external customers expect. What you’ll be doing 1. Assists with routine activities in the implementation of new software development life cycle automation tools, frameworks, and code pipelines (continuous integration/continuous delivery pipelines), applies best practices that are delivered from site reliability engineering leadership with a focus on the re-use of application code, demonstrates consistent software delivery practices and produces continuous integration/continuous delivery platform solutions using Amazon Web Services cloud, infrastructure as code (IaC), GitOps, and container technologies 2. Assists in the delivery of technology development and solutions that provide business value and meet customer requirements, encouraging team's use of innovative approaches and tools to solve problems 3. Assists with the implementation of monitoring tooling used to optimise systems for uptime, performance, and reliability, and robust monitoring and alerting systems 4. Writes routine tests that investigate how infrastructure handles failure and scaling 5. Assists in executing work in scaling systems sustainably through mechanisms like automation and evolves systems by pushing for changes that improve reliability and velocity 6. Assists in the delivery of infrastructure as code software to improve the availability, scalability, latency, and efficiency of services 7. Assists with the implementation of continuous integration/continuous delivery systems 8. 
Works with sales and application management support in capacity and performance planning The skills you’ll need Troubleshooting Infrastructure Configuration Debugging Continuous Improvement Application Performance Monitoring & Alerting Release Management Programming/Scripting IT Security Operating Systems Cloud Computing Data Analysis Agile Methodologies Continuous Integration/Continuous Deployment Automation & Orchestration Software Testing Incident Management Decision Making Growth Mindset Inclusive Leadership Our leadership standards Looking in: Leading inclusively and Safely I inspire and build trust through self-awareness, honesty and integrity. Owning outcomes I take the right decisions that benefit the broader organisation. Looking out: Delivering for the customer I execute brilliantly on clear priorities that add value to our customers and the wider business. Commercially savvy I demonstrate strong commercial focus, bringing an external perspective to decision-making. Looking to the future: Growth mindset I experiment and identify opportunities for growth for both myself and the organisation. Building for the future I build diverse future-ready teams where all individuals can be at their best. About us BT Group was the world’s first telco and our heritage in the sector is unrivalled. As home to several of the UK’s most recognised and cherished brands – BT, EE, Openreach and Plusnet, we have always played a critical role in creating the future, and we have reached an inflection point in the transformation of our business. Over the next two years, we will complete the UK’s largest and most successful digital infrastructure project – connecting more than 25 million premises to full fibre broadband. Together with our heavy investment in 5G, we play a central role in revolutionising how people connect with each other. 
While we are through the most capital-intensive phase of our fibre investment, meaning we can reward our shareholders for their commitment and patience, we are absolutely focused on how we organise ourselves in the best way to serve our customers in the years to come. This includes radical simplification of systems, structures, and processes on a huge scale. Together with our application of AI and technology, we are on a path to creating the UK’s best telco, reimagining the customer experience and relationship with one of this country’s biggest infrastructure companies. Change on the scale we will all experience in the coming years is unprecedented. BT Group is committed to being the driving force behind improving connectivity for millions and there has never been a more exciting time to join a company and leadership team with the skills, experience, creativity, and passion to take this company into a new era. A FEW POINTS TO NOTE: Although these roles are listed as full-time, if you’re a job share partnership, work reduced hours, or any other way of working flexibly, please still get in touch. We will also offer reasonable adjustments for the selection process if required, so please do not hesitate to inform us. DON'T MEET EVERY SINGLE REQUIREMENT? Studies have shown that women and people who are disabled, LGBTQ+, neurodiverse or from ethnic minority backgrounds are less likely to apply for jobs unless they meet every single qualification and criteria. We're committed to building a diverse, inclusive, and authentic workplace where everyone can be their best, so if you're excited about this role but your past experience doesn't align perfectly with every requirement on the Job Description, please apply anyway - you may just be the right candidate for this or other roles in our wider team.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

hyderābād

Remote

ABOUT TIDE At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. Who are Tide: At Tide, we're on a mission to save businesses time and money. We're the leading provider of UK SME business accounts and one of the fastest-growing FinTechs in the UK. Using the latest tech, we design solutions with SMEs in mind and our member-driven financial platform is transforming the business banking market. Not only do we offer our members business accounts and related banking services, but also a comprehensive set of highly connected admin tools for businesses. Tide is about doing what you love. We're looking for someone to join us on our exciting scale up journey and be a part of something special. We want passionate Tideans to drive innovation and help build a best-in-class platform to support our members. You will be comfortable in ambiguous situations and will be able to navigate the evolving FinTech environment. Imagine shaping how millions of Tide members discover and engage with business banking platforms and building this on a global scale. What we're looking for: As part of the Cloud Engineering team, you will be helping to design and build the core infrastructure platform that supports Tide's global businesses. 
You will help us complete and evolve our new Kubernetes based deployment platform and roll it out to our new markets You will work closely with our engineering teams to make sure our platform meets their needs and with our Information Security team to ensure we meet and exceed our regulatory obligations. We are committed to a 100% Infrastructure-as-Code approach to infrastructure builds using Terraform and currently deploy exclusively to AWS. We also follow the GitOps model of application deployment and use tools such as ArgoCD, Helm and Crossplane to facilitate this approach. You will work closely with our Developer Experience team to make this model for deploying and managing services usable for our engineers. As a Senior Cloud Engineer you'll be: Working as part of a larger Cloud Engineering team, comprising core-infra (your team), Developer Experience, and Platform Operations. Working with stakeholders to understand the characteristics they require from our platform. Writing Terraform and Helm charts to define our infrastructure and configure the middleware/shared services that support the platform. Working collaboratively alongside other senior Cloud engineers to define and implement best practices. Directly supporting engineers to understand what tools and access they need to effectively use the platform (and then working with Developer Experience to make sure those tools get built). Helping to identify, research and implement new technologies that will improve the platform. Promoting understanding of the capabilities of the platform throughout Tide's engineering department. Mentoring and teaching (and being open to learning from your peers) Taking part in our on-call rota What makes you a great fit: Having an extensive career on the 'DevOps' track. We're looking for 5+ years of experience and most of that should be in a DevOps/Cloud Engineering type role. Having spent the majority of your career working in Cloud hosted environments. 
You consider yourself an AWS expert and have recent hands-on AWS experience. Very strong Kubernetes, Terraform and Python skills. Knowledge of, or willingness to learn Golang. Having a 'platform' mentality where you understand that we're building on behalf of our engineers who will rely on our services to build and innovate. Enjoy an environment where everyone's ideas are heard and considered. Having an understanding of the GitOps pattern and experience in related technologies (ideally ArgoCD) Good understanding of the Kubernetes ecosystem and ideally some experience with services like Traefik, Linkerd, KEDA and HCP Vault. Happy to work in a highly regulated and security-conscious industry. What you'll get in return: Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy our colleagues can work remotely from home or anywhere in their home country. Additionally, you can work from a different country for 90 days of the year. Plus, you'll get: Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 15 days of Privilege leaves 12 days of Casual leaves 12 days of Sick leaves 3 paid days off for volunteering or L&D activities Stock Options Tidean Ways of Working At Tide, we're Member First and Data Driven, but above all, we're One Team. Our Working Out of Office (WOO) policy allows you to work from anywhere in the world for up to 90 days a year. We are remote first, but when you do want to meet new people, collaborate with your team or simply hang out with your colleagues, our offices are always available and equipped to the highest standard. We offer flexible working hours and trust our employees to do their work well, at times that suit them and their team. TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. 
Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard.

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
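The GitOps model this role is built around (ArgoCD continuously syncing cluster state from what is declared in Git) reduces to a reconcile loop: diff desired state against live state and converge. A minimal Python sketch of that loop, with invented service names rather than real Tide workloads (real controllers operate on full Kubernetes manifests):

```python
# Illustrative sketch of the GitOps reconcile loop implemented by tools like
# ArgoCD: compare desired state (from Git) with live state (from the cluster)
# and compute the actions needed to converge. All names are hypothetical.

def diff_states(desired: dict, live: dict) -> dict:
    """Return the create/update/delete actions needed to make live match desired."""
    actions = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in live:
            actions["create"].append(name)
        elif live[name] != spec:
            actions["update"].append(name)
    for name in live:
        if name not in desired:
            actions["delete"].append(name)
    return actions

desired = {"payments-api": {"image": "payments:1.4.2", "replicas": 3},
           "ledger": {"image": "ledger:2.0.0", "replicas": 2}}
live = {"payments-api": {"image": "payments:1.4.1", "replicas": 3},
        "old-batch-job": {"image": "batch:0.9", "replicas": 1}}

print(diff_states(desired, live))
# → {'create': ['ledger'], 'update': ['payments-api'], 'delete': ['old-batch-job']}
```

The key property is that the loop is idempotent: running it again after convergence yields no actions, which is what makes Git a safe single source of truth.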

Posted 2 weeks ago

Apply

5.0 - 9.0 years

16 - 18 Lacs

mumbai

Work from Office

We are looking for an experienced Senior DevOps Cloud Engineer to design, build, and manage large-scale cloud-native systems. You will be working across multiple products, ensuring high availability, scalability, and security in production environments.

Responsibilities
- Architect and manage multi-cloud infrastructure (AWS, GCP, Azure).
- Deploy and maintain containerized applications on Kubernetes (EKS/GKE) with Helm / Kustomize.
- Automate provisioning and scaling using Terraform and related IaC tools.
- Implement GitOps workflows with ArgoCD or FluxCD.
- Build and optimize CI/CD pipelines with GitLab CI, Jenkins, or GitHub Actions.
- Monitor and secure infrastructure using Prometheus, Grafana, and the Elastic Stack.
- Collaborate with engineering teams to deliver reliable and secure platforms.
- Drive DevSecOps practices and handle incident management & performance optimization.

Requirements
- 5+ years of hands-on experience in DevOps / Cloud Architecture.
- Expertise in at least two cloud providers (AWS, GCP, Azure).
- Proficiency in Kubernetes for production workloads.
- Strong with Terraform & Helm for automation.
- Solid experience with CI/CD tools (GitLab CI, Jenkins, GitHub Actions).
- Practical knowledge of GitOps (ArgoCD / FluxCD).
- Good understanding of Linux environments.

Nice to Have
- Scripting skills (Python, Bash, Go).
- Familiarity with Packer, Rancher, OpenShift, or k3s.
- Exposure to on-prem deployments and hybrid cloud.
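The Helm/Kustomize workflow mentioned above rests on layering environment-specific overrides over base values. A rough Python sketch of that merge semantics, with made-up keys and values (Helm's actual merge happens when rendering chart templates):

```python
# A sketch of the values-layering idea behind Helm values files and Kustomize
# overlays: an environment override is deep-merged over base values, with the
# override winning on conflicts. Data here is invented for illustration.

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base_values = {"image": {"repository": "registry.example.com/app", "tag": "1.0.0"},
               "replicas": 2,
               "resources": {"requests": {"cpu": "100m", "memory": "128Mi"}}}
prod_overrides = {"image": {"tag": "1.2.3"}, "replicas": 5}

rendered = deep_merge(base_values, prod_overrides)
print(rendered["image"])     # → {'repository': 'registry.example.com/app', 'tag': '1.2.3'}
print(rendered["replicas"])  # → 5
```

Note that only the overridden leaf (`tag`) changes; sibling keys like `repository` and the untouched `resources` block survive the merge, which is what keeps per-environment override files small.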

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

calcutta

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Manager

Job Description & Summary
At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in DevSecOps at PwC will focus on minimising software threats by integrating development, operations and security industry-leading practices in order to validate secure, consistent and efficient delivery of software and applications. You will work to bridge the gap between these teams for seamless and secure application and software development.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
- Design and implement Landing Zone architectures for cloud environments (Azure preferred).
- Facilitate cloud onboarding processes and lead on-premises to cloud migration (hybrid cloud) projects.
- Develop and optimize container-based solutions using Docker and Kubernetes.
- Implement DevOps/GitOps practices to enhance development and operational efficiency.
- Manage high availability, scalability, and disaster recovery of enterprise applications in cloud environments.
- Assess, report, and improve the security posture of enterprise applications on Azure Cloud (SOC 2 compliance, end-to-end encryption).
- Lead initiatives to migrate monolithic applications to microservices architecture.
- Collaborate with development teams on full-stack applications (.Net, NodeJS, Java) and their migration needs.
- Utilize Infrastructure as Code (IaC) tools, specifically Terraform and related tools (Bicep/ARM), to automate cloud infrastructure provisioning and management.
- Deploy and manage configuration management tools such as Ansible, Chef, and Puppet.

Required experience:
- Extensive experience in Landing Zone architecture design on cloud platforms (Azure preferred).
- Proven track record of successful cloud onboarding and migration projects.
- Deep expertise in container technologies, especially Docker and Kubernetes.
- Solid understanding of DevOps/GitOps practices and their implementation.
- Experience managing and scaling enterprise applications in cloud environments.
- Proficient in designing microservices and transitioning legacy monolithic systems to microservices architecture.
- Familiarity with at least one full-stack development methodology: .Net, NodeJS, Java.
- Strong knowledge of IaC tools (Terraform/Bicep/ARM).
- Experience with configuration management tools (Ansible/Chef/Puppet).
- Experience with Azure Blob Storage, File Storage, Disk Storage, and data migration strategies.
- Familiarity with Azure SQL Database, Cosmos DB, and Azure Data Factory for data integration and ETL processes.
- Deep understanding of Azure Virtual Network, VPN Gateway, ExpressRoute, traffic management, and DNS.
- Skilled in Azure Monitor, Azure Security Center, and Log Analytics for overseeing system health and performance.
- Ability to conduct cloud readiness assessments and develop strategic migration plans.
- Proficiency with Azure Migrate, Azure Site Recovery, and third-party migration tools.
- Expertise in Azure Active Directory, Azure AD Connect, and integration with on-premise AD.
- Strong understanding of hybrid security measures, data encryption, network security, and compliance standards (e.g., GDPR, HIPAA).

Mandatory skill sets:
- Full-stack development experience with Python or Golang.
- Familiarity with GitOps tooling, such as FluxCD or Argo CD.
- Familiarity with Service Mesh (Istio) and FinOps technologies (Apptio, Cloudability, OpenCost).
- Azure Administrator Associate or Azure Solutions Architect certification.

Preferred skill sets:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities, with experience working in multi-disciplinary teams.
- Proactive and innovative mindset, with the ability to drive change and optimize processes.
- Commitment to staying updated with the latest cloud technologies and best practices.
- Ability to work flexible hours; experience in client-facing roles.

Years of experience required: 7-10 years
Education qualification: B.E./B.Tech
Degrees/Field of Study required: Bachelor Degree

Required Skills: Cloud Architectures
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Ansible (Open-Source Tool for Software Provisioning, Configuring, and Deployment), AWS CloudFormation, Azure DevOps Server, Bicep, Cloud Infrastructure, Coaching and Feedback, Communication, Continuous Deployment, Continuous Integration (CI), Creativity, CrowdStrike, Cybersecurity, Deployment Management, Dynatrace APM, Embracing Change, Emotional Regulation, Empathy, GitHub (Version Control Platform), GitLab (DevOps Tool), Google Cloud Platform, Incident Remediation {+ 34 more}

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

hyderabad, telangana, india

On-site

Role: Senior Linux Administrator / Engineer
Experience in: Migration of Linux RHEL 7 to RHEL 8, Kubernetes EKS cluster
Years of Experience: 5+
Location: Hybrid

We need a Linux Systems Administrator with AWS and EKS experience, someone who can take the lead on:
1. RHEL 7 to RHEL 8 (or alternative) upgrade for one of our apps (40%)
2. SAS Viya platform operations on AWS EKS (60%)

Required Skills Mix:

Linux Administration Foundation:
- 5+ years RHEL/CentOS/Ubuntu server administration
- OS upgrade experience (RHEL 7→8 or similar major versions)
- System monitoring (CPU, memory, disk, network performance)
- Shell scripting (Bash) for automation tasks
- Package management and dependency troubleshooting
- Migration tools expertise: leapp utility, package management (YUM→DNF)
- Application compatibility assessment and dependency analysis
- Python 2 to Python 3 migration experience (critical for RHEL 8)
- Configuration management tools (Ansible, Puppet, or Chef)
- Systemd services migration and troubleshooting

SAS Platform Operations:
- SAS Viya 4 administration on Kubernetes (2+ years preferred)
- Kubernetes basics (pods, services, persistent volumes, logs)
- AWS EKS cluster operations and troubleshooting
- ArgoCD/GitOps for application deployments
- Storage management (EFS, EBS) for data persistence

Supporting Technologies:
- Docker containerization concepts
- YAML/JSON configuration management
- Git version control for configuration changes
- AWS services (EKS, EC2, EFS, CloudWatch)
- Basic networking (DNS, load balancers, ingress)

Day-to-Day Activities:
- Monitor SAS Viya platform health and performance
- Respond to SAS application alerts and user issues
- Apply SAS hotfixes and updates through GitOps
- Perform routine Linux server maintenance
- Collaborate with SAS users on access and performance issues
- Document procedures and maintain runbooks

Ideal Candidate Profile:
- Linux admin background who has evolved into SAS/Kubernetes operations
- Problem-solving mindset for both traditional and cloud-native issues
- Communication skills to work with business users and technical teams
- Learning agility to adapt between Linux systems and other applications

Nice-to-Have:
- SAS programming knowledge (Base SAS, SQL)
- Previous RHEL migration project experience
- Ansible automation experience
- AWS certification (Solutions Architect Associate)

If interested, share your resume at radhika.nalawade@leanitcorp.com
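The RHEL 7→8 work described here typically starts with an inventory pass of the kind leapp's preupgrade report produces: flag Python 2 dependencies (RHEL 8 defaults to Python 3) and packages renamed between YUM-era and DNF-era repos. A simplified Python sketch of that idea; the rename map and file paths are illustrative examples, not an authoritative list:

```python
# Hypothetical pre-upgrade inventory check in the spirit of leapp's
# preupgrade report for a RHEL 7 -> 8 migration. The rename map and the
# sample hosts/scripts below are invented for illustration.

RENAMED_PACKAGES = {"yum-utils": "dnf-utils", "python-devel": "python3-devel"}

def preupgrade_findings(installed: list, script_shebangs: dict) -> list:
    """Return human-readable findings for renamed packages and Python 2 scripts."""
    findings = []
    for pkg in installed:
        if pkg in RENAMED_PACKAGES:
            findings.append(f"package {pkg} is replaced by {RENAMED_PACKAGES[pkg]} on RHEL 8")
    for script, shebang in script_shebangs.items():
        # "python" with no version suffix historically meant Python 2 on RHEL 7.
        if "python2" in shebang or shebang.endswith("/python"):
            findings.append(f"{script} targets Python 2; port to Python 3 before upgrading")
    return findings

report = preupgrade_findings(
    installed=["bash", "yum-utils", "httpd"],
    script_shebangs={"/opt/etl/run.sh": "#!/bin/bash",
                     "/opt/etl/load.py": "#!/usr/bin/python2"},
)
for line in report:
    print(line)
```

In practice this kind of script only triages; the actual upgrade path (leapp, then YUM→DNF) handles the mechanics, and each finding becomes a remediation task before the cutover.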

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

chandigarh, india

On-site

Minimum Experience Required: 5 years
Location: Chandigarh IT Park (WFO)
Shift Timings: 1200 - 2100 Hours IST

Roles and Responsibilities

CI/CD Pipeline Management
- Design, implement, and manage Continuous Integration/Continuous Deployment (CI/CD) pipelines.
- Automate build, test, and deployment processes to ensure faster and more reliable software delivery.
- Integrate ArgoCD and Helm for GitOps-based application deployment on Kubernetes clusters.
- Troubleshoot build failures and streamline deployment processes.

Infrastructure as Code (IaC)
- Use tools like Terraform, Ansible, or CloudFormation to automate infrastructure provisioning.
- Manage cloud infrastructure on platforms like AWS, Azure, or Google Cloud.
- Ensure infrastructure is scalable, resilient, and cost-optimised.

Monitoring and Logging
- Implement robust monitoring systems using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Set up alerting mechanisms to identify and resolve system issues proactively.
- Maintain logs for performance analysis, debugging, and compliance.

Automation and Scripting
- Automate repetitive tasks using scripting languages like Python, Bash, or PowerShell.
- Develop automation scripts for configuration management and deployment.
- Optimise system performance and ensure efficient resource utilisation.

Security and Compliance
- Implement DevSecOps practices to ensure security at every stage of the development lifecycle.
- Manage secrets and credentials using tools like HashiCorp Vault or AWS Secrets Manager.
- Ensure compliance with security policies and standards.

Collaboration and Communication
- Work closely with development, QA, and IT teams to understand their requirements.
- Collaborate on system design, capacity planning, and disaster recovery strategies.
- Support developers by optimising CI/CD workflows and resolving infrastructure issues.

Cloud Services and Kubernetes Management
- Deploy, monitor, and manage applications in cloud environments (AWS, Azure, GCP).
- Ensure high availability, scalability, and fault tolerance of cloud resources.
- Must have knowledge of EKS (Elastic Kubernetes Service) and Kubernetes cluster management (managed and self-managed).
- Manage Kubernetes workloads using Docker, Helm charts, and ArgoCD for GitOps-driven deployments.

Configuration Management
- Implement configuration management tools like Ansible, Puppet, or Chef to maintain consistent environments.
- Use Helm and ArgoCD to standardise and manage Kubernetes application configurations.
- Ensure that servers and environments are provisioned with the correct configurations.

Backup and Disaster Recovery
- Implement automated backup strategies for critical systems and data.
- Develop and test disaster recovery plans to ensure business continuity.

Performance Optimization
- Continuously monitor and optimise system performance.
- Identify and resolve performance bottlenecks across infrastructure, applications, and databases.

AWS & Azure certifications are preferred.
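The "set up alerting mechanisms" item above usually means Prometheus-style rules that fire only after a sustained breach rather than on a single noisy sample (the role of the `for` clause in a Prometheus alerting rule). A small Python sketch of that semantics, with invented thresholds and samples:

```python
# Sketch of the idea behind a Prometheus-style alerting rule: fire only when
# a metric stays above its threshold for N consecutive scrapes, which avoids
# paging on one noisy data point. Thresholds and samples are invented.

def should_alert(samples: list, threshold: float, for_scrapes: int) -> bool:
    """Fire if the last `for_scrapes` samples are all above the threshold."""
    if len(samples) < for_scrapes:
        return False
    return all(value > threshold for value in samples[-for_scrapes:])

cpu_percent = [42.0, 61.0, 93.5, 95.2, 97.8]
print(should_alert(cpu_percent, threshold=90.0, for_scrapes=3))  # → True
print(should_alert(cpu_percent, threshold=90.0, for_scrapes=4))  # → False
```

Tuning `for_scrapes` (or `for:` in a real rule) trades detection latency against false pages; the same shape applies to memory, disk, and latency SLIs.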

Posted 2 weeks ago

Apply

0 years

0 Lacs

mumbai, maharashtra, india

On-site

Minimum Experience: 10 years

About the Role
We are seeking a highly skilled Cloud & Infrastructure Architect to lead the design, implementation, and optimization of hybrid cloud environments. This role demands deep expertise in cloud platforms (AWS, GCP, Azure), infrastructure automation, DevOps practices, and hybrid integrations. If you're passionate about building scalable, secure, and resilient systems, we’d love to hear from you.

Key Responsibilities

Cloud & Infrastructure Architecture
- Design secure, resilient, and high-performance cloud architectures on AWS.
- Support integration with GCP, Azure, and on-premise infrastructure (VMware, OpenStack).
- Define hybrid cloud strategies covering security, identity federation, and data governance.
- Develop infrastructure blueprints and reference architectures aligned with business needs.

Infrastructure as Code & Automation
- Champion Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi.
- Automate environment provisioning and configuration using Ansible, Helm, etc.
- Establish GitOps pipelines for consistency, change tracking, and governance.

DevOps & Continuous Delivery
- Architect and manage CI/CD pipelines using GitHub Actions, GitLab CI, Jenkins, or ArgoCD.
- Embed security and compliance into the software delivery lifecycle (DevSecOps).
- Implement release strategies like blue/green, canary deployments, and feature flagging.

Hybrid Infrastructure & On-Premise Integration
- Lead integration of on-prem systems with cloud-native services.
- Manage container platforms, virtualized environments, and legacy applications.
- Enforce disaster recovery, backup, and failover strategies across hybrid deployments.

Monitoring, SRE & Reliability Engineering
- Define and monitor SLAs, SLIs, and SLOs; implement proactive alerting and auto-remediation.
- Operationalize observability using Prometheus, Grafana, ELK, CloudWatch, and Datadog.
- Drive incident response, RCA, and post-mortem processes for continuous improvement.

AI/ML Platform Enablement (Preferred)
- Collaborate with data and ML teams to provision infrastructure for AI/ML workloads.
- Support orchestration frameworks like Kubeflow, MLflow, Airflow, and cloud-native ML services.
- Optimize infrastructure for data ingestion, feature engineering, and real-time inference.

Technical Skills
- AWS Certifications: DevOps Engineer – Professional, Solutions Architect – Professional.
- Experience with data platforms, AI/ML pipelines, and high-volume data lake architectures.
- Familiarity with ITIL, SRE principles, and compliance standards (ISO 27001, SOC 2, GDPR).
- Expertise in cloud cost optimization and FinOps best practices.

Behavioral Competencies
- Strategic Thinking: Ability to define scalable architectures aligned with business growth.
- Technical Leadership: Influences cross-functional decisions and promotes engineering excellence.
- Security-First Mindset: Prioritizes resilience, auditability, and compliance.
- Communication: Comfortable presenting complex technical topics to diverse audiences.
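The SLA/SLI/SLO responsibility in the listing above comes down to error-budget arithmetic: an availability SLO over a window fixes a budget of allowed downtime, and each incident spends it. A quick Python sketch, using a 99.9% availability SLO over 30 days as a worked example (numbers are illustrative):

```python
# Error-budget arithmetic behind SLO monitoring: a 99.9% availability SLO
# over a 30-day window allows (1 - 0.999) * 30 * 24 * 60 = 43.2 minutes of
# downtime; incidents draw that budget down. Figures are illustrative.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (minutes) for an availability SLO over the window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Budget left after accounting for downtime already incurred this window."""
    return error_budget_minutes(slo, window_days) - downtime_minutes

budget = error_budget_minutes(0.999)          # 99.9% over 30 days
print(round(budget, 1))                       # → 43.2
print(round(budget_remaining(0.999, 30), 1))  # → 13.2 after a 30-minute outage
```

A common SRE policy keyed to this number: when the remaining budget nears zero, feature releases pause and the team prioritizes reliability work until the window resets.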

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Title: Multi-Cloud Lead Architect – Microservices, Hybrid Cloud, BFSI Focus
Location: Chennai (WFO)
Experience Level: 15+ years
Grade: VP
Industry Preference: BFSI (Banking, Financial Services, and Insurance)

Role Summary:
We are seeking an experienced and highly skilled Multi-Cloud Lead Architect to spearhead the architecture, design, and delivery of secure, scalable, and cost-optimized solutions across AWS, Azure, and GCP environments. This role is critical in driving cloud modernization, hybrid architecture implementation, observability, and automation initiatives for large enterprise clients, preferably within the BFSI domain.

Key Responsibilities:
● Cloud Architecture Leadership:
○ Lead the design and implementation of multi-cloud strategies across AWS, Azure, and GCP.
○ Develop hybrid cloud platforms with seamless integration between on-prem and public cloud environments.
○ Establish multi-tenancy architecture patterns ensuring secure isolation, scalability, and performance.
● Microservices & Platform Architecture:
○ Architect and modernize monolithic applications into containerized microservices.
○ Define API gateways, service mesh patterns, CI/CD workflows, and container orchestration using Kubernetes.
● Security & Network Governance:
○ Ensure end-to-end network architecture, segmentation, and a zero-trust security posture across all environments.
○ Design and enforce cloud-native and hybrid security controls (IAM, firewalls, WAFs, key management, etc.).
● Automation & Runbook Engineering:
○ Create standardized runbooks, workflows, and playbooks to automate provisioning, deployments, failovers, and incident response.
○ Drive Infrastructure-as-Code (IaC) adoption using Terraform, CloudFormation, or Bicep.
● Observability & Reliability:
○ Implement monitoring, logging, tracing, and alerting solutions using native and third-party tools (e.g., CloudWatch, Azure Monitor, Stackdriver, Datadog, Prometheus).
○ Lead SRE practices for proactive reliability and uptime management.
● Cost Optimization & Governance:
○ Provide expert recommendations on cost-efficient architecture, reserved instances, savings plans, and resource right-sizing.
○ Establish FinOps practices and cloud usage governance for large-scale environments.

Required Skills & Expertise:
● Proven experience designing and managing enterprise-scale multi-cloud environments in AWS, Azure, and GCP (WAR expertise).
● Deep understanding of hybrid cloud models, cloud-native architecture, and on-premises integration.
● Expertise in multi-tenant SaaS architecture, including data, network, and security isolation strategies.
● Solid grasp of networking (VPC, peering, VPN, SD-WAN) and security best practices in cloud and hybrid ecosystems.
● Hands-on with DevSecOps, GitOps, Kubernetes, Docker, Helm, and CI/CD tools (e.g., Jenkins, GitLab, ArgoCD). (Optional)
● Demonstrated ability to build observability stacks and manage large-scale production environments.
● Skilled in automation tools: Terraform, Ansible, PowerShell, Python scripting.
● Strong communication, leadership, and stakeholder management skills.

Preferred Certifications:
● AWS Certified Solutions Architect – Professional
● Microsoft Certified: Azure Solutions Architect Expert
● Google Cloud Professional Cloud Architect / Administrator
● Certified Kubernetes Administrator (CKA) or equivalent
● TOGAF / SABSA / Certified Cloud Security Professional (CCSP) – preferred
● FinOps Certified Practitioner – preferred

Preferred Background:
● 12+ years in enterprise IT, with 6+ years in cloud architecture roles.
● Strong preference for experience working with regulated industries, especially BFSI.
● Experience with global cloud transformation programs, enterprise landing zones, and security frameworks like NIST, CIS, and ISO 27001.
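The cost-optimization responsibilities above (reserved instances, savings plans, right-sizing) rest on simple blended-cost arithmetic: commit to a discounted rate for the steady baseline and leave only burst capacity on-demand. A back-of-the-envelope Python sketch with invented rates, not real cloud pricing:

```python
# FinOps back-of-the-envelope: compare all-on-demand spend with a mix of
# reserved (discounted) baseline capacity plus on-demand burst capacity.
# The hourly rate and discount below are invented, not real cloud pricing.

HOURS_PER_MONTH = 730  # common FinOps approximation (365 * 24 / 12)

def monthly_cost(instances: int, hourly_rate: float) -> float:
    return instances * hourly_rate * HOURS_PER_MONTH

def blended_cost(baseline: int, peak: int, on_demand: float, ri_discount: float) -> float:
    """Reserve the steady baseline at a discount; burst capacity stays on-demand."""
    reserved = monthly_cost(baseline, on_demand * (1 - ri_discount))
    burst = monthly_cost(peak - baseline, on_demand)
    return reserved + burst

all_on_demand = monthly_cost(10, 0.20)  # 10 instances at a notional $0.20/hr
mixed = blended_cost(baseline=8, peak=10, on_demand=0.20, ri_discount=0.40)
print(round(all_on_demand, 2))  # → 1460.0
print(round(mixed, 2))          # → 992.8
```

The design point is that only the load you are confident will run all month belongs in the commitment; over-reserving turns the discount into waste, which is why right-sizing comes before purchasing.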

Posted 2 weeks ago

Apply