
1569 GitOps Jobs - Page 9

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

4 - 18 Lacs

Dehradun, Uttarakhand, India

On-site

Shift: Night Shift Role Overview Join a team building resilient, scalable automated test systems for distributed, hybrid-cloud environments. This role suits QA professionals who enjoy designing practical test frameworks and scaling automation to improve reliability and deployment confidence. You'll work closely with engineers to create repeatable tests, simulate real-world failures, and ensure services behave under load and during recovery. Clear communication and pragmatic problem solving are key. Key Responsibilities Architecting Test Systems Design test frameworks to validate microservices and infrastructure across multi-cluster environments Create production-like workload simulations, resource scaling tests, failure injection, and recovery scenarios Automation & Scalability Lead CI/CD-integrated test automation (Jenkins, GitHub Actions) and embed tests into release pipelines Use Kubernetes APIs, Helm, and service mesh tools to automate health checks, failover, and network resilience Apply infrastructure-as-code to make test environments repeatable, extensible, and easy to manage Technical Expertise Familiarity with Kubernetes internals, Helm, and service meshes (Istio, Linkerd) Strong scripting skills: Python, Pytest, Bash; comfortable writing reliable test tooling Experience with observability tools (Prometheus, Grafana, Jaeger) to analyze failures and performance Knowledge of Kubernetes security (RBAC, secrets) and performance testing tools (K6) Working experience with cloud platforms (AWS, Azure, GCP) and containerized CI/CD Comfortable with Linux system administration, networking basics, and container runtimes (Docker/containerd) Hands-on with kubectl, Helm with OCI registries, and GitOps tooling (Flux) We value practical experience and a focus on improving reliability—if you have a strong testing mindset and scripting skills, we encourage you to apply even if you don't meet every item above. Skills: pki management,flux,linkerd,automation,kubectl,helm,ci/cd,istio,qa engineering,jaeger,gitops,kubernetes,rbac,python,pytest,linux,grafana,k6,firewalling,bash,github actions,docker,prometheus,cd,azure,iac,jenkins,oci registries,bash scripting,aws,ci,qa engineer,networking,gcp
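To make the failure-injection and recovery scenarios described in this listing concrete, here is a minimal, hypothetical sketch (not part of the posting): a pytest test that deletes one pod of a Deployment and asserts the workload self-heals within a time budget. The namespace "demo", Deployment "payments", and the app= label selector are invented assumptions, and the official kubernetes Python client with kubeconfig access is assumed.

```python
# Hypothetical failure-injection test: kill one pod of a Deployment and
# assert the workload returns to full availability within a time budget.
# Assumes kubeconfig access and a Deployment "payments" in namespace "demo"
# whose pods carry an app=payments label (all invented names).
import time

import pytest
from kubernetes import client, config

NAMESPACE = "demo"          # assumed namespace
DEPLOYMENT = "payments"     # assumed Deployment name
RECOVERY_BUDGET_S = 120     # how long self-healing may take


@pytest.fixture(scope="module")
def k8s():
    config.load_kube_config()          # or config.load_incluster_config()
    return client.CoreV1Api(), client.AppsV1Api()


def ready_replicas(apps: client.AppsV1Api) -> int:
    dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
    return dep.status.ready_replicas or 0


def test_pod_failure_recovery(k8s):
    core, apps = k8s
    desired = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE).spec.replicas

    # Inject failure: delete the first pod belonging to the Deployment.
    pods = core.list_namespaced_pod(NAMESPACE, label_selector=f"app={DEPLOYMENT}")
    assert pods.items, "no pods found to inject failure into"
    core.delete_namespaced_pod(pods.items[0].metadata.name, NAMESPACE)

    # Recovery: wait until the ReplicaSet controller restores all replicas.
    deadline = time.time() + RECOVERY_BUDGET_S
    while time.time() < deadline:
        if ready_replicas(apps) >= desired:
            return
        time.sleep(5)
    pytest.fail(f"deployment did not recover within {RECOVERY_BUDGET_S}s")
```

In a real framework this kind of check would run against production-like clusters from the CI/CD pipeline the posting mentions.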

Posted 1 week ago

Apply

0 years

4 - 18 Lacs

Pune, Maharashtra, India

On-site

Shift: Night Shift Role Overview Join a team building resilient, scalable automated test systems for distributed, hybrid-cloud environments. This role suits QA professionals who enjoy designing practical test frameworks and scaling automation to improve reliability and deployment confidence. You'll work closely with engineers to create repeatable tests, simulate real-world failures, and ensure services behave under load and during recovery. Clear communication and pragmatic problem solving are key. Key Responsibilities Architecting Test Systems Design test frameworks to validate microservices and infrastructure across multi-cluster environments Create production-like workload simulations, resource scaling tests, failure injection, and recovery scenarios Automation & Scalability Lead CI/CD-integrated test automation (Jenkins, GitHub Actions) and embed tests into release pipelines Use Kubernetes APIs, Helm, and service mesh tools to automate health checks, failover, and network resilience Apply infrastructure-as-code to make test environments repeatable, extensible, and easy to manage Technical Expertise Familiarity with Kubernetes internals, Helm, and service meshes (Istio, Linkerd) Strong scripting skills: Python, Pytest, Bash; comfortable writing reliable test tooling Experience with observability tools (Prometheus, Grafana, Jaeger) to analyze failures and performance Knowledge of Kubernetes security (RBAC, secrets) and performance testing tools (K6) Working experience with cloud platforms (AWS, Azure, GCP) and containerized CI/CD Comfortable with Linux system administration, networking basics, and container runtimes (Docker/containerd) Hands-on with kubectl, Helm with OCI registries, and GitOps tooling (Flux) We value practical experience and a focus on improving reliability—if you have a strong testing mindset and scripting skills, we encourage you to apply even if you don't meet every item above. Skills: pki management,flux,linkerd,automation,kubectl,helm,ci/cd,istio,qa engineering,jaeger,gitops,kubernetes,rbac,python,pytest,linux,grafana,k6,firewalling,bash,github actions,docker,prometheus,cd,azure,iac,jenkins,oci registries,bash scripting,aws,ci,qa engineer,networking,gcp
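As one illustration of the "services behave under load and during recovery" requirement above, the following hedged sketch probes a service endpoint with concurrent requests and asserts the observed error rate stays within a budget. The URL, request volume, and 1% threshold are illustrative assumptions, not values from the posting.

```python
# Hypothetical load-and-availability probe: hit a service endpoint with
# concurrent requests (e.g. while a rollout or failover is in progress) and
# assert the observed error rate stays under a threshold.
from concurrent.futures import ThreadPoolExecutor

import requests

SERVICE_URL = "http://payments.demo.svc.cluster.local:8080/healthz"  # assumed
REQUESTS = 500
MAX_ERROR_RATE = 0.01  # 1% error budget for the duration of the test


def probe(_: int) -> bool:
    try:
        return requests.get(SERVICE_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False


def test_error_rate_under_load():
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(probe, range(REQUESTS)))
    error_rate = 1 - sum(results) / len(results)
    assert error_rate <= MAX_ERROR_RATE, f"error rate {error_rate:.2%} too high"
```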

Posted 1 week ago

Apply

0 years

4 - 18 Lacs

Navi Mumbai, Maharashtra, India

On-site

Shift: Night Shift Role Overview Join a team building resilient, scalable automated test systems for distributed, hybrid-cloud environments. This role suits QA professionals who enjoy designing practical test frameworks and scaling automation to improve reliability and deployment confidence. You'll work closely with engineers to create repeatable tests, simulate real-world failures, and ensure services behave under load and during recovery. Clear communication and pragmatic problem solving are key. Key Responsibilities Architecting Test Systems Design test frameworks to validate microservices and infrastructure across multi-cluster environments Create production-like workload simulations, resource scaling tests, failure injection, and recovery scenarios Automation & Scalability Lead CI/CD-integrated test automation (Jenkins, GitHub Actions) and embed tests into release pipelines Use Kubernetes APIs, Helm, and service mesh tools to automate health checks, failover, and network resilience Apply infrastructure-as-code to make test environments repeatable, extensible, and easy to manage Technical Expertise Familiarity with Kubernetes internals, Helm, and service meshes (Istio, Linkerd) Strong scripting skills: Python, Pytest, Bash; comfortable writing reliable test tooling Experience with observability tools (Prometheus, Grafana, Jaeger) to analyze failures and performance Knowledge of Kubernetes security (RBAC, secrets) and performance testing tools (K6) Working experience with cloud platforms (AWS, Azure, GCP) and containerized CI/CD Comfortable with Linux system administration, networking basics, and container runtimes (Docker/containerd) Hands-on with kubectl, Helm with OCI registries, and GitOps tooling (Flux) We value practical experience and a focus on improving reliability—if you have a strong testing mindset and scripting skills, we encourage you to apply even if you don't meet every item above. Skills: pki management,flux,linkerd,automation,kubectl,helm,ci/cd,istio,qa engineering,jaeger,gitops,kubernetes,rbac,python,pytest,linux,grafana,k6,firewalling,bash,github actions,docker,prometheus,cd,azure,iac,jenkins,oci registries,bash scripting,aws,ci,qa engineer,networking,gcp
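For the GitOps tooling (Flux) item above, here is a minimal sketch of an automated reconciliation health check; it assumes kubectl access to a cluster where the Flux CRDs (kustomize.toolkit.fluxcd.io) are installed, and simply fails if any Flux Kustomization is not reporting Ready.

```python
# Hypothetical GitOps health check for a Flux-managed cluster: read Flux
# Kustomization objects via kubectl and fail if any of them is not Ready.
import json
import subprocess


def flux_kustomizations() -> list[dict]:
    out = subprocess.run(
        ["kubectl", "get", "kustomizations.kustomize.toolkit.fluxcd.io",
         "-A", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("items", [])


def not_ready(obj: dict) -> bool:
    conditions = obj.get("status", {}).get("conditions", [])
    ready = next((c for c in conditions if c.get("type") == "Ready"), None)
    return ready is None or ready.get("status") != "True"


def test_gitops_reconciliation_is_healthy():
    broken = [o["metadata"]["name"] for o in flux_kustomizations() if not_ready(o)]
    assert not broken, f"Flux objects not reconciled: {broken}"
```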

Posted 1 week ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Our client is a global technology company headquartered in Santa Clara, California. It focuses on helping organisations harness the power of data to drive digital transformation, enhance operational efficiency, and achieve sustainability. The company combines over 100 years of experience in operational technology (OT) and more than 60 years in IT to unlock the power of data from your business, your people and your machines. We help enterprises store, enrich, activate and monetise their data to improve their customers’ experiences, develop new revenue streams and lower their business costs. Over 80% of the Fortune 100 trust our client for data solutions. The company’s consolidated revenue for fiscal 2024 (ended March 31, 2024) was approximately USD 57.5 billion, and the company has approximately 296,000 employees worldwide. It delivers digital solutions utilising Lumada in five sectors, including Mobility, Smart Life, Industry, Energy and IT, to increase our customers’ social, environmental and economic value. Job Title: GCP + DevOps Engineer Location: Chennai Experience: 9+ years Job Type: Contract to hire Notice Period: Immediate joiners Mandatory Skills: GCP Cloud, DevOps, GitLab, Jenkins JD: GCP DevOps/Cloud/Kubernetes Engineer Description What you’ll be doing... You will be part of a world-class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms at large scale for Global Technology Solutions at Verizon, a top-20 Fortune 500 company. This individual will have a high level of technical expertise and be hands-on as part of the product team involved in engineering and enabling new products and services. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building Docker containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, AWS, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, shell scripts or other scripting technologies. You will have autonomous control over researching and doing POCs on new services and products. This individual will lead design, architecture and problem resolution for business-impacting failures and drive the resolution to meet platform service-level objectives. Other Responsibilities: Exposure to different components of a container platform like logging, monitoring, security and SRE practices Responsible for providing technical guidance to the product engineering team, including architecture, design, development and end-user support Evaluate new services and solutions to determine value within Verizon Analyze new product opportunities/technologies and make appropriate suggestions for the enhancement of the platform Perform POC (Proof of Concept) technical evaluations for new technologies for use in the cloud What we’re looking for...
You’ll need to have: Bachelor’s degree with 7 or more years of experience in the IT industry and relevant experience of 5 or more years in Kubernetes platform automation Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model Very strong development and engineering expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory Have written Terraform modules and code in a GitOps setting for K8s lifecycle management (any K8s flavor is fine) Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, logging solutions (Fluentd/Elastic/Fluent Bit/OTEL/ADOT/Splunk), including creating/customizing metrics and/or logging dashboards Familiarity with cloud cost optimization (e.g. Kubecost, CloudHealth) Strong experience with infra components like ArgoCD, Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, metrics-server, KEDA Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.) Strong Bash scripting experience, including automation scripting (netshoot, RBAC lookup, etc.) Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Istio service mesh lifecycle management and configuring and troubleshooting applications deployed on the service mesh and service-mesh-related issues Expertise in creating and modifying RBAC and Pod Security Standards, Quotas, LimitRanges, OPA Gatekeeper policies Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, Xray, etc. Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.) Networking of microservices; solid understanding of Kubernetes networking and troubleshooting Certified Kubernetes Administrator (CKA) Terraform certified Demonstrated very strong troubleshooting and problem-solving skills Excellent verbal and written communication skills Even better if you have one or more of the following: Certified Kubernetes Application Developer (CKAD) Certified Kubernetes Security Specialist (CKS) Red Hat Certified OpenShift Administrator Expertise in SDLC and Agile development Development experience with the Operator SDK, HTTP/RESTful APIs, microservices Experience creating validating and/or mutating webhooks, serverless Lambda development, AWS EventBridge, AWS Step Functions, DynamoDB, Python Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc. Backend development experience with languages including Golang (preferred), Spring Boot, and Python Database experience
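As a small illustration of the "RBAC lookup" style automation scripting this listing mentions, here is a hedged sketch (not Verizon's tooling) that lists every subject bound to a given ClusterRole using the official kubernetes Python client; the cluster-admin role is just an assumed example.

```python
# Minimal RBAC-lookup sketch: report every subject bound to a ClusterRole.
# Assumes kubeconfig access; the target role is an illustrative choice.
from kubernetes import client, config

TARGET_ROLE = "cluster-admin"  # assumed role of interest


def subjects_bound_to(role: str) -> list[str]:
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()
    found = []
    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name != role:
            continue
        # A binding may have no subjects; guard against None.
        for subj in binding.subjects or []:
            found.append(f"{subj.kind}/{subj.name} (via {binding.metadata.name})")
    return found


if __name__ == "__main__":
    for line in subjects_bound_to(TARGET_ROLE):
        print(line)
```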

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Multi-Cloud Senior DevOps and Team Manager with experience in CI/CD, you will be responsible for leading a team and ensuring efficient deployment and management of cloud infrastructure and services across various platforms. With a minimum of 8-10 years of experience, including at least 2 years in a leadership or Architect role, you will have a strong background in technologies such as Kubernetes, GCP, AWS, DevOps practices, and more. Your role will require expertise in areas such as Public Cloud (AWS, GCP, Azure), Microservices (Kubernetes, Docker, OpenShift), Infrastructure as Code (Terraform, Pulumi), and a variety of monitoring tools and networking technologies. Proficiency in CI/CD tools like Jenkins and GitLab CI, and knowledge of scripting languages like Bash and Python, will be essential for optimizing deployment processes. In addition to technical skills, you will be expected to demonstrate experience in Release Management, Security practices, and Agile methodologies. As a Team Manager, you will oversee the performance and development of your team members, ensuring successful project delivery and alignment with organizational goals. This is a full-time position based in Chennai, Tamil Nadu, offering a hybrid remote work environment. Please note that candidates with fewer than 8 years of experience need not apply, and the role is not suitable for Junior or Mid-level professionals. If you meet the specified qualifications and have a proven track record in cloud infrastructure management and team leadership, we encourage you to apply by providing details about your current employer, location, notice period, current and expected salary, and confirming your Lead or Architect level status.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Maharashtra

On-site

As a Cloud Architect, you will be responsible for designing and implementing cloud architectures using Google Cloud Platform (GCP) services. Your role will involve engaging directly with clients to provide consultation on cloud adoption strategies and best practices. You will troubleshoot and resolve complex GCP, CCoE, and cloud-related issues while also mentoring junior engineers and sharing knowledge across the organization. Staying updated with the latest cloud technologies and trends is a key aspect of your responsibilities. Your duties will include designing, delivering, and refining cloud services consumed across IaaS and PaaS GCP services. You will research and identify the appropriate tools and technology stack based on scalability, latency, and performance needs. Assessing technical feasibility through rapid PoCs and finding technological solutions for gaps will be part of your day-to-day activities. Collaboration with cross-functional teams, including developers, DevOps, security, and operations, is essential to deliver robust cloud solutions. You will be required to prepare and present regular reports on cloud infrastructure status, performance, and improvements. Transitioning teams from legacy to modern architecture in production and achieving results in a fast-paced dynamic environment are also crucial aspects of your role. Building trusted advisory relationships with strategic accounts, engaging with management, and identifying customer priorities and technical objections will be part of your strategic responsibilities. Leading requirements gathering, project scoping, solution design, problem-solving, and architecture diagramming are also key components of your role. To qualify for this position, you must have at least 3 years of GCP experience, with experience in other clouds being beneficial. A GCP Architect Certification is required, and additional certifications in Google, Kubernetes, or Terraform are advantageous. Experience in building, architecting, designing, and implementing distributed global cloud-based systems is essential. Extensive experience with security (zero trust) and networking in cloud environments is a must. Your role will also involve advising and implementing CI/CD practices using tools such as GitHub, GitLab, Cloud Build, and Cloud Deploy, as well as containerizing workloads using Kubernetes, Docker, Helm, and Artifact Registry. Knowledge of structured Cloud Architecture practices, hybrid cloud deployments, and on-premises-to-cloud migration deployments and roadmaps is required. Additionally, you should have the ability to work cross-functionally, engage and influence audiences, possess excellent organizational skills, and be proficient in customer-facing communication. Proficiency in documentation and knowledge transfer using remote meetings, written documents, technical diagrams, and slide decks is also necessary. Experience in implementing hybrid connectivity using VPN or Cloud Interconnect is a plus. In return, you can expect to work in a 5-day culture with a competitive salary and commission structure. Other benefits include lunch and evening snacks, health and accidental insurance, opportunities for career growth and professional development, a friendly and supportive work environment, paid time off, and various other company benefits.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Optum is a global organization dedicated to providing care and improving health outcomes through technology. Your work with the team will involve connecting individuals with the necessary care, pharmacy benefits, data, and resources to enhance their well-being. The culture at Optum is characterized by diversity, inclusion, talented peers, comprehensive benefits, and opportunities for career development. By joining us, you will have the chance to contribute to advancing health equity on a global scale while making a positive impact on the communities we serve. Come be a part of our mission to Care, Connect, and Grow together. As part of the team, your primary responsibilities will include building and maintaining AWS and Azure resources using modern Infrastructure-as-Code tools such as Terraform and GitHub Actions. You will also be tasked with creating and managing pipelines and automation through GitOps, including GitHub Actions and Jenkins. Additionally, you will develop platform-level services on Kubernetes and automation scripts through code reviews on GitHub. Building monitoring and alerting templates for various cloud metrics using Splunk will also be a key aspect of your role. Furthermore, mentoring other team members on standard tools, processes, automation, and general DevOps practices will be essential to fostering a culture of continuous improvement and learning. To excel in this role, you should possess an undergraduate degree or equivalent experience along with proficiency in AWS, Azure, and DevOps within a complex development environment. Experience with GitOps, Docker/Containerization, and Terraform will be advantageous in fulfilling the responsibilities effectively. In addition to your technical skills, you must adhere to the terms of the employment contract, company policies, and directives while remaining flexible to changes in work locations, teams, shifts, benefits, and work environment as per the evolving business landscape. The company may modify these policies and directives at its discretion to adapt to changing needs. If you are eager to grow both professionally and personally in a dynamic environment that values collaboration, innovation, and impact, we invite you to join us in Hyderabad, Telangana, IN. #LETSGROW
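One way to picture the Terraform-plus-GitOps automation described above is a scheduled drift-detection step; the sketch below is a hypothetical example, not Optum's actual pipeline. It relies on the standard terraform plan -detailed-exitcode behavior (exit code 0 for no changes, 2 when live infrastructure has drifted from code, 1 on error), and the module path is a placeholder.

```python
# Illustrative drift-detection step a GitOps pipeline might run against
# Terraform-managed AWS/Azure resources. Paths and workflow are assumptions.
import subprocess
import sys

WORKDIR = "infra/aws/prod"  # assumed Terraform root module


def detect_drift(workdir: str) -> int:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir,
    )
    return result.returncode


if __name__ == "__main__":
    code = detect_drift(WORKDIR)
    if code == 0:
        print("no drift: infrastructure matches code")
    elif code == 2:
        print("drift detected: open a PR / trigger reconciliation")
    else:
        print("terraform plan failed")
    sys.exit(0 if code in (0, 2) else 1)
```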

Posted 1 week ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Description Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's reported revenue for fiscal 2024 is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com. Job Title: Architect Position: Technical Architect Experience: 9 - 12 Years Category: Cloud Architect Shift: General Main location: India, TN, Chennai Position ID: J0825-1570 Employment Type: Full Time Education Qualification: Bachelor's degree in computer science or a related field, or higher, with a minimum of 9 years of relevant experience. Position Description: Works independently under limited supervision and applies knowledge of subject matter in Applications Development. Possesses sufficient knowledge and skills to effectively deal with issues and challenges within the field of specialization and to develop simple application solutions. Second-level professional with direct impact on results and outcomes. Your future duties and responsibilities Technical & Behavioral Competencies Qualification & Experience: 5+ years in architecture or senior engineering roles, with 3+ years designing and operating workloads on Cloud in production environments. Mandatory Qualifications Strong knowledge of Cloud services and constructs, such as VPC, VSI, ROKS/OpenShift, Kubernetes, Cloud Databases, Object Storage (COS), Monitoring and Log Analysis. Proven experience designing for high availability, disaster recovery, and continuity, including multi-zone/region architectures, backup/restore, and RTO/RPO planning. Hands-on experience with infrastructure-as-code and automation (Terraform, IBM Cloud Schematics), CI/CD pipelines, and GitOps workflows. Solid foundation in Linux, containers, networking (firewalls, load balancers, DNS, TLS) and platform operations. Strong understanding of security-by-design: IAM/RBAC, least privilege, workload identity, encryption, key management, vulnerability management, and policy enforcement. Demonstrated ability to perform architecture reviews, document decisions (ADR), assess risks, and present trade-offs to senior stakeholders. Excellent communication and facilitation skills; able to conduct training sessions and influence cross-functional teams without direct authority. Preferred Qualifications Experience with SRE practices, reliability engineering, capacity planning, cost optimization, and performance tuning. Familiarity with compliance frameworks and internal control environments (e.g., ISO 27001, SOC 2) and aligning solutions to internal group rules and standards. Exposure to API gateways, eventing/queues, and integration patterns. Experience implementing observability stacks and defining SLOs/SLIs. Background in incident management, postmortems, and resilience testing. Certifications such as IBM Certified Solution Architect – Cloud, Red Hat OpenShift, CKA/CKAD, TOGAF, or ITIL.
Responsibilities Review and provide actionable feedback on proposed solution architectures, ensuring compliance with internal policies, security baselines, observability standards, and production-readiness criteria. Validate designs against application continuity objectives, including RTO/RPO, HA, backup, DR strategy, failover/failback, chaos testing readiness, and runbook completeness. Publish and maintain architecture principles, standards, guardrails, and reference patterns for developers and infrastructure teams; drive adoption across squads. Guide implementation teams on provisioning and automation using infrastructure-as-code and GitOps practices (e.g., Terraform/Schematics, pipelines), including environment strategy, naming, tagging, secrets, and access control. Define production non-functionals and acceptance criteria across reliability, performance, capacity, cost efficiency, security, compliance, networking, and operability. Partner with Architecture, Security, SRE, and Application teams to resolve design gaps; propose constructive alternatives and trade-offs with clear rationale. Represent Production in application design discussions, vendor evaluations, change advisory boards, and architecture review committees; clearly articulate Production’s position, risks, and recommended mitigations. Develop and deliver information sessions, brown bags, and enablement on key initiatives, new standards, and platform capabilities. Create and evolve application reference architectures with policy enforcement and controls. Ensure observability by design, including logging, metrics, tracing, alerting, SLOs/SLIs, and dashboards; drive readiness checks prior to go-live. Contribute to incident postmortems and problem management with architectural remediations and resilience patterns. Track emerging internal cloud features and industry best practices; incorporate learnings into standards and roadmaps. Required Qualifications To Be Successful In This Role Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
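The SLO/SLI and error-budget work referenced above boils down to simple arithmetic; the following snippet works through an illustrative 99.9% availability objective over a 30-day window (all numbers are examples, not CGI figures).

```python
# Worked SLO / error-budget arithmetic with illustrative numbers.
SLO_TARGET = 0.999             # 99.9% availability objective
WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

error_budget_minutes = (1 - SLO_TARGET) * WINDOW_MINUTES   # 43.2 minutes
observed_downtime_minutes = 12.0                           # assumed measurement

remaining = error_budget_minutes - observed_downtime_minutes
burn = observed_downtime_minutes / error_budget_minutes

print(f"error budget:     {error_budget_minutes:.1f} min / 30 days")
print(f"budget remaining: {remaining:.1f} min")
print(f"budget consumed:  {burn:.0%}")
```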

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dear Tech Professional, Greetings from Tata Consultancy Services. TCS has always been in the spotlight for being adept in “the next big technologies”. What we can offer you is a space to explore varied technologies and quench your techie soul. What we are looking for: GCP Network Engineer Location: Chennai/Bangalore/Gurgaon/Pune/Hyderabad Interview Mode: Virtual (Microsoft Teams) Exp: 10-15 Years GCP 1 - Product & Environment Provisioning - GCP, Jenkins, Groovy, GitOps, SMEE, GKE, container lifecycle, Terraform & Terraform Cloud, Python, Scripting, ITIL, Incident management, Problem management GCP 1 & 2 - Networking - GCP, IPAM, DNS, Router, Interconnect, VPN, Terraform & Terraform Cloud, Python, Scripting, ITIL, Incident management, Problem management Regards, Prashaanthini

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

JOB PURPOSE: Reporting to the Director, DevSecOps & SRE, the DevSecOps Engineer will be responsible for: Design, implement and monitor enterprise-grade secure fault-tolerant infrastructure. Define and evolve Build & Release best practice by working within teams and educating the other stakeholder teams. In this role, we expect you to bring experience of operations and security using DevOps, along with strong analytical and automation skills that enable you to deliver the expected benefits to the business and its digital products. Building and deploying distributed applications and big data pipelines in the cloud excites you. You will be working with GCP and AWS, and you have used a wide array of tools such as Jenkins, Groovy scripting, shell scripting, Terraform, Ansible or an equivalent. This is an exciting opportunity to influence and build the DevSecOps framework for leading Manufacturing platforms in the Autonomous Buildings space, while working with the latest technologies in a cloud-based environment in a multi-disciplinary team with platform architects, tech leads, data scientists, data engineers, and insight specialists. JOB RESPONSIBILITIES: Design, implement and monitor enterprise-grade secure fault-tolerant infrastructure Define and evolve Build & Release best practice by working within teams and educating the other stakeholder teams. These best practices should support traceability & auditability of change. Ensure continuous availability of various DevOps tools supporting SCM & Release Management including Source Control, Containerization, Continuous Integration, & Change Management (Jenkins, Docker, JIRA, SonarQube, Terraform, Google/Azure/AWS Cloud CLI). Implementing an automated build and release pipelines framework Implementing DevSecOps tools and quality gates with SLOs Implementing SAST, DAST, IAST, OSS tools in CI/CD pipelines Implementing automated change management policies in the pipeline from Dev to Prod. Work with cross-functional co-located teams in design, development and implementation of enterprise scalable features related to enabling higher developer productivity, environment monitoring and self-healing, and facilitating autonomous delivery teams. Build infrastructure automation tools and frameworks leveraging Docker and Kubernetes. Operate as a technical expert on DevOps infrastructure projects pertaining to containerization, systems management, design and architecture. Perform performance analysis and optimization, monitoring and problem resolution, upgrade planning and execution, and process creation and documentation. Integrate newly developed and existing applications into private, public and hybrid cloud environments Automate deployment pipelines in a scalable, secure and reliable manner Leverage application monitoring tools to troubleshoot and diagnose environment issues Have a culture of automation where any repetitive work is automated Working closely with Cloud Infrastructure and Security teams to ensure organizational best practices are followed Translating non-functional requirements of Development, Security, and Operations architectures into a design that can be implemented using the chosen set of software for the project. Ownership of technical design and implementation for one or more software stacks of the DevSecOps team.
Design and implementation of the distributed code repository. Implementing automation pipelines to support code compilation, testing, and deployment into the software components of the entire solution. Integrating the monitoring of all software components in the entire solution, and data mining the data streams for actionable events to remediate issues. Implement configuration management pipelines to standardize environments. Integrate DevSecOps software with credentials management tools. Create non-functional test scenarios for verifying the DevSecOps software setup. KEY QUALIFICATION & EXPERIENCES: At least 5 years of relevant working experience in DevSecOps, Task Automation, or GitOps. Demonstrated proficiency in installation, configuration, or implementation in one or more of the following software. Jenkins, Azure DevOps, Bamboo, or software of similar capability. GitHub, GitLab, or software of similar capability. Jira, Asana, Trello, or software of similar capability. Ansible, Terraform, Chef Automate, or software of similar capability. Flux CD, or software of similar capability. Any test automation software. Any service virtualization software. Operating Software administration experience for Ubuntu, Debian, Alpine, RHEL. Technical documentation writing experience. DevOps Engineering certification for on-premises or public cloud is advantageous. Experience with work planning and effort estimation is an advantage. Strong problem-solving and analytical skills. Strong interpersonal and written and verbal communication skills. Highly adaptable to changing circumstances. Interest in continuously learning new skills and technologies. Experience with programming and scripting languages (e.g. Java, C#, C++, Python, Bash, PowerShell). Experience with incident and response management. Experience with Agile and DevOps development methodologies. Experience with container technologies and supporting tools (e.g. Docker Swarm, Podman, Kubernetes, Mesos). Experience with working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform). Experience with configuration management systems (e.g. Puppet, Ansible, Chef, Salt, Terraform). Experience working with continuous integration/continuous deployment tools (e.g. Git, TeamCity, Jenkins, Artifactory). Experience in GitOps-based automation is a plus. Experience with GitHub for Actions, GitHub for Security, GitHub Copilot. BE/B-Tech/MCA or any equivalent degree in Computer Science OR related practical experience. Must have 5+ years working experience in Jenkins, GCP (or AWS/Azure), Unix & Linux OS. Must have experience with automation/configuration management tools (Jenkins using Groovy scripting, Terraform, Ansible, or an equivalent). Must have experience in Kubernetes (GKE, Kubectl, Helm) and containers (Docker). Must have experience on JFrog Artifactory and SonarQube. Extensive knowledge of institutionalizing Agile and DevOps tools not limited to but including Jenkins, Subversion, Hudson, etc. Experience in Networking Skills (TCP/IP, SSL, SMTP, HTTP, FTP, DNS, and more). Hands-on in source code management tools like Git, Bitbucket, SVN, etc. Should have working experience with monitoring tools like Grafana, Prometheus, Elasticsearch, Splunk, or any other monitoring tools/processes. Experience in Enterprise High Availability Platforms and Network and Security on GCP. Knowledge and experience in the Java programming language. 
Experience working on large-scale distributed systems with a deep understanding of design impacts on performance, reliability, operations, and security is a big plus. Understanding of self-healing/immutable microservice-based architectures, cloud platforms, clustering models, networking technologies. Great interpersonal and communication skills. Self-starter and able to work well in a fast-paced, dynamic environment with minimal supervision. Must have Public Cloud provider certifications (Azure, GCP, or AWS). Having CNCF certification is a plus
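As one concrete shape the "quality gates in CI/CD pipelines" responsibility above can take, here is a hedged sketch of a pipeline step that fails the build when a SonarQube quality gate is not passing. The server URL, project key, and token handling are placeholders, and the /api/qualitygates/project_status endpoint and response fields should be verified against the SonarQube version in use.

```python
# Illustrative CI quality-gate step: ask SonarQube whether the project's
# quality gate passed and exit non-zero otherwise (assumptions noted above).
import os
import sys

import requests

SONAR_URL = os.environ.get("SONAR_URL", "https://sonar.example.com")  # placeholder
PROJECT_KEY = os.environ.get("SONAR_PROJECT", "my-service")           # placeholder
TOKEN = os.environ["SONAR_TOKEN"]                                     # injected by CI


def quality_gate_status() -> str:
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": PROJECT_KEY},
        auth=(TOKEN, ""),   # token as username, empty password
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"]


if __name__ == "__main__":
    status = quality_gate_status()
    print(f"quality gate for {PROJECT_KEY}: {status}")
    sys.exit(0 if status == "OK" else 1)
```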

Posted 1 week ago

Apply

7.0 years

0 Lacs

Delhi, India

On-site

Echo Base Global is a digital finance company creating an end-to-end crypto ecosystem built on Web3 technology. Echo Base drives interoperability across its products to create an integrated, user-first experience that simplifies the complexity of interacting with digital assets. Est. 2025. The Sr. DevOps Engineer is responsible for designing, managing, and securing Echo Base's cloud-based cryptocurrency platforms. This role requires deep expertise in cloud infrastructure, security protocols, and monitoring tools. The ideal candidate ensures reliability, scalability, and compliance for Echo Base's cryptocurrency services while driving team collaboration and technical innovation. What Will You Do Develop and execute a comprehensive DevOps strategy to ensure scalability, uptime, and security for Echo Base's cryptocurrency platforms and services Oversee the design, deployment, and optimization of AWS cloud infrastructure, ensuring high performance, cost efficiency, and alignment with cloud-native architectural best practices Drive the adoption and implementation of observability tools like Splunk, Prometheus, and Grafana, enabling proactive monitoring, alerting, and rapid issue resolution Build and manage automated workflows and event-driven systems using Apache Airflow and AWS Lambda to streamline operations and enhance system reliability Implement robust data visualization and reporting solutions with Superset, delivering actionable insights for transaction metrics and operational KPIs Conduct regular security audits, enforce compliance with cryptocurrency regulations, and implement data protection best practices to safeguard user and system integrity Act as the primary escalation point for critical incidents, leading swift resolution efforts and driving improvements through comprehensive post-mortem reviews Report key DevOps metrics, operational performance updates, and technical strategies to senior leadership, aligning team efforts with company goals Who you are: Proven leader with expertise in cloud platforms, DevOps best practices, and AWS services Skilled at fostering collaboration, mentoring team members, and resolving critical issues Passionate about innovation and optimization in high-security, high-availability environments Requirements What We Are Looking For 7+ years of experience in DevOps, Cloud Engineering, or related fields, demonstrating the ability to coordinate cross-functional teams, prioritize tasks, and manage escalations Advanced expertise in AWS services, including EC2, S3, Elasticache, ECS, DynamoDB, Route 53, WAF, SNS, CloudWatch, and ELB, for designing and managing scalable, secure, and high-performance cloud architectures Strong knowledge of DevOps best practices, including GitOps, CI/CD methodologies, and infrastructure-as-code tools such as Terraform and CloudFormation Experience in AWS cost-savings initiatives Proficiency in workflow orchestration tools like Apache Airflow and event-driven architectures using Kafka and AWS Lambda Expertise in monitoring and visualization tools, including Splunk, Prometheus, Grafana, and Apache Superset, to ensure comprehensive system visibility and actionable insights In-depth knowledge of security protocols and tools, such as KMS, Certificate Manager, CloudTrail, and WAF, for safeguarding infrastructure and ensuring compliance Familiarity with cryptocurrency-specific systems, including wallet security, transaction processing, and compliance standards, with an emphasis on adhering to global regulations
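To illustrate the Apache Airflow workflow automation this role describes, here is a minimal, hypothetical DAG: an hourly reconciliation of transaction counts followed by publishing operational KPIs. The DAG name, schedule, and task bodies are assumptions, and a recent Airflow 2.x deployment is presumed.

```python
# Hypothetical Airflow DAG sketch (assumes Airflow 2.4+); names and logic
# are illustrative placeholders, not Echo Base's actual workflows.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def reconcile_transactions():
    # Placeholder task: compare transaction counts from two systems and
    # raise so that alerting/on-call fires if they diverge.
    ledger_count, exchange_count = 1000, 1000  # stand-in values
    if ledger_count != exchange_count:
        raise ValueError("ledger and exchange transaction counts diverged")


def publish_kpis():
    # Placeholder task: push the reconciled figures to a dashboard/metrics store.
    print("publishing operational KPIs")


with DAG(
    dag_id="ops_transaction_reconciliation",   # assumed DAG name
    schedule="@hourly",
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    reconcile = PythonOperator(task_id="reconcile", python_callable=reconcile_transactions)
    publish = PythonOperator(task_id="publish_kpis", python_callable=publish_kpis)
    reconcile >> publish
```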

Posted 1 week ago

Apply

7.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Job Summary: We are looking for a skilled DevOps Engineer to take ownership of our infrastructure on Google Cloud Platform (GCP) and drive improvements in scalability, reliability, and cost efficiency. You will manage Kubernetes workloads, build and maintain CI/CD pipelines using Jenkins, and proactively optimize cloud usage to ensure high performance at a sustainable cost. Key Responsibilities: ● Design, implement, and manage scalable and cost-efficient infrastructure on GCP using Terraform. ● Deploy and manage workloads on Kubernetes (GKE) with a focus on performance and resource optimization. ● Build and maintain CI/CD pipelines in Jenkins, enabling automated testing and deployments. ● Set up and maintain observability systems (Prometheus, Grafana, GCP Cloud Monitoring & Logging). ● Continuously monitor cloud usage, identify inefficiencies, and implement cost-saving measures (e.g., right-sizing, autoscaling, preemptible instances). ● Collaborate with development teams to ensure infrastructure aligns with application needs and business goals. ● Automate operational processes to reduce manual effort and improve system reliability. ● Troubleshoot production issues, perform root cause analysis, and implement improvements to avoid recurrence. ● Enforce security and compliance best practices across infrastructure and pipelines. Required Qualifications: ● 4–7 years of experience in DevOps, SRE, or cloud infrastructure roles. ● Deep hands-on experience with GCP services (GKE, Cloud Build, IAM, Compute Engine, etc.). ● Strong knowledge of Kubernetes and container orchestration. ● Proven experience with Jenkins, including scripting and pipeline configuration. ● Proficient in Terraform and infrastructure automation. ● Scripting skills (e.g., Bash, Python). ● Experience with monitoring and alerting tools. ● Track record of implementing cost optimizations in cloud environments. Nice to Have: ● Familiarity with GitOps tools like ArgoCD or Flux. ● Experience managing preemptible VMs, autoscaling, and other GCP cost-control strategies. ● Knowledge of service mesh (e.g., Istio) and API gateways (e.g., Envoy, Kong). ● Understanding of compliance/security standards (SOC2, ISO 27001). What We Offer: ● Competitive compensation and performance incentives ● Health benefits and professional development support ● High-impact role with ownership of core infrastructure and cost efficiency initiatives
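The cost-optimization levers mentioned above (right-sizing, autoscaling, preemptible/Spot nodes) can be reasoned about with a quick back-of-the-envelope model; every number below is an illustrative placeholder rather than a real GCP price.

```python
# Back-of-the-envelope cost model for moving part of a node pool to
# Spot/preemptible capacity. All figures are illustrative assumptions.
ON_DEMAND_HOURLY = 0.19       # assumed $/hour for one node
SPOT_DISCOUNT = 0.70          # assume Spot/preemptible nodes cost ~70% less
NODES = 30
HOURS_PER_MONTH = 730
SPOT_ELIGIBLE_FRACTION = 0.6  # share of workloads tolerant to node preemption

baseline = NODES * ON_DEMAND_HOURLY * HOURS_PER_MONTH
spot_nodes = NODES * SPOT_ELIGIBLE_FRACTION
optimized = (
    (NODES - spot_nodes) * ON_DEMAND_HOURLY
    + spot_nodes * ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT)
) * HOURS_PER_MONTH

print(f"baseline monthly cost:  ${baseline:,.0f}")
print(f"optimized monthly cost: ${optimized:,.0f}")
print(f"estimated savings:      {1 - optimized / baseline:.0%}")
```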

Posted 1 week ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Delhi

On-site

About Kuoni Tumlare At Kuoni Tumlare, we deliver truly inspiring and innovative solutions and experiences that create value both for our Partners and Society at large. Our wide portfolio of products and solutions is built on 100+ years of destination management experience. Our solutions include series tours, technical visits, educational tours, Japan specialist travel consulting, as well as meetings, incentives, conferences, and exhibitions. Our product portfolio includes MyBus excursions at destinations as well as guaranteed departure tours devised and delivered by our Seat-in-Coach specialists, Europamundo (EMV) and MyBus Landcruise. About the Business / Function Proudly part of Kuoni Tumlare, TUMLARE SOFTWARE SERVICES (P) LTD. is a multinational technology support company that serves as a trusted technology partner for businesses since 1999. We also help established brands reimagine their business through digitalization. We are looking for an experienced Senior Frontend Developer with expertise in React/JavaScript/TypeScript and knowledge of Liferay to join our growing development team. In this role, you will be responsible for designing and building high-performance, scalable, and responsive web applications within the Liferay portal framework. You will work closely with backend developers, product managers, and designers to deliver a seamless user experience. Key Responsibilities: Developing and maintaining the complete user interface (GUI) using JavaScript/TypeScript and the React ecosystem (React, Redux, ClayUI and related libraries). This includes building interactive user forms, data listings, filters, and other UI components. Integrating these frontend components into the Liferay CE 7.4.x platform. Connecting the entire frontend to our services by working with REST API clients. Our APIs are well-documented with Open API specification to enable seamless integration with our backend systems. Collaborating with the backend team to ensure a smooth user experience. Write clean, maintainable, and well-documented code. Conduct code reviews. 5-8 years of hands-on experience in frontend development. Proficiency in JavaScript/TypeScript and the React framework. Experience with REST APIs and understanding OpenAPI specifications. Knowledge of GraphQL is added advantage. Working knowledge of GitOps, including managing infrastructure changes via pull requests. Daily use of Docker for local development with docker compose. Familiarity with Kubernetes from a user perspective. Ability to read and understand Jenkins pipelines. Basic understanding of OpenSearch (Elasticsearch) is a plus—you should be able to query data and troubleshoot errors via the GUI. Experience with Liferay CE 7.4.x and Java is a major advantage. Familiarity with responsive design and cross-browser compatibility. Excellent communication and interpersonal skills. What We Offer: Working in one of the world’s leading multinational company Probation period - only 3 months. Annual Bonus – as per company policy. Long Service Award. Paid leaves for Birthday and Wedding/Work Anniversary Learning Opportunity through an online learning platform with rich training courses and resources. 
Company Sponsored IT Certification - as per company policy Following insurance from Date of Joining: Group Medical Insurance with Sum Insured of up to 5 Lakh Term life Insurance - 3 times of your CTC Accidental Insurance - 3 times of your CTC Employee Engagement Activities: Fun Friday per week Annual Off-Site Team Building End Year Party CSR programs Global Employee Engagement Events If you match the requirements, excited about what we offer and interested in a new challenge, we are looking forward to receiving your full application. Job Location - Pitampura, Delhi. 5 days working.

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Noida

On-site

Engineering at Innovaccer With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we’re shaping the future and making a meaningful impact on the world. About the Role We at Innovaccer are looking for a Site Reliability Engineer-I to build the most amazing product experience. You’ll get to work with other engineers to build delightful feature experiences to understand and solve our customer’s pain points A Day in the Life Take ownership of SRE pillars: Deployment, Reliability, Scalability, Service Availability (SLA/SLO/SLI), Performance, and Cost. Lead production rollouts of new releases and emergency patches using CI/CD pipelines while continuously improving deployment processes. Establish robust production promotion and change management processes with quality gates across Dev/QA teams. Roll out a complete observability stack across systems to proactively detect and resolve outages or degradations. Analyze production system metrics, optimize system utilization, and drive cost efficiency. Manage autoscaling of the platform during peak usage scenarios. Perform triage and RCA by leveraging observability toolchains across the platform architecture. Reduce escalations to higher-level teams through proactive reliability improvements. Participate in the 24x7 OnCall Production Support team. Lead monthly operational reviews with executives covering KPIs such as uptime, RCA, CAP (Corrective Action Plan), PAP (Preventive Action Plan), and security/audit reports. Operate and manage production and staging cloud platforms, ensuring uptime and SLA adherence. Collaborate with Dev, QA, DevOps, and Customer Success teams to drive RCA and product improvements. Implement security guidelines (e.g., DDoS protection, vulnerability management, patch management, security agents). Manage least-privilege RBAC for production services and toolchains. Build and execute Disaster Recovery plans and actively participate in Incident Response. Work with a cool head under pressure and avoid shortcuts during production issues. Collaborate effectively across teams with excellent verbal and written communication skills. Build strong relationships and drive results without direct reporting lines. Take ownership, be highly organized, self-motivated, and accountable for high-quality delivery. What You Need Experience : 1–3 years in production engineering, site reliability, or related roles. Solid hands-on experience with at least one cloud provider (AWS, Azure, GCP) with automation focus (certifications preferred). Strong expertise in Kubernetes and Linux. Proficiency in scripting/programming (Python required). Strong understanding of observability toolchains (Logs, Metrics, Tracing). Knowledge of CI/CD pipelines and toolchains (Jenkins, ArgoCD, GitOps). Familiarity with persistence stores (Postgres, MongoDB), data warehousing (Snowflake, Databricks), and messaging (Kafka). Exposure to monitoring/observability tools such as ElasticSearch, Prometheus, Jaeger, NewRelic, etc. Proven experience in production reliability, scalability, and performance systems. Experience in 24x7 production environments with process focus. Familiarity with ticketing and incident management systems. 
Security-first mindset with knowledge of vulnerability management and compliance. Advantageous: hands-on experience with Kafka, Postgres, and Snowflake. Excellent judgment, analytical thinking, and problem-solving skills. Ability to quickly identify and drive optimal solutions within constraints. Here’s What We Offer Generous Leaves: Enjoy generous leave benefits of up to 40 days. Parental Leave: Leverage one of industry's best parental leave policies to spend time with your new addition. Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury. Extending support to the family members who matter most. Care Program: Whether it’s a celebration or a time of need, we’ve got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need. Financial Assistance: Life happens, and when it does, we’re here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details. About Innovaccer Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
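For the SLA/SLO/SLI ownership described in this listing, a hedged sketch of computing an availability SLI from Prometheus is shown below. The in-cluster Prometheus URL, the http_requests_total metric, and its labels are assumptions about the environment; the /api/v1/query endpoint is Prometheus's standard HTTP API.

```python
# Illustrative availability SLI check against Prometheus (assumptions above).
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"   # assumed in-cluster address
SLO = 0.995                                           # assumed 99.5% objective
QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[30d]))'
    " / sum(rate(http_requests_total[30d]))"
)


def availability_sli() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


if __name__ == "__main__":
    sli = availability_sli()
    print(f"30-day availability SLI: {sli:.4%} (SLO {SLO:.2%})")
    print("within SLO" if sli >= SLO else "SLO breached: review error budget policy")
```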

Posted 1 week ago

Apply

2.0 - 5.0 years

7 - 8 Lacs

India

On-site

About the Role: We are seeking a skilled and motivated DevOps Engineer with 2 to 5 years of hands-on experience to join our growing technology team. You will be responsible for supporting the development lifecycle by automating infrastructure, enhancing CI/CD pipelines, monitoring systems, and ensuring scalable, secure, and resilient deployments. Key Responsibilities: Design, implement, and maintain scalable and secure CI/CD pipelines. Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation. Manage and monitor cloud environments (AWS preferred, Azure, or GCP). Set up and maintain container orchestration platforms (Docker, Kubernetes, ECS, etc.). Collaborate with development teams to ensure smooth code releases and operational stability. Improve system reliability and performance through observability, logging, and alerting tools. Troubleshoot production issues and participate in incident response. Enforce security best practices and compliance requirements. Required Skills and Qualifications: 2-5 years of professional experience in a DevOps or Infrastructure role. Proficient in scripting languages (e.g., Bash, Python, or Shell). Hands-on experience with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.). Strong experience with cloud platforms (preferably AWS, but Azure/GCP is acceptable). Solid knowledge of containerization and orchestration (Docker, Kubernetes, Helm). Familiar with infrastructure as code (Terraform, Ansible, or similar). Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack, etc.). Good understanding of networking, security, and system administration fundamentals. Preferred Qualifications: Experience working in Agile/Scrum environments. Familiarity with version control systems like Git and GitOps practices. Certification in cloud platforms (AWS Certified DevOps Engineer, etc.) is a plus. Experience with service mesh, secrets management, and performance tuning is advantageous. Job Type: Full-time Pay: ₹65,000.00 - ₹70,000.00 per month Application Question(s): Are you comfortable with the salary? How many years of version control experience do you have? How many years of containerization and orchestration experience do you have? How many years of CI/CD pipeline experience do you have? How many years of automation scripting experience do you have? Do you have experience with monitoring tools? How many years of infrastructure automation tool experience do you have? Are you willing to relocate to Kolkata? Experience: DevOps: 3 years (Required) License/Certification: AWS Certification (Required) Work Location: In person
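A typical concrete instance of the "smooth code releases" responsibility above is a post-deploy verification step in the CI/CD pipeline; the sketch below is illustrative only, with the deployment name, namespace, and smoke-test URL as placeholders.

```python
# Illustrative post-deploy gate: wait for the Kubernetes rollout to finish,
# then run a smoke test. All names/URLs are placeholders.
import subprocess
import sys

import requests

NAMESPACE = "staging"                               # placeholder
DEPLOYMENT = "web-api"                              # placeholder
SMOKE_URL = "https://staging.example.com/healthz"   # placeholder


def wait_for_rollout() -> None:
    subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}",
         "-n", NAMESPACE, "--timeout=180s"],
        check=True,
    )


def smoke_test() -> bool:
    try:
        return requests.get(SMOKE_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False


if __name__ == "__main__":
    wait_for_rollout()
    if not smoke_test():
        print("smoke test failed; consider kubectl rollout undo")
        sys.exit(1)
    print("deployment verified")
```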

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About Kuoni Tumlare: At Kuoni Tumlare, we deliver truly inspiring and innovative solutions and experiences that create value both for our Partners and Society at large. Our wide portfolio of products and solutions is built on 100+ years of destination management experience. Our solutions include series tours, technical visits, educational tours, Japan specialist travel consulting, as well as meetings, incentives, conferences, and exhibitions. Our product portfolio includes MyBus excursions at destinations as well as guaranteed departure tours devised and delivered by our Seat-in-Coach specialists, Europamundo (EMV) and MyBus Landcruise. We cater to a wide range of customer needs in close collaboration with our trusted suppliers and powered by our team of destinations experts - enabling us to make a real difference to the world. About the Business / Function: Proudly part of Kuoni Tumlare, TUMLARE SOFTWARE SERVICES (P) LTD. is a multinational technology support company that serves as a trusted technology partner for businesses since 1999. We also help established brands reimagine their business through digitalization. About the Role: We are looking for an experienced Senior Frontend Developer with expertise in Angular to join our growing development team. In this role, you will be responsible for designing and building high-performance, scalable, and responsive web applications. You will work closely with backend developers, product managers, and designers to deliver a seamless user experience. Key Responsibilities: Developing and maintaining the complete user interface (GUI) using JavaScript/TypeScript and the React ecosystem (React, Redux, ClayUI and related libraries). This includes building interactive user forms, data listings, filters, and other UI components. Integrating these frontend components into the Liferay CE 7.4.x platform. Connecting the entire frontend to our services by working with REST API clients. Our APIs are well-documented with Open API specification to enable seamless integration with our backend systems. Collaborating with the backend team to ensure a smooth user experience. Write clean, maintainable, and well-documented code. Conduct code reviews. Job Requirements 5-8 years of hands-on experience in frontend development. Proficiency in JavaScript/TypeScript and the React framework. Experience with REST APIs and understanding OpenAPI specifications. Knowledge of GraphQL is added advantage. Working knowledge of GitOps, including managing infrastructure changes via pull requests. Daily use of Docker for local development with docker compose. Familiarity with Kubernetes from a user perspective. Ability to read and understand Jenkins pipelines. Basic understanding of OpenSearch (Elasticsearch) is a plus—you should be able to query data and troubleshoot errors via the GUI. Experience with Liferay CE 7.4.x and Java is a major advantage. Familiarity with responsive design and cross-browser compatibility. Excellent communication and interpersonal skills. Candidate should be based in Delhi NCR. We Are Looking for a Person Who Is: A team player, willing to get involved in broader issues, with a key focus on solving the requirements. A collaborative self-starter with hands-on experience and a can-do attitude. A pragmatic approach and the ability to address and solve challenges within a dynamic global environment. Having a pragmatic approach and the ability to address and solve challenges within a dynamic global environment. 
Focused on accuracy and detail while working towards multiple deadlines. Open-minded with a positive attitude, while also critically challenging existing processes and practices. A disciplined thinker and analytical problem solver with the capacity to manage complex issues and develop effective solutions in a timely fashion. What We Offer: Working in one of the world’s leading multinational companies. Probation period - only 3 months. Annual Bonus - as per company policy. Long Service Award. Paid leave for your Birthday and Wedding/Work Anniversary. Learning opportunities through an online learning platform with rich training courses and resources. Company Sponsored IT Certification - as per company policy. The following insurance from the date of joining: Group Medical Insurance with a sum insured of up to 5 Lakh; Term Life Insurance - 3 times your CTC; Accidental Insurance - 3 times your CTC. Employee Engagement Activities: Fun Friday every week, Annual Off-Site Team Building, Year-End Party, CSR programs, Global Employee Engagement Events. If you match the requirements, are excited about what we offer, and are interested in a new challenge, we look forward to receiving your full application. Job Location - Pitampura, Delhi. 5 days working. Number of Openings - 3

Posted 1 week ago

Apply

3.0 years

0 Lacs

pune, maharashtra, india

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. Requirements 3+ years of experience in managing OpenShift- and Kubernetes-based environments Experience with virtualization (spinning up VMs) using OpenShift Expertise in Red Hat OpenShift 4.x and Kubernetes administration Strong background in Linux system administration (RHEL, CentOS, Ubuntu) Experience with container technologies (Docker, Podman, CRI-O) Hands-on experience with Helm, Operators, and Custom Resource Definitions (CRDs) Knowledge of networking fundamentals in Kubernetes (Ingress, Service Mesh, CNI Plugins) Experience with GitOps tools like ArgoCD, Tekton or FluxCD Familiarity with infrastructure automation using Ansible, Terraform or Helm charts Strong scripting skills in Bash, Python or PowerShell Experience with monitoring and logging tools (Prometheus, Grafana, OpenShift Logging) Understanding of security best practices (RBAC, SCCs, Pod Security Policies) Experience with CI/CD Benefits We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented “get things done” culture A strong, fun & positive environment with regular celebrations of our success. We pride ourselves on creating an inclusive, diverse & authentic environment At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
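As an illustration of the scripting this role calls for (Bash, Python, or PowerShell against OpenShift/Kubernetes), the following is a minimal sketch only, assuming kubectl is installed and the current context already points at the cluster to check; neither is stated in the posting:

# Illustrative sketch: report cluster nodes whose Ready condition is not "True".
# Assumes kubectl is on PATH and the active kubeconfig context targets the cluster.
import json
import subprocess

def not_ready_nodes():
    out = subprocess.run(
        ["kubectl", "get", "nodes", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    nodes = json.loads(out)["items"]
    bad = []
    for node in nodes:
        # Each node reports a list of conditions; the Ready condition must be "True".
        conditions = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        if conditions.get("Ready") != "True":
            bad.append(node["metadata"]["name"])
    return bad

if __name__ == "__main__":
    print("Nodes not Ready:", not_ready_nodes() or "none")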

Posted 1 week ago

Apply

0 years

10 - 15 Lacs

mohali district, india

On-site

Sr. QA Engineer Shift: Night Shift Location: Navi Mumbai / Bangalore / Pune / Hyderabad / Mohali / Dehradun / Panchkula Technical Proficiency Deep understanding of Kubernetes internals, cluster lifecycle management, Helm, service meshes (e.g., Istio or Linkerd), and network policies. Strong scripting and automation capabilities (Python, Pytest, Bash, etc.). Familiarity with observability stacks (Prometheus, Grafana, Jaeger), Kubernetes security (RBAC, secrets management), and performance benchmarking tools (e.g., K6). Solid grounding in cloud architecture (AWS, Azure, GCP), infrastructure provisioning, and containerized CI/CD. Moderate to advanced Linux knowledge and proficiency is required: Bash scripting and debugging, systemd/logs, networking/firewalling/routing, certificate/PKI management, containers (Docker/containerd), and Kubernetes tooling (kubectl/Helm with OCI registries, GitOps/Flux) to install, test, and troubleshoot multi-cluster environments. Architecting Test Systems Architect test frameworks and infrastructure for validating microservices and infrastructure components in multi-cluster and hybrid-cloud environments. Oversee the design of complex test scenarios simulating production-like workloads, resource scaling, failure injection, and recovery across distributed clusters. Automation & Scalability Spearhead the development of scalable and maintainable test automation integrated with CI/CD (Jenkins, GitHub Actions, etc.). Leverage Kubernetes APIs, Helm, and service mesh tools to build comprehensive automation coverage, including system health, failover behavior, and network resilience. Promote test infrastructure-as-code and drive IaC forward on the team, making sure the infrastructure code is repeatable, extensible, and reliable. Skills: kubernetes,python,bash,cloud,aws,azure,qa automation,istio,linkerd
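To give a feel for the Python/Pytest automation against Kubernetes APIs described above, here is a minimal sketch of a multi-cluster readiness test; the kubeconfig context names are hypothetical and not part of the posting:

# Minimal sketch of a multi-cluster readiness check using Pytest and the official
# Kubernetes Python client. Context names are hypothetical examples.
import pytest
from kubernetes import client, config

CLUSTER_CONTEXTS = ["staging-eu", "staging-us"]  # assumed kubeconfig contexts

@pytest.mark.parametrize("context", CLUSTER_CONTEXTS)
def test_all_deployments_fully_available(context):
    # Load credentials for the given cluster from the local kubeconfig.
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)

    # A deployment is considered healthy when every desired replica reports ready.
    unhealthy = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            unhealthy.append(
                f"{dep.metadata.namespace}/{dep.metadata.name}: {ready}/{desired}"
            )

    assert not unhealthy, f"Deployments not fully available in {context}: {unhealthy}"

In practice such a check would typically run from a CI/CD pipeline (Jenkins, GitHub Actions) after a failover or scaling scenario has been injected.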

Posted 1 week ago

Apply

0 years

10 - 15 Lacs

hyderabad, telangana, india

On-site

Sr. QA Engineer Shift: Night Shift Location: Navi Mumbai / Bangalore / Pune / Hyderabad / Mohali / Dehradun / Panchkula Technical Proficiency Deep understanding of Kubernetes internals, cluster lifecycle management, Helm, service meshes (e.g., Istio or Linkerd), and network policies. Strong scripting and automation capabilities (Python, Pytest, Bash, etc.). Familiarity with observability stacks (Prometheus, Grafana, Jaeger), Kubernetes security (RBAC, secrets management), and performance benchmarking tools (e.g., K6). Solid grounding in cloud architecture (AWS, Azure, GCP), infrastructure provisioning, and containerized CI/CD. Moderate to advanced Linux knowledge and proficiency is required: Bash scripting and debugging, systemd/logs, networking/firewalling/routing, certificate/PKI management, containers (Docker/containerd), and Kubernetes tooling (kubectl/Helm with OCI registries, GitOps/Flux) to install, test, and troubleshoot multi-cluster environments. Architecting Test Systems Architect test frameworks and infrastructure for validating microservices and infrastructure components in multi-cluster and hybrid-cloud environments. Oversee the design of complex test scenarios simulating production-like workloads, resource scaling, failure injection, and recovery across distributed clusters. Automation & Scalability Spearhead the development of scalable and maintainable test automation integrated with CI/CD (Jenkins, GitHub Actions, etc.). Leverage Kubernetes APIs, Helm, and service mesh tools to build comprehensive automation coverage, including system health, failover behavior, and network resilience. Promote test infrastructure-as-code and drive IaC forward on the team, making sure the infrastructure code is repeatable, extensible, and reliable. Skills: kubernetes,python,bash,cloud,aws,azure,qa automation,istio,linkerd

Posted 1 week ago

Apply

0 years

0 Lacs

india

On-site

Job Description: As an L3 AWS Support Engineer, you will be responsible for providing advanced technical support for complex AWS-based solutions. You will troubleshoot and resolve critical issues, architect solutions, and provide technical leadership to the support team. Key Responsibilities: Architectural Oversight: Design, implement, and optimize cloud architectures for performance, security, and scalability Conduct Well-Architected Framework reviews Complex Troubleshooting: Resolve critical issues involving hybrid environments, multi-region setups, and service interdependencies Debug Lambda functions, API Gateway configurations, and other advanced AWS services Security: Implement advanced security measures like GuardDuty, AWS WAF, and Security Hub Conduct regular security audits and compliance checks (e.g., SOC2, GDPR) Automation & DevOps: Develop CI/CD pipelines using CodePipeline, Jenkins, or GitLab Automate infrastructure scaling, updates, and monitoring workflows Automate the provisioning of EKS clusters and associated AWS resources using Terraform or CloudFormation Develop and maintain Helm charts for consistent application deployments Implement GitOps workflows Disaster Recovery & High Availability: Design and test failover strategies and disaster recovery mechanisms for critical applications Cluster Management and Operations Design, deploy, and manage scalable and highly available EKS clusters Manage Kubernetes objects like Pods, Deployments, StatefulSets, ConfigMaps, and Secrets Implement and manage Kubernetes resource scheduling, scaling, and lifecycle management Team Leadership: Provide technical guidance to Level 1 and 2 engineers Create knowledge-sharing sessions and maintain best practices documentation Cost Management: Implement resource tagging strategies and cost management tools to reduce operational expenses Required Skills and Qualifications: Technical Skills: Deep understanding of AWS core services and advanced features Strong expertise in AWS automation, scripting (Bash, Python, PowerShell), and CLI Experience with AWS CloudFormation and Terraform Knowledge of AWS security best practices, identity and access management, and networking Capacity Planning: Analyze future resource needs and plan capacity accordingly Performance Optimization: Identify and resolve performance bottlenecks Migration and Modernization: Lead complex migration and modernization projects Soft Skills: Excellent problem-solving and analytical skills Strong communication and interpersonal skills Ability to work independently and as part of a team Customer-focused approach Certifications (Preferred): AWS Certified Solutions Architect - Professional AWS Certified DevOps Engineer - Professional AWS Certified Security - Specialty
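The cost-management responsibility above (tagging resources to control operational spend) can be made concrete with a small, hedged boto3 sketch that reports running EC2 instances missing a cost-allocation tag; the tag key and region are illustrative assumptions, not details from the posting:

# Illustrative sketch only: list running EC2 instances that lack a cost-allocation tag.
# The tag key ("CostCenter") and the region are hypothetical assumptions.
import boto3

def untagged_instances(region="ap-south-1", required_tag="CostCenter"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # Tags come back as a list of {"Key": ..., "Value": ...} pairs.
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing the CostCenter tag:", untagged_instances())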

Posted 1 week ago

Apply

5.0 years

0 Lacs

chandigarh, india

On-site

Minimum Experience Required: 5 years Location: Chandigarh IT Park (WFO) Shift Timings: 1200 - 2100 Hours IST Roles and Responsibilities CI/CD Pipeline Management Design, implement, and manage Continuous Integration/Continuous Deployment (CI/CD) pipelines. Automate build, test, and deployment processes to ensure faster, more reliable software delivery. Integrate ArgoCD and Helm for GitOps-based application deployment on Kubernetes clusters. Troubleshoot build failures and streamline deployment processes. Infrastructure as Code (IaC) Use tools like Terraform, Ansible, or CloudFormation to automate infrastructure provisioning. Manage cloud infrastructure on platforms like AWS, Azure, or Google Cloud. Ensure infrastructure is scalable, resilient, and cost-optimised. Monitoring and Logging Implement robust monitoring systems using tools like Prometheus, Grafana, ELK Stack, or Datadog. Set up alerting mechanisms to identify and resolve system issues proactively. Maintain logs for performance analysis, debugging, and compliance. Automation and Scripting Automate repetitive tasks using scripting languages like Python, Bash, or PowerShell. Develop automation scripts for configuration management and deployment. Optimise system performance and ensure efficient resource utilisation. Security and Compliance Implement DevSecOps practices to ensure security at every stage of the development lifecycle. Manage secrets and credentials using tools like HashiCorp Vault or AWS Secrets Manager. Ensure compliance with security policies and standards. Collaboration and Communication Work closely with development, QA, and IT teams to understand their requirements. Collaborate on system design, capacity planning, and disaster recovery strategies. Support developers by optimising CI/CD workflows and resolving infrastructure issues. Cloud Services and Kubernetes Management Deploy, monitor, and manage applications in cloud environments (AWS, Azure, GCP). Ensure high availability, scalability, and fault tolerance of cloud resources. Must have knowledge of EKS (Elastic Kubernetes Service) and Kubernetes cluster management (managed and self-managed). Manage Kubernetes workloads using Docker, Helm charts, and ArgoCD for GitOps-driven deployments. Configuration Management Implement configuration management tools like Ansible, Puppet, or Chef to maintain consistent environments. Use Helm and ArgoCD to standardise and manage Kubernetes application configurations. Ensure that servers and environments are provisioned with the correct configurations. Backup and Disaster Recovery Implement automated backup strategies for critical systems and data. Develop and test disaster recovery plans to ensure business continuity. Performance Optimization Continuously monitor and optimise system performance. Identify and resolve performance bottlenecks across infrastructure, applications, and databases. AWS & Azure certifications are preferred.
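For the monitoring and proactive alerting duties listed above, a minimal, hedged sketch of a check against the Prometheus HTTP API follows; the Prometheus URL and threshold are assumptions, and it presumes kube-state-metrics is being scraped, none of which is specified in the posting:

# Illustrative sketch: query Prometheus for pods stuck outside the Running/Succeeded
# phases and flag them before they turn into incidents. URL and threshold are assumed.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def pods_not_running(threshold=0):
    # kube-state-metrics exposes kube_pod_status_phase; sum every pod that is
    # neither Running nor Succeeded.
    query = 'sum(kube_pod_status_phase{phase!~"Running|Succeeded"})'
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    value = float(result[0]["value"][1]) if result else 0.0
    return value > threshold, value

if __name__ == "__main__":
    unhealthy, count = pods_not_running()
    print(f"Non-running pods: {count:.0f} (alert: {unhealthy})")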

Posted 1 week ago

Apply

0 years

10 - 15 Lacs

dehradun, uttarakhand, india

On-site

Sr. QA Engineer Shift: Night Shift Location: Navi Mumbai / Bangalore / Pune / Hyderabad / Mohali / Dehradun / Panchkula Technical Proficiency Deep understanding of Kubernetes internals, cluster lifecycle management, Helm, service meshes (e.g., Istio or Linkerd), and network policies. Strong scripting and automation capabilities (Python, Pytest, Bash, etc.). Familiarity with observability stacks (Prometheus, Grafana, Jaeger), Kubernetes security (RBAC, secrets management), and performance benchmarking tools (e.g., K6). Solid grounding in cloud architecture (AWS, Azure, GCP), infrastructure provisioning, and containerized CI/CD. Moderate to advanced Linux knowledge and proficiency is required: Bash scripting and debugging, systemd/logs, networking/firewalling/routing, certificate/PKI management, containers (Docker/containerd), and Kubernetes tooling (kubectl/Helm with OCI registries, GitOps/Flux) to install, test, and troubleshoot multi-cluster environments. Architecting Test Systems Architect test frameworks and infrastructure for validating microservices and infrastructure components in multi-cluster and hybrid-cloud environments. Oversee the design of complex test scenarios simulating production-like workloads, resource scaling, failure injection, and recovery across distributed clusters. Automation & Scalability Spearhead the development of scalable and maintainable test automation integrated with CI/CD (Jenkins, GitHub Actions, etc.). Leverage Kubernetes APIs, Helm, and service mesh tools to build comprehensive automation coverage, including system health, failover behavior, and network resilience. Promote test infrastructure-as-code and drive IaC forward on the team, making sure the infrastructure code is repeatable, extensible, and reliable. Skills: kubernetes,python,bash,cloud,aws,azure,qa automation,istio,linkerd

Posted 1 week ago

Apply

0 years

0 Lacs

gurgaon, haryana, india

On-site

Job Description: As an L3 AWS Support Engineer, you will be responsible for providing advanced technical support for complex AWS-based solutions. You will troubleshoot and resolve critical issues, architect solutions, and provide technical leadership to the support team. Key Responsibilities: Architectural Oversight: Design, implement, and optimize cloud architectures for performance, security, and scalability Conduct Well-Architected Framework reviews Complex Troubleshooting: Resolve critical issues involving hybrid environments, multi-region setups, and service interdependencies Debug Lambda functions, API Gateway configurations, and other advanced AWS services Security: Implement advanced security measures like GuardDuty, AWS WAF, and Security Hub Conduct regular security audits and compliance checks (e.g., SOC2, GDPR) Automation & DevOps: Develop CI/CD pipelines using CodePipeline, Jenkins, or GitLab Automate infrastructure scaling, updates, and monitoring workflows Automate the provisioning of EKS clusters and associated AWS resources using Terraform or CloudFormation Develop and maintain Helm charts for consistent application deployments Implement GitOps workflows Disaster Recovery & High Availability: Design and test failover strategies and disaster recovery mechanisms for critical applications Cluster Management and Operations Design, deploy, and manage scalable and highly available EKS clusters Manage Kubernetes objects like Pods, Deployments, StatefulSets, ConfigMaps, and Secrets Implement and manage Kubernetes resource scheduling, scaling, and lifecycle management Team Leadership: Provide technical guidance to Level 1 and 2 engineers Create knowledge-sharing sessions and maintain best practices documentation Cost Management: Implement resource tagging strategies and cost management tools to reduce operational expenses Required Skills and Qualifications: Technical Skills: Deep understanding of AWS core services and advanced features Strong expertise in AWS automation, scripting (Bash, Python, PowerShell), and CLI Experience with AWS CloudFormation and Terraform Knowledge of AWS security best practices, identity and access management, and networking Capacity Planning: Analyze future resource needs and plan capacity accordingly Performance Optimization: Identify and resolve performance bottlenecks Migration and Modernization: Lead complex migration and modernization projects Soft Skills: Excellent problem-solving and analytical skills Strong communication and interpersonal skills Ability to work independently and as part of a team Customer-focused approach Certifications (Preferred): AWS Certified Solutions Architect - Professional AWS Certified DevOps Engineer - Professional AWS Certified Security - Specialty

Posted 1 week ago

Apply

0 years

10 - 15 Lacs

pune, maharashtra, india

On-site

Sr. QA Engineer Shift: Night Shift Location: Navi Mumbai / Bangalore / Pune / Hyderabad / Mohali / Dehradun / Panchkula Technical Proficiency Deep understanding of Kubernetes internals, cluster lifecycle management, Helm, service meshes (e.g., Istio or Linkerd), and network policies. Strong scripting and automation capabilities (Python, Pytest, Bash, etc.). Familiarity with observability stacks (Prometheus, Grafana, Jaeger), Kubernetes security (RBAC, secrets management), and performance benchmarking tools (e.g., K6). Solid grounding in cloud architecture (AWS, Azure, GCP), infrastructure provisioning, and containerized CI/CD. Moderate to advanced Linux knowledge and proficiency is required: Bash scripting and debugging, systemd/logs, networking/firewalling/routing, certificate/PKI management, containers (Docker/containerd), and Kubernetes tooling (kubectl/Helm with OCI registries, GitOps/Flux) to install, test, and troubleshoot multi-cluster environments. Architecting Test Systems Architect test frameworks and infrastructure for validating microservices and infrastructure components in multi-cluster and hybrid-cloud environments. Oversee the design of complex test scenarios simulating production-like workloads, resource scaling, failure injection, and recovery across distributed clusters. Automation & Scalability Spearhead the development of scalable and maintainable test automation integrated with CI/CD (Jenkins, GitHub Actions, etc.). Leverage Kubernetes APIs, Helm, and service mesh tools to build comprehensive automation coverage, including system health, failover behavior, and network resilience. Promote test infrastructure-as-code and drive IaC forward on the team, making sure the infrastructure code is repeatable, extensible, and reliable. Skills: kubernetes,python,bash,cloud,aws,azure,qa automation,istio,linkerd

Posted 1 week ago

Apply

4.0 years

0 Lacs

mohali district, india

On-site

Job Overview: We are seeking a Site Reliability Engineer (SRE) to ensure the reliability, scalability, and performance of our cloud platform. You will work on observability, automation, incident response, capacity planning, and system optimization to minimize downtime and speed up recovery. Key Responsibilities: Build and maintain monitoring, logging, and alerting solutions Lead incident response & post-mortems Implement and test disaster recovery strategies Collaborate with teams to define and enforce SLAs Automate deployment, scaling, and recovery workflows Manage infrastructure with Terraform, GitLab CI/CD, and Kubernetes Participate in on-call rotations. Skills & Experience: 4+ years in SRE/DevOps roles Proficient in Python, Bash, Shell with exposure to Chef/Ansible Strong in AWS (EC2, EKS, RDS, CloudWatch, etc.) Hands-on Kubernetes administration experience Knowledge of IaC (Terraform/CloudFormation) Expertise in Prometheus, Grafana, ELK, and tracing systems Experience with PostgreSQL & network/security best practices Familiar with CI/CD & GitOps workflows Exposure to tools like Splunk, Datadog, Dynatrace. Preferred: AWS Solutions Architect/DevOps Engineer certification Certified Kubernetes Administrator (CKA)
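To make the observability and incident-response side of this role concrete, here is a small, hedged boto3 sketch (the AWS region is an assumption; the posting does not name one) that lists CloudWatch alarms currently firing, the kind of helper an on-call SRE might keep handy:

# Illustrative sketch: summarise CloudWatch alarms currently in the ALARM state.
# The region is a hypothetical assumption.
import boto3

def firing_alarms(region="ap-south-1"):
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    paginator = cloudwatch.get_paginator("describe_alarms")
    alarms = []
    # Only fetch alarms whose current state is ALARM.
    for page in paginator.paginate(StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            alarms.append((alarm["AlarmName"], alarm["StateReason"]))
    return alarms

if __name__ == "__main__":
    for name, reason in firing_alarms():
        print(f"{name}: {reason}")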

Posted 1 week ago

Apply