
1569 GitOps Jobs - Page 4

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

India

On-site

Job Title: Senior Software Engineer II (8+ Years Experience)

About the Role
As a senior engineer on the platform engineering team, you will spearhead the creation of robust cloud platforms while enhancing application observability to ensure seamless monitoring and diagnostics. Embracing an automation-first approach, you will eliminate manual deployments, empowering developers to be self-sufficient and agile. Your leadership will be instrumental in driving innovation, efficiency, and scalability across the organization.

Key Responsibilities
- Ensure the platform is running 24/7 and respond to incidents as needed.
- Design and build all aspects of an enterprise platform, e.g. tooling, CI/CD, security, observability.
- Lead technical decision-making on configuration management, version control, build/deployment management, and automation.
- Automate recurring tasks to increase team and operational efficiency.
- Support CI/CD and associated tools in pipelines for development and production environments (see the workflow sketch after this posting).
- Create, execute, and maintain automation scripts to assist in CI/CD pipelines.
- Share engineering knowledge through presentations, blogs, and videos with the broader engineering community.
- Collaborate with product owners and teams to create relevant engineering roadmaps to increase adoption.
- Research new technologies and alternative approaches to find the best solutions to problems.
- Work with observability tools such as Dynatrace, AppDynamics, Grafana, Splunk, or ELK to build enterprise monitoring solutions.

Key Requirements
- Minimum 8 years of experience in DevOps and SRE.
- 3+ years of experience in software development using languages such as .NET or Java.
- Highly proficient across foundational cloud services (IAM, networking, IaC, Step Functions, storage, serverless/Lambda).
- Hands-on experience with Docker, Kubernetes, OpenShift, and cloud platforms (AWS, Azure, GCP).
- Working experience in programming languages such as Golang, Python, TypeScript, Rust, or Java.
- Highly proficient with code repositories and CI/CD tools such as GitHub, GitHub Actions, GitLab, and Artifactory.
- Confident working with Infrastructure as Code tools such as CloudFormation, CDK, Terraform, and Ansible.
- Good understanding of software development and lifecycle management.
- Good experience with monitoring, alerting, and dashboarding tools (Nagios, Datadog, Sumo Logic, Splunk, ELK, or similar).
- Good understanding of Agile methodology.

Nice to Have
- AWS certifications or similar.
- Experience with GitOps workflows and progressive delivery strategies.
- Experience with systems and IT operations, including Windows and Linux OS administration.
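
The CI/CD and IaC combination this posting describes (GitHub Actions driving Terraform) is commonly wired together as a pull-request check. A minimal sketch, assuming a hypothetical repository that keeps its Terraform code under infra/ and has cloud credentials configured elsewhere as repository secrets:

```yaml
# .github/workflows/terraform-plan.yml (hypothetical repo layout)
name: terraform-plan
on:
  pull_request:
    paths:
      - "infra/**"
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Validate and plan
        working-directory: infra
        # Assumes provider credentials and backend config are supplied
        # via repository secrets / environment, not shown here.
        run: |
          terraform init -input=false
          terraform validate
          terraform plan -input=false
```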

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position: We are looking for a Platform Engineer with hands-on experience in Kubernetes, Grafana, and cloud platforms, along with Python.

Role: Platform Engineer
Location: All Persistent Locations
Experience: 8 to 13 yrs
Job Type: Full Time Employment

What You'll Do:
- Build reusable workflows using Go, empowering developers to provision infrastructure, deploy applications, manage secrets, and operate at scale without needing to become Kubernetes or cloud experts.
- Drive platform standardization and codification of best practices across cloud infrastructure, Kubernetes, and CI/CD.
- Create developer-friendly APIs and experiences while maintaining a high bar for reliability, observability, and performance.
- Design, develop, and maintain Go-based platform tooling and self-service automation that simplifies infrastructure provisioning, application deployment, and service management.
- Write clean, testable code and workflows that integrate with our internal systems such as GitLab, ArgoCD, Port, AWS, and Kubernetes (see the Application manifest sketch after this posting).
- Partner with product engineering, SREs, and cloud teams to identify high-leverage platform improvements and enable adoption across brands.
- Establish templates, reusable libraries, and workflows to promote consistency and scalability.
- Maintain observability, alerting, and health monitoring for platform systems and workflows.
- Develop with an automation-first mindset, eliminating toil and manual operations through codification.
- Collaborate on IAM, security, cost, and scalability decisions with AWS-native and Kubernetes-native architectures.

Expertise You'll Bring:
- Represent Platform Engineering in strategic discussions and contribute to shaping the roadmap and team vision.
- 5-7+ years in a professional cloud computing role with Kubernetes, Docker, and Infrastructure-as-Code experience.
- A BA/BS in Computer Science or equivalent work experience.
- 5-7+ years in Cloud/DevOps/SRE/Platform Engineering roles.
- Proficient in Golang for backend automation and system tooling (any one of Python, Java, or Golang).
- Experience with Grafana, Prometheus, and Datadog.
- Experience operating in Kubernetes environments and building automation for multi-tenant workloads.
- Deep experience with AWS (or an equivalent cloud provider), infrastructure as code (e.g., Terraform), and CI/CD systems like GitLab CI.
- Strong understanding of containers, microservice architectures, and modern DevOps practices.
- Familiarity with GitOps practices using tools like ArgoCD, Helm, and Kustomize.
- Strong debugging and troubleshooting skills across distributed systems.
- Excellent written and verbal communication skills, with a product mindset.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly growth opportunities and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Values-Driven, People-Centric & Inclusive Work Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We support hybrid work and flexible hours to fit diverse lifestyles. Our office is accessibility-friendly, with ergonomic setups and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment.

Let's unleash your full potential at Persistent - persistent.com/careers

"Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
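
For the GitOps practices this role names (ArgoCD with Helm), the central object is an ArgoCD Application that points a cluster at a Git repository. A minimal sketch; the repository URL, chart path, and namespaces are placeholders, not Persistent's actual setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-tooling          # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/platform/deployments.git  # placeholder repo
    targetRevision: main
    path: charts/platform-tooling
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert out-of-band cluster drift
```

With automated sync enabled, a merged commit to the deployments repository becomes the deployment event itself, which is the self-service model the posting describes.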

Posted 5 days ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Avaya
Avaya is an enterprise software leader that helps the world's largest organizations and government agencies forge unbreakable connections. The Avaya Infinity™ platform unifies fragmented customer experiences, connecting the channels, insights, technologies, and workflows that together create enduring customer and employee relationships. We believe success is built through strong connections – with each other, with our work, and with our mission. At Avaya, you'll find a community that values your contributions and supports your growth every step of the way. Learn more at https://www.avaya.com

Overview
We are looking for a hands-on engineer to drive infrastructure automation, hybrid cloud deployment, and security hardening across Azure and Azure Stack environments. You must be skilled in infrastructure as code (Terraform, Ansible), Kubernetes, service mesh, and CI/CD using Jenkins, GitHub Actions, and Azure DevOps, with a strong emphasis on secure networking, DNS, PKI, and identity integration (Keycloak or similar).

Key Skills
- Cloud & Hybrid: Azure, Azure Stack
- IaC & Automation: Terraform, Ansible
- Containers: Kubernetes (AKS/self-managed), Service Mesh (Istio, Linkerd)
- CI/CD: Jenkins, GitHub Actions, Azure DevOps
- Networking & Security: VNETs, NSGs, PKI, DNS, TLS, Zero Trust
- IDP Integration: Keycloak, OAuth2
- Scripting: PowerShell, Bash, Python
- Programming Language: Java

Must-Have Experience
- 1+ years in DevOps or Infrastructure Engineering
- Built/managed hybrid Azure environments
- Deployed secure Kubernetes clusters with service mesh (see the mTLS sketch after this posting)
- Developed reusable Terraform/Ansible/GitHub modules
- Automated secure pipelines using Jenkins/Azure DevOps
- Integrated Java-based IDPs (Keycloak) for enterprise SSO

Nice to Have
- Azure/Azure Security/CKA certifications
- Experience in regulated or enterprise-scale environments
- Exposure to GitOps, container security, or compliance tooling

This will be a hybrid working model.

Education
Associate Degree, equivalent experience, or a technical certificate.

Footer
Avaya is an Equal Opportunity employer and a U.S. Federal Contractor. Our commitment to equality is a core value of Avaya. All qualified applicants and employees receive equal treatment without consideration for race, religion, sex, age, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other protected characteristic. In general, positions at Avaya require the ability to communicate and use office technology effectively. Physical requirements may vary by assigned work location. This job brief/description is subject to change. Nothing in this job description restricts Avaya's right to alter the duties and responsibilities of this position at any time for any reason. You may also review the Avaya Global Privacy Policy (accessible at https://www.avaya.com/en/privacy/policy/) and the applicable Privacy Statement relevant to this job posting (accessible at https://www.avaya.com/en/documents/info-applicants.pdf).
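
For the "secure Kubernetes clusters with service mesh" requirement above, one common Istio pattern is enforcing mutual TLS mesh-wide. A minimal sketch of a strict-mTLS PeerAuthentication, assuming Istio is installed in its default istio-system root namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying to the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT   # reject plaintext traffic between sidecar-injected workloads
```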

Posted 5 days ago

Apply

4.0 years

0 Lacs

India

Remote

We are seeking a skilled Site Reliability Engineer (SRE) with a passion for managing large-scale, GPU-accelerated infrastructure. In this role, you'll help architect and maintain robust, cloud-native platforms that power cutting-edge AI and ML workloads across multi-cloud and on-premise environments.

Responsibilities:
- Kubernetes Management: Design, optimize, and maintain scalable multi-cluster Kubernetes deployments across AWS, Google Cloud, and on-prem infrastructure, with potential expansion into Azure or Oracle Cloud environments.
- Infrastructure Automation: Use Terraform, Pulumi, and GitOps methodologies (e.g., Argo CD, Flux) to provision and manage cloud-native resources.
- CI/CD Pipeline Reliability: Maintain high-availability build and deployment pipelines, ensuring rapid, safe delivery with strong rollback strategies.
- GPU Infrastructure Operations: Operate and scale GPU fleets with NVIDIA driver management, MIG partitioning, auto-scaling, and firmware lifecycle handling (see the MIG scheduling sketch after this posting). Familiarity with AMD/ROCm and upcoming GPU platforms is a plus.
- Monitoring & Observability: Scale and tune observability systems including Prometheus and Grafana. Define and track SLIs/SLOs and enable proactive capacity monitoring.
- On-Call & Incident Response: Participate in a rotating on-call schedule, lead incident resolution efforts, and contribute to incident retrospectives and runbook documentation.
- Process Development & Mentorship: Help shape and evolve SRE processes, and mentor engineers across the organization on reliability practices and tooling.

Requirements:
- 4+ years as a Site Reliability Engineer.
- Deep understanding of Kubernetes architecture and experience managing large-scale clusters in production.
- Strong hands-on skills with AWS and Google Cloud Platform; any exposure to Azure or Oracle Cloud is beneficial.
- Proven experience with Infrastructure as Code (Terraform, Pulumi) and GitOps practices.
- Solid knowledge of Linux internals, system-level debugging, and networking fundamentals.
- Direct experience operating GPU clusters (preferably NVIDIA, including MIG usage); bonus points for experience with ROCm or GPU-focused providers like Lambda or Nebius.
- Proficiency with Prometheus and Grafana, and with maintaining observability at scale.
- Comfortable working in English (written and spoken).

Compensation and conditions:
- Paid vacations.
- Annual bonus: one month's salary.
- This is a full-time position requiring 40 hours per week, but it will be structured as contractor work.
- Devices: you will be expected to use your own computer to perform the work.
- Sole employment: no second job is permitted.
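
As referenced in the GPU operations bullet above, MIG partitioning splits a single GPU into isolated slices that Kubernetes can schedule individually. A minimal sketch of a pod requesting one MIG slice; the image tag is illustrative, and the exact resource name depends on how the NVIDIA device plugin's MIG strategy is configured on the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-inference            # hypothetical workload name
spec:
  restartPolicy: Never
  containers:
    - name: inference
      image: nvcr.io/nvidia/pytorch:24.05-py3   # example NGC image tag
      resources:
        limits:
          # One 1g.5gb MIG slice, as exposed by the NVIDIA device plugin
          # (resource name varies with the configured MIG strategy)
          nvidia.com/mig-1g.5gb: 1
```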

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

India

Remote

About Fello
Fello is a profitable, hyper-growth, VC-backed B2B SaaS startup on a mission to empower businesses with data-driven intelligence. Our AI-powered marketing automation platform helps businesses optimize engagement, make smarter decisions, and stay ahead in a competitive market. With massive growth potential and a track record of success, we're just getting started. If you're passionate about innovation and want to be part of an industry-defining team, Fello is the place to be.

About You
As a Senior DevOps Engineer, you will play a key role in designing, building, and maintaining Fello's cloud infrastructure, deployment pipelines, monitoring systems, and security practices. You will collaborate closely with engineering, product, and security teams to ensure high availability, scalability, and compliance across all environments. This is a hands-on role where you'll architect, automate, and optimize infrastructure while driving adoption of DevOps best practices across the organization. This is a remote role with the option to work anywhere.

You Will
- Design, implement, and manage scalable and secure cloud infrastructure on AWS.
- Build and optimize CI/CD pipelines (GitLab, ArgoCD) for automated deployments across multiple environments.
- Deploy, monitor, and manage containerized workloads on Amazon EKS and ECS using Helm and GitOps practices.
- Implement and maintain the observability stack (Prometheus, Grafana, Loki, AlertManager, CloudWatch) with proactive alerting and logging (see the alert-rule sketch after this posting).
- Drive infrastructure as code (IaC) practices using Terraform for consistent and repeatable environment provisioning.
- Ensure security, compliance, and audit readiness by implementing least-privilege IAM, audit logging, and vulnerability scanning.
- Manage networking and security configurations (VPC, WAF, GuardDuty, Route53, VPN, SSL/ACM).
- Work on cost optimization strategies using RIs, savings plans, Graviton instances, lifecycle policies, and data cleanup.
- Troubleshoot complex system, application, and infrastructure issues across Linux, networking, and cloud layers.
- Collaborate with cross-functional teams to improve developer productivity, deployment velocity, and system reliability.

You Have
- 3-5 years of DevOps experience with strong expertise in AWS cloud services (EC2, ECS, EKS, RDS, Lambda, S3, CloudWatch, VPC, SNS, SQS, API Gateway, etc.).
- Hands-on experience with Kubernetes (EKS), Helm, security and networking in the Kubernetes ecosystem, and GitOps workflows.
- Strong knowledge of CI/CD tooling: GitLab CI, ArgoCD/FluxCD.
- Proficiency with Infrastructure as Code (Terraform, Packer) and automation with Bash or Python.
- Experience with monitoring and logging tools: Prometheus, Grafana, Loki, AlertManager, CloudWatch, OpenTelemetry.
- Strong skills in Linux administration, networking, and troubleshooting.
- Familiarity with security and compliance practices (SOC 2, IAM least privilege, audit logging, secret management).
- Experience with cost optimization strategies on AWS.
- Knowledge of Kafka, Redis, MongoDB, Elasticsearch, and relational databases, with an interest in expanding expertise in modern databases and administration.
- Exposure to deploying GPU workloads on EKS is a plus.
- Excellent problem-solving skills, an ownership mindset, and the ability to thrive in a fast-paced startup environment.
- AWS and Kubernetes certifications are good to have.

Our Benefits
- Competitive Compensation: Attractive salary and benefits package.
- Flexible Work Environment: Fully remote work with flexible hours to promote work-life balance.
- Professional Growth: Opportunities for career advancement and professional development.
- Health & Wellness: Comprehensive health and vision insurance plans.
- Paid Time Off: Generous PTO and paid holidays to recharge and relax.
- Collaborative Culture: A supportive team environment that values innovation and collaboration.
- Equity Options: Opportunity to own a part of Fello and share in our success.
- Cutting-Edge Projects: Work on innovative products that leverage AI and advanced technologies.
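
For the proactive-alerting bullet above, Prometheus alert conditions are declared as rule files that Alertmanager then routes. A minimal sketch, assuming a hypothetical http_requests_total counter labeled by status code:

```yaml
groups:
  - name: service-availability          # hypothetical rule group
    rules:
      - alert: HighErrorRate
        # Ratio of 5xx responses to all responses over the last 5 minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m                        # must hold for 10 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: "5xx error ratio above 5% for 10 minutes"
```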

Posted 5 days ago

Apply

0.0 - 4.0 years

14 - 15 Lacs

Delhi Cantt, Delhi, Delhi

On-site

Job Description – Azure APIM Ops with Bicep Developer

Role Overview
Experience Required: 4 years
Location: Delhi NCR

We are seeking an experienced Azure APIM Ops with Bicep developer to design, implement, and manage API Management (APIM) solutions in a cloud-native environment. The role requires strong expertise in Azure API Management, Infrastructure-as-Code (IaC) using Bicep, and CI/CD automation for consistent deployment across environments (Dev, Test, Perf, Prod). The developer will work closely with architects, DevOps engineers, and integration teams to operationalize and optimize API platforms.

Key Responsibilities
- APIM Operations & Management
  - Configure, manage, and monitor Azure API Management (APIM) instances.
  - Implement policies for authentication, authorization, logging, throttling, and routing.
  - Manage APIM resources such as APIs, Products, Groups, Diagnostics, and Named Values.
  - Troubleshoot API failures, performance issues, and connectivity errors with backend services.
- Infrastructure-as-Code (Bicep)
  - Design and develop Bicep templates for provisioning APIM and related resources.
  - Parameterize templates for multi-environment deployments (Dev/Perf/Prod).
  - Manage Key Vault references (identityClientId, secretIdentifier) in APIM Named Values.
  - Ensure IaC templates align with governance, security, and naming standards.
  - Build infrastructure using Bicep and the APIM Ops extractor and publisher for APIM.
- CI/CD Automation (see the pipeline sketch after this posting)
  - Integrate Bicep templates with Azure DevOps/GitHub Actions pipelines.
  - Automate deployment of APIM artifacts (APIs, policies, diagnostics) through CI/CD.
  - Implement the APIM Ops extractor for exporting and version-controlling APIM configurations.
  - Support rollback, recovery, and environment synchronization processes.
- Security & Compliance
  - Configure Managed Identity authentication between APIM and Azure Functions/App Services.
  - Remove dependency on function keys; adopt AAD access token authentication.
  - Ensure compliance with enterprise security policies (TLS 1.2+, AES-256, SOC 2, etc.).
- Monitoring & Observability
  - Set up diagnostics/logging to Application Insights/Log Analytics.
  - Create dashboards for API performance, failure analysis, and usage insights.
  - Implement proactive alerting for APIM health and performance thresholds.

Required Skills
- Strong experience with Azure API Management (APIM): policies, backends, diagnostics, Named Values.
- Experience building infrastructure using Bicep and the APIM Ops extractor and publisher for APIM.
- Hands-on expertise in Bicep IaC: templates, parameters, modules, and deployment.
- Solid understanding of Azure DevOps Pipelines / GitHub Actions for CI/CD automation.
- Experience with Azure Key Vault integration in APIM.
- Familiarity with Azure Functions, Logic Apps, Event Grid, and Service Bus integrations.
- Good troubleshooting skills in API failures, policy debugging, and connectivity issues.
- Knowledge of the APIM Ops extractor and GitOps practices for API lifecycle management.
- Strong understanding of Azure RBAC, Managed Identity, and AAD authentication.

Nice-to-Have Skills
- Experience with YAML/JSON APIM configuration files.
- Knowledge of PowerShell or Azure CLI for automation.
- Familiarity with Terraform in addition to Bicep.
- Exposure to Application Insights Kusto queries (KQL) for log analysis.
- Understanding of multi-region API deployments and DR setup.
- Knowledge of network security (private endpoints, VNET integration, NSGs, firewalls).

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience in Azure cloud development and operations.
- 2+ years of dedicated experience with APIM and IaC (Bicep).
- Relevant Azure certifications (AZ-204, AZ-400, AZ-305, or API Management specialty) are a plus.

Job Types: Full-time, Permanent
Pay: ₹1,400,000.00 - ₹1,500,000.00 per year

Application Question(s):
- Do you have strong experience with Azure API Management (APIM): policies, backends, diagnostics, Named Values?
- Do you have experience building infrastructure using Bicep and the APIM Ops extractor and publisher for APIM?
- What's your current and expected CTC?
- What's your current location?

Experience: 4 years (Required)
Location: Delhi Cantt, Delhi, Delhi (Required)
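
For the CI/CD responsibilities above, a typical setup has an Azure DevOps pipeline calling the Azure CLI to deploy a Bicep template per environment. A minimal sketch; the service connection, resource group, and file paths are placeholders:

```yaml
# azure-pipelines.yml (illustrative; names below are not from this posting)
trigger:
  branches:
    include: [main]
pool:
  vmImage: ubuntu-latest
steps:
  - task: AzureCLI@2
    displayName: Deploy APIM resources from Bicep
    inputs:
      azureSubscription: apim-service-connection   # placeholder service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az deployment group create \
          --resource-group rg-apim-dev \
          --template-file infra/apim.bicep \
          --parameters infra/params.dev.json
```

Parameterizing only the resource group and parameter file per stage is what keeps Dev/Test/Perf/Prod deployments consistent, which is the point the posting makes about multi-environment IaC.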

Posted 5 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
Role: Manager - DevOps

We at Pine Labs are looking for those who share our core belief - "Every Day is Game Day". We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose
We are seeking a Manager - DevOps who will lead and manage the organization's DevOps infrastructure, the observability stack for applications, the CI/CD pipeline, and support services. This role involves managing a team of DevOps engineers, architecting scalable infrastructure, and ensuring high availability and performance of our messaging and API management systems. This individual will oversee a team of IT professionals, ensure the seamless delivery of IT services, and implement strategies to align technology solutions with business objectives. The ideal candidate is a strategic thinker with strong technical expertise and proven leadership.

What we entrust you with:
- Lead and mentor a team of DevOps leads/engineers in designing and maintaining scalable infrastructure.
- Architect and manage Kafka clusters for high-throughput, low-latency data streaming.
- Deploy, configure, and manage Kong API Gateway for secure and scalable API traffic (see the declarative configuration sketch after this posting).
- Design and implement CI/CD pipelines for microservices and infrastructure.
- Automate infrastructure provisioning using tools like Terraform or Ansible.
- Monitor system performance and ensure high availability and disaster recovery.
- Collaborate with development, QA, and security teams to streamline deployments and enforce best practices.
- Ensure compliance with security standards and implement DevSecOps practices.
- Maintain documentation and provide training on Kafka and Kong usage and best practices.
- Strong understanding of observability pillars: metrics, logs, traces, and events.
- Hands-on experience with Prometheus for metrics collection and Grafana for dashboarding and visualization.
- Proficiency in centralized logging solutions like the ELK Stack (Elasticsearch, Logstash, Kibana), Fluentd, or Splunk.
- Experience with distributed tracing tools such as Jaeger, Zipkin, or OpenTelemetry.
- Ability to implement instrumentation in applications for custom metrics and traceability.
- Skilled in setting up alerting and incident response workflows using tools like Alertmanager, PagerDuty, or Opsgenie.
- Familiarity with SLO, SLI, and SLA definitions and monitoring for service reliability.
- Experience with anomaly detection and root cause analysis (RCA) using observability data.
- Knowledge of cloud-native monitoring tools (e.g., AWS CloudWatch, Azure Monitor, GCP Operations Suite).
- Ability to build actionable dashboards and reports for technical and business stakeholders.
- Understanding of security and compliance monitoring within observability frameworks.
- Collaborative mindset to work with SREs, developers, and QA teams to define meaningful observability goals.
- Prepare and manage the IT budget, ensuring alignment with organizational priorities.
- Monitor expenditures and identify opportunities for cost savings without compromising quality.
- Well-spoken with good communication skills, as a lot of stakeholder management is needed.

What matters in this role (work experience):
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 8+ years of experience in DevOps or related roles, with at least 5 years in a leadership position.
- Strong hands-on experience with Apache Kafka (setup, tuning, monitoring, security).
- Proven experience with Kong API Gateway (plugins, routing, authentication, rate limiting).
- Proficiency in cloud platforms (AWS, Azure, or GCP).
- Kafka certification or Kong Gateway certification.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Knowledge of event-driven architecture and microservices patterns.
- Experience with GitOps and Infrastructure as Code (IaC).
- Experience with containerization and orchestration (Docker, Kubernetes).
- Strong scripting skills (Bash, Python, etc.).
- Hands-on with monitoring tools (Prometheus, Grafana, Mimir, ELK).

What you should be comfortable with:
- Working from office: 5 days a week (Sector 62, Noida).

Pushing The Boundaries
Have a big idea? See something that you feel we should do but haven't done? We will hustle hard to make it happen. We encourage out-of-the-box thinking, and if you bring that with you, we will make sure you get a bag that fits all the energy you bring along.

What We Value In Our People
- You take the shot: you decide fast and you deliver right.
- You are the CEO of what you do: you show ownership and make things happen.
- You own tomorrow: by building solutions for the merchants and doing the right thing.
- You sign your work like an artist: you seek to learn and take pride in the work you do.

(ref:hirist.tech)
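
For the Kong API Gateway duties above, Kong supports a declarative, Git-friendly configuration format (used DB-less or synced with decK) in which plugins such as rate limiting and authentication are attached to services. A minimal sketch; the service name and upstream URL are hypothetical:

```yaml
# kong.yml — declarative configuration (decK / DB-less format)
_format_version: "3.0"
services:
  - name: payments-api              # hypothetical upstream service
    url: http://payments.internal:8080
    routes:
      - name: payments-route
        paths: [/payments]
    plugins:
      - name: rate-limiting
        config:
          minute: 60                # allow 60 requests per minute
          policy: local
      - name: key-auth              # require an API key on this service
```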

Posted 5 days ago

Apply

5.0 - 12.0 years

0 Lacs

Karnataka

On-site

You have an exciting opportunity to join a growing ICT Services company with a global portfolio.

Role Overview: This is a technical expert-level position in the Cloud organization within Getronics. Your responsibility will be to maintain and provide excellent cloud operations services to customers. You will work closely with bid management, pre-sales, and solution & service architects to support the execution of complex solutions.

Key Responsibilities:
- Combine expertise in Kubernetes with the development of cloud-native applications using serverless and container technologies.
- Build, manage, and optimize Kubernetes clusters at scale.
- Work on cloud-agnostic Kubernetes deployments, including Azure Kubernetes Service (AKS), Elastic Kubernetes Service (EKS), and private cloud deployments on VMware.
- Be comfortable leveraging serverless technologies to develop new intellectual property (IP).

Qualifications Required:
- Bachelor's Degree from a reputed institute.

Additional Details: This role requires 12+ years of experience in IT infrastructure and development roles. You should have at least 5 years of experience managing Kubernetes workloads, from building and hardening to overseeing cloud-agnostic Kubernetes clusters, and a minimum of 5+ years working on microservices/containers/serverless/event-driven architectures. Experience in creating and managing production-scale Kubernetes clusters and a deep understanding of Kubernetes networking are essential. Experience with serverless technologies like Lambda and Functions is preferred, along with a solid understanding of the internal workings of Kubernetes clusters and experience upgrading/patching production-grade clusters. You should have experience performing application deployments on Kubernetes clusters, preferably using GitOps, and managing various classes of Kubernetes objects (a production-style Deployment sketch follows this posting). Setting up observability for Kubernetes clusters is also a key requirement.

Preferred Skills:
- Certifications such as CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), CKS (Certified Kubernetes Security Specialist).
- Vendor certifications in virtualization, systems, storage, or networking.
- Any public cloud administrator/architect certifications.
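
As a concrete example of the "various classes of Kubernetes objects" this role manages, a production-grade Deployment typically pins resource requests/limits and health probes. A minimal sketch with placeholder image and endpoints:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api                    # hypothetical service
spec:
  replicas: 3
  selector:
    matchLabels: {app: web-api}
  template:
    metadata:
      labels: {app: web-api}
    spec:
      containers:
        - name: web-api
          image: registry.example.com/web-api:1.4.2   # placeholder image
          resources:
            requests: {cpu: 250m, memory: 256Mi}
            limits: {cpu: "1", memory: 512Mi}
          readinessProbe:                 # gate traffic on readiness
            httpGet: {path: /healthz, port: 8080}
            periodSeconds: 10
          livenessProbe:                  # restart wedged containers
            httpGet: {path: /healthz, port: 8080}
            initialDelaySeconds: 15
```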

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary...
As a Principal Engineer in the Run Time Platforms organization, OneOps team at Walmart Global Tech's IDC, you will lead the vision, design, and development of innovative developer productivity platforms. You will spearhead the creation of scalable, high-performance solutions that empower Walmart's engineering community to deliver exceptional results. Collaborating with global cross-functional teams, you will drive platform reliability, optimize workflows, and ensure seamless developer experiences. Your expertise in Java, cloud development, distributed systems, the OneOps product, and Application Lifecycle Management (ALM) and Infrastructure Lifecycle Management (ILM) will be critical in shaping Walmart's technology ecosystem, enabling thousands of engineers to innovate efficiently and deliver high-quality solutions to millions of customers.

What you'll do...

About The Team
The Orchestration Foundation platform team at Walmart Global Tech has a mission to provide Walmart developers a world-class platform capability to deliver reliable applications faster. We help thousands of developers save time and code better, so that millions of Walmart associates can help hundreds of millions of customers save money and live better. We enable application teams to deploy workloads symmetrically, bringing consistent outcomes and predictable application behavior across public, private, and edge environments. We strive to provide a simple and seamless experience to our developers; developer productivity is our top priority. We sustain millions of transactions per second, process petabytes of data, and enable tens of thousands of production deployments per day. We simplify the complexities of scale and unify software development for all aspects of the business, digital and physical. We are developers' developers. We provide and foster a cloud-native culture in our organization.

You will be an integral part of the Runtime Platforms (RTP) organization, where you will play a key role in building, enhancing, and maintaining platforms that empower Walmart's developer community. Your responsibilities will include designing and developing new features, ensuring platform reliability, and driving product stability to meet the evolving needs of our engineers. By focusing on scalability, performance, and intuitive user experiences, you will directly contribute to improving developer productivity and efficiency. In this role, you will collaborate with cross-functional teams to deliver seamless solutions, proactively resolve issues, and continuously optimize workflows. Ultimately, your work will have a direct impact on shaping the productivity tools and developer experience at Walmart, enabling thousands of engineers to innovate faster and deliver high-quality solutions to millions of customers.

What You'll Do
- Lead the vision, design, and development of OneOps platform features to enhance developer productivity.
- Architect and implement scalable, high-performance microservices using Java, Spring Boot, and RESTful APIs.
- Optimize platform performance through root cause analysis and resolution of critical issues.
- Leverage expertise in OneOps, ALM, and ILM to streamline application and infrastructure lifecycle processes.
- Develop and implement the GitOps model to streamline deployment and management of infrastructure and applications (an environment-overlay sketch follows this posting).
- Utilize knowledge of agentic AI and generative AI to enhance platform capabilities and developer tools.
- Collaborate with global Walmart engineering teams to share knowledge and strengthen the tech community.
- Implement and enhance CI/CD pipelines, observability, logging, monitoring, and alerting for production systems.
- Provide hands-on production support, including issue triage, troubleshooting, and resolution.

What You'll Bring
- Deep expertise in Java, JVM internals (concurrency, multithreading), Spring Boot, Hibernate, and JAX-RS.
- Hands-on experience with the OneOps platform, with a focus on optimizing developer and infrastructure workflows.
- Strong proficiency in Application Lifecycle Management (ALM) and Infrastructure Lifecycle Management (ILM).
- Expertise in developing and implementing the GitOps model to streamline deployment and management of infrastructure and applications.
- Solid foundation in computer science fundamentals, including algorithms, data structures, databases, and SQL.
- Proven experience in designing and implementing Service-Oriented Architecture (SOA) and RESTful web services.
- Extensive experience in building and managing cloud-based applications and distributed systems.
- Familiarity with storage and messaging technologies such as Elasticsearch, PostgreSQL, Kafka, or Python (preferred).
- Proficiency in frontend technologies like HTML5, JavaScript, CSS3, React, Redux, or Node.js (preferred).
- Hands-on experience with CI/CD pipelines, observability, logging, monitoring, and alerting tools.
- Demonstrated ability to lead technical strategy, mentor teams, and drive innovation in a fast-paced environment.
- Hands-on exposure to production support activities (issue identification and resolution).

About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail.

Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal Opportunity Employer
Walmart, Inc., is an Equal Opportunities Employer By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions while being inclusive of all people.

Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
- Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or a related area, and 5 years' experience in software engineering or a related area.
- Option 2: 7 years' experience in software engineering or a related area.

Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Master's degree in computer science, computer engineering, computer information systems, software engineering, or a related area, and 3 years' experience in software engineering or a related area. We value candidates with a background in creating inclusive digital experiences, demonstrating knowledge in implementing Web Content Accessibility Guidelines (WCAG) 2.2 AA standards, assistive technologies, and integrating digital accessibility seamlessly. The ideal candidate would have knowledge of accessibility best practices and join us as we continue to create accessible products and services following Walmart's accessibility standards and guidelines for supporting an inclusive culture.

Primary Location...
4th-7th Floor, Building 10, SEZ, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India
R-2259158
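
For the GitOps model highlighted in this posting, one common layout keeps a base manifest set plus per-environment Kustomize overlays, so promotions happen as Git commits rather than manual deploys. A minimal sketch of a production overlay; all names and the image tag are illustrative, not Walmart's actual repository structure:

```yaml
# overlays/prod/kustomization.yaml — environment overlay in a GitOps repo
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # shared manifests for all environments
namespace: oneops-prod
images:
  - name: oneops/api
    newTag: "2.8.1"            # promoted by the CD pipeline, not by hand
patches:
  - path: replica-patch.yaml   # prod-only replica count override
    target:
      kind: Deployment
      name: oneops-api
```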

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Kochi, Kerala

On-site

Role Overview: As a Cloud AWS Administrator at Wipro, your role involves providing significant technical expertise in architecture planning and design of the concerned tower (platform, database, middleware, backup, etc.) and managing its day-to-day operations. You will be responsible for hands-on work with Azure Cloud Services and AWS, including cloud migration. Proficiency in Infrastructure as Code, CI/CD pipelines, FinOps principles, and DevSecOps tooling is essential. Additionally, you should have strong troubleshooting skills for cloud performance and application-level issues.

Key Responsibilities:
- Forecast talent requirements and hire adequate resources for the team
- Train direct reportees to make recruitment and selection decisions
- Ensure compliance with onboarding and training standards for team members
- Set goals, conduct performance reviews, and provide feedback to direct reports
- Lead engagement initiatives, track team satisfaction scores, and identify engagement-building initiatives
- Manage operations of the tower, ensuring SLA adherence, knowledge management, and customer experience
- Deliver new projects on time, avoiding unauthorized changes and formal escalations

Qualifications Required:
- 4-8 years of experience in multi-cloud environments
- Hands-on expertise in Azure Cloud Services and AWS
- Experience with cloud migration, Infrastructure as Code, and CI/CD pipelines
- Knowledge of FinOps principles, DevSecOps tooling, and DevOps practices
- Strong troubleshooting skills for cloud performance and application-level issues

About the Company: Wipro is an end-to-end digital transformation partner with ambitious goals. The company is focused on reinvention and evolution, empowering employees to design their own reinvention. Wipro welcomes applications from people with disabilities, promotes diversity in leadership positions, and encourages career progression within the organization. Join Wipro to realize your ambitions and be part of a business powered by purpose.

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing...
You will be providing expert-level database administration support for mission-critical production environments on AWS, leading complex cloud database projects and mentoring junior team members. This includes:
- Leading design and implementation of Cassandra, MongoDB, and PostgreSQL/Oracle database solutions on AWS.
- Implementing AWS database services (RDS, Aurora, DynamoDB); a CloudFormation sketch for a production-style RDS instance follows this posting.
- Performing advanced database performance tuning and optimization using AWS Performance Insights and native tools.
- Managing complex database upgrades, migrations, and patches across AWS environments and regions.
- Implementing and managing database replication, clustering, and high-availability solutions across AWS AZs.
- Developing automation scripts and infrastructure as code using the AWS CLI, CloudFormation, and Terraform.
- Designing and implementing AWS backup and disaster recovery strategies across regions.
- Providing technical leadership and mentoring to junior DBAs on AWS best practices.
- Collaborating with architecture teams on AWS database design, capacity planning, and cost optimization.
- Managing shift-based database support operations and incident response using AWS monitoring tools.
- Implementing basic AWS backup strategies using automated snapshots and cross-region backup.
- Establishing database SLI/SLO metrics and error budgets for AWS-hosted databases.

Where you'll be working...
This hybrid role will have a defined work location that includes work from home and assigned office days as set by the manager.

What We're Looking For...

You'll Need To Have:
- A Bachelor's degree or four or more years of relevant work experience.
- Three or more years of experience in database administration with proven expertise in AWS environments.
- Expert-level knowledge of Cassandra, MongoDB, and PostgreSQL/Oracle on AWS.
- Advanced SQL and NoSQL query optimization skills in cloud environments.
- Extensive experience with AWS database services (RDS, Aurora, DynamoDB).
- Proven experience with AWS database clustering, replication, and multi-AZ high availability.
- Experience with AWS infrastructure automation tools (CloudFormation, Terraform).
- Deep knowledge of AWS security services (IAM, KMS, Secrets Manager, WAF) for database protection.
- Experience with the AWS monitoring and alerting ecosystem (CloudWatch, SNS, EventBridge).
- Knowledge of AWS cost management and optimization strategies for database workloads.

Even better if you have one or more of the following:
- AWS Solutions Architect Associate or Professional certification.
- AWS Database Specialty certification.
- MongoDB, Cassandra, or PostgreSQL professional certifications.
- Experience with AWS container services (ECS, EKS) for database workloads.
- Experience with database-as-code and GitOps workflows using AWS services.
- Understanding of microservices architecture and AWS database patterns.
- Knowledge of AWS disaster recovery strategies and cross-region replication.
- Excellent analytical, debugging, and problem-solving skills for AWS-hosted systems.
- Experience with AWS Well-Architected Framework implementation and reviews.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.

Locations: Chennai, India
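
For the RDS work described above, infrastructure-as-code keeps multi-AZ and backup settings reviewable alongside application code. A minimal CloudFormation sketch of a PostgreSQL instance with automated snapshots; the instance class, storage size, and identifiers are placeholders:

```yaml
# rds-postgres.yaml — CloudFormation sketch (illustrative values throughout)
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot          # keep a final snapshot if the stack is deleted
    Properties:
      Engine: postgres
      DBInstanceClass: db.r6g.large
      AllocatedStorage: "100"
      MultiAZ: true                   # synchronous standby in a second AZ
      BackupRetentionPeriod: 14       # days of automated snapshots
      StorageEncrypted: true
      MasterUsername: appadmin
      ManageMasterUserPassword: true  # credentials generated and stored in Secrets Manager
```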

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing
You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, and ArgoCD, or other scripting technologies.
- Leverage monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Design and configure alerts to ensure timely responses to critical events.
- Work with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
- Implement security best practices for cloud and containerized services and ensure adherence to security protocols. Configure IAM roles, VPC security, encryption, and compliance policies (a default-deny NetworkPolicy sketch follows this posting).
- Continuously optimize cloud infrastructure for performance, scalability, and cost-effectiveness. Use tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
- Work closely with the engineering and operations teams to design and implement cloud-based solutions. Provide mentorship and support to team members while sharing best practices for cloud engineering.
- Maintain detailed documentation of cloud architecture and platform configurations, and regularly provide status reports, performance metrics, and cost analysis to leadership.

What we're looking for...

You'll Need To Have:
- A Bachelor's degree or four or more years of work experience.
- Four or more years of relevant work experience.
- Four or more years of work experience in Kubernetes administration.
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI.
- GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VSC/Pipelines/Runners, Artifactory.
- Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating and customizing metrics and logging dashboards.
- Backend development experience with languages including Golang (preferred), Spring Boot, and Python.
- Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices.
- Familiarity with cloud cost optimization (e.g., Kubecost).
- Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server.
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis.
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.).
- Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with Service Mesh lifecycle management, and with configuring and troubleshooting applications deployed on Service Mesh and Service Mesh related issues.
- Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA & Gatekeeper policies.
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc.
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
- Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting.
- Certified Kubernetes Administrator (CKA).
- Demonstrated very strong troubleshooting and problem-solving skills.
- Excellent verbal and written communication skills.

Even better if you have one or more of the following:
- Certified Kubernetes Application Developer (CKAD).
- Red Hat Certified OpenShift Administrator.
- Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals.
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Database experience (RDBMS, NoSQL, etc.).

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.

Locations: Chennai, India
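
For the network-policy and multi-tenancy expectations above, a standard baseline is namespace-level default-deny plus explicit allows. A minimal sketch; the namespace, labels, and ingress source are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments            # hypothetical tenant namespace
spec:
  podSelector: {}                # selects every pod in the namespace
  policyTypes: [Ingress]         # no ingress rules listed, so all ingress is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway
  namespace: payments
spec:
  podSelector:
    matchLabels: {app: payments-api}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels: {kubernetes.io/metadata.name: istio-ingress}
  policyTypes: [Ingress]
```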

Posted 6 days ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

Job Summary
We are looking for a highly skilled Cloud Solutions Architect with strong expertise in AWS cloud services, modern data platforms, and AI/ML-driven transformation. This role balances presales solutioning and delivery execution, supporting US clients during 2 PM – 10 PM IST. The ideal candidate will excel at helping enterprises modernize workloads and data ecosystems and accelerate AI adoption on AWS, while maintaining working knowledge of Azure and GCP.

Key Responsibilities

Presales & Solution Engineering
- Partner with US-based enterprise customers to identify modernization opportunities in cloud, data, and AI platforms.
- Architect cloud-native, data-driven solutions on AWS with scalability, security, and cost efficiency in mind.
- Create reference architectures, modernization roadmaps, and technical proposals with strong business alignment.
- Lead RFP/RFI responses, client workshops, and technical deep dives on cloud/data/AI transformations.
- Build confidence with customers through proof-of-concepts (PoCs), particularly around data modernization and AI/ML workloads.

Delivery & Execution
- Transition solutions from presales into successful delivery execution.
- Provide end-to-end technical leadership across architecture design, implementation, data migration, AI/ML pipeline setup, and optimization.
- Guide clients on best practices in cloud adoption, data engineering, cost optimization, and governance.
- Collaborate closely with delivery teams to ensure the success of large-scale modernization programs.

Technical Expertise

AWS Cloud (Core)
- Infra: EC2, VPC, S3, EBS, ELB, Auto Scaling
- PaaS: RDS, DynamoDB, Aurora, Lambda, API Gateway, Fargate, Step Functions
- Security & Compliance: IAM, AWS Organizations, WAF, GuardDuty, CloudTrail

Data & AI/ML (Modernization Focus)
- Data Platforms: Redshift, EMR, Athena, Glue, Kinesis, data lakes
- AI/ML: SageMaker, Comprehend, Rekognition; integration with TensorFlow, PyTorch
- MLOps & Orchestration: MLflow, Airflow, Kubeflow (preferred)
- Data Governance & Modernization: designing data lakes, lakehouses, and migration strategies

Multi-Cloud (Secondary)
- Azure: Synapse, AKS, Azure ML
- GCP: BigQuery, GKE, Vertex AI
- Understanding of hybrid and multi-cloud strategies

DevOps & Automation
- CI/CD, GitOps, Infrastructure as Code with Terraform & CloudFormation

Required Skills & Qualifications
- Bachelor's/Master's in Computer Science, IT, or a related discipline.
- 10+ years in IT, with 5+ years in cloud/data/AI modernization roles.
- AWS Certified Solutions Architect – Professional (preferred).
- Strong ability to bridge presales discussions with hands-on delivery execution.
- Excellent client-facing communication; skilled at presenting complex architectures to CXOs and technical teams alike.

Nice to Have
- Industry knowledge of finance, healthcare, or retail modernization journeys.
- Hands-on experience with container orchestration (EKS, AKS, GKE).
- Familiarity with real-time streaming and advanced analytics platforms.

Shift & Location
Work Hours: 2:00 PM – 10:00 PM IST (aligned to US business hours)

Posted 6 days ago

Apply

5.0 years

0 Lacs

Sadar, Uttar Pradesh, India

On-site

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Within our Networking DevOps engineering team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations.

Responsibilities include:
Requirement Gathering and Analysis: Collaborate with stakeholders to gather automation requirements, understanding business objectives and network infrastructure needs. Analyse existing network configurations and processes to identify areas for automation and optimization. Analyse existing automation for opportunities to reuse or redeploy it with the required modifications.
End-to-End Automation Development: Design, develop and implement automation solutions for network provisioning, configuration management, monitoring and troubleshooting. Use tools and languages such as Ansible, Terraform, Python and PHP to automate network tasks and workflows. Ensure scalability, reliability, and security of automation solutions across diverse network environments.
Testing and Bug Fixing: Develop comprehensive test plans and procedures to validate the functionality and performance of automation scripts and frameworks. Identify and troubleshoot issues, conduct root cause analysis and implement corrective actions to resolve bugs and enhance automation stability.
Collaborative Development: Work closely with cross-functional teams, including network engineers, software developers, and DevOps teams, to collaborate on automation projects and share best practices.
Reverse Engineering and Framework Design: Reverse engineer existing Ansible playbooks, Python scripts and automation frameworks to understand functionality and optimize performance. Design and redesign automation frameworks, ensuring modularity, scalability, and maintainability for future enhancements and updates.
Network Design and Lab Deployment: Provide expertise in network design, architecting interconnected network topologies, and optimizing network performance. Set up and maintain network labs for testing and development purposes, deploying lab environments on demand and ensuring their proper maintenance and functionality.
Documentation and Knowledge Sharing: Create comprehensive documentation, including design documents, technical specifications, and user guides, to facilitate knowledge sharing and ensure continuity of operations.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. Equally important, you have a growth mindset and are keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Technical And Professional Experience
Minimum 5+ years of relevant experience as a Network DevOps SME / Automation Engineer.
Hands-on experience in the following technologies:
Data Network: Strong experience in configuring, managing, and troubleshooting Cisco, Juniper, HP, and Nokia routers and switches. Hands-on experience with SD-WAN and SDN technologies (e.g., Cisco Viptela, Versa, VMware NSX, Cisco ACI, DNAC).
Network Security: Experience in configuring, managing, and troubleshooting firewalls and load balancers. Firewalls: Palo Alto, Checkpoint, Cisco ASA/FTD, Juniper SRX. Load balancers: F5 LTM/GTM, Citrix NetScaler, A10. Deep understanding of network security principles, firewall policies, NAT, VPN (IPsec/SSL), IDS/IPS.
Programming & Automation: Proficiency in Ansible development and testing for network automation (a minimal playbook sketch follows this posting). Strong Python or Shell scripting skills for automation. Experience with REST APIs, JSON, YAML, Jinja2 templates and GitHub for version control.
Cloud & Linux Skills: Hands-on experience with Linux server administration (RHEL, CentOS, Ubuntu). Experience working with cloud platforms such as Azure, AWS, or GCP.
DevOps: Basic understanding of CI/CD pipelines, GitOps, and automation tools. Familiarity with Docker, Kubernetes, Jenkins, and Terraform in a DevOps environment. Experience working with Infrastructure as Code (IaC) and configuration management tools such as Ansible.
Architecture & Design: Ability to design, deploy, and recommend network setups or labs independently. Strong problem-solving skills in troubleshooting complex network and security issues.
Certifications Required: CCNP Security / CCNP Enterprise (Routing & Switching).

Preferred Technical And Professional Experience
Bachelor's degree and above.
Terraform experience is a plus (for infrastructure as code).
Zabbix template development experience is a plus.
Certifications Preferred: CCIE-level working experience (Enterprise, Security, or Data Center), PCNSE (Palo Alto), CCSA (Checkpoint), Automation & Cloud, Python, Ansible, Terraform.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked "How Did You Hear About Us" during the application process, select "Employee Referral" and enter your contact's Kyndryl email address.
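To make the Ansible-driven network automation in this posting concrete, here is a minimal sketch of a playbook that pushes a baseline configuration to Cisco IOS devices. It assumes the cisco.ios and ansible.netcommon collections are installed; the campus_switches inventory group and the NTP addresses are hypothetical placeholders, not anything from the posting.

```yaml
# Minimal network-automation sketch; inventory group and addresses are assumptions
- name: Push baseline NTP configuration to campus switches
  hosts: campus_switches              # hypothetical inventory group
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Ensure NTP servers are configured
      cisco.ios.ios_config:
        lines:
          - ntp server 10.0.0.10
          - ntp server 10.0.0.11

    - name: Capture the running config for later drift comparison
      cisco.ios.ios_command:
        commands:
          - show running-config
      register: running_cfg
```

In a GitOps-style setup such as the one described, a playbook like this would live in version control and run from a CI pipeline rather than ad hoc from a workstation.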

Posted 6 days ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

hyderabad, bengaluru

Work from Office

Job: Senior Infrastructure Security & Compliance Engineer (Zero-Touch GPU Cloud, GitOps-Driven Compliance & Resilience)
We are seeking a Senior Infrastructure Security & Compliance Engineer with 10+ years of experience in infrastructure and platform automation to drive the Zero-Touch Build, Upgrade, and Certification pipeline for our on-prem GPU cloud environment. This role is focused on integrating security scanning, policy enforcement, compliance validation, and backup automation into a fully GitOps-managed GPU cloud stack, spanning the hardware, OS, Kubernetes, and platform layers.

Key Responsibilities
Design and implement GitOps-native workflows to automate security, compliance, and backup validation as part of the GPU cloud lifecycle.
Integrate Trivy into CI/CD pipelines for container and system image vulnerability scanning.
Automate kube-bench execution and remediation workflows to enforce Kubernetes security benchmarks (CIS/STIG).
Define and enforce policy-as-code using OPA/Gatekeeper to validate cluster and workload configurations (see the sketch after this posting).
Deploy and manage Velero to automate backup and disaster recovery operations for Kubernetes workloads.
Ensure that all compliance, scanning, and backup logic is declarative and auditable through Git-backed repositories.
Collaborate with infrastructure, platform, and security teams to define security baselines, enforce drift detection, and integrate automated guardrails.
Drive remediation automation and post-validation gates across build, upgrade, and certification pipelines.
Monitor evolving security threats and ensure tooling is regularly updated to detect vulnerabilities, misconfigurations, and compliance drift.

Required Skills & Experience
10+ years of hands-on experience in infrastructure, platform automation, and systems security.
Primary skills required: Python/Go/Bash scripting, OPA Rego policy writing, CI integration for Trivy and kube-bench, GitOps.
Strong knowledge and practical experience with:
Trivy for container, filesystem, and configuration scanning
kube-bench for Kubernetes CIS benchmark compliance
Velero for Kubernetes-native backup and disaster recovery
OPA/Gatekeeper for policy-as-code and admission control
Deep understanding of GitOps workflows (e.g., Argo CD, Flux) and how to integrate security tools declaratively.
Proven experience automating security, compliance, and backup validation in CI/CD pipelines.
Solid foundation in Kubernetes internals, RBAC, pod security, and multi-tenant best practices.
Familiarity with vulnerability management lifecycles and security risk remediation strategies.
Experience with Linux systems administration, OS hardening, and secure bootstrapping.
Proficiency in scripting languages such as Python, Go, or Bash for automation and tooling integration.
Bonus:
Experience with SBOMs, image signing, or container supply chain security
Exposure to regulated environments (e.g., PCI-DSS, HIPAA, FedRAMP)
Contributions to open-source security/compliance projects
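The OPA Rego policy writing this role calls for is typically packaged as a Gatekeeper ConstraintTemplate checked into Git. Below is a minimal sketch, closely following the upstream Gatekeeper required-labels example; the template would be paired with a K8sRequiredLabels constraint naming the labels to enforce.

```yaml
# Illustrative policy-as-code sketch based on the standard Gatekeeper example
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        # Emit a violation for every required label missing from the object
        violation[{"msg": msg}] {
          required := input.parameters.labels[_]
          not input.review.object.metadata.labels[required]
          msg := sprintf("missing required label: %v", [required])
        }
```

Because both template and constraint are plain YAML, they sit naturally in a Git-backed repository and are reconciled by the GitOps controller like any other manifest.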

Posted 6 days ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

hyderabad, bengaluru

Work from Office

Job: Senior Kubernetes Platform Engineer (Zero-Touch GPU Cloud GitOps Automation)
We are looking for a Senior Kubernetes Platform Engineer with 10+ years of infrastructure experience to design and implement the Zero-Touch Build, Upgrade, and Certification pipeline for our on-premises GPU cloud platform. This role focuses on automating the Kubernetes layer and its dependencies (e.g., GPU drivers, networking, runtime) using 100% GitOps workflows. You will work across teams to deliver a fully declarative, scalable, and reproducible infrastructure stack, from hardware to Kubernetes and platform services.

Key Responsibilities
Architect and implement GitOps-driven Kubernetes cluster lifecycle automation using tools like kubeadm, ClusterAPI, Helm, and Argo CD (see the sketch after this posting).
Develop and manage declarative infrastructure components for:
GPU stack deployment (e.g., NVIDIA GPU Operator)
Container runtime configuration (containerd)
Networking layers (CNI plugins like Calico, Cilium, etc.)
Lead automation efforts to enable zero-touch upgrades and certification pipelines for Kubernetes clusters and associated workloads.
Maintain Git-backed sources of truth for all platform configurations and integrations.
Standardize deployment practices across multi-cluster GPU environments, ensuring scalability, repeatability, and compliance.
Drive observability, testing, and validation as part of the continuous delivery process (e.g., cluster conformance, GPU health checks).
Collaborate with infrastructure, security, and SRE teams to ensure seamless handoffs between the lower layers (hardware/OS) and the Kubernetes platform.
Mentor junior engineers and contribute to the platform automation roadmap.

Required Skills & Experience
10+ years of hands-on experience in infrastructure engineering, with a strong focus on Kubernetes-based environments.
Primary skills required: Kubernetes API, Helm templating, Argo CD GitOps integration, Go/Python scripting, containerd.
Deep knowledge and hands-on experience with:
Kubernetes cluster management (kubeadm, ClusterAPI)
Argo CD for GitOps-based delivery
Helm for application and cluster add-on packaging
containerd as a container runtime and its integration with GPU workloads
Experience deploying and operating the NVIDIA GPU Operator or equivalent in production environments.
Solid understanding of CNI plugin ecosystems, network policies, and multi-tenant networking in Kubernetes.
Strong GitOps mindset with experience managing infrastructure as code through Git-based workflows.
Experience building Kubernetes clusters in on-prem environments (vs. managed cloud services).
Proven ability to scale and manage multi-cluster, GPU-accelerated workloads with high availability and security.
Solid scripting and automation skills (Bash, Python, or Go).
Familiarity with Linux internals, systemd, and OS-level tuning for container workloads.
Bonus:
Experience with custom controllers, operators, or Kubernetes API extensions
Contributions to Kubernetes or CNCF projects
Exposure to service meshes, ingress controllers, or workload identity providers
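The Argo CD plus Helm workflow described above can be sketched as an Argo CD Application that installs the NVIDIA GPU Operator chart. This is a minimal sketch, assuming the public NVIDIA Helm repository URL and default namespaces; in practice the targetRevision would be pinned to a version certified by the pipeline.

```yaml
# Illustrative Argo CD Application; repo URL and version pinning are assumptions
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gpu-operator
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://helm.ngc.nvidia.com/nvidia   # NVIDIA's public Helm repo
    chart: gpu-operator
    targetRevision: "*"          # pin to a certified chart version in practice
  destination:
    server: https://kubernetes.default.svc
    namespace: gpu-operator
  syncPolicy:
    automated:
      prune: true
      selfHeal: true             # GitOps drift correction back to the Git state
    syncOptions:
      - CreateNamespace=true
```

With selfHeal enabled, any manual change to the operator's resources is reverted to the Git-declared state, which is the zero-touch property the posting emphasizes.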

Posted 6 days ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

hyderabad, bengaluru

Work from Office

Job: Senior Infrastructure Test & Validation Engineer (Zero-Touch GPU Cloud GitOps Validation & Certification)
We are seeking a Senior Infrastructure Test & Validation Engineer with 10+ years of experience to lead the Zero-Touch Validation, Upgrade, and Certification automation of our on-prem GPU cloud platform. This role focuses on ensuring the stability, performance, and conformance of the entire stack, from hardware to Kubernetes, using automated, GitOps-based validation pipelines. The ideal candidate has a strong infrastructure background with deep hands-on skills in Sonobuoy, LitmusChaos, k6, and pytest, and is passionate about automated test orchestration, platform resilience, and continuous conformance.

Key Responsibilities
Design and implement automated, GitOps-compliant pipelines for validation and certification of the GPU cloud stack across the hardware, OS, Kubernetes, and platform layers.
Integrate Sonobuoy for Kubernetes conformance and certification testing.
Design and orchestrate chaos engineering workflows using LitmusChaos to validate system resilience across failure scenarios (see the sketch after this posting).
Implement performance testing suites using k6 and system-level benchmarks, integrated into CI/CD pipelines.
Develop and maintain end-to-end test frameworks using pytest and/or Go, focusing on cluster lifecycle events, upgrade paths, and GPU workloads.
Ensure test coverage and validation across multiple dimensions: conformance, performance, fault injection, and post-upgrade validation.
Build and maintain dashboards and reporting for automated test results, including traceability, drift detection, and compliance tracking.
Collaborate with infrastructure, SRE, and platform teams to embed testing and validation early in the deployment lifecycle.
Own quality assurance gates for all automation-driven deployments.

Required Skills & Experience
10+ years of hands-on experience in infrastructure engineering, systems validation, or SRE roles.
Primary skills required: pytest, Go, k6 scripting, automation framework integration (Sonobuoy, LitmusChaos), CI integration.
Strong experience with:
Sonobuoy for Kubernetes conformance and diagnostics
LitmusChaos for fault injection and resilience validation
k6 for performance/load testing in distributed environments
pytest or Go-based test frameworks for automation and validation scripting
Deep understanding of Kubernetes architecture, upgrade patterns, and operational risks.
Experience validating infrastructure components (GPU drivers, kernel modules, CNI, CRI, etc.) across lifecycle events.
Proficient in GitOps workflows and integrating tests into declarative, Git-backed pipelines (e.g., with Argo CD, Flux).
Hands-on experience with CI/CD systems (e.g., GitHub Actions, GitLab CI, Jenkins) to automate test orchestration.
Solid scripting and automation experience (Python, Bash, or Go).
Familiarity with GPU-based infrastructure and its performance characteristics is a strong plus.
Strong debugging, root cause analysis, and incident investigation skills.
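A LitmusChaos workflow of the kind described is driven by a declarative ChaosEngine resource. Below is a minimal sketch using the standard pod-delete experiment; the target namespace, app label, and service account are hypothetical, and the service account would need the RBAC that LitmusChaos documents for this experiment.

```yaml
# Illustrative chaos experiment; target labels and namespace are assumptions
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: gpu-workload-chaos
  namespace: litmus
spec:
  appinfo:
    appns: ml-serving             # hypothetical namespace under test
    applabel: app=inference       # hypothetical label selecting the workload
    appkind: deployment
  engineState: active
  chaosServiceAccount: pod-delete-sa   # hypothetical SA with the documented RBAC
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "60"         # seconds of sustained fault injection
            - name: CHAOS_INTERVAL
              value: "10"         # seconds between pod deletions
```

Because the experiment is a manifest, it can be committed to Git and run as a gated step in the same pipelines that perform Sonobuoy conformance and k6 load runs.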

Posted 6 days ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

hyderabad, bengaluru

Work from Office

Job: Senior Infrastructure Automation Engineer (Zero-Touch GPU Cloud Stack, Linux Image Lifecycle)
We are seeking a Senior Infrastructure Automation Engineer with 10+ years of experience to lead the design and implementation of a Zero-Touch Build, Upgrade, and Certification pipeline for our on-prem GPU cloud infrastructure. This role focuses on automating the full stack, from hardware provisioning through OS and Kubernetes deployment, leveraging 100% GitOps workflows. The candidate will bring deep expertise in Linux systems automation, image management, and compliance hardening, with a strong foundation in infrastructure engineering.

Key Responsibilities
Architect and implement a fully automated, GitOps-based pipeline for building, upgrading, and certifying the Linux operating system layer in the GPU cloud stack (hardware, OS, Kubernetes, platform).
Design and automate Linux image builds using Packer, Kickstart, and Ansible.
Integrate CIS/STIG compliance hardening and OpenSCAP scanning directly into the image lifecycle and validation workflows (see the sketch after this posting).
Own and manage kernel module/driver automation, ensuring version compatibility and hardware enablement for GPU nodes.
Collaborate with platform, SRE, and security teams to standardize image build and deployment practices across the stack.
Maintain GitOps-compliant infrastructure-as-code repositories, ensuring traceability and reproducibility of all automation logic.
Build self-service capabilities and frameworks for zero-touch provisioning, image certification, and drift detection.
Mentor junior engineers and contribute to strategic automation roadmap initiatives.

Required Skills & Experience
10+ years of hands-on experience in Linux infrastructure engineering, system automation, and OS lifecycle management.
Primary skills required: Ansible, Python, Packer, Kickstart, OpenSCAP.
Deep expertise with:
Packer for automated image builds
Kickstart for unattended OS provisioning
OpenSCAP for security compliance and policy enforcement
Ansible for configuration management and post-build customization
Strong understanding of CIS/STIG hardening standards and their application in automated pipelines.
Experience with kernel and driver management, particularly in hardware-accelerated (GPU) environments.
Proven ability to implement GitOps workflows for infrastructure automation (e.g., Git-backed pipelines for image release and validation).
Solid knowledge of Linux internals, bootloaders, and provisioning mechanisms in bare-metal environments.
Exposure to Kubernetes, particularly in the context of OS-level customization and compliance.
Strong collaboration skills across teams including security, SRE, platform, and hardware engineering.
Bonus:
Familiarity with image signing, SBOM generation, or secure boot workflows
Experience working in regulated or compliance-heavy environments (e.g., FedRAMP, PCI-DSS)
Contributions to infrastructure automation frameworks or open-source tools
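The OpenSCAP-in-the-image-lifecycle idea can be sketched as an Ansible post-build step that evaluates the freshly built image against a CIS profile. This is a minimal sketch, assuming a RHEL-family build host and the scap-security-guide content paths that ship with it; the host group and report path are placeholders.

```yaml
# Illustrative post-build compliance gate; profile ID and paths are assumptions
- name: Validate image against a CIS profile with OpenSCAP
  hosts: image_build_host          # hypothetical Packer build target
  become: true
  tasks:
    - name: Ensure the scanner and SCAP content are present
      ansible.builtin.dnf:
        name:
          - openscap-scanner
          - scap-security-guide
        state: present

    - name: Run the CIS benchmark evaluation and emit an HTML report
      ansible.builtin.command: >
        oscap xccdf eval
        --profile xccdf_org.ssgproject.content_profile_cis
        --report /tmp/cis-report.html
        /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
      register: scan
      # oscap exits 2 when at least one rule fails; treat that as the CI gate signal
      failed_when: scan.rc not in [0, 2]
```

Wired into a Packer provisioner and a Git-backed pipeline, a failing scan blocks image promotion, which is exactly the certification gate the posting describes.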

Posted 6 days ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

hyderabad, bengaluru

Work from Office

Job: Senior Infrastructure Automation Engineer (Zero-Touch GPU Cloud Build & Upgrade)
We are looking for a Senior Infrastructure Automation Engineer with 10+ years of hands-on experience in building and scaling infrastructure automation systems to lead the design and implementation of a Zero-Touch Build, Upgrade, and Certification framework for our on-prem GPU cloud environment. This role demands deep technical expertise across bare-metal provisioning, configuration management, and full-stack automation, from hardware to Kubernetes, built entirely on GitOps principles.

Key Responsibilities
Architect, lead, and implement a fully automated, zero-touch deployment pipeline for GPU cloud infrastructure spanning the hardware, OS, Kubernetes, and platform layers.
Build robust GitOps-based workflows to manage the end-to-end infrastructure lifecycle, from provisioning to continuous compliance.
Design and maintain automation for:
Bare-metal control: power cycling, provisioning, remote installs (see the sketch after this posting)
Firmware and configuration flashing: BIOS, NIC, RAID, etc.
Hardware inventory management
Configuration drift detection and remediation
Develop and extend internal automation frameworks using Ansible, Python, and related infrastructure tooling.
Serve as a technical authority and mentor, guiding junior engineers and collaborating cross-functionally with hardware, SRE, and platform engineering teams.
Lead architectural and design reviews for infrastructure automation systems.
Define and implement best practices for infrastructure as code, compliance, and operational resilience.
Champion automation-driven operational models and reduce manual intervention to near-zero.
Bonus: Familiarity with Terraform, Chef, and cloud automation platforms.

Required Skills & Experience
10+ years of hands-on experience in infrastructure engineering, automation, and systems design, with a strong track record of delivering scalable and maintainable solutions.
Primary skills required: Ansible, Python, ipmitool, firmware scripting, Linux shell scripting.
Deep expertise in:
Ansible for automation and configuration management
Python for scripting, integration, and automation logic
ipmitool and related tools for low-level hardware management (e.g., IPMI, Redfish)
Proven experience with bare-metal automation in data center environments, including:
Power control and PXE booting
BIOS/NIC/RAID firmware upgrades
Hardware and platform inventory systems
Strong foundation in Linux systems, networking, and Kubernetes infrastructure.
Fluency with GitOps workflows and tools.
Experience with CI/CD systems and managing Git-based pipelines for infrastructure.
Familiarity with infrastructure monitoring, logging, and drift detection.
Strong cross-team collaboration and communication skills, especially across hardware, platform, and SRE teams.
Bonus:
Prior leadership or mentorship roles
Experience contributing to or maintaining open-source infrastructure projects
Exposure to GPU-based compute stacks and high-performance workloads
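The bare-metal control loop this role describes (set PXE boot, power-cycle, let the unattended installer take over) can be sketched with Ansible driving ipmitool against each node's BMC. A minimal sketch, assuming a hypothetical gpu_nodes_bmc inventory whose host variables carry the BMC address and credentials:

```yaml
# Illustrative bare-metal reprovisioning step; BMC variables are assumptions
- name: Set PXE boot and power-cycle GPU nodes via their BMCs
  hosts: gpu_nodes_bmc             # hypothetical inventory of BMC endpoints
  gather_facts: false
  connection: local                # ipmitool talks to the BMC over the network
  tasks:
    - name: Force the next boot to PXE
      ansible.builtin.command: >
        ipmitool -I lanplus -H {{ bmc_host }} -U {{ bmc_user }} -P {{ bmc_pass }}
        chassis bootdev pxe

    - name: Power-cycle the node to kick off the unattended install
      ansible.builtin.command: >
        ipmitool -I lanplus -H {{ bmc_host }} -U {{ bmc_user }} -P {{ bmc_pass }}
        chassis power cycle
```

In production the credentials would come from a vault rather than plain inventory variables, and the playbook itself would be triggered from a Git-backed pipeline so the provisioning action is auditable.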

Posted 6 days ago

Apply

12.0 years

0 Lacs

mumbai, maharashtra, india

On-site

12+ years of experience. Looking for an SRE role.

Responsibilities
Help define, drive and implement the SRE strategy.
Promote an "Automate-first" culture in operating services, through the reduction of toil.
Develop methodologies and strategies for identifying toil-heavy and inefficient processes, and for automating and eliminating toil, delay and redundancy in such processes.
Assist in developing engineering and operational service metrics with actionable plans to improve operational efficiency, enhance service quality/SLAs, and optimize delivery.
Working with all parties, develop and implement SLOs for critical services (see the sketch after this posting).
Define the monitoring strategy with Engineering and implement the appropriate capabilities.
Design and implement reliability improvements.
Conduct capacity planning.
Perform chaos engineering exercises.
Lead architectural reviews for reliability.
Drive continuous improvement from incidents.
Contribute to the test and deployment processes, ensuring that they are as reliable and automated as possible.

Skills and qualifications
A bachelor's degree or higher in computer science, information systems, or a related field, or equivalent work experience.
Hands-on SRE practitioner with 5+ years of working experience in an SRE role.
Practical experience defining and implementing Service Level Objectives, and operating to error budgets.
Has implemented and operated monitoring and observability technologies for a wide range of enterprise-grade production systems.
Experience in a corporate software development lifecycle methodology; some experience implementing GitOps is a plus.
Demonstrates a strong understanding of how technical systems work and interact.
Strong analytical skills and a solid understanding of all critical production support processes.
2+ years of experience with one or more public/private cloud platforms (e.g., AWS, Azure).

Knowledge Required
Comprehensive understanding of SRE principles and the ability to evangelise them.
Working knowledge of modern observability tooling, including OpenTelemetry, Prometheus, Grafana, and associated projects.
Experience with Infrastructure as Code (IaC) principles and design.
Extensive knowledge of configuration management solutions such as Ansible, Chef or Puppet.
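The SLO and error-budget work listed above is commonly encoded as Prometheus recording and alerting rules. Below is a minimal sketch, assuming a hypothetical checkout service that exposes the standard http_requests_total counter and a 99.9% availability SLO; the 14.4x burn-rate threshold follows the multi-window pattern popularized by the Google SRE workbook.

```yaml
# Illustrative SLO burn-rate rules; metric names, job label, and windows are assumptions
groups:
  - name: checkout-slo
    rules:
      # Error ratio over a short window, recorded for reuse in alerts and dashboards
      - record: service:request_error_ratio:rate5m
        expr: |
          sum(rate(http_requests_total{job="checkout",code=~"5.."}[5m]))
          /
          sum(rate(http_requests_total{job="checkout"}[5m]))

      - alert: ErrorBudgetFastBurn
        # 14.4x burn rate would exhaust a 30-day error budget in roughly 2 days
        expr: service:request_error_ratio:rate5m > (14.4 * 0.001)
        for: 2m
        labels:
          severity: page
        annotations:
          summary: "checkout is burning its error budget too fast"
```

Rules like these make "operating to error budgets" concrete: paging fires on budget burn rate rather than on raw error spikes, which keeps alerting aligned with the SLO.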

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

mumbai, maharashtra, india

Remote

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world's most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities, where you can make a difference and where no two days are the same.

Job Description
Cloud/Edge Engineer - GKE Specialist
Location: PAN India
Experience: 8-12 years

Choosing Capgemini means choosing a place where you'll be empowered to shape your career, supported by a collaborative global community, and inspired to reimagine what's possible. Join us in helping leading organizations unlock the value of cloud and edge technologies to drive scalable, intelligent, and sustainable digital transformation.

Your Role
As a Cloud/Edge Engineer specializing in Google Kubernetes Engine (GKE), you will play a key role in architecting and deploying scalable edge and cloud-native solutions across distributed environments. You will work on cutting-edge technologies like K3s, KubeEdge, and Kosmotron to enable seamless orchestration between edge clusters and centralized GCP infrastructure.
In this role, you will:
Architect and deploy Kubernetes-based edge clusters (K3s) across distributed environments such as retail or restaurant chains (see the sketch after this posting).
Integrate shared storage solutions using Rook.io and manage persistent volumes across edge nodes.
Implement Kosmotron for multi-cluster Kubernetes control plane management.
Extend Kubernetes capabilities to edge devices using KubeEdge for real-time device communication.
Deploy and manage workloads on Google Cloud GKE for centralized analytics and cloud-native services.
Leverage CloudCore for edge node coordination and data synchronization.

Your Profile
8-12 years of experience in cloud-native engineering, with a strong focus on GCP and Kubernetes.
Deep expertise in GKE, Cloud Monitoring, Cloud Logging, and other GCP services.
Proficiency in containerization technologies including Docker and Kubernetes.
Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, Helm, and Deployment Manager.
Strong understanding of DevOps practices including CI/CD, GitOps, automated testing, and release management.
Solid grasp of cloud security principles including IAM, VPC design, encryption, and vulnerability management.
Programming proficiency in Python, Go, or Java for automation and cloud-native development.
Experience working in distributed edge environments is a strong plus.

What You'll Love About Working Here
Flexible work options and a remote-friendly culture to support work-life balance.
A collaborative and inclusive environment that values innovation and continuous learning.
Access to cutting-edge projects and certifications in cloud, edge, and DevOps technologies.

About Us
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
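The edge-placement pattern the role describes, running a workload only on edge nodes while the control plane stays central, can be sketched as an ordinary Kubernetes manifest. This is a minimal sketch under stated assumptions: the node-role.kubernetes.io/edge label is the one KubeEdge conventionally applies to edge nodes (verify against your deployment), and the image name is hypothetical.

```yaml
# Illustrative edge placement; node label convention and image are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-telemetry-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: store-telemetry-agent
  template:
    metadata:
      labels:
        app: store-telemetry-agent
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # schedule only onto edge nodes
      containers:
        - name: agent
          image: registry.example.com/telemetry-agent:1.4   # hypothetical image
          resources:
            limits:
              memory: 128Mi                # edge nodes are typically resource-constrained
```

The same manifest can be synced to many store-level clusters from a central Git repository, which is how the edge-to-GCP orchestration in the posting would typically be operated.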

Posted 6 days ago

Apply

0 years

0 Lacs

gurgaon, haryana, india

On-site

Job Description
Key Responsibilities:
Design and implement Kubernetes security features including Admission Controllers, RBAC, Pod Security Standards, Network Policies, and Secret management (see the sketch after this posting).
Develop and maintain Infrastructure as Code using Terraform, Helm, and Ansible.
Manage CI/CD pipelines using GitOps with ArgoCD.
Automate deployments and monitor workloads in Kubernetes using Prometheus and Grafana.
Conduct security audits and vulnerability assessments, and implement image scanning and compliance controls.
Collaborate with developers and SREs to maintain secure and reliable cloud-native systems on AWS.
Write and maintain scripts in Python and Shell to support automation and operational tasks.
Ensure adherence to security best practices, including the CIS benchmarks for Kubernetes.

Job Description - Grade Specific
Strong hands-on experience with Kubernetes and related security features (e.g., Admission Controllers, PSPs, Network Policies).
Proficiency in AWS cloud services.
Hands-on experience with ArgoCD for GitOps-based delivery.
Expertise in Terraform, Helm, and Ansible.
Experience with monitoring and observability using Prometheus and Grafana.
Strong scripting skills in Python and Shell.
Experience with container security tools (e.g., image scanning, policy enforcement).
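Network Policies of the kind listed above usually start with a default-deny rule plus an explicit allow. A minimal sketch, assuming a hypothetical payments namespace with api and db pods:

```yaml
# Illustrative default-deny policy; namespace and labels are assumptions
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is denied
---
# Explicitly re-allow database traffic from the namespace's API pods only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-from-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: payments-api
```

Managed through ArgoCD, these policies become part of the Git-declared security baseline, so any out-of-band change shows up as drift.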

Posted 6 days ago

Apply

5.0 years

0 Lacs

madgaon

On-site

About the Role:
We are seeking an experienced Senior DevOps Engineer with strong expertise in AWS to join our growing team. You will be responsible for designing, implementing, and managing scalable, secure, and reliable cloud infrastructure. This role demands a proactive, highly technical individual who can drive DevOps practices across the organization and work closely with development, security, and operations teams.

Key Responsibilities:
Design, build, and maintain highly available cloud infrastructure using AWS services.
Implement and manage CI/CD pipelines for automated software delivery and deployment (see the sketch after this posting).
Collaborate with software engineers to ensure applications are designed for scalability, reliability, and performance.
Manage Infrastructure as Code (IaC) using tools like Terraform, AWS CloudFormation, or similar.
Optimize system performance, monitor production environments, and ensure system security and compliance.
Develop and maintain system and application monitoring, alerting, and logging using tools like CloudWatch, Prometheus, Grafana, or the ELK Stack.
Manage containerized applications using Docker and orchestration platforms such as Kubernetes (EKS preferred).
Conduct regular security assessments and audits, ensuring best practices are enforced.
Mentor and guide junior DevOps team members.
Continuously evaluate and recommend new tools, technologies, and best practices to improve infrastructure and deployment processes.

Required Skills and Qualifications:
5+ years of professional experience as a DevOps Engineer, with a strong focus on AWS.
Deep understanding of AWS core services (EC2, S3, RDS, IAM, Lambda, ECS, EKS, etc.).
Expertise with Infrastructure as Code (IaC): Terraform, CloudFormation, or similar.
Strong experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or AWS CodePipeline.
Hands-on experience with containerization (Docker) and orchestration (Kubernetes, EKS).
Proficiency in scripting languages (Python, Bash, Go, etc.).
Solid understanding of networking concepts (VPC, VPN, DNS, load balancers, etc.).
Experience implementing security best practices (IAM policies, KMS, WAF, etc.).
Strong troubleshooting and problem-solving skills.
Familiarity with monitoring and logging frameworks.
Good understanding of Agile/Scrum methodologies.

Preferred Qualifications:
AWS Certified DevOps Engineer - Professional or other AWS certifications.
Experience with serverless architectures and AWS Lambda functions.
Exposure to GitOps practices and tools like ArgoCD or Flux.
Experience with configuration management tools (Ansible, Chef, Puppet).
Knowledge of cost optimization strategies in cloud environments.
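A CI/CD pipeline of the kind this role manages can be sketched as a GitLab CI deploy job targeting EKS. This is a minimal sketch under stated assumptions: the cluster name, region, deployment name, and CI image are hypothetical, and ECR authentication plus the build stage are omitted for brevity.

```yaml
# Illustrative GitLab CI deploy job; cluster, region, and image names are assumptions
stages:
  - deploy

deploy_eks:
  stage: deploy
  image: registry.example.com/ci-tools:latest   # hypothetical image with aws-cli + kubectl
  script:
    # Fetch cluster credentials using the runner's IAM identity
    - aws eks update-kubeconfig --name prod-cluster --region ap-south-1
    # Roll the deployment to the image built earlier in the pipeline
    - kubectl set image deployment/web web="$ECR_REPO:$CI_COMMIT_SHORT_SHA"
    - kubectl rollout status deployment/web --timeout=120s
  environment: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"           # deploy only from the main branch
```

Teams moving toward the GitOps practices listed under preferred qualifications would typically replace the kubectl step with a commit to a manifest repository that ArgoCD or Flux reconciles.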

Posted 6 days ago

Apply

10.0 - 12.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Job Description
Qualifications and Experience
A Bachelor's degree in Engineering and around 10+ years of professional technology experience.
Experience deploying and running enterprise-grade public cloud infrastructure, preferably with GCP.
Hands-on automation with Terraform and Groovy, and experience with CI/CD (see the sketch after this posting).
Hands-on experience in Linux/Unix environments and scripting languages (e.g., Shell, Perl, Python, JavaScript, Golang).
Hands-on experience in two or more of the following areas:
Databases (NoSQL/SQL): Hadoop, Cassandra, MySQL
Messaging system configuration and maintenance (Kafka+Zookeeper, MQTT, RabbitMQ)
WAF, Cloud Armor, NGINX
Apache/Tomcat/JBoss-based web applications and services (REST)
Observability stacks (e.g., ELK, Grafana Labs)
Hands-on experience with Kubernetes (GKE, AKS).
Hands-on experience with Jenkins.
GitOps experience is a plus.
Experience working with large enterprise-grade SaaS products.
Proven capability for critical thinking, problem solving and the patience to see hard problems through to the end.
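The posting's CI/CD work would run on Jenkins, but a GCP-native pipeline step is easiest to sketch with Cloud Build, which expresses the same build-push-deploy flow declaratively. A minimal sketch, assuming hypothetical image, cluster, and location names; $PROJECT_ID and $SHORT_SHA are standard Cloud Build substitutions.

```yaml
# Illustrative GKE build-and-deploy pipeline; names and location are assumptions
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/orders-api:$SHORT_SHA", "."]

  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/orders-api:$SHORT_SHA"]

  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - run
      - --filename=k8s/                 # manifests kept in the repo
      - --image=gcr.io/$PROJECT_ID/orders-api:$SHORT_SHA
      - --cluster=prod-gke              # hypothetical cluster name
      - --location=asia-south1

images:
  - gcr.io/$PROJECT_ID/orders-api:$SHORT_SHA
```

An equivalent Jenkins pipeline would express the same three steps as stages in a Groovy Jenkinsfile; the deploy-from-Git structure is what matters for the GitOps experience the posting calls a plus.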

Posted 6 days ago

Apply

3.0 years

3 - 6 Lacs

india

On-site

Job Title: DevOps Engineer (3+ Years Experience)
Location: Delhi
Job Type: Full-time
Experience Required: Minimum 3 years

Job Summary:
We are seeking a highly skilled and motivated DevOps Engineer with a minimum of 3 years of hands-on experience in managing CI/CD pipelines, cloud infrastructure (preferably AWS), container orchestration, configuration management, and infrastructure monitoring. You will work closely with the development, QA, and IT teams to streamline deployments, ensure system reliability, and automate operational tasks.

Key Responsibilities:
Design, implement, and maintain CI/CD pipelines using GitLab CI or similar tools.
Manage and scale cloud infrastructure on AWS (EC2, S3, IAM, RDS, Route53, Lambda, CloudWatch, etc.).
Containerize applications using Docker and orchestrate them using Kubernetes (EKS preferred); see the autoscaling sketch after this posting.
Implement Infrastructure as Code (IaC) using Ansible, Terraform, or CloudFormation.
Maintain secure and scalable Linux server environments, ensuring optimal performance and uptime.
Write and maintain shell or Python scripts for automation and monitoring tasks.
Set up and manage monitoring, alerting, and logging systems using tools like Grafana, Prometheus, the ELK Stack (Elasticsearch, Logstash, Kibana), or CloudWatch.
Implement robust backup and disaster recovery strategies.
Collaborate with development teams on efficient DevSecOps practices, including secrets management and vulnerability scans.
Troubleshoot and resolve production issues, performing root cause analysis and preventive planning.

Required Skills and Experience:
3+ years of experience as a DevOps Engineer or in a similar role.
Proficient in GitLab (or GitHub Actions, Jenkins), including runners and CI/CD pipelines.
Strong hands-on experience with AWS services (EC2, RDS, S3, VPC, EKS, etc.).
Proficient with Docker and Kubernetes, including Helm, volumes, services, and autoscaling.
Solid experience with Ansible for configuration management and automation.
Good understanding of Linux systems administration and troubleshooting.
Strong scripting skills in Bash, Shell, or Python.
Experience with monitoring and alerting tools such as Grafana, Prometheus, or Zabbix.
Familiarity with log management tools (ELK Stack, Fluentd, or CloudWatch Logs).
Familiarity with SSL/TLS, DNS, load balancers (Nginx/HAProxy), and firewall/security configurations.
Knowledge of version control systems (Git), branching strategies, and GitOps practices.

Good to Have (Optional but Preferred):
Experience with Terraform or Pulumi for cloud infrastructure provisioning.
Knowledge of security compliance standards (ISO, SOC2, PCI DSS).
Experience with Kafka, RabbitMQ, or Redis.
Familiarity with service meshes like Istio or Linkerd.
Experience with cost optimization and autoscaling strategies on AWS.
Exposure to incident management tools (PagerDuty, Opsgenie).
Certification (e.g., AWS Certified DevOps Engineer, CKA, RHCE) is a plus.

Job Type: Full-time
Pay: ₹30,000.00 - ₹50,000.00 per month
Work Location: In person
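The Kubernetes autoscaling mentioned in this posting is most often expressed as a HorizontalPodAutoscaler. A minimal sketch, assuming a hypothetical web Deployment and a 70% CPU target; real thresholds depend on the workload's profile.

```yaml
# Illustrative HorizontalPodAutoscaler; deployment name and thresholds are assumptions
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # hypothetical deployment to scale
  minReplicas: 2                 # keep a floor for availability
  maxReplicas: 10                # cap spend during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU crosses 70%
```

On EKS this pairs naturally with the cluster-level autoscaling and cost-optimization strategies listed under the preferred qualifications.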

Posted 6 days ago

Apply