
407 GitOps Jobs - Page 4

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

6.0 years

0 Lacs

India

Remote


Experience Required: 6+ years
Location: Remote

Role Overview:
We are looking for a Data Site Reliability Engineer (SRE) with a strong background in infrastructure automation, continuous deployment, and Kubernetes-based environments. The ideal candidate should have hands-on experience with AWS, Terraform, Argo CD, GitLab, and Kubernetes to support and enhance high-availability, data-intensive systems.

Key Responsibilities:
- Ensure reliability, availability, and performance of cloud-native applications and data platforms.
- Design and maintain Kubernetes clusters and deployments using infrastructure as code.
- Automate CI/CD pipelines using GitLab and Argo CD for streamlined release processes.
- Write, manage, and maintain Terraform code for infrastructure provisioning and compliance.
- Monitor, troubleshoot, and improve system performance and reliability.
- Collaborate with development, DevOps, and data engineering teams to ensure best practices are implemented.
- Participate in incident management and post-mortem processes for system outages.

Required Skills & Experience:
- 6+ years of total experience in DevOps/SRE roles.
- 3+ years of hands-on experience with Kubernetes in production environments.
- 3+ years of experience with Terraform for infrastructure automation.
- 1–2 years of experience with Argo CD for GitOps workflows.
- 1–2 years of experience with GitLab CI/CD.
- Strong understanding of SRE principles, including observability, automation, and incident response.
- Familiarity with monitoring tools (e.g., Prometheus, Grafana) and logging systems.
- Experience working with containerized environments and microservices.
- Ability to operate independently in a remote, distributed team setting.
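Editor's illustration (not part of the listing): the posting above centres on GitOps-style deployments with Argo CD and GitLab. The minimal Python sketch below shows one common way such a step is scripted, triggering an Argo CD sync and waiting for the application to report healthy. It assumes the standard `argocd` CLI is installed and logged in; the application name "data-platform" is a hypothetical placeholder.

```python
import subprocess
import sys

def sync_and_wait(app_name: str, timeout_seconds: int = 300) -> None:
    """Trigger an Argo CD sync for `app_name` and block until it is healthy."""
    # Ask Argo CD to reconcile the app against the Git-declared state.
    subprocess.run(["argocd", "app", "sync", app_name], check=True)

    # Wait until the app reports Healthy (or the timeout expires).
    subprocess.run(
        ["argocd", "app", "wait", app_name,
         "--health", "--timeout", str(timeout_seconds)],
        check=True,
    )

if __name__ == "__main__":
    app = sys.argv[1] if len(sys.argv) > 1 else "data-platform"  # hypothetical app name
    sync_and_wait(app)
```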

Posted 3 days ago

Apply

0.0 - 10.0 years

0 Lacs

Gurugram, Haryana

On-site


DevOps Architect / Senior DevOps Engineer
Experience: 10+ years
Location: Noida
Employment Type: Full-Time

Job Summary:
We are seeking a highly skilled and experienced DevOps Architect / Senior DevOps Engineer with 10+ years of expertise in designing, implementing, and managing robust DevOps ecosystems across AWS, Azure, and GCP. The ideal candidate will possess a deep understanding of cloud infrastructure, automation, CI/CD pipelines, container orchestration, and infrastructure as code. This role is both strategic and hands-on, driving innovation, scalability, and operational excellence in cloud-native environments.

Key Responsibilities:
- Architect and manage DevOps solutions across multi-cloud platforms (AWS, Azure, GCP).
- Build and optimize CI/CD pipelines and release management processes.
- Define and enforce cloud-native best practices for scalability, reliability, and security.
- Design and implement Infrastructure as Code (IaC) using tools like Terraform, Ansible, CloudFormation, or ARM templates.
- Deploy and manage containerized applications using Docker and Kubernetes.
- Implement monitoring, logging, and alerting frameworks (e.g., ELK, Prometheus, Grafana, CloudWatch).
- Drive automation initiatives and eliminate manual processes across environments.
- Collaborate with development, QA, and operations teams to integrate DevOps culture and workflows.
- Lead cloud migration and modernization projects.
- Ensure compliance, cost optimization, and governance across environments.

Required Skills & Qualifications:
- 10+ years of experience in DevOps/Cloud/Infrastructure/SRE roles.
- Strong expertise in at least two major cloud platforms (AWS, Azure, GCP) with working knowledge of the third.
- Advanced knowledge of Docker, Kubernetes, and container orchestration.
- Deep understanding of CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps, ArgoCD).
- Hands-on experience with IaC tools: Terraform, Ansible, Pulumi, etc.
- Proficiency in scripting languages like Python, Shell, or Go.
- Strong background in networking, cloud security, and cost optimization.
- Experience with DevSecOps and integrating security into DevOps practices.
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- Relevant certifications preferred (e.g., AWS DevOps Engineer, Azure DevOps Expert, Google Professional DevOps Engineer).

Preferred Skills:
- Multi-cloud or hybrid cloud experience.
- Exposure to service mesh, API gateways, and serverless architectures.
- Familiarity with GitOps, policy-as-code, and site reliability engineering (SRE) principles.
- Experience in high-availability, disaster recovery, and compliance (SOC 2, ISO, etc.).
- Agile/Scrum or SAFe experience in enterprise environments.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year
Ability to commute/relocate: Gurgaon, Haryana: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps: 10 years (Required)
Work Location: In person
Speak with the employer: +91 8580563551
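Editor's illustration (not part of the listing): the posting above asks for hands-on IaC work with Terraform. This hedged sketch shows the typical init/plan/apply sequence wrapped in Python; it assumes the Terraform CLI is installed and that the directory passed in contains valid configuration. The directory name is a placeholder.

```python
import subprocess

def terraform_deploy(workdir: str, plan_file: str = "tfplan") -> None:
    """Minimal, generic Terraform run: init, plan to a file, apply that plan."""
    common = {"cwd": workdir, "check": True}

    # Initialise providers and backend state.
    subprocess.run(["terraform", "init", "-input=false"], **common)

    # Produce a reviewable execution plan and save it to a file.
    subprocess.run(["terraform", "plan", "-input=false", "-out", plan_file], **common)

    # Apply exactly the plan that was produced above.
    subprocess.run(["terraform", "apply", "-input=false", plan_file], **common)

if __name__ == "__main__":
    terraform_deploy("./infrastructure")  # hypothetical directory name
```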

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderābād

On-site

We are seeking a talented and motivated DevOps Engineer with strong expertise in Azure and AWS Cloud environments. This individual will be responsible for automating infrastructure provisioning, deployment pipelines, version control, and continuous integration/continuous deployment (CI/CD) processes. You will leverage your knowledge of ARM templates, Bicep templates, CloudFormation, and CDK to optimize and manage infrastructure. The ideal candidate will also be proficient in scripting and automation tools, such as Python and/or .NET, and have a strong understanding of versioning practices.

Key Responsibilities:
- Design, implement, and manage infrastructure-as-code (IaC) using tools such as ARM templates, Bicep templates, AWS CloudFormation, and CDK (with Python or .NET).
- Develop and maintain CI/CD pipelines to automate code deployments and application updates on Azure and AWS Cloud platforms.
- Work closely with development, operations, and security teams to ensure seamless integration, automated testing, and smooth application deployment.
- Manage and provision cloud resources on Azure and AWS using native and third-party tools.
- Implement version control strategies to maintain source code and configuration in Git repositories.
- Monitor cloud environments for performance, security, and reliability, troubleshooting and resolving issues as they arise.
- Ensure cloud infrastructure is secure, compliant with organizational standards, and continuously optimized for cost-efficiency.
- Collaborate with cross-functional teams to enhance the deployment process and improve system reliability.
- Provide support and guidance for cloud infrastructure upgrades, scaling, and disaster recovery planning.

Required Skills and Qualifications:
- Proven experience as a DevOps Engineer or similar role with a strong background in cloud platforms (Azure and AWS).
- Expertise in Infrastructure-as-Code (IaC) using tools such as ARM templates, Bicep templates, AWS CloudFormation, and CDK (Python and/or .NET).
- Strong experience with CI/CD pipelines and version control tools (Git, GitHub, GitLab, etc.).
- Proficiency in cloud automation and orchestration frameworks and tools.
- Experience working with Azure DevOps, Jenkins, Terraform, or similar CI/CD tools.
- Knowledge of scripting languages such as Python, Bash, or PowerShell.
- Understanding of containerization technologies such as Docker and orchestration tools like Kubernetes.
- Familiarity with monitoring tools and cloud resource management (e.g., Azure Monitor, AWS CloudWatch).
- Strong knowledge of cloud security best practices and governance.
- Excellent problem-solving, troubleshooting, and debugging skills.
- Familiarity with Agile and DevOps methodologies.
- Ability to work independently and collaborate effectively in a team-oriented environment.

Preferred Skills:
- Experience with serverless architecture and platforms like AWS Lambda or Azure Functions.
- Knowledge of Microsoft PowerShell and Azure CLI.
- Exposure to database management in cloud environments (SQL, NoSQL, etc.).
- Experience with cloud cost management and optimization tools.
- Familiarity with GitOps principles.
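Editor's illustration (not part of the listing): the posting above mentions defining infrastructure with the AWS CDK in Python. The hedged sketch below shows a minimal CDK v2 stack; the stack and bucket names and the chosen settings are illustrative assumptions only.

```python
# Minimal AWS CDK (Python, CDK v2) sketch of the kind of IaC described above.
from aws_cdk import App, Stack, RemovalPolicy, aws_s3 as s3
from constructs import Construct

class ArtifactStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # A versioned, encrypted bucket for build artifacts (hypothetical use case).
        s3.Bucket(
            self,
            "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
ArtifactStack(app, "artifact-stack")
app.synth()  # emits a CloudFormation template under cdk.out/
```

Running `cdk deploy` against this app would provision the bucket through CloudFormation; the same pattern extends to pipelines, networking, and other resources the posting lists.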

Posted 3 days ago

Apply

5.0 - 7.0 years

4 - 6 Lacs

Hyderābād

On-site

5–7 years of experience in DevOps. Skills: CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps), Terraform, Ansible playbooks, JIRA, monitoring tools (Dynatrace, Grafana, etc.).

Must-Have:
- Design, implement, and manage CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, and Azure DevOps to automate and optimize development workflows.
- Use Terraform for infrastructure as code (IaC) to provision, configure, and manage cloud resources in AWS, Azure, or other cloud environments.
- Develop and maintain Ansible playbooks for automation of server configurations, application deployment, and orchestration tasks.
- Implement and maintain monitoring and alerting solutions using tools like Dynatrace, Grafana, and other performance monitoring platforms to ensure the health and reliability of systems.
- Collaborate with development teams to improve application scalability, reliability, and performance in production environments.
- Troubleshoot, identify, and resolve issues related to infrastructure, applications, and DevOps processes.
- Maintain version control repositories and ensure best practices for code quality, deployments, and rollbacks.
- Participate in the setup and management of cloud environments, ensuring secure, cost-efficient, and scalable infrastructures.
- Work closely with developers to ensure smooth deployments, automated testing, and continuous integration of new features.
- Track and manage tasks and projects using JIRA to collaborate effectively with stakeholders.
- Stay up to date with industry trends and best practices related to DevOps, automation, and cloud technologies.
- Proficient with version control systems such as Git.
- Strong experience with cloud platforms (e.g., AWS, Azure, or Google Cloud).
- JIRA experience for task management and tracking project progress.
- Strong scripting skills (e.g., Bash, Python, PowerShell).
- Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
- Excellent troubleshooting and problem-solving skills.
- Strong communication skills to collaborate with cross-functional teams effectively.

Nice-to-Have:
- Familiarity with monitoring and log aggregation tools such as Prometheus, ELK Stack, or Splunk.
- Knowledge of serverless computing and architectures.
- Familiarity with GitOps or other DevOps methodologies.
- Exposure to Agile development practices and working in an Agile environment.

Your future duties and responsibilities
Required qualifications to be successful in this role
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
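Editor's illustration (not part of the listing): the posting above covers monitoring and alerting work. A hedged sketch of the kind of check this often involves is shown below, querying a Prometheus server's HTTP API for scrape targets that are down. The server URL is an assumed placeholder; Dynatrace or Grafana-based setups would use different APIs.

```python
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical address

def down_instances() -> list[str]:
    # `up == 0` returns one series per scrape target that is currently down.
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [series["metric"].get("instance", "<unknown>") for series in results]

if __name__ == "__main__":
    for instance in down_instances():
        print(f"ALERT: {instance} is not responding to scrapes")
```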

Posted 3 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Organization: Leading global management consulting firm
Role: Sr Manager - AI/LLM Ops
Location: Gurugram
Experience: 12+ years
Working Mode: Hybrid

What You'll Do
We are seeking a highly skilled and experienced Platform Engineer Senior Manager with Gen AI/LLM expertise to join our Gen AI platform team. This role focuses on building and maintaining cloud-native platforms specifically designed for Generative AI and Large Language Model (LLM) development. The ideal candidate will have a strong background in platform engineering, cloud infrastructure, LLM observability and DevOps/MLOps/LLMOps, with a passion for creating scalable, reliable, and efficient systems.

What You'll Bring
- Bachelor's degree in computer science engineering (or equivalent degree or experience).
- Proven experience building and supporting scalable, reliable Gen AI or LLM-powered applications in production.
- Proven experience in building and enabling self-serve observability tools and dashboards for engineering teams.
- 12+ years of relevant experience in delivering and maintaining platform or complex technology solutions with a strong technical background, preferably in a global organization/enterprise.
- 1+ years of experience in developing, deploying, testing, and maintaining production-grade access to LLMs and related services.
- Strong expertise in modern engineering, DevOps/GitOps, and Infrastructure-as-Code practices; strong understanding of CI/CD, AI/ML pipelines and automation tools.

Experience & Skills (Mandatory)
- Strong knowledge and experience in Generative AI/LLM-based development.
- Strong experience in building and integrating LLM observability tools.
- Strong experience in optimizing LLM performance, usage and cost.
- Strong experience in building CI/CD pipeline templates for the AI SDLC (MLOps, RAG pipelines, data ingestion pipelines).
- Proficiency in cloud platforms (AWS, Azure, GCP) and infrastructure as code (Terraform, Terraform Cloud, GitHub Actions, ArgoCD and equivalent).
- Experience with containerization and orchestration technologies (Kubernetes, Docker).
- Strong experience working with and enabling key LLM model APIs (e.g. AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g. LangChain, LlamaIndex).
- Expertise in building enterprise-grade, secure data ingestion pipelines for structured/unstructured data, including indexing, search, and advanced retrieval patterns.
- Proficiency in Python, Go, Terraform (and equivalent).
- Experience in LLM testing and evaluation is preferred.
- Experience with security related to LLM integration is preferred.
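Editor's illustration (not part of the listing): the role above emphasises LLM observability and cost tracking. The generic, hedged sketch below times a model call and derives an estimated cost from token counts; the per-1k-token prices and the `call_model` callable are hypothetical placeholders, not any vendor's actual pricing or API.

```python
import time
from dataclasses import dataclass

@dataclass
class LLMCallRecord:
    model: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    est_cost_usd: float

def observe_llm_call(model_name, call_model,
                     price_per_1k_prompt=0.01, price_per_1k_completion=0.03):
    """Run `call_model()` and return (text, LLMCallRecord).

    `call_model` must return (text, prompt_tokens, completion_tokens); in
    practice it would wrap a Bedrock / Azure OpenAI / OpenAI client call.
    Prices here are illustrative assumptions.
    """
    start = time.monotonic()
    text, prompt_tokens, completion_tokens = call_model()
    latency = time.monotonic() - start

    cost = (prompt_tokens / 1000) * price_per_1k_prompt \
         + (completion_tokens / 1000) * price_per_1k_completion
    record = LLMCallRecord(model_name, latency, prompt_tokens,
                           completion_tokens, round(cost, 6))
    return text, record
```

Records like these are what typically feed the self-serve dashboards and cost reports the posting describes.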

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote


We're hiring: Head of Cloud & DevOps (Remote)

Join us to lead and scale a global, decentralized cloud platform powering AI and blockchain workloads. We're looking for a hands-on leader with deep Kubernetes, DevOps, and infrastructure experience.

📍 100% Remote (India, Eastern Europe, UK, or US)

What you'll do:
- Lead our migration to Distributed Kubernetes Service (DKS)
- Own cloud infrastructure, uptime (99.9%+), and scaling
- Drive CI/CD, GitOps, IaC (Terraform, Helm)
- Ensure security, compliance (SOC 2, ISO 27001)
- Build and lead a high-performing DevOps/SRE team

Must-have:
- 8+ yrs in cloud infrastructure
- 4+ yrs in Kubernetes/DevOps leadership
- Strong experience with AWS, multi-cluster K8s, and automation

Interested candidates can share their resume at noopur.tandon@humansofev.com

#DevOps #CloudJobs #Kubernetes #RemoteJobs #TechLeadership #SRE #InfrastructureAsCode #Web3 #AIInfrastructure #Hiring

Posted 3 days ago

Apply

6.0 - 8.0 years

7 - 8 Lacs

Noida

On-site

Company Description

About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description
Role: AWS DevOps Engineer
Skillset: AWS, Terraform, CI/CD Pipeline & Kubernetes (AKS)
Experience: 6-8 years
Certification: AWS Certified
Location: Noida and Chennai

Professional Experience in:
- Cloud-native software architecture (Microservices, Patterns, DDD, …)
- Working with internal developer platforms such as backstage
- AWS Platform (Infrastructure, Services, Administration, Provisioning, Monitoring…)
- Managing Kubernetes (AKS): Operations, Networking, Autoscaling, High Availability
- GitOps based deployments (argocd)
- Working with basic web technologies (DNS, HTTPS, TLS, JWT, OAuth2.0, OIDC, etc…)
- Container Technology such as Docker
- Infrastructure as Code via Terraform

Experience in:
- Cloud Monitoring, Alerting, Observability mechanisms e.g. via Elastic Stack
- Defining, Implementing and Maintaining build and release pipelines -> Continuous Deployment
- Working with git version control systems
- Code Collaboration Platforms (AWS DevOps, GitHub)
- Cloud Security patterns
- Developing software based on Spring & Spring Boot
- Preparation of Release and deployment scripts (in test and production environments)

Qualifications
BE, MCA, BTech

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Strictly Hyderabad-based candidates only. Looking for immediate joiners.

This requirement is for a US-based product company that is in the process of establishing its Offshore Development Centre in Hyderabad, India. You will be part of the initial team and will play a crucial role in building a world-class enterprise product. If you are passionate, self-motivated, a go-getter and want to explore new horizons, then you are the one we are looking for. Please apply.

Experience & Skills Required:
- 3-5 years of experience.
- Strong DevOps and Cloud experience with deep Kubernetes skills.
- Expert in building and maintaining Helm charts.
- Strong knowledge of containers, CI/CD and GitOps principles.
- Good understanding of cloud technology concepts.
- In-depth knowledge of build/release systems and CI/CD technologies (Jenkins, ArgoCD).
- In-depth knowledge of operational visibility aspects and tools like Prometheus, Grafana, OpenSearch, Elastic APM, Dynatrace.
- Strong experience in writing Dockerfiles and building images.
- Working knowledge of Agile principles, Scrum, Test Driven Development and test automation.

Responsibilities:
- As a DevOps Engineer you will be responsible for the effective and efficient delivery of environments in the cloud and ensure delivery according to defined Service Level Agreements.
- You will get to know various technologies used by development and operations.
- You will produce solutions to enhance stability of the environments as well as provide visibility through monitoring and dashboards.
- You will work on creating stable and reliable CI/CD pipelines for our products.
- You will be responsible for providing support around the complete lifecycle of Kubernetes-based applications.
- You will be involved in blueprinting and productization of modern technologies, e.g. containerized environments.
- You interact with your peers to share your insight, identify areas of improvement and constantly learn about new technologies.
- You will work in an innovative environment with highly motivated colleagues across different organizations.
- Lead design and development of containers/K8s-based solutions and full-stack applications, ensuring the offered environments are highly stable.
- Create required dashboards to offer high operational monitoring visibility.
- Adopt the latest open-source tools and technologies.
- Build/enhance/evaluate tools for build, test and deployment automation to meet business needs with respect to functionality, performance, scalability and other quality goals.
- Apply technical expertise to challenging programming and design problems.
- Be passionate about keeping up to date with the latest technology and developing well-architected tools and services.
- Candidates who have worked in a startup environment will have an advantage.
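Editor's illustration (not part of the listing): the posting above calls for strong Helm chart skills. A hedged sketch of a scripted Helm release is shown below; the chart path, release name, namespace, and values file are placeholders, and the `helm` CLI plus a working kubeconfig are assumed.

```python
import subprocess

def deploy_chart(release: str, chart_path: str, namespace: str,
                 values_file: str | None = None) -> None:
    """Install or upgrade a Helm release and wait for resources to be ready."""
    cmd = [
        "helm", "upgrade", "--install", release, chart_path,
        "--namespace", namespace, "--create-namespace",
        "--wait",  # block until the release's resources report ready
    ]
    if values_file:
        cmd += ["-f", values_file]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical chart and environment names for illustration only.
    deploy_chart("web-api", "./charts/web-api", "staging", "values-staging.yaml")
```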

Posted 3 days ago

Apply

3.0 years

0 Lacs

Delhi, India

Remote


About The Job:
Red Hat's Services team is seeking an experienced and highly skilled support engineer or systems administrator with an overall 3-5 years of experience to join us as a Technical Account Manager for our enterprise customers, covering Middleware and Red Hat OpenShift Container Platform. In this role, you'll provide personalized, proactive technology engagement and guidance, and cultivate high-value relationships with clients as you seek to understand and meet their needs with the complete Red Hat portfolio of products. As a Technical Account Manager, you will provide a level of premium advisory-based support that builds, maintains, and grows long-lasting customer loyalty by tailoring support for each of our customers' environments, facilitating collaboration with their other vendors, and advocating on the customer's behalf. At the same time, you'll work closely with our Engineering, R&D, Product Management, Global Support, Sales & Services teams to debug, test, and resolve issues.

What Will You Do:
- Perform technical reviews and share knowledge to proactively identify and prevent issues.
- Understand your customers' technical infrastructures, hardware, processes, and offerings.
- Perform initial or secondary investigations and respond to online and phone support requests.
- Deliver key portfolio updates and assist customers with upgrades.
- Manage customer cases and maintain clear and concise case documentation.
- Create customer engagement plans and keep documentation on customer environments updated.
- Ensure a high level of customer satisfaction with each qualified engagement through the complete adoption life cycle of our offerings.
- Engage with Red Hat's field teams and customers to ensure a positive Red Hat product and technology experience and a successful outcome resulting in long-term success.
- Communicate how specific Red Hat product roadmaps align to customer use cases.
- Capture Red Hat product capabilities and identify gaps as related to customer use cases through a closed-loop process for each step of the engagement life cycle.
- Engage with Red Hat's product engineering teams to help develop solution patterns based on customer engagements and personal experience that guide platform adoption.
- Contribute internally to the Red Hat team, share knowledge and best practices with team members, contribute to internal projects and initiatives, and serve as a subject matter expert (SME) and mentor for specific technical and process areas.
- Travel to visit customers, partners, conferences, and other events as needed.

What Will You Bring:
- Bachelor's degree in science or a technical field, such as engineering or computer science.
- Competent reading and writing skills in English.
- Ability to effectively manage and grow existing enterprise customers by delivering proactive, relationship-based, best-in-class support.
- Upstream involvement in open source projects is a plus.
- Indian citizenship or authorization to work in India.

Middleware:
- Java coding skills and a solid understanding of the JEE platform and Java programming APIs.
- Hands-on experience with Java application platforms such as JBoss, WebSphere, and WebLogic.
- Integration experience working with Red Hat 3scale API Management and SSO runtimes.
- Experience working with microservices development using Spring Boot and developing cloud-native applications.

RHOCP:
- Experience in Red Hat OpenShift Container Platform (RHOCP), Kubernetes, or Docker cluster management.
- Understanding of RHOCP architecture and different types of RHOCP installations.
- Experience in RHOCP troubleshooting and data/log collection.
- Strong working knowledge of Prometheus, Grafana, GitOps (ArgoCD), ACS, and ACM will be considered a plus.

About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
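Editor's illustration (not part of the listing): the TAM role above involves RHOCP/Kubernetes troubleshooting and data/log collection. The hedged sketch below uses the official Kubernetes Python client to list pods that are not in a healthy phase; the namespace is a placeholder, and the cluster is assumed reachable via the local kubeconfig (OpenShift clusters expose the same Kubernetes API).

```python
from kubernetes import client, config

def unhealthy_pods(namespace: str = "openshift-monitoring"):
    """Return (name, phase) for pods that are neither Running nor Succeeded."""
    config.load_kube_config()           # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    problems = []
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.name, pod.status.phase))
    return problems

if __name__ == "__main__":
    for name, phase in unhealthy_pods():
        print(f"{name}: {phase}")
```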

Posted 3 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


About The Job
Be the expert customers turn to when they need to build strategic, scalable systems. Red Hat Services is looking for a well-rounded Architect to join our team in Mumbai covering Asia Pacific. In this role, you will design and implement modern platforms, onboard and build cloud-native applications, and lead architecture engagements using the latest open source technologies. You’ll be part of a team of consultants who are leaders in open hybrid cloud, platform modernisation, automation, and emerging practices - including foundational AI integration. Working in agile teams alongside our customers, you’ll build, test, and iterate on innovative prototypes that drive real business outcomes. This role is ideal for architects who can work across application, infrastructure, and modern AI-enabling platforms like Red Hat OpenShift AI. If you're passionate about open source, building solutions that scale, and shaping the future of how enterprises innovate, this is your opportunity.

What Will You Do
- Design and implement modern platform architectures with a strong understanding of Red Hat OpenShift, container orchestration, and automation at scale.
- Manage “Day-2” operations of Kubernetes container platforms by collaborating with infrastructure teams in defining practices for platform deployment, platform hardening, platform observability, monitoring and alerting, capacity management, scalability, resiliency, and security operations.
- Lead the discovery, architecture, and delivery of modern platforms and cloud-native applications, using technologies such as containers, APIs, microservices, and DevSecOps patterns.
- Collaborate with customer teams to co-create AI-ready platforms, enabling future use cases with foundational knowledge of AI/ML workloads.
- Remain hands-on with development and implementation, especially in prototyping, MVP creation, and agile iterative delivery.
- Present strategic roadmaps and architectural visions to customer stakeholders, from engineers to executives.
- Support technical presales efforts, workshops, and proofs of concept, bringing in business context and value-first thinking.
- Create reusable reference architectures, best practices, and delivery models, and mentor others in applying them.
- Contribute to the development of standard consulting offerings, frameworks, and capability playbooks.

What Will You Bring
- Strong experience with Kubernetes, Docker, and Red Hat OpenShift or equivalent platforms.
- In-depth expertise in managing multiple Kubernetes clusters across multi-cloud environments.
- Proven expertise in operationalisation of Kubernetes container platforms through the adoption of Service Mesh, GitOps principles, and Serverless frameworks.
- Migrating from XKS to OpenShift.
- Proven leadership of modern software and platform transformation projects.
- Hands-on coding experience in multiple languages (e.g., Java, Python, Go).
- Experience with infrastructure as code, automation tools, and CI/CD pipelines.
- Practical understanding of microservices, API design, and DevOps practices.
- Applied experience with agile, scrum, and cross-functional team collaboration.
- Ability to advise customers on platform and application modernisation, with awareness of how platforms support emerging AI use cases.
- Excellent communication and facilitation skills with both technical and business audiences.
- Willingness to travel up to 40% of the time.

Nice To Have
- Experience with Red Hat OpenShift AI, Open Data Hub, or similar MLOps platforms.
- Foundational understanding of AI/ML, including containerized AI workloads, model deployment, and open source AI frameworks.
- Familiarity with AI architectures (e.g., RAG, model inference, GPU-aware scheduling).
- Engagement in open source communities or contributor background.

About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are seeking a talented and motivated DevOps Engineer with strong expertise in Azure and AWS Cloud environments. This individual will be responsible for automating infrastructure provisioning, deployment pipelines, version control, and continuous integration/continuous deployment (CI/CD) processes. You will leverage your knowledge of ARM templates, Bicep templates, CloudFormation, and CDK to optimize and manage infrastructure. The ideal candidate will also be proficient in scripting and automation tools, such as Python and/or .NET, and have a strong understanding of versioning practices.

Key Responsibilities:
- Design, implement, and manage infrastructure-as-code (IaC) using tools such as ARM templates, Bicep templates, AWS CloudFormation, and CDK (with Python or .NET).
- Develop and maintain CI/CD pipelines to automate code deployments and application updates on Azure and AWS Cloud platforms.
- Work closely with development, operations, and security teams to ensure seamless integration, automated testing, and smooth application deployment.
- Manage and provision cloud resources on Azure and AWS using native and third-party tools.
- Implement version control strategies to maintain source code and configuration in Git repositories.
- Monitor cloud environments for performance, security, and reliability, troubleshooting and resolving issues as they arise.
- Ensure cloud infrastructure is secure, compliant with organizational standards, and continuously optimized for cost-efficiency.
- Collaborate with cross-functional teams to enhance the deployment process and improve system reliability.
- Provide support and guidance for cloud infrastructure upgrades, scaling, and disaster recovery planning.

Required Skills and Qualifications:
- Proven experience as a DevOps Engineer or similar role with a strong background in cloud platforms (Azure and AWS).
- Expertise in Infrastructure-as-Code (IaC) using tools such as ARM templates, Bicep templates, AWS CloudFormation, and CDK (Python and/or .NET).
- Strong experience with CI/CD pipelines and version control tools (Git, GitHub, GitLab, etc.).
- Proficiency in cloud automation and orchestration frameworks and tools.
- Experience working with Azure DevOps, Jenkins, Terraform, or similar CI/CD tools.
- Knowledge of scripting languages such as Python, Bash, or PowerShell.
- Understanding of containerization technologies such as Docker and orchestration tools like Kubernetes.
- Familiarity with monitoring tools and cloud resource management (e.g., Azure Monitor, AWS CloudWatch).
- Strong knowledge of cloud security best practices and governance.
- Excellent problem-solving, troubleshooting, and debugging skills.
- Familiarity with Agile and DevOps methodologies.
- Ability to work independently and collaborate effectively in a team-oriented environment.

Preferred Skills:
- Experience with serverless architecture and platforms like AWS Lambda or Azure Functions.
- Knowledge of Microsoft PowerShell and Azure CLI.
- Exposure to database management in cloud environments (SQL, NoSQL, etc.).
- Experience with cloud cost management and optimization tools.
- Familiarity with GitOps principles.
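Editor's illustration (not part of the listing): the posting above mentions cloud monitoring with AWS CloudWatch. The hedged boto3 sketch below creates a CPU alarm on an EC2 instance; the region, instance ID, alarm name, thresholds, and SNS topic ARN are placeholders, and the topic is assumed to already exist.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="demo-high-cpu",                      # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                     # 5-minute datapoints
    EvaluationPeriods=3,                            # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```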

Posted 3 days ago

Apply

8.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site


Job Requirements
We are looking for a seasoned DevOps Architect to lead the design and implementation of automated, scalable, and secure DevOps pipelines and infrastructure. The ideal candidate will bridge development and operations by architecting robust CI/CD processes, ensuring infrastructure as code (IaC) adoption, promoting a culture of automation, and enabling rapid software delivery across cloud and hybrid environments.

Key Roles and Responsibilities
- Design end-to-end DevOps architecture and tooling that supports development, testing, and deployment workflows.
- Define best practices for source control, build processes, code quality, and artifact repositories.
- Collaborate with stakeholders to align DevOps initiatives with business and technical goals.
- Architect and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, Azure DevOps, CircleCI, or ArgoCD.
- Ensure pipelines are scalable, efficient, and support multi-environment deployments.
- Integrate automated testing, security scanning, and deployment verifications into pipelines.
- Lead the implementation of IaC using tools like Terraform, AWS CloudFormation, Azure ARM, or Pulumi.
- Enforce version-controlled infrastructure and promote immutable infrastructure principles.
- Manage infrastructure changes through GitOps practices and reviews.
- Design and support containerized workloads using Docker and orchestration platforms like Kubernetes or OpenShift.
- Implement Helm charts, Operators, and auto-scaling strategies.
- Architect cloud-native infrastructure in AWS, Azure, or GCP for microservices applications.
- Set up observability frameworks using tools like Prometheus, Grafana, ELK/EFK, Splunk, or Datadog.
- Implement alerting mechanisms and dashboards for system health and performance.
- Participate in incident response, root cause analysis, and postmortem reviews.
- Integrate security practices (DevSecOps) into all phases of the delivery pipeline.
- Enforce policies for secrets management, access controls, and software supply chain integrity.
- Ensure compliance with regulations like SOC 2, HIPAA, or ISO 27001.
- Automate repetitive tasks such as provisioning, deployments, and environment setups.
- Integrate tools across the DevOps lifecycle, including Jira, ServiceNow, SonarQube, Nexus, etc.
- Promote the use of APIs and scripting to streamline DevOps workflows.
- Act as a DevOps evangelist, mentoring engineering teams on best practices.
- Drive adoption of Agile and Lean principles within infrastructure and operations.
- Facilitate knowledge sharing through documentation, brown-bag sessions, and training.

Work Experience
- Bachelor's/Master's degree in Computer Science, Engineering, or related discipline.
- 8+ years of IT experience, with at least 3 in a senior DevOps role.
- Deep experience with CI/CD tools and DevOps automation frameworks.
- Proficiency in scripting (Bash, Python, Go, or PowerShell).
- Hands-on experience with one or more public cloud platforms: AWS, Azure, or GCP.
- Strong understanding of GitOps, configuration management (Ansible, Chef, Puppet), and observability tools.
- Experience managing infrastructure and deploying applications in Kubernetes-based environments.
- Knowledge of software development lifecycle and Agile/Scrum methodologies.
- Certifications such as: AWS Certified DevOps Engineer – Professional, Azure DevOps Engineer Expert, Certified Kubernetes Administrator (CKA) or Developer (CKAD), Terraform Associate.
- Experience implementing FinOps practices and managing cloud cost optimization.
- Familiarity with service mesh (Istio, Linkerd) and serverless architectures.
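Editor's illustration (not part of the listing): the responsibilities above include auto-scaling strategies for Kubernetes workloads. The hedged sketch below creates a HorizontalPodAutoscaler with the official Kubernetes Python client; the deployment name, namespace, and replica/CPU limits are assumptions for illustration only.

```python
from kubernetes import client, config

def create_hpa(deployment: str, namespace: str = "default") -> None:
    """Attach a CPU-based HPA (autoscaling/v1) to an existing Deployment."""
    config.load_kube_config()

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name=f"{deployment}-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name=deployment,
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale when average CPU > 70%
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa,
    )

if __name__ == "__main__":
    create_hpa("payments-api")  # hypothetical deployment name
```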

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position: DevOps Engineer
Experience Required: 4+ years
Employment Type: Full-Time
Location: Pune

Role Summary:
We are seeking skilled DevOps Engineers with at least 4 years of experience in managing cloud infrastructure, automation, and modern CI/CD workflows. This role requires strong hands-on expertise in designing, deploying, and maintaining scalable cloud environments using Infrastructure-as-Code (IaC) principles. Candidates must be comfortable working with container technologies, cloud security, networking, and monitoring tools to ensure system efficiency and reliability in large-scale applications.

Key Responsibilities:
- Design and manage cloud infrastructure using platforms like AWS, Azure, or GCP.
- Write and maintain Infrastructure-as-Code (IaC) using tools such as Terraform or CloudFormation.
- Develop and manage CI/CD pipelines with tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
- Deploy and manage containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Ensure security compliance with frameworks including SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Lead and support cloud migration projects from on-premise to cloud infrastructure.
- Implement and fine-tune monitoring and alerting systems using tools such as Datadog, Dynatrace, CloudWatch, Prometheus, ELK, or Splunk.
- Automate infrastructure setup and configuration with Ansible, Chef, Puppet, or equivalent tools.
- Diagnose and resolve complex issues involving cloud performance, networking, and server management.
- Collaborate across development, security, and operations teams to enhance DevSecOps practices.

Required Skills & Experience:
- 3+ years in a DevOps, cloud infrastructure, or platform engineering role.
- Strong knowledge and hands-on experience with AWS Cloud.
- In-depth experience with Kubernetes, ECS, OpenShift, and container orchestration.
- Skilled in writing IaC using Terraform, CloudFormation, or similar tools.
- Proficiency in automation using Python, Bash, or PowerShell.
- Familiar with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Solid background in Linux distributions (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server environments.
- Strong grasp of networking concepts: VPCs, subnets, load balancers, firewalls, and security groups.
- Experience working with monitoring/logging platforms such as Datadog, Prometheus, ELK, Dynatrace, etc.
- Excellent communication skills and a collaborative mindset.
- Understanding of cloud security practices including IAM policies, WAF, GuardDuty, and vulnerability management.

Preferred/Good-to-Have Skills:
- Exposure to cloud-native security platforms (e.g., AWS Security Hub, Azure Security Center, Google SCC).
- Familiarity with regulatory compliance standards like SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Experience managing Windows Server environments in tandem with Linux.
- Understanding of centralized logging tools such as Splunk, Fluentd, or AWS OpenSearch.
- Knowledge of GitOps methodologies using tools like ArgoCD or Flux.
- Background in penetration testing, threat detection, and security assessments.
- Proven experience with cloud cost optimization strategies.
- A passion for coaching, mentoring, and sharing DevOps best practices within the team.
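Editor's illustration (not part of the listing): the posting above lists GuardDuty and vulnerability management among the expected cloud-security skills. The hedged boto3 sketch below pulls recent GuardDuty findings so they can be triaged or forwarded to an alerting channel; the region and result limit are placeholders, and an enabled detector is assumed.

```python
import boto3

def recent_guardduty_findings(region: str = "ap-south-1", limit: int = 20):
    """Return full finding documents for the most recently listed findings."""
    gd = boto3.client("guardduty", region_name=region)
    findings = []
    for detector_id in gd.list_detectors()["DetectorIds"]:
        ids = gd.list_findings(DetectorId=detector_id, MaxResults=limit)["FindingIds"]
        if ids:
            findings += gd.get_findings(DetectorId=detector_id,
                                        FindingIds=ids)["Findings"]
    return findings

if __name__ == "__main__":
    for f in recent_guardduty_findings():
        print(f'{f["Severity"]:>4}  {f["Type"]}  {f["Title"]}')
```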

Posted 3 days ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site


Required Qualifications & Skills:
- 5+ years in DevOps, SRE, or Infrastructure Engineering.
- Strong expertise in Cloud (AWS/GCP/Azure) & Infrastructure-as-Code (Terraform, CloudFormation).
- Proficient in Docker & Kubernetes.
- Hands-on with CI/CD tools & scripting (Bash, Python, or Go).
- Strong knowledge of Linux, networking, and security best practices.
- Experience with monitoring & logging tools (ELK, Prometheus, Grafana).
- Familiarity with GitOps, Helm charts & automation.

Key Responsibilities:
- Design & manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions).
- Automate infrastructure provisioning (Terraform, Ansible, Pulumi).
- Monitor & optimize cloud environments (AWS, GCP, Azure).
- Implement containerization & orchestration (Docker, Kubernetes - EKS/GKE/AKS).
- Maintain logging, monitoring & alerting (ELK, Prometheus, Grafana, Datadog).
- Ensure system security, availability & performance tuning.
- Manage secrets & credentials (Vault, AWS Secrets Manager).
- Troubleshoot infrastructure & deployment issues.
- Implement blue-green & canary deployments.
- Collaborate with developers to enhance system reliability & productivity.

Preferred Skills:
- Certifications (AWS DevOps Engineer, CKA/CKAD, Google Cloud DevOps Engineer).
- Experience with multi-cloud, microservices, event-driven systems.
- Exposure to AI/ML pipelines & data engineering workflows.
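Editor's illustration (not part of the listing): the responsibilities above include managing secrets and credentials with Vault or AWS Secrets Manager. The hedged sketch below reads a secret from Vault's KV v2 engine with the hvac client; the Vault address, token handling, and secret path are assumptions for the sketch only, and production setups normally use short-lived auth methods rather than a raw token.

```python
import os
import hvac

def read_db_credentials(path: str = "myapp/database"):
    """Fetch username/password from a KV v2 secret (hypothetical path)."""
    vault = hvac.Client(
        url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
        token=os.environ["VAULT_TOKEN"],
    )
    secret = vault.secrets.kv.v2.read_secret_version(path=path)
    data = secret["data"]["data"]          # KV v2 nests the payload under data.data
    return data["username"], data["password"]

if __name__ == "__main__":
    user, _password = read_db_credentials()
    print(f"fetched credentials for user {user!r}")
```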

Posted 4 days ago

Apply

15.0 years

0 Lacs

Delhi, India

Remote


Educational Qualifications: BE/B Tech/M.E/M

Role Objective:
- Lead the operations of UIDAI's critical infrastructure, primarily hosted on an OpenStack on-premise private cloud architecture, ensuring 24/7 availability of Aadhaar services.
- Manage a team of experts to design application deployment architecture to ensure high availability.
- Manage a team of experts to provide infra-deployment guidelines to bake into app design.
- Ensure robust security, scalability, and reliability of UIDAI's data centres and networks.
- Participate in architectural design review sessions, develop proofs of concept/pilots, implement projects, and deliver ongoing upgrades and enhancements.
- Revamp applications for Aadhaar's private cloud deployment in today's constantly shifting digital landscape to increase operational efficiency and reduce infrastructure costs.

Role & Responsibilities:

Innovation & Technology Transformation:
- Align with the Vision, Mission and Core Values of UIDAI while closely aligning with inter-disciplinary teams.
- Lead the Cloud Operations/Infra team in fine-tuning and optimization of cloud-native platforms to improve performance and achieve cost efficiency.
- Drive solution design for RFPs, POCs, and pilots for new and upcoming projects or R&D initiatives, using open-source cloud and infrastructure to build a scalable and elastic data centre.
- Encourage and create an environment for knowledge sharing within and outside UIDAI.
- Interact/partner with leading institutes/R&D establishments/educational institutions to stay up to date with new technologies and trends in cloud computing.
- Be a thought leader in architecture design and development of complex operational data analytics solutions to monitor various metrics related to infrastructure and applications.

Architecture Design & Implementation:
- Lead the design, implementation, and deployment of OpenStack-based on-premise private cloud infrastructure.
- Develop scalable, secure, and highly available cloud architectures to meet business and operational needs.
- Architect and design infrastructure solutions that support both virtualized and containerized workloads.

Solution Integration & Performance Monitoring:
- Integrate OpenStack with existing on-premise data centre systems, network infrastructure, and storage platforms.
- Work with cross-functional teams to ensure seamless integration of cloud solutions in UIDAI.
- Monitor cloud infrastructure performance and ensure efficient use of resources.
- Identify areas for improvement and implement optimizations to reduce costs and improve performance.

Security & Compliance:
- Implement security best practices for on-premise cloud environments, ensuring data protection and compliance with industry standards.
- Regularly perform security audits and vulnerability assessments to maintain a secure cloud.

Collaboration & Communication:
- Collaborate with internal teams (app development and security) to align cloud infrastructure with UIDAI's requirements and objectives, and manage seamless communication within tech teams and across the organization.
- Maintain detailed live documentation of cloud architecture, processes, and configurations to establish trails of decision-making and ensure transparency and accountability.

Role Requirements:
- More than 15 years of experience in technical, infra and app solutioning, and at least 7+ years of experience in spearheading large multi-disciplinary technology teams working across various domains in a leadership position.
- Excellent problem-solving and troubleshooting skills. Must have demonstrable experience in application performance analysis through low-level debugging.
- Experience on transformation projects for on-premise data solutions and open-source CMPs (OpenStack, CloudStack).
- Should be well versed with Site Reliability Engineering (SRE) concepts with a focus on extreme automation and infrastructure as code (IaC) methodologies, and have led such teams before, including experience with GitOps and platform automation tools like Terraform, Ansible, etc.
- Strong knowledge of Linux-based operating systems (Ubuntu, CentOS, RedHat, etc.).
- Strong understanding of HTTP/1.1, HTTP/2 with gRPC, and QUIC protocol functioning.
- Experience in system administration, server storage, networking, virtualization, data warehouse, data integration, data migration and Business Intelligence/Artificial Intelligence solutions on the cloud.
- Proficient in technology administration, remote infrastructure management, cloud assessment, QA, monitoring, and DevOps practices.
- Extensive experience in cloud platform architecture, private cloud deployment, and large-scale transformation or migration of applications to cloud-native platforms.
- Should have experience in building cloud-native platforms on Kubernetes, including awareness and experience of service mesh, cloud-native storage, integration with SAN and NAS, Kubernetes operators, CNI, CSI, CRI, etc.
- Should have strong networking expertise in terms of routing, switching, BGP, and technologies like TRILL, MP-BGP, EVPN, etc.
- Preferably, should have experience in SAN networking and Linux networking concepts like networking namespaces, route tables, and ss utilities.
- Experience with cloud and on-premise databases like Cloud SQL, Cloud Spanner, Bigtable, RDS, Aurora, DynamoDB, Oracle, Teradata, MySQL, DB2, SQL Server.
- Exposure to any of the NoSQL databases like MongoDB, CouchDB, Cassandra, graph databases, etc.
- Experience with MLOps pipelines is preferable.
- Experience with distributed computing platforms and enterprise environments like Hadoop and GCP/AWS/Azure Cloud is preferred.
- Experience with various data integration and ETL technologies on the cloud, like Spark, PySpark/Scala, and Dataflow, is preferred.

(ref: iimjobs.com)

Posted 4 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role Overview
We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience in log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like ArgoCD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience in cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.

Skills: GitLab CI/CD, CI/CD, cloud security, Infrastructure-as-Code (IaC), Terraform, GitHub Actions, Windows Server, DevOps, security, cloud infrastructure, Linux, Puppet, CI, AWS, Chef, scripting (Python, Bash, PowerShell), infrastructure, Azure, cloud, networking, GCP, automation, monitoring tools, Kubernetes, log management, Ansible, CD, Jenkins, containerization, monitoring tools (Datadog, Prometheus, ELK)

Posted 4 days ago

Apply

6.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Company Description

About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description
The world is how we shape it.

Role: AWS DevOps Engineer
Skillset: AWS, Terraform, CI/CD Pipeline & Kubernetes (AKS)
Experience: 6-8 years
Certification: AWS Certified
Location: Noida and Chennai

Professional Experience in:
- Cloud-native software architecture (Microservices, Patterns, DDD, …)
- Working with internal developer platforms such as backstage
- AWS Platform (Infrastructure, Services, Administration, Provisioning, Monitoring…)
- Managing Kubernetes (AKS): Operations, Networking, Autoscaling, High Availability
- GitOps based deployments (argocd)
- Working with basic web technologies (DNS, HTTPS, TLS, JWT, OAuth2.0, OIDC, etc…)
- Container Technology such as Docker
- Infrastructure as Code via Terraform

Experience in:
- Cloud Monitoring, Alerting, Observability mechanisms e.g. via Elastic Stack
- Defining, Implementing and Maintaining build and release pipelines -> Continuous Deployment
- Working with git version control systems
- Code Collaboration Platforms (AWS DevOps, GitHub)
- Cloud Security patterns
- Developing software based on Spring & Spring Boot
- Preparation of Release and deployment scripts (in test and production environments)

Qualifications
BE, MCA, BTech

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Join our Team

About this opportunity:
Our team, part of the Software Pipeline & Support organization (SWPS), is looking for a Senior DevOps Engineer with strong technical leadership capabilities and a genuine interest in automation, able to shape and drive new initiatives to maintain and develop the Ericsson Support Systems Verification Tool (ESSVT) product build pipelines. ESSVT is a production-grade cloud-native application used by Engineering and Services organizations within and outside Ericsson premises. ESSVT coordinates the automated execution of both functional and non-functional tests covering the entire Business & Operations Support Systems (BOS) product portfolio. ESSVT supports product, offering, and solution testing, improving testing lead times. ESSVT builds on widely used open-source testing technologies: Robot Framework and Apache JMeter.

What You Will Do
Analyze, design, and develop new pipeline features (a brief pipeline-trigger sketch follows this listing).
Keep the pipelines up and running.

Mandatory Skills
Python (Advanced)
Git/Gerrit (Advanced)
Jenkins (Advanced)
Docker (Advanced)
Kubernetes (Average)
Shell (Average)
Jira (Average)

Nice-to-have Skills
ADP's bob
GitLab
Test automation tools (e.g. Robot Framework, Apache JMeter)
AWS, OCI or any other cloud platform
GitOps – Flux

What's in it for you?
You will be part of a well-established, diverse, and automation-driven team spread across the world. Our mission is to make the life of the test organization easier by pushing standardized test automation through the whole software lifecycle, from development to operations.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Chennai
Req ID: 764206
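Since the role centres on Python and Jenkins pipeline work, a minimal sketch of driving Jenkins from Python is shown below; it queues a parameterised build through the Jenkins remote-access API. This is an illustration rather than part of the posting: the controller URL, job name, service account, and parameter name are hypothetical, and the API token would come from a secret store in practice.

```python
import requests

JENKINS_URL = "https://jenkins.example.com"  # hypothetical controller URL
JOB_NAME = "essvt-product-build"             # hypothetical pipeline name
USER = "automation-bot"                      # hypothetical service account
API_TOKEN = "..."                            # read from a secret store in practice


def trigger_build(parameters: dict) -> None:
    """Queue a parameterised Jenkins build via the remote-access API."""
    response = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        params=parameters,
        auth=(USER, API_TOKEN),
        timeout=30,
    )
    response.raise_for_status()
    # On success Jenkins answers 201 and points at the queue item it created.
    print("Queued:", response.headers.get("Location"))


if __name__ == "__main__":
    trigger_build({"GIT_BRANCH": "main"})
```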

Posted 4 days ago

Apply

8.0 years

7 - 10 Lacs

Hyderābād

On-site

Job Description Summary
As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business—and your career.

About the Role
As a Senior DevOps Engineer you will be responsible for building and supporting the AWS infrastructure used to host a platform offering audit solutions. This engineer constantly looks to optimize systems and services for security, automation, and performance/availability, while ensuring solutions adhere and align to architecture standards. This individual is responsible for ensuring that technology systems and related procedures adhere to organizational values. The person will also assist developers with technical issues in the initiation, planning, and execution phases of projects. These activities include: the definition of needs, benefits, and technical strategy; research & development within the project life cycle; technical analysis and design; and support of operations staff in executing, testing and rolling out the solutions.

This role will be responsible for:
Plan, deploy, and maintain critical business applications in prod/non-prod AWS environments
Design and implement appropriate environments for those applications, engineer suitable release management procedures and provide production support
Influence broader technology groups in adopting cloud technologies, processes, and best practices
Drive improvements to processes and design enhancements to automation to continuously improve production environments
Maintain and contribute to our knowledge base and documentation
Provide leadership, technical support, user support, technical orientation, and technical education activities to project teams and staff
Manage change requests between development, staging, and production environments
Provision and configure hardware, peripherals, services, settings, directories, storage, etc. in accordance with standards and project/operational requirements
Perform daily system monitoring, verifying the integrity and availability of all hardware, server resources, systems and key processes, reviewing system and application logs, and verifying completion of automated processes (a brief monitoring sketch follows this listing)
Perform ongoing performance tuning, infrastructure upgrades, and resource optimization as required
Provide Tier II support for incidents and requests from various constituencies
Investigate and troubleshoot issues
Research, develop, and implement innovative and, where possible, automated approaches for system administration tasks

About you
You are a fit for the Senior DevOps Engineer role if your background includes:

Required:
8+ years at a senior DevOps level
Knowledge of the Azure / AWS cloud platforms – S3, CloudFront, CloudFormation, RDS, OpenSearch, ActiveMQ
Knowledge of CI/CD, preferably on AWS developer tools
Scripting knowledge, preferably in Python, Bash, or PowerShell
Have contributed as a DevOps engineer responsible for planning, building and deploying cloud-based solutions
Knowledge of building and deploying containers / Kubernetes (exposure to AWS EKS is preferable)
Knowledge of Infrastructure as Code tools such as Bicep, Terraform, or Ansible
Knowledge of GitHub Actions, PowerShell and GitOps

Nice to have:
Experience building and deploying .NET Core / Java-based solutions
Strong understanding of an API-first strategy
Knowledge and some experience implementing a testing strategy in a continuous deployment environment
Have owned and operated continuous delivery / deployment
Have set up monitoring tools and disaster recovery plans to ensure business continuity

What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments.
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
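The daily system-monitoring duty described in this listing can be partially automated with a short script. A minimal sketch using boto3 follows: it lists CloudWatch alarms currently in the ALARM state so they can be triaged first. The default region is an assumption; a production version would page an on-call rotation or open a ticket instead of printing.

```python
import boto3


def alarms_in_alarm_state(region: str = "us-east-1") -> list:
    """Return the names of CloudWatch alarms that are currently firing."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    names = []
    paginator = cloudwatch.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        names.extend(alarm["AlarmName"] for alarm in page["MetricAlarms"])
    return names


if __name__ == "__main__":
    firing = alarms_in_alarm_state()
    if firing:
        print("Alarms needing attention:")
        for name in firing:
            print(" -", name)
    else:
        print("No CloudWatch alarms in ALARM state.")
```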

Posted 4 days ago

Apply

5.0 - 7.0 years

4 - 6 Lacs

Hyderābād

On-site

Category: Business Consulting, Strategy and Digital Transformation
Main location: India, Andhra Pradesh, Hyderabad
Position ID: J0525-1091
Employment Type: Full Time

Position Description:
5–7 years of experience in DevOps. Key tools: CI/CD (Jenkins, GitHub Actions, GitLab CI, Azure DevOps), Terraform, Ansible playbooks, JIRA, and monitoring tools such as Dynatrace and Grafana.

Must-Have:
Design, implement, and manage CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, and Azure DevOps to automate and optimize development workflows.
Use Terraform for infrastructure as code (IaC) to provision, configure, and manage cloud resources in AWS, Azure, or other cloud environments (a brief drift-check sketch follows this listing).
Develop and maintain Ansible playbooks for automation of server configurations, application deployment, and orchestration tasks.
Implement and maintain monitoring and alerting solutions using tools like Dynatrace, Grafana, and other performance monitoring platforms to ensure the health and reliability of systems.
Collaborate with development teams to improve application scalability, reliability, and performance in production environments.
Troubleshoot, identify, and resolve issues related to infrastructure, applications, and DevOps processes.
Maintain version control repositories and ensure best practices for code quality, deployments, and rollbacks.
Participate in the setup and management of cloud environments, ensuring secure, cost-efficient, and scalable infrastructures.
Work closely with developers to ensure smooth deployments, automated testing, and continuous integration of new features.
Track and manage tasks and projects using JIRA to collaborate effectively with stakeholders.
Stay up to date with industry trends and best practices related to DevOps, automation, and cloud technologies.
Proficient with version control systems such as Git.
Strong experience with cloud platforms (e.g., AWS, Azure, or Google Cloud).
JIRA experience for task management and tracking project progress.
Strong scripting skills (e.g., Bash, Python, PowerShell).
Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
Excellent troubleshooting and problem-solving skills.
Strong communication skills to collaborate with cross-functional teams effectively.

Nice-to-Have:
Familiarity with monitoring and log aggregation tools such as Prometheus, ELK Stack, or Splunk.
Knowledge of serverless computing and architectures.
Familiarity with GitOps or other DevOps methodologies.
Exposure to Agile development practices and working in an Agile environment.

Skills: Analytical Thinking, Ansible, Change Management, DevOps Engineering, English, Grafana, Terraform

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because…
You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction.
Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.
Come join our team—one of the largest IT and business consulting services firms in the world.
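The Terraform responsibility above often includes watching for configuration drift between the code and the live environment. A minimal, hedged sketch follows: a Python wrapper that runs terraform plan with -detailed-exitcode (0 = no changes, 2 = changes pending, 1 = error) and reports drift. The working-directory argument is illustrative, and a real pipeline would typically run this as a scheduled CI job.

```python
import subprocess
import sys


def detect_drift(workdir: str) -> bool:
    """Run 'terraform plan' and report whether the live infrastructure has drifted.

    With -detailed-exitcode, terraform exits 0 for no changes, 2 when changes
    are pending, and 1 on error.
    """
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir,
    )
    if result.returncode == 1:
        raise RuntimeError("terraform plan failed")
    return result.returncode == 2


if __name__ == "__main__":
    workdir = sys.argv[1] if len(sys.argv) > 1 else "."
    if detect_drift(workdir):
        print("Drift detected: plan shows pending changes.")
        sys.exit(2)
    print("No drift: infrastructure matches the Terraform configuration.")
```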

Posted 4 days ago

Apply

7.0 years

0 Lacs

Hyderābād

On-site

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
There are two halves to this role: first, contributing to current projects where you analyze problems and tech issues, offer solutions, and test, modify, automate, and integrate systems; and second, long-range strategic planning of IT infrastructure and operational execution. This role isn't specific to any one platform, so you'll need a good feel for all of them. And because of this, you'll experience variety and growth at Kyndryl that you won't find anywhere else. You'll be involved early to offer solutions, help decide whether something can be done, and identify the technical and timeline risks up front. This means dealing with both client expectations and internal challenges – in other words, there are plenty of opportunities to make a difference, and a lot of people will witness your contributions. In fact, a frequent sign of success for our Infrastructure Specialists is when clients come back to us and ask for the same person by name. That's the kind of impact you can have!

Design and manage AWS cloud infrastructure using Terraform (Infrastructure as Code - IaC). Deploy, configure, and maintain EKS (Elastic Kubernetes Service) clusters for containerized applications.
Optimize AWS networking, security, and storage solutions for performance and scalability.
Implement monitoring, logging, and alerting solutions using AWS-native and open-source tools.
Build and maintain CI/CD pipelines using Azure DevOps (ADO) and Git to automate application deployments.
Integrate Terraform, Helm, and Kubernetes manifests into deployment workflows.
Implement deployment strategies such as blue-green deployments, canary releases, and rolling updates.
Deploy and optimize containerized applications in EKS using Kubernetes best practices.
Work with developers to containerize applications and streamline Docker image management.
Implement Kubernetes networking, service discovery, and service mesh configurations.
Develop Python and Bash scripts to automate infrastructure provisioning and application deployments (a brief cluster-health sketch follows this listing).
Implement GitOps workflows for managing Kubernetes environments.
Optimize cloud resource utilization through automation and cost management strategies.
Enforce RBAC (Role-Based Access Control), IAM policies, and secrets management for secure infrastructure.
Implement observability solutions using AWS CloudWatch, Prometheus, Grafana, and the ELK stack.
Conduct infrastructure security audits and remediate compliance risks.
Work closely with development, security, and operations teams to improve DevOps processes.
Advocate for DevOps culture, automation, and best practices within the organization.
Document infrastructure designs, deployment procedures, and standard operating guidelines.

This is a project-based role where you'll enjoy deep involvement throughout the lifespan of a project, as well as the chance to work closely with Architects, Technicians, and PMs. Whatever your current level of tech savvy or where you want your career to lead, you'll find the right opportunities and a buddy to support your growth. Boredom? Trust us, that won't be an issue.

Your future at Kyndryl
There are lots of opportunities to gain certification and qualifications on the job, and you'll continuously grow as a Cloud Hyperscaler. Many of our Infrastructure Specialists are on a path toward becoming either an Architect or Distinguished Engineer, and there are opportunities at every skill level to grow in either of these directions.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience:
7+ years of hands-on experience in AWS (AWS, EKS, Terraform, Git, ADO, Python) cloud infrastructure management.
Strong expertise in Terraform (IaC) for AWS provisioning and automation.
Experience with Kubernetes (EKS), Docker, and containerized workloads.
Proficiency in CI/CD pipelines using Azure DevOps (ADO) and Git workflows.
Strong scripting skills in Python, Bash, and YAML for automation.
Knowledge of AWS networking, VPC, IAM, load balancers, and security groups.
Experience with monitoring and logging tools (CloudWatch, Prometheus, Grafana, ELK).
Understanding of DevOps methodologies, GitOps principles, and Agile environments.

Preferred Technical and Professional Experience
Experience with Helm, ArgoCD, and Kubernetes operators.
Familiarity with AWS Lambda, Fargate, and serverless architectures.
Exposure to multi-cloud environments (Azure, GCP) and hybrid cloud solutions.
Knowledge of SRE (Site Reliability Engineering) principles and incident response best practices.

Certifications (Preferred but Not Mandatory)
AWS Certified DevOps Engineer – Professional
Certified Kubernetes Administrator (CKA)
HashiCorp Certified: Terraform Associate
Microsoft Certified: Azure DevOps Engineer Expert

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
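To illustrate the "Python and Bash scripts to automate infrastructure provisioning and application deployments" responsibility in this listing, here is a minimal sketch using the official Kubernetes Python client: it reports pods on the current cluster (for example, an EKS cluster) that are not in a Running or Succeeded phase. It assumes a kubeconfig already pointing at the target cluster; namespace filtering, alert routing, and thresholds are deliberately left out.

```python
from kubernetes import client, config


def unhealthy_pods() -> list:
    """Return (namespace, pod, phase) for pods that are not Running or Succeeded."""
    config.load_kube_config()  # assumes kubectl is already pointed at the cluster
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems


if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```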

Posted 4 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh

On-site

Overview:
We are seeking a skilled and proactive Support Engineer with deep expertise in Azure cloud services, Kubernetes, and DevOps practices, with 5+ years of industry experience in these technologies. The ideal candidate will have experience working with Azure services, including Kubernetes, API management, monitoring tools, and various cloud infrastructure services. You will be responsible for providing technical support, managing cloud-based systems, troubleshooting complex issues, and ensuring smooth operation and optimization of services within the Azure ecosystem.

Key Responsibilities:
Provide technical support for Azure-based cloud services, including Azure Kubernetes Service (AKS), Azure API Management, Application Gateway, Web Application Firewall, and Azure Monitor with KQL queries.
Manage and troubleshoot various Azure services such as Event Hub, Azure SQL, Application Insights, Virtual Networks, and WAF.
Work with Kubernetes environments: troubleshoot deployments using Helm charts, check resource utilization, and manage GitOps processes.
Utilize Terraform to automate cloud infrastructure provisioning, configuration, and management.
Troubleshoot and resolve issues in MongoDB and Microsoft SQL Server databases, ensuring high availability and performance.
Monitor cloud infrastructure health using Grafana and Azure Monitor, providing insights and proactive alerts (a brief log-query sketch follows this listing).
Provide root-cause analysis for technical incidents, propose and implement corrective actions to prevent recurrence.
Continuously optimize cloud services and infrastructure to improve performance, scalability, and security.

Required Skills & Qualifications:
Azure certification (e.g., Azure Solutions Architect, Azure Administrator) with hands-on experience in Azure services such as AKS, API Management, Application Gateway, WAF, and others.
Any Kubernetes certification (e.g., CKAD or CKA) with strong hands-on expertise in Kubernetes, Helm charts, and GitOps principles for managing and troubleshooting deployments.
Hands-on experience with Terraform for infrastructure automation and configuration management.
Proven experience in MongoDB and Microsoft SQL Server, including deployment, maintenance, performance tuning, and troubleshooting.
Familiarity with Grafana for monitoring, alerting, and visualization of cloud-based services.
Experience using Azure DevOps tools, including Repos and Pipelines for CI/CD automation and source code management.
Strong knowledge of Azure Monitor, KQL queries, Event Hub, and Application Insights for troubleshooting and monitoring cloud infrastructure.
Solid understanding of Virtual Networks, WAF, firewalls, and other related Azure networking tools.
Excellent troubleshooting, analytical, and problem-solving skills.
Strong written and verbal communication skills, with the ability to explain complex technical issues to non-technical stakeholders.
Ability to work in a fast-paced environment and manage multiple priorities effectively.

Preferred Skills:
Experience with cloud security best practices in Azure.
Knowledge of infrastructure as code (IaC) concepts and tools.
Familiarity with containerized applications and Docker.

Education:
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
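The Azure Monitor / KQL requirement in this listing can be illustrated with the azure-monitor-query SDK. The sketch below is an illustration rather than part of the posting: it runs a KQL query that summarises container restarts per namespace from Container insights data. The workspace ID is a placeholder, and it assumes Container insights (and therefore the KubePodInventory table) is enabled for the AKS cluster.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # hypothetical workspace

# KQL: sum container restarts per AKS namespace over the last hour.
QUERY = """
KubePodInventory
| where TimeGenerated > ago(1h)
| summarize Restarts = sum(ContainerRestartCount) by Namespace
| order by Restarts desc
"""


def restart_summary() -> None:
    """Print restart counts per namespace from a Log Analytics workspace."""
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))
    for table in response.tables:
        for row in table.rows:
            print(list(row))


if __name__ == "__main__":
    restart_summary()
```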

Posted 4 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About the role
You'll thrive if you're hands-on, grounded, and passionate about building with technology. Our diverse tech stack includes TypeScript, Java, Scala, Kotlin, Golang, Elixir, Python, .Net, Node.js, and Rust. This role offers significant impact and growth opportunities while staying hands-on. We focus on lean teams without traditional management layers, working in small, collaborative teams (2-5 people) where a well-founded argument holds more weight than years of experience. You'll develop tailored software solutions to meet clients' unique needs across multiple domains.

Responsibilities
Remain fully hands-on and write high-quality, production-ready code that enables smooth deployment of solutions.
Lead architecture and design decisions, ensuring adherence to best practices in technology choices and system design.
Utilize DevOps tools and practices to automate and streamline the build and deployment processes.
Work closely with Data Scientists and Engineers to deliver robust, production-level AI and Machine Learning systems.
Develop frameworks and tools for efficient data ingestion from diverse and complex sources.
Operate in short, iterative sprints, delivering working software aligned with clear deliverables and client-defined deadlines.
Demonstrate flexibility by learning and working across multiple programming languages and technologies as required.

Additional Responsibilities
Actively contribute to a high-performing engineering culture by working from the office regularly to collaborate closely with fellow senior techies, fostering strong technical discussions and decision-making.
Provide hands-on mentorship and technical guidance that encourages knowledge sharing, continuous improvement, and innovation within your team.

Skills you'll need
8+ years of experience as a Software Engineer.
Deep understanding of programming fundamentals and expertise with at least one programming language (functional or object-oriented).
A nuanced and rich understanding of code quality, maintainability and practices like Test Driven Development.
Experience with one or more source control and build toolchains. Working knowledge of CI/CD will be an added advantage.
Understanding of web APIs, contracts and communication protocols.
Understanding of cloud platforms, infra-automation/DevOps, IaC/GitOps/containers, and the design and development of large data platforms.
A maker's mindset – to be resourceful and have the ability to do things that have no instructions.

What will you experience in terms of culture at Sahaj?
A culture of trust, respect and transparency
Opportunity to collaborate with some of the finest minds in the industry
Work across multiple domains

What are the benefits of being at Sahaj?
Unlimited leaves
Life insurance & private health insurance paid by Sahaj
Stock options
No hierarchy
Open salaries

Posted 4 days ago

Apply

13.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

We are seeking a highly experienced Cloud Native Architect to lead the design and implementation of cloud-native platforms and enterprise-grade solutions. The ideal candidate will have over 13 years of experience in IT, with a strong focus on cloud-native technologies, microservices architecture, Kubernetes, and DevOps practices. This role demands deep architectural expertise, strong leadership, and hands-on experience building scalable, resilient, and secure applications in cloud environments, particularly GCP.

Key Responsibilities
Architect and design cloud-native solutions that are scalable, secure, and resilient across public, private, and hybrid clouds.
Lead the development of microservices-based platforms using containers, Kubernetes, and service mesh technologies.
Define and enforce cloud-native standards, best practices, and reusable patterns across teams.
Oversee implementation of CI/CD pipelines and DevSecOps processes to support modern software delivery.
Collaborate with enterprise architects, product owners, and engineering leads to ensure alignment with business and technical goals.
Perform architecture reviews, cloud cost optimizations, and performance tuning.
Stay current with cloud technologies and provide strategic input for adopting emerging trends like serverless, edge computing, and AI/ML in the cloud.
Mentor and guide junior architects, engineers, and DevOps teams.

Required Skills & Qualifications
13+ years of total experience in IT, with at least 5-7 years in cloud-native architecture.
Deep expertise in Kubernetes, Docker, Helm, Istio, or other container and orchestration tools.
Strong experience with at least one major cloud provider (AWS, GCP, Azure); multi-cloud knowledge is a plus.
Proficient in DevOps practices, CI/CD pipelines (Jenkins, ArgoCD, GitOps), and Infrastructure as Code (Terraform, CloudFormation).
Deep knowledge of distributed systems, event-driven architecture, and API management.
Understanding of cloud security, IAM, compliance, and governance frameworks.
Strong programming/scripting knowledge (e.g., Go, Python, Java, Bash).
Experience leading large-scale migrations to the cloud and building greenfield cloud-native applications.
Excellent communication and stakeholder management skills.

Preferred Certifications
Google Professional Cloud Architect
Certified Kubernetes Administrator (CKA) / CKAD

Why Join Us
Work on cutting-edge cloud-native initiatives with global impact
Be part of a culture that promotes continuous learning and innovation
Competitive compensation and remote flexibility
Opportunity to influence strategic technology direction

Posted 4 days ago

Apply

14.0 - 16.0 years

14 - 16 Lacs

Chennai, Tamil Nadu, India

On-site

Foundit logo

Primary Responsibilities:
Deploy, Manage, and Automate (Kubernetes Focus): Design, deploy, and manage highly available and scalable Kubernetes clusters on AWS EKS using Terraform and/or Crossplane (a brief cluster-status sketch follows this listing). Implement infrastructure as code (IaC) best practices for managing EKS clusters and related infrastructure.
Kubernetes Expertise: Configure and maintain Kubernetes deployments, services, ingresses, and other resources using YAML manifests or GitOps workflows. Implement and utilize GitOps practices with FluxCD for automated deployments and configuration management of containerized applications.
Ensure Reliability & Scalability: Proactively ensure the reliability, security, and scalability of AWS production systems, with a particular focus on Kubernetes clusters and containerized applications.
Problem-Solving Expertise: Resolve problems across multiple platforms and application domains, including those specific to Kubernetes environments and containerized applications, using advanced system troubleshooting and problem-solving techniques.
Operational Support: Provide primary operational support and engineering expertise for all cloud and enterprise deployments, with a focus on Kubernetes deployments.
Performance Monitoring: Monitor system performance, identify downtime incidents, and diagnose underlying causes, with a specific focus on Kubernetes cluster and container health.
Cost Optimization: Design and develop cost-effective systems within allocated budgets, considering the optimization of Kubernetes deployments.

Secondary Responsibilities:
Collaboration: Collaborate effectively with developers, testers, and system administrators to ensure smooth deployments and operations of containerized applications on Kubernetes.
Process Improvement: Champion the implementation of new processes, tools, and methodologies to enhance efficiency throughout the software development lifecycle (SDLC) and pipeline management, particularly for containerized application deployments.
Security Integration: Integrate robust security measures into the development lifecycle, considering the specific security requirements of containerized applications.

Typical Qualifications:
14+ years of experience building, scaling, and supporting highly available systems and services.
5+ years of experience managing and operating Kubernetes clusters in production.
Proven experience in building and managing AWS platforms, with a strong focus on Amazon EKS (Elastic Kubernetes Service).
In-depth knowledge of Kubernetes architecture, core concepts, best practices, and security considerations.
Expertise in infrastructure as code (IaC) tools like Terraform and Crossplane.
Familiarity with GitOps principles and experience with GitOps tools like FluxCD (a plus).
Proficiency in at least one scripting or programming language (e.g., Python, Go, Ruby, Shell).
Experience implementing solutions using SRE and DevOps principles, continuous integration & continuous delivery (CI/CD), and source code management tools (e.g., version control systems like Bitbucket or GitHub).
Familiarity with telemetry, observability, and modern monitoring and visualization tools (e.g., Prometheus, Alertmanager, Grafana, or similar) with a focus on Kubernetes monitoring.
Expertise in promoting and driving system visibility to facilitate rapid detection and resolution of issues, particularly within Kubernetes clusters.

Behaviours & Abilities Required:
A strong ability to learn and adapt in a fast-paced environment, particularly as Kubernetes and container orchestration technologies continue to evolve.
Excellent teamwork skills, with the ability to collaborate effectively on a cross-functional team with diverse experience levels, including developers, testers, and system administrators.
Strong prioritization skills, managing both individual workload and team needs, while ensuring efficient delivery of containerized applications.
Excellent negotiation and problem-solving skills, adept at troubleshooting complex issues within Kubernetes environments and containerized applications.
The ability to manage multiple projects simultaneously and keep projects on track by providing regular progress updates.
Ability to context-switch effectively and handle unexpected situations.
Willingness to participate in rotational on-calls or work shifts to ensure continuous monitoring and support of Kubernetes clusters and containerized applications.
A strong work ethic and a commitment to continuous learning and improvement in the field of Kubernetes and container orchestration.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies