
75 ArgoCD Jobs - Page 2

JobPe aggregates listings for easy access to openings; applications are submitted directly on the original job portal.

3.0 - 4.0 years

10 - 20 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Source: Naukri

Description: The Red Hat DevOps Engineer will implement and manage OpenShift infrastructure, ensuring the smooth operation of containerized workloads and CI/CD pipelines. This role focuses on deployment automation, cluster management, and performance optimization within the OpenShift ecosystem.

Responsibilities:
- Deploy and maintain OpenShift infrastructure while ensuring high availability and scalability.
- Manage and optimize OpenShift CI/CD pipelines (GitOps, ArgoCD, Tekton) to streamline application delivery.
- Implement Kubernetes-to-OpenShift migrations, ensuring compatibility and best practices.
- Automate deployments and infrastructure provisioning using Terraform and Ansible.
- Configure and fine-tune OpenShift clusters for performance and security.
- Establish monitoring, logging, and alerting solutions for proactive platform management.
- Troubleshoot and resolve OpenShift- and Kubernetes-related performance and operational issues.

Required Expertise:
- Strong knowledge of Azure and OpenShift, with hands-on experience managing containerized workloads.
- Proficiency in Kubernetes, Terraform, Ansible, and Docker for infrastructure and deployment automation.
- Experience in CI/CD pipeline management using GitOps tooling (ArgoCD, Tekton, Jenkins, etc.).
- Strong understanding of container security, RBAC, and networking in OpenShift.
- Hands-on experience with performance tuning, monitoring, and troubleshooting OpenShift clusters.

Experience: 3-4 years
Location: Remote, New Delhi, Mumbai, Bengaluru
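For context on the GitOps-style delivery this role describes, here is a minimal editorial sketch (not part of the posting) that renders an Argo CD Application manifest with Python; the repository URL, path, and namespaces are placeholder assumptions.

```python
# Minimal sketch: an Argo CD Application manifest for a GitOps-managed workload.
# Repo URL, path, and namespaces below are placeholders, not real endpoints.
import yaml  # PyYAML

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "example-service", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/platform/manifests.git",  # placeholder
            "targetRevision": "main",
            "path": "apps/example-service/overlays/prod",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "example-service",
        },
        # Automated sync with pruning and self-heal is a common GitOps policy.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.dump(application, sort_keys=False))
```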

Posted 2 weeks ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Summary: We are looking for a highly skilled and adaptable Senior Site Reliability Engineer / Principal Site Reliability Engineer to become a key member of our Cloud Engineering team. In this crucial role, you will be instrumental in designing and refining our cloud infrastructure with a strong focus on reliability, security, and scalability. As an SRE, you will apply software engineering principles to solve operational challenges, ensuring the overall operational resilience and continuous stability of our systems. This position blends managing live production environments with engineering work such as automation and system improvements.

Responsibilities:
- Cloud Infrastructure Architecture and Management: Design, build, and maintain resilient cloud infrastructure solutions to support the development and deployment of scalable and reliable applications. This includes managing and optimizing cloud platforms for high availability, performance, and cost efficiency.
- Enhancing Service Reliability: Lead reliability best practices by establishing and managing monitoring and alerting systems to proactively detect and respond to anomalies and performance issues. Use SLI, SLO, and SLA concepts to measure and improve reliability. Identify and resolve potential bottlenecks and areas for enhancement.
- Driving Automation and Efficiency: Contribute to the automation, provisioning, and standardization of infrastructure resources and system configurations. Identify and implement automation for repetitive tasks to significantly reduce operational overhead. Develop Standard Operating Procedures (SOPs) and automate workflows using tools like Rundeck or Jenkins.
- Incident Response and Resolution: Participate in and help resolve major incidents, conduct thorough root cause analyses, and implement permanent solutions. Effectively manage incidents within the production environment using a systematic problem-solving approach.
- Collaboration and Innovation: Work closely with diverse stakeholders and cross-functional teams, including software engineers, to integrate cloud solutions, gather requirements, and execute proofs of concept (POCs). Foster strong collaboration and communication. Guide designs and processes with a focus on resilience and minimizing manual effort. Promote the adoption of common tooling and components, and implement software and tools to enhance resilience and automate operations. Be open to adopting new tools and approaches as needed.

Requirements:
- Experience: 8 to 17 years
- Role: Multiple roles are open; the final role will depend on the candidate's experience and credentials.
- Education: BE/B.Tech/MCA/M.Sc./M.Tech/M.S.
- Technology Stack: Linux administration, shell/Python scripting, AWS cloud services (EC2, S3), cloud operations, Linux (CentOS, Rocky Linux), Jenkins, ArgoCD, Kubernetes management, Ansible, Terraform, OS patching, release management, incident management.
- Infrastructure Management: Proven proficiency with on-premises hosting and virtualization platforms (VMware, Hyper-V, or KVM). Solid understanding of storage internals (NAS, SAN, EFS, NFS) and protocols (FTP, SFTP, SMTP, NTP, DNS, DHCP). Experience with networking and firewall technologies. Strong hands-on experience with Linux internals and operating systems (RHEL, CentOS, Rocky Linux). Experience with Windows operating systems to support varied environments.
- Service Reliability Concepts: Good understanding of SLI, SLO, SLA, and error budgeting.

Other Mandatory Requirements:
- Excellent communication skills
- 24/7 support with monthly rotation shifts
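As an editorial illustration of the SLI/SLO/error-budget concepts this listing calls out, here is a minimal sketch with hypothetical request counts; the numbers and function name are not from the posting.

```python
# Minimal sketch of SLO / error-budget arithmetic. Request counts are illustrative.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    allowed_failure_ratio = 1.0 - slo_target            # e.g. 0.001 for a 99.9% SLO
    observed_failure_ratio = failed_requests / total_requests
    budget_consumed = observed_failure_ratio / allowed_failure_ratio
    return {
        "sli": 1.0 - observed_failure_ratio,             # measured availability (the SLI)
        "budget_consumed_pct": round(budget_consumed * 100, 1),
        "error_budget_remaining": max(0.0, 1.0 - budget_consumed),
    }

if __name__ == "__main__":
    # 99.9% availability SLO, 10M requests in the window, 7,500 failures
    # -> 75% of the error budget consumed, 25% remaining.
    print(error_budget_report(0.999, 10_000_000, 7_500))
```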

Posted 2 weeks ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Sapiens is on the lookout for a Senior DevOps Engineer to become a key player in our Bangalore team. If you're a seasoned DevOps pro ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit.

Location: Bangalore
Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity.
This position is part of the Sapiens Digital (Data Suite) R&D division.

What you'll do:
- Implement secure, resilient, and cost-efficient architecture for our cloud-native platform service.
- Build and maintain cloud-native platform infrastructure following the "infrastructure as code" principle.
- Maintain and optimize the application layer of a multi-DC environment.
- Deliver solutions, architectures, and automation for Sapiens applications.
- Conduct research to bring innovative solutions to a complex environment, improving processes and the tech stack.
- Build application and infrastructure logging and monitoring solutions.

Must-have skills:
- 5-8 years of experience as a DevOps Engineer.
- Windows/Linux: 5 to 8 years of experience administering Linux servers.
- Kubernetes: hands-on experience developing, deploying, tuning, and debugging applications on Kubernetes.
- Experience designing and implementing CI/CD pipelines and automation solutions such as GitHub/ArgoCD; Azure DevOps is a plus.
- Cloud: hands-on experience working on public cloud (Azure, AWS).
- Code: extensive scripting experience in PowerShell, Python, and Bash.
- Applications: extensive experience working with Java web applications.
- IaC: at least 2 years of experience with at least one automation tool (Ansible/Terraform).
- Security knowledge of web security aspects such as WAF, certificates, OS hardening, security policies, and VPNs is an advantage.
- Monitoring: good understanding of the monitoring stack (ELK/Grafana/Prometheus/Datadog/Azure Monitoring).
- Experience with a live production environment.
- Accountability, ownership, and independence.
- Great verbal and written communication skills.

Good-to-have skills:
- Experience with Packer / Chocolatey
- Knowledge of Azure Blueprints

Posted 2 weeks ago

Apply

3.0 - 4.0 years

10 - 20 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Source: Naukri

The DevOps Engineer will implement and manage OpenShift infrastructure, ensuring the smooth operation of containerized workloads and CI/CD pipelines. This role focuses on deployment automation, cluster management, and performance optimization within the OpenShift ecosystem.

Responsibilities:
- Deploy and maintain OpenShift infrastructure while ensuring high availability and scalability.
- Manage and optimize OpenShift CI/CD pipelines (GitOps, ArgoCD, Tekton) to streamline application delivery.
- Implement Kubernetes-to-OpenShift migrations, ensuring compatibility and best practices.
- Automate deployments and infrastructure provisioning using Terraform and Ansible.
- Configure and fine-tune OpenShift clusters for performance and security.
- Establish monitoring, logging, and alerting solutions for proactive platform management.
- Troubleshoot and resolve OpenShift- and Kubernetes-related performance and operational issues.

Required Expertise:
- Strong knowledge of Azure and OpenShift, with hands-on experience managing containerized workloads.
- Proficiency in Kubernetes, Terraform, Ansible, and Docker for infrastructure and deployment automation.
- Experience in CI/CD pipeline management using GitOps tooling (ArgoCD, Tekton, Jenkins, etc.).
- Strong understanding of container security, RBAC, and networking in OpenShift.
- Hands-on experience with performance tuning, monitoring, and troubleshooting OpenShift clusters.

Location: Remote, Hyderabad, Ahmedabad, Pune, Chennai, Kolkata
Keywords: GitOps, ArgoCD, Tekton, Terraform, Docker, CI/CD, performance tuning, troubleshooting, OpenShift, Kubernetes, Ansible, DevOps, Azure

Posted 2 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

India

On-site

Source: Foundit

Introduction: At IBM, work is more than a job - it's a calling: to build, to design, to code, to consult, to think along with clients and sell, to make markets, to invent, to collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your role and responsibilities: Software developers at IBM are the backbone of our overall strategy, and software development is the essential activity that drives the success of IBM and our clients worldwide. At IBM, you will use the latest software development tools, techniques, and technologies and work with leading minds in the industry to build products, path-breaking technologies, and solutions that you can be proud of.

Do you have the skills and passion for building the future? If yes, come and be part of a niche team at IBM Software Labs focused on building an AI-driven digital labor platform, Watson Orchestrate, an AI platform that offers digital employees ("digeys") with custom skills that can automate today's businesses.

We seek a DevOps technical leader/architect with robust expertise in designing distributed SaaS platforms and the associated end-to-end build, deployment, and CI/CD pipelines, frameworks, and tooling, along with experience in quickly isolating problems and identifying root causes in complex production systems. The ideal candidate has rich experience in understanding enterprise architecture and complex systems and can architect solutions that ease deployment, surface issues via monitoring, and ensure the system is always highly available, reliable, and resilient.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- 7+ years of experience, with at least 5+ years as a DevOps/SRE architect.
- Has designed, implemented, and supported complex distributed SaaS platforms.
- Deep understanding and working experience with Kubernetes, containers, Red Hat OpenShift clusters on AWS, AWS services, ArgoCD, Jenkins, Grafana, and other pipeline and monitoring tools.
- Troubleshoots systematically and has a deep sense of ownership.
- Maintains personal responsibility and commitment to address and respond to incidents quickly.
- Passionate about automation and innovations that improve productivity and reliability.
- Experience technically coaching and mentoring junior SRE/DevOps engineers.

Preferred technical and professional experience:
- Good communication, collaboration, and negotiation skills and technical leadership qualities.
- Strong Go skills.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

13 - 23 Lacs

Hyderabad, Chennai

Work from Office

Source: Naukri

- DevOps tools expertise: GitLab, Jenkins, ArgoCD, etc.
- Artifact management using JFrog
- Application security automation testing
- Public cloud: Google Cloud and AWS
- DevOps platform migration project

Posted 2 weeks ago

Apply

6.0 - 8.0 years

40 - 50 Lacs

Mumbai, Pune

Hybrid

Source: Naukri

Congratulations, you have taken the first step towards bagging a career-defining role. Join the team of superheroes that safeguard data wherever it goes.

What should you know about us? Seclore protects and controls digital assets to help enterprises prevent data theft and achieve compliance. Permissions and access to digital assets can be granularly assigned and revoked, or dynamically set at the enterprise level, including when shared with external parties. Asset discovery and automated policy enforcement allow enterprises to adapt to changing security threats and regulatory requirements in real time and at scale. Know more about us at www.seclore.com

You would love our tribe: If you are a risk-taker, innovator, and fearless problem solver who loves solving challenges of data security, then this is the place for you!

Role: Lead Product Engineer - Developer Productivity
Experience: 6 - 8 Years
Location: Mumbai/Pune

A sneak peek into the role: We are seeking a highly motivated and experienced Lead, Developer Productivity & Platform Engineering to spearhead our efforts in building, scaling, and continuously improving our internal developer platform (IDP). In this critical role, you will be responsible for empowering our development teams with the tools, infrastructure, and processes necessary to achieve exceptional productivity, accelerate software delivery, and enhance their overall experience. You will drive the vision, strategy, and execution of our IDP initiatives, with a strong focus on measuring and improving developer effectiveness.

Here's what you will get to explore:
- Leadership: This role blends the responsibilities of an individual contributor with the need to lead a team as the practice grows. While the primary focus is on individual contributions and expertise, the role also requires guiding, mentoring, and coordinating the work of others. Foster a collaborative, innovative, and results-oriented team culture. Define clear roles, responsibilities, and performance expectations for team members.
- Platform Vision, Strategy & Roadmap: Define and articulate a clear vision, strategy, and roadmap for our internal developer platform, aligning with overall engineering and business objectives. Identify and prioritize key features and improvements for the IDP based on developer needs and productivity goals. Stay abreast of industry trends and emerging technologies in platform engineering, developer experience, and IDPs (e.g., Backstage).
- Collaboration & Stakeholder Management: Work closely with application development teams, product managers, security teams, operations, and other stakeholders to understand their pain points, needs, and requirements for the IDP. Effectively communicate the value and progress of the IDP to both technical and non-technical audiences.
- IDP Design, Development & Maintenance: Lead the design, development, and maintenance of core components of our internal developer platform, emphasizing self-service capabilities, automation, standardization, and a seamless developer experience. Drive the adoption of Infrastructure as Code (IaC), Continuous Integration/Continuous Delivery (CI/CD), and robust observability practices within the platform. Ensure the IDP is scalable, reliable, secure, and cost-effective.
- Focus on Developer Productivity & Measurement: Define and track key metrics to measure the impact of the IDP on developer productivity (e.g., deployment frequency, lead time for changes, time to recovery, developer satisfaction). Implement mechanisms for collecting and analyzing data related to developer workflows and platform usage. Identify and implement solutions to streamline developer workflows, reduce toil, and accelerate application delivery based on data and feedback. Potentially lead initiatives to integrate and leverage tools like Backstage to enhance developer experience and provide a centralized platform.
- Tooling & Integration: Evaluate and integrate relevant tools and technologies into the IDP ecosystem, including CI/CD systems, monitoring tools, logging solutions, security scanners, and potentially IDP frameworks like Backstage. Ensure seamless integration between different platform components and existing development tools.

We can see the next Entrepreneur at Seclore if you have:
- 6+ years of relevant experience in software engineering, platform engineering, or DevOps roles, with increasing levels of responsibility.
- Proven experience leading and managing engineering teams, including hiring, mentoring, and performance management.
- A strong understanding of the software development lifecycle and common developer workflows.
- Deep technical expertise in cloud platforms (e.g., AWS, Azure, GCP) and cloud-native technologies (e.g., Kubernetes, Docker, serverless).
- Extensive experience with Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Significant experience designing and implementing CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, CircleCI, Argo CD, or Flux CD.
- A solid understanding of observability principles and hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Datadog), logging solutions (e.g., ELK stack, Splunk), and distributed tracing (e.g., Jaeger, Zipkin).
- A strong understanding of security best practices for cloud environments and containerized applications, and experience with security scanning tools and secrets management.
- Experience managing and configuring code quality tools like SonarQube.
- Experience managing and configuring Git tools like GitLab.
- Proficiency in at least one programming language (e.g., Python, Go) for automation.
- An understanding of API design principles (REST, GraphQL) and experience building and consuming APIs.
- Experience with data collection and analysis to identify trends and measure the impact of platform initiatives.
- Excellent communication, collaboration, and interpersonal skills, with the ability to influence and build consensus across teams.
- Strong problem-solving and analytical abilities.
- Experience working in an Agile development environment.
- Prior experience building and maintaining an Internal Developer Platform (IDP).
- Hands-on experience with IDP frameworks like Backstage, including setup, configuration, plugin development, and integration with other tools.
- Familiarity with developer productivity frameworks and methodologies.
- Experience with other programming languages commonly used by development teams (e.g., Java, Node.js, C++).
- Experience with service mesh technologies.
- Knowledge of cost management and optimization in the cloud.
- Experience defining and tracking developer productivity metrics.
- Experience with data visualization tools (e.g., Grafana, Tableau).

Why do we call Seclorites Entrepreneurs, not Employees? We value and support those who take the initiative and calculate risks. We have the attitude of a problem solver and an aptitude that is tech agnostic. You get to work with the smartest minds in the business. We are thriving, not just living. At Seclore, it is not just about work but about creating outstanding employee experiences. Our supportive and open culture enables our team to thrive.

Excited to be the next Entrepreneur? Apply today! Don't have some of the above points in your resume at the moment? Don't worry. We will help you build it. Let's build the future of data security at Seclore together.
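As an editorial illustration of the productivity metrics this listing names (deployment frequency and lead time for changes), here is a minimal sketch with hypothetical deployment records; none of the data comes from the posting.

```python
# Minimal sketch: computing deployment frequency and lead time for changes
# from hypothetical (commit time, deploy time) records.
from datetime import datetime
from statistics import median

# Hypothetical deployment records for a one-week window.
deployments = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 30)),
    (datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 3, 10, 0)),
    (datetime(2024, 5, 6, 8, 0), datetime(2024, 5, 6, 12, 45)),
]
window_days = 7

deploy_frequency = len(deployments) / window_days  # deployments per day
lead_times_hours = [
    (deployed - committed).total_seconds() / 3600
    for committed, deployed in deployments
]

print(f"Deployment frequency: {deploy_frequency:.2f} per day")
print(f"Median lead time for changes: {median(lead_times_hours):.1f} hours")
```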

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Chennai

Work from Office

Source: Naukri

What you'll be doing: You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies.

Responsibilities:
- Leveraging monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them; designing and configuring alerts to ensure timely responses to critical events.
- Working with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
- Implementing security best practices for cloud and containerized services and ensuring adherence to security protocols; configuring IAM roles, VPC security, encryption, and compliance policies.
- Continuously optimizing cloud infrastructure for performance, scalability, and cost-effectiveness; using tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
- Working closely with the engineering and operations teams to design and implement cloud-based solutions.
- Maintaining detailed documentation of cloud architecture and platform configurations, and regularly providing status reports and performance metrics.

What we're looking for... You'll need to have:
- Bachelor's degree or one or more years of work experience.
- Experience in Kubernetes administration.
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI.
- GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VSC/Pipelines/Runners, Artifactory.
- Strong proficiency with monitoring/observability tools such as New Relic and Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards.
- Backend development experience with languages including Golang (preferred), Spring Boot, and Python.
- Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices.
- Familiarity with cloud cost optimization (e.g., Kubecost).
- Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server.
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis.
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.).
- Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with Service Mesh lifecycle management, and with configuring and troubleshooting applications deployed on Service Mesh and Service Mesh related issues.
- Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA Gatekeeper policies.
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc.
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
- Networking of microservices and a solid understanding of Kubernetes networking and troubleshooting.
- Certified Kubernetes Administrator (CKA).
- Demonstrated very strong troubleshooting and problem-solving skills.
- Excellent verbal and written communication skills.

Even better if you have one or more of the following:
- Certified Kubernetes Application Developer (CKAD)
- Red Hat Certified OpenShift Administrator
- Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Database experience (RDBMS, NoSQL, etc.)
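As an editorial illustration of the "inspecting cluster resources, determining cluster health" expertise this listing asks for, here is a minimal sketch assuming the `kubernetes` Python client and a valid kubeconfig; it is not part of the posting.

```python
# Minimal sketch: a cluster health sweep that reports pods that are not
# Running/Succeeded and nodes that are not Ready. Assumes the `kubernetes`
# Python client is installed and a kubeconfig is available.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

# Pods in an unexpected phase across all namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"pod {pod.metadata.namespace}/{pod.metadata.name} is {pod.status.phase}")

# Nodes whose Ready condition is not True.
for node in v1.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    if ready != "True":
        print(f"node {node.metadata.name} Ready={ready}")
```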

Posted 2 weeks ago

Apply

2.0 - 6.0 years

50 - 55 Lacs

Chandigarh

Work from Office

Source: Naukri

As a Golang developer, you will be immersed in our backend infrastructure, taking charge of complex architecture and coding challenges. Your primary focus will be pure backend coding, strategic thinking, and working closely with the frontend development team to deliver seamless and innovative solutions.

Role and Responsibilities:
- Backend Language: Using Go as your backend language throughout all development and maintenance procedures.
- Backend Development: Development of scalable and robust backend solutions, including maintaining, enhancing, and integrating backend technologies.
- Hands-on Coding: Implementing end-to-end solutions that seamlessly integrate with the frontend, ensuring a cohesive user experience.
- API Development: Creating and implementing robust RESTful APIs to extend application functionality, facilitating seamless integration with third-party services for enhanced features.
- Third-Party Integrations: Seamlessly incorporating external services into the system, including payment gateways, real-time call functionalities, and live communication features.
- Collaboration: Collaborating closely with the frontend team to ensure smooth functionality and user experiences.
- Reliability and Scalability: Assuring the reliability, scalability, and efficiency of our backend systems to meet the demands of our applications.
- Technical Guidance: Providing technical guidance and mentorship to the development team when needed, fostering a culture of excellence and continuous improvement.
- Innovation: Staying updated with the latest technologies and industry trends, driving technical innovation within the organization.

Work experience requirements:
- Experience in national and/or global technology projects with significant demand.
- Experience in the implementation and management of payment solutions and real-time data systems.
- Experience in the implementation of cybersecurity protocols, with a preference for military-grade protocols.
- Experience in the implementation of highly complex database architectures in AWS or similar.

Qualifications: To be successful in this role, you must possess the following qualifications:
- Education: Bachelor's degree in Computer Science or Software Engineering.
- Master's Degree: Master's degree in Computer Science or Software Engineering.
- Experience: 4 to 8 years of professional experience as a backend developer, with a proven track record of building complex, scalable applications.
- Go Proficiency: Proficiency in Go or similar languages.
- Additional Backend Languages: Proficiency and previous experience in other backend languages such as Java, Node.js, or Python are a plus.
- Frameworks: Experience with frameworks such as Gin, Echo, Spring Boot, Express.js, or Django.
- Database Expertise: Solid understanding of database systems, including SQL and NoSQL databases.
- Containerization Technologies: Master level in containerization technologies (Kubernetes and Docker).
- Cloud Experience: Previous in-depth experience with AWS or Azure.
- Software Engineering: Strong knowledge of software engineering principles, design patterns, and best practices.
- Problem-Solving: Excellent problem-solving skills and attention to detail.
- Communication: Ability to motivate the team with exceptional communication and interpersonal skills.

To be successful in the application for the position, you must have master-level experience in the following technologies:
- Golang
- Postgres
- gRPC
- Redis
- RabbitMQ
- OAuth2

You should have advanced knowledge of the following technologies:
- Kubernetes
- Docker
- GitLab CI/CD
- Prometheus
- Grafana
- Kong
- ArgoCD

Posted 3 weeks ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
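As an editorial illustration of the Prometheus-based cluster monitoring mentioned in this listing, here is a minimal sketch that queries the Prometheus HTTP API; the server address is a placeholder and the metric assumes kube-state-metrics is deployed.

```python
# Minimal sketch: poll the Prometheus HTTP API for node readiness.
# PROM_URL is a placeholder; the metric comes from kube-state-metrics.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder address

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'kube_node_status_condition{condition="Ready",status="true"}'},
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    node = result["metric"].get("node", "<unknown>")
    value = result["value"][1]  # instant vector: [timestamp, value]
    print(f"{node}: Ready={value}")
```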

Posted 3 weeks ago

Apply

1.0 - 5.0 years

3 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Requirements (must-have qualifications):
- Solid cloud infrastructure background and operational, troubleshooting, and problem-solving experience.
- Strong software development experience in Python.
- Experience in building and maintaining code distribution through automated pipelines.
- Experience in deploying and managing (IaaS) infrastructure in private/public cloud using OpenStack.
- Experience with Ansible or Puppet for configuration management.
- IaC experience: Terraform, Ansible, Git, GitLab, Jenkins, Helm, ArgoCD, Conjur/Vault.

Posted 3 weeks ago

Apply

4.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities, and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

About the Team: The Data Platform and Observability team is based in Pleasanton, CA; Boston, MA; Atlanta, GA; Dublin, Ireland; and Chennai, India. Our focus is on the development of large-scale distributed data systems to support critical Workday products and provide real-time insights across Workday's platforms, infrastructure, and applications. The team provides platforms that process hundreds of terabytes of data and enable core Workday products and use cases like core HCM, Fins, AI/ML SKUs, internal data products, and Observability. If you enjoy writing efficient software or tuning and scaling large distributed systems, you will enjoy working with us. Do you want to tackle exciting challenges at massive scale across private and public clouds for our 10,000+ global customers? Do you want to work with world-class engineers and facilitate the development of the next generation of distributed systems platforms? If so, we should chat.

About the Role: The Messaging, Streaming and Caching team is a full-service distributed systems engineering team. We architect and provide async messaging, streaming, and NoSQL platforms and solutions that power Workday products and SKUs ranging from core HCM, Fins, and Integrations to AI/ML. We develop client libraries and SDKs that make it easy for teams to build Workday products. We develop automation to deploy and run hundreds of clusters, and we also operate and tune our clusters as well. As a team member, you will play a key role in improving our services and encouraging their adoption within Workday's infrastructure, both in our private cloud and in the public cloud. As a member of this team, you will design and build new capabilities from inception to deployment to exploit the full power of the core middleware infrastructure and services, and work hand in hand with our application and service teams!

Primary Responsibilities:
- Design, build, and enhance critical distributed services, including Kafka, Redis, RabbitMQ, etc.
- Design, develop, build, deploy, and maintain core distributed services using a combination of open-source and proprietary stacks across diverse infrastructure environments (Kubernetes, OpenStack, bare metal, etc.).
- Design and develop core software modules for streaming, messaging, and caching.
- Construct observability modules, alerts, and automation for dashboard lifecycle management for the distributed services.
- Build, deploy, and operate infrastructure components in production environments.
- Champion all aspects of streaming, messaging, and caching with a focus on resiliency and operational excellence.
- Evaluate and implement new open-source and cloud-native tools and technologies as needed.
- Participate in the on-call rotation to support the distributed systems platforms.
- Manage and optimize Workday distributed services in AWS, GCP, and private cloud environments.

About You: You are a senior software engineer with a distributed systems background and significant experience in distributed systems products like Kafka, Redis, RabbitMQ, or Zookeeper. You have independently led product features and deployed on large-scale NoSQL clusters.

Basic Qualifications:
- 4-12 years of software engineering experience using one or more of the following: Java/Scala, Golang.
- 4+ years of distributed systems experience.
- 3+ years of development and DevOps experience in designing and operating large-scale deployments of distributed NoSQL and messaging systems.
- 1+ year of leading a NoSQL technology related product right from conception to deployment and maintenance.

Preferred Qualifications:
- A consistent track record of technical project leadership and success involving collaborators and interested partners across the enterprise.
- Expertise in developing distributed system software and deployments that perform well and degrade gracefully under excessive load.
- Hands-on experience with at least one or more distributed systems technologies like Kafka/RabbitMQ, Redis, or Cassandra.
- Experience learning complex open-source service internals via code inspection.
- Extensive experience with modern software development tools including CI/CD and methodologies like Agile.
- Expertise with configuration management using Chef and service deployment on Kubernetes via Helm and ArgoCD.
- Experience with Linux system internals and tuning.
- Experience with distributed system performance analysis and optimization.
- Strong written and oral communication skills and the ability to explain esoteric technical details clearly to engineers without a similar background.

Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer, including individuals with disabilities and protected veterans. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
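As an editorial illustration of the messaging and streaming work this listing describes, here is a minimal produce/consume sketch assuming the `kafka-python` package; the broker address, topic, and group ID are placeholders and not from the posting.

```python
# Minimal sketch: produce and consume one JSON record with kafka-python.
# Broker address, topic, and group ID are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = "kafka.example.internal:9092"  # placeholder bootstrap servers

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("platform-events", {"event": "deployment", "service": "example"})
producer.flush()

consumer = KafkaConsumer(
    "platform-events",
    bootstrap_servers=BROKERS,
    group_id="observability-sketch",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.topic, message.offset, message.value)
    break  # stop after one record for the sketch
```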

Posted 3 weeks ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Summary: Executing direction from leadership, delivering results that align with strategic objectives, communicating critical information to other teams, managing vendor relationships, developing processes that align to organizational goals, and applying the specific technical skills required for managing a process. In this role, you will use your expertise and technical skills around cloud computing to design a scalable platform to be used across the organization. You will help build viable long-term infrastructure solutions.

Roles & Responsibilities (core responsibilities):
- Completely hands-on; help us build the next-gen platform based on the technologies mentioned below.
- Suggest improvements in automation, CI/CD practices, security, and platform services.
- Drive evaluation of different tools and execute technical feasibility assessments.

Years of Experience: 3+ years of experience with cloud technologies.

Skill Set Required - Primary Skills:
- Experience working in a GCP Cloud environment.
- Working knowledge of containerization.
- Experience setting up and working with Kubernetes clusters in production, including customizing the Kubernetes setup.
- Worked on configuring and deploying Rancher, RKE, and Flux.
- Dabbled with open-source tools like Terraform, Vault, Jenkins, etc.
- Knowledge of CI/CD automation and tools.
- Proficient in shell/bash scripting.
- Comfortable with Go/Python programming.
- Strong communication skills, both verbal and written, to develop technical documentation and presentations.

Secondary Skills:
- Drive evaluation of different tools and execute technical feasibility assessments.
- Good knowledge of Linux, and knowledge of setting up HA distributed streaming platforms such as Kafka, NoSQL databases, and Prometheus, ELK, and Pinpoint.

Posted 3 weeks ago

Apply

1 - 6 years

5 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.

Posted 1 month ago

Apply

1 - 5 years

3 - 6 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.

Posted 1 month ago

Apply

5 - 9 years

0 Lacs

Bengaluru

Work from Office

Source: Naukri

Overview: TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client's service offerings are used to create the Internet solutions that make networks possible, providing easy access to information anywhere, at any time.

Job Title: DevOps Engineer
Location: Bangalore
Duration: 5 Months
Work Type: Onsite

Job Description: 5+ years of experience required.

Requirements (must-have qualifications):
- Solid cloud infrastructure background and operational, troubleshooting, and problem-solving experience.
- Strong software development experience in Python.
- Experience in building and maintaining code distribution through automated pipelines.
- Experience in deploying and managing (IaaS) infrastructure in private/public cloud using OpenStack.
- Experience with Ansible or Puppet for configuration management.
- IaC experience: Terraform, Ansible, Git, GitLab, Jenkins, Helm, ArgoCD, Conjur/Vault.

TekWissen Group is an equal opportunity employer supporting workforce diversity.

Posted 1 month ago

Apply

5 - 8 years

9 - 19 Lacs

Hyderabad, Ahmedabad

Work from Office

Source: Naukri

DevOps Engineer - Job Description

Roles and Responsibilities:
- Responsible for managing capacity across public and private cloud resource pools, including automating scale-down/scale-up of environments.
- Improve cloud product reliability, availability, maintainability, and cost/benefit, including developing fault-tolerant tools to ensure the general robustness of the cloud infrastructure.
- Design and implement CI/CD pipeline elements to provide automated compilation, assembly, and testing of containerized and non-containerized components.
- Design and implement infrastructure solutions on GCP that are scalable, secure, and highly available.
- Automate infrastructure deployment and management using Terraform, Ansible, or equivalent tools.
- Create and maintain CI/CD pipelines for our applications.
- Monitor and troubleshoot system and application issues to ensure high availability and reliability.
- Work closely with development teams to identify and address infrastructure issues.
- Collaborate with security teams to ensure infrastructure is compliant with company policies and industry standards.
- Participate in on-call rotations to provide 24/7 support for production systems.
- Continuously evaluate and recommend new technologies and tools to improve infrastructure efficiency and performance.
- Mentor and guide junior DevOps engineers.
- Other duties as assigned.

Requirements (minimum special certifications or technical skills):
- Proficient in two or more software languages (e.g., Python, Java, Go) for designing, coding, testing, and software delivery.
- Strong knowledge of CI/CD, Jenkins, and GitHub Actions; this is more of an application DevOps engineer role than an infrastructure DevOps engineer role.
- Strong knowledge of Maven and SonarQube.
- Strong knowledge of scripting and some knowledge of Java.
- Strong knowledge of ArgoCD and Helm.
- Hands-on experience with Google Cloud Platform (GCP) and its services such as Compute Engine, Cloud Storage, Kubernetes Engine, Cloud SQL, Cloud Functions, etc.
- Strong understanding of infrastructure-as-code principles and tools such as Terraform, Ansible, or equivalent.
- Experience with CI/CD tools such as Jenkins, GitLab CI, or equivalent.
- Strong understanding of networking concepts such as DNS, TCP/IP, and load balancing.
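As an editorial illustration of the infrastructure-as-code automation this listing mentions, here is a minimal sketch driving Terraform from a pipeline step via the standard init/plan/apply sequence; the working directory is a placeholder.

```python
# Minimal sketch: run Terraform's init/plan/apply from a pipeline step.
# TF_DIR is a placeholder path to Terraform configuration.
import subprocess

TF_DIR = "infrastructure/gcp/prod"  # placeholder

def tf(*args: str) -> None:
    """Run a Terraform command in TF_DIR and fail the step on a non-zero exit."""
    subprocess.run(["terraform", *args], cwd=TF_DIR, check=True)

tf("init", "-input=false")
tf("plan", "-input=false", "-out=tfplan")
tf("apply", "-input=false", "-auto-approve", "tfplan")
```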

Posted 1 month ago

Apply

8 - 12 years

25 - 30 Lacs

Mumbai

Work from Office

Source: Naukri

Job Summary: We are seeking a skilled and motivated System Programmer to join our IT Infrastructure team. This role is responsible for the installation, configuration, maintenance, and performance of critical enterprise systems including Linux servers, Apache HTTP Server, and Oracle WebLogic. The ideal candidate will have strong scripting abilities and experience with writing SQL queries to support operational and development teams.

Key Responsibilities:
- Install, configure, and maintain Linux operating systems, Apache HTTP Server, and Oracle WebLogic application servers in development, test, and production environments.
- Perform regular system patching and software updates to ensure platform security and stability.
- Develop and maintain automation scripts (e.g., Bash, Python, or similar) to streamline system management tasks.
- Write and optimize SQL queries to support reporting, troubleshooting, and system integration needs.
- Monitor system performance and implement tuning improvements to maximize availability and efficiency.
- Work closely with development, QA, and operations teams to support application deployments and troubleshoot system-related issues.
- Maintain accurate system documentation, including configurations, procedures, and troubleshooting guides.
- Participate in an on-call rotation and respond to incidents as required.

Required Qualifications:
- Overall 8-12 years of experience.
- Proven experience with Linux system administration (RHEL, CentOS, or equivalent).
- Hands-on experience with Apache HTTP Server and Oracle WebLogic.
- Proficiency in scripting languages such as Bash, Python, or Perl.
- Strong understanding of SQL and relational databases (e.g., Oracle, MySQL).
- Familiarity with system monitoring tools and performance tuning.
- Knowledge of security best practices and patch management procedures.
- Excellent troubleshooting, analytical, and problem-solving skills.
- Strong communication skills and ability to work in a collaborative team environment.

Preferred Qualifications:
- Experience with CI/CD pipelines, Ansible, ArgoCD, or other automation tools.
- Exposure to cloud environments (e.g., AWS, Azure) or container technologies (e.g., Docker, Kubernetes).

Posted 1 month ago

Apply

3 - 8 years

5 - 10 Lacs

Hyderabad, Chennai

Work from Office

Source: Naukri

The Impact you will have in this role: The Systems Engineering family is responsible for the entire technical effort to evolve and verify solutions that satisfy client needs. The primary focus is centered on reducing risk and improving the efficiency, performance, stability, security, and quality of all systems and platforms. The Systems Engineering role specializes in analysis, evaluation, design, testing, implementation, support, and debugging of all middleware, mainframe, and distributed platforms, tools, and systems of the firm.

Your Primary Responsibilities:
- Proficiency using the Linux platform and hands-on experience with scripting languages; shell scripting, Python, and Ansible are a must.
- Experience with AWS services and handling infrastructure as code using Terraform and pipelines.
- Experience with containerization using platforms like Docker, Kubernetes, and OCP.
- Experience using code repositories like GitHub and Bitbucket, plus Jenkins and CI/CD pipelines.
- Knowledge of SSL/TLS certificates, Autosys job scheduling, and basic networking concepts like firewalls and load balancing.
- Experience with DataPower setup, installation, and configuration is desirable.
- Experience working with vendors, opening tickets, and following up with them to resolve issues.
- Experience with Sterling Connect:Direct and Control Center Monitor configuration and setup is desirable.

Qualifications:
- Minimum of 3+ years of related experience.
- Bachelor's degree preferred or equivalent experience.

Talents Needed for Success:
- Programming skills using Python and Unix shell scripting are required.
- Knowledge of CI/CD tool sets is required (Jenkins, Ansible, Terraform, ArgoCD).
- Familiarity with multi-cloud or hybrid-cloud environments.
- Fosters a culture where honesty and transparency are expected.
- Stays current on changes in their own specialist area and seeks out learning opportunities to ensure knowledge is up to date.
- Collaborates well within and across teams; communicates openly with team members and others.
- Open to learning and adapting to new technologies and tools.
- Strong problem-solving and communication skills.

Posted 1 month ago

Apply

2 - 4 years

4 - 6 Lacs

Pune

Work from Office

Source: Naukri

We are looking for a DevOps Engineer.

You'll make a difference by having:
- Very good knowledge and experience with containerization and cluster-management infrastructure setup and maintenance (Kubernetes, vCluster, Docker, Helm, KubeVela).
- Very good knowledge and experience with cloud (AWS preferred, with VPC, Subnets, ELB, Secrets Manager, EBS snapshots, EC2 security groups, ECS, CloudWatch, and SQS).
- Very good knowledge and experience administering Linux clients and servers.
- Experience working with data storage using DynamoDB, RDS PostgreSQL, and S3.
- Good experience and confidence with code versioning (Git preferred).
- Experience in automation with programming and IaC scripts (Python / Shell / Terraform).
- Experience with SSO setup and user management with Keycloak and/or Okta SSO.
- Experience in service mesh monitoring with Istio, Kiali, Grafana, Loki, and Prometheus.
- Experience with GitOps setup and management for ArgoCD / FluxCD.
- Being a good team player.

Desired Skills:
- 2 to 4 years of experience is required.
- Great communication skills.
- Analytical and problem-solving skills.
- CKA preferred.

Make your mark in our exciting world at Siemens. This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come.
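As an editorial illustration of the AWS automation tasks this listing mentions (EBS snapshots among them), here is a minimal sketch assuming boto3 and valid AWS credentials; the region, volume ID, and tags are placeholders.

```python
# Minimal sketch: create and tag an EBS snapshot with boto3.
# Region, volume ID, and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",       # placeholder volume ID
    Description="Nightly backup (sketch)",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "managed-by", "Value": "automation-sketch"}],
    }],
)
print("Created snapshot:", snapshot["SnapshotId"])
```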

Posted 2 months ago

Apply

4 - 6 years

12 - 20 Lacs

Chennai

Hybrid

Source: Naukri

About the Role: We are hiring a DevOps / Production Support Engineer to take full ownership of the production infrastructure. We're looking for a technically sharp and strategically minded engineer who can quickly understand the existing functions.

Responsibilities:
- Design and manage production infrastructure across Vercel, AWS, and Kinde.
- Build and maintain CI/CD pipelines to enable automated, zero-downtime production deployments.
- Review and replicate UAT setups to create a robust and resilient production environment.
- Implement best practices for infrastructure security, secrets management, and access control.
- Set up monitoring, alerting, and logging to ensure platform reliability and performance.
- Manage and back up MySQL/PostgreSQL databases with clear recovery procedures.
- Own incident response processes, including triage, root cause analysis, and post-incident automation.

Must-Have Skills & Experience:
- CI/CD Pipelines: Experience with GitHub Actions, GitLab CI/CD, or equivalent.
- AWS: Hands-on with services like EC2, Lambda, ECS, RDS (MySQL/PostgreSQL), IAM, and networking.
- Vercel: Strong understanding of deploying modern frontends (e.g., React/Next.js) on Vercel.
- Frontend CI/CD Lifecycle: Experience building and automating the deployment of frontend apps.
- Authentication: Knowledge of Kinde, Auth0, or similar OAuth-based identity providers.
- Database Ops: Experience managing and backing up MySQL or PostgreSQL in production environments.
- Infrastructure as Code (IaC): Proven experience using Terraform, AWS CDK, or CloudFormation.
- Secrets Management: Familiarity with AWS Secrets Manager, SSM Parameter Store, or HashiCorp Vault.
- Automation: Proficiency in Bash, Python, or Node.js for scripting and automation.
- Monitoring/Observability: Comfortable setting up tools like CloudWatch, Datadog, or Prometheus/Grafana.

Who You Are:
- Honest: You own up to mistakes, communicate transparently, and value integrity above all.
- Humble: You work with others without ego, respect different viewpoints, and always stay curious.
- Hungry: You're self-driven, eager to learn new systems, and motivated to deliver the right solutions.
- Collaborative: You can work across vendors, product teams, and internal stakeholders without friction.
- Detail-Oriented: You don't leave loose ends; you make sure things are done properly the first time.
- Reliable: You follow through on commitments and take pride in your craftsmanship.

Preferred Qualifications:
- DevSecOps approach to security, automation, and compliance.
- Prior experience in fast-paced or startup-like environments.

What You'll Love:
- Full ownership of a modern production infrastructure.
- Opportunity to shape scalable and secure DevOps practices from the ground up.
- Work on a high-impact product in the sports tech space.
- Collaborative and innovative culture focused on delivery and quality.
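As an editorial illustration of the database backup responsibility this listing describes, here is a minimal sketch that takes a timestamped PostgreSQL dump with pg_dump; the host, user, database name, and output path are placeholders.

```python
# Minimal sketch: a scheduled PostgreSQL backup using pg_dump's custom format,
# restorable later with pg_restore. Host, user, database, and path are placeholders.
import subprocess
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
outfile = f"/var/backups/appdb-{stamp}.dump"

subprocess.run(
    [
        "pg_dump",
        "--host", "db.example.internal",   # placeholder host
        "--username", "backup_user",       # placeholder role; auth via PGPASSWORD or .pgpass
        "--format", "custom",
        "--file", outfile,
        "appdb",                           # placeholder database name
    ],
    check=True,
)
print("Backup written to", outfile)
```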

Posted 2 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office

Source: Naukri

The core infrastructure team is responsible for this infrastructure, spread across 10 production deployments around the globe, running 24/7 with four nines of uptime. Our infrastructure is managed using Terraform (for IaC) and GitLab CI, and monitored using Prometheus and Datadog.

We're looking for you if:
- You are a strong infrastructure engineer with a specialty in networking and site reliability.
- You have strong networking fundamentals (DNS, subnets, VPN, VPCs, security groups, NATs, Transit Gateway, etc.).
- You have extensive and deep experience (~4 years) with IaaS cloud providers. AWS is ideal, but GCP/Azure would be fine too.
- You have experience running cloud orchestration technologies like Kubernetes and/or Cloud Foundry, and designing highly resilient architectures for these.
- You have strong knowledge of Unix/Linux fundamentals.
- You have experience with infrastructure-as-code tools, ideally Terraform or OpenTofu, but CloudFormation or Pulumi are fine too.
- You have experience designing cross cloud/on-prem connectivity and observability.
- You have a DevOps mindset: you build it, you run it.
- You care about code quality and know how to lead by example: from a clean Git history to well-thought-out unit and integration tests.

Even better (but not essential!) if you have:
- Experience with monitoring tools that we use, such as Datadog and Prometheus.
- Experience with CI/CD tooling such as GitLab CI.
- Programming experience with (ideally) Golang or Python.
- Willingness and ability to use your technical expertise to mentor, train, and lead other engineers.

You'll help drive digital innovation by:
- Continually improving our security and operational excellence.
- Working directly with customers to set up connectivity between the Mendix Cloud platform and customers' backend infrastructure.
- Rapidly scaling our infrastructure to match our rapidly increasing customer base.
- Continuously improving the observability of our platform, so that we can fix problems before they occur.
- Improving our automation and surrounding tooling to further streamline deployments and platform upgrades.
- Improving the way we use AWS resources and defining cost optimization strategies.

Here are many of the tools we make use of:
- Amazon Web Services (EC2, Fargate, RDS, S3, ELB, VPC, CloudWatch, Lambda, IAM, and more!)
- PaaS: (open source) Kubernetes, Docker, Open Service Broker API
- Eventing: AWS MSK and Confluent WarpStream BYOK
- Monitoring: Prometheus, InfluxDB, Grafana, Datadog
- CI/CD: GitLab CI, ArgoCD
- Automation: Terraform, Helm
- Programming languages: mostly Golang and Python, with a sprinkling of Ruby and Lua
- Scripting: Bash, Python
- Version control: Git + GitLab
- Database: PostgreSQL

Posted 2 months ago

Apply

12 - 16 years

40 - 45 Lacs

Pune

Work from Office

Source: Naukri

We seek an outstanding software architect with a can-do attitude to join us on our SaaS transformation journey. As part of this journey, you will develop the applications and tools required to build, deploy, and operate Teamcenter on AWS. You will contribute as an architect and a full-stack developer, working on the back end, front end, IaaS, DevOps, and more.

We are looking for:
- A strong architect with experience in SaaS transformation
- A strong software developer
- An SRE and/or DevOps approach
- A master troubleshooter
- Good communication
- A lifelong learner
- A can-do attitude

Required Skills:
- 12+ years of software development experience (Java, Python, or similar).
- Can create, understand, communicate, and critique architectures and software designs.
- Can conceive, publish, and provide governance over the principles, guidelines, and guardrails of the architecture.
- Can approach decisions and trade-offs with transparency.
- Has efficiently developed fully automated systems for operating in the cloud using Kubernetes.
- Experience with infrastructure-as-code tools, ideally Terraform and Ansible.
- Proficient in networking components (subnets, VPN, VPCs, security groups, NATs, etc.).
- Has worked with CI/CD tooling such as GitLab CI, ArgoCD, etc.
- Can do system design for solutions in the cloud (can you build an MVP of Dropbox?).
- Understands system design principles and microservices architecture.
- Familiarity with Linux internals.
- Expertise in AWS and Azure services, as well as container architecture.
- Can interpret and critique distributed system designs.
- An advanced-level programmer: can build the full stack/backend for applications and deploy to the cloud, and writes clear, legible, maintainable code.
- Aware of code smells and can call them out through code reviews.
- Has followed TDD/BDD principles and built automated tests; proven understanding of testing strategies and tools.
- Good design background: strong in object-oriented design and development, skilled in choosing efficient data structures and algorithms, and understands design patterns and smells.
- Willingness to use your technical expertise to mentor, train, and lead other specialists.

Even better if:
- You are proficient in cybersecurity architecture and processes.
- You have hands-on experience with the Teamcenter platform.

Posted 2 months ago

Apply

5 - 8 years

25 - 35 Lacs

Chennai, Bengaluru

Hybrid

Source: Naukri

We are looking for candidates with experience in the following:
- 5+ years of experience.
- Proficient in Python and shell scripting.
- Proficient in GitHub Actions, Jenkins, ArgoCD, and Argo Rollouts.
- Proficient in Docker, Kubernetes, OpenShift, and Istio.
- Experience with AWS and Terraform.
- Experience with monitoring solutions like Prometheus/Grafana.
- Experience with logging solutions like Elasticsearch and Splunk.
- Knowledge of Temporal is nice to have.

Posted 2 months ago

Apply