
966 GitOps Jobs - Page 22

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Andhra Pradesh

On-site

Position: Tech Lead, API Platform Team (Azure, Terraform, Kubernetes)

Overview: We are seeking a highly skilled Tech Lead to spearhead our API Platform Team. This role demands deep expertise in Azure services and advanced skills in Terraform and Kubernetes.

Key Responsibilities and Requirements: Proven experience as a Tech Lead or in a senior role working with Azure, Kubernetes, and Terraform. Expert-level knowledge of and experience with Azure services such as AKS, APIM, Application Gateway, Front Door, Load Balancers, Azure SQL, Event Hub, Application Insights, ACR, Key Vault, VNet, Prometheus, Grafana, Storage Account, Monitoring, Notification Hub, VMs, DNS, and more. Expert-level, hands-on experience designing and implementing complex Terraform modules for Azure and Kubernetes environments, incorporating providers such as azurerm, azapi, kubernetes, and helm. Expert-level, hands-on experience deploying and managing Kubernetes clusters (AKS), with a deep understanding of Helm chart authoring, Helm deployments, AKS add-ons, application troubleshooting, monitoring with Prometheus and Grafana, GitOps, and more. Lead application troubleshooting and performance tuning, and ensure high availability and resilience of APIs deployed in AKS and exposed internally and externally through APIM. Drive GitOps and APIOps practices for continuous integration and deployment strategies. Strong analytical and problem-solving skills with keen attention to detail. Excellent leadership skills and the ability to take ownership and deliver platform requirements at pace.

Qualifications: Bachelor's or master's degree in Computer Science, Engineering, or a related field. Certifications in Azure, Kubernetes, and Terraform are highly preferred.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
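For a sense of the Terraform-driven workflow this listing describes, here is a minimal, illustrative Python wrapper around the Terraform CLI. The module path, workspace name, and var-file are hypothetical, and it assumes the terraform binary is on PATH; a pipeline stage would typically run something like this once per environment.

```python
import subprocess
from pathlib import Path


def run_terraform(module_dir: str, workspace: str, var_file: str) -> None:
    """Plan and apply a Terraform module non-interactively (paths are hypothetical)."""
    cwd = Path(module_dir)
    # Initialize the providers the module declares (e.g., azurerm, azapi, kubernetes, helm).
    subprocess.run(["terraform", "init", "-input=false"], cwd=cwd, check=True)
    # Select the target workspace (dev/stage/prod); create it if it does not exist yet.
    if subprocess.run(["terraform", "workspace", "select", workspace], cwd=cwd).returncode != 0:
        subprocess.run(["terraform", "workspace", "new", workspace], cwd=cwd, check=True)
    # Produce a plan file, then apply exactly that plan.
    subprocess.run(
        ["terraform", "plan", f"-var-file={var_file}", "-out=tfplan", "-input=false"],
        cwd=cwd, check=True,
    )
    subprocess.run(["terraform", "apply", "-input=false", "tfplan"], cwd=cwd, check=True)


if __name__ == "__main__":
    run_terraform("modules/aks-platform", "dev", "env/dev.tfvars")  # hypothetical names
```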

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000, and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

Required Skills and Experience: 4+ years of solid background as a Kubernetes Engineer (preferably with experience in Azure Kubernetes Service). In-depth knowledge of Kubernetes concepts, architecture, components, and APIs. Practical experience with container technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes, Helm, Helmfile). Hands-on experience with monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK stack). Strong programming abilities in at least one language (e.g., shell scripting, YAML, Python, Java). Familiarity with CI/CD pipelines and GitOps methodologies. Excellent problem-solving abilities and strong communication skills.

Education: BE/MCA

Why PTC? Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world. It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us?

Website: https://www.ptc.com LinkedIn: https://www.linkedin.com/company/ptcinc/ Facebook Page: https://www.facebook.com/ptc.inc/ Twitter Handle: @LifeatPTC @PTC Instagram: ptc_inc

We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
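As a small illustration of working against the Kubernetes API that this role centres on, the sketch below uses the official kubernetes Python client to flag pods that are not healthy. It assumes a reachable cluster and a local kubeconfig; it is illustrative only, not part of the posting.

```python
from kubernetes import client, config


def report_unhealthy_pods() -> None:
    """List pods that are not Running/Succeeded across all namespaces."""
    # Loads the local kubeconfig; inside a cluster, use config.load_incluster_config() instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")


if __name__ == "__main__":
    report_unhealthy_pods()
```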

Posted 1 month ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description: We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response.

Requirements: Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 12+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills.

Preferred Qualifications: Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux).

Contract Type: Permanent (CDI). At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 month ago

Apply

7.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description: We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response.

Requirements: Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 7-12 years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills.

Preferred Qualifications: Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux).

Contract Type: Permanent (CDI). At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description: We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response.

Requirements: Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 3+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills.

Preferred Qualifications: Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux).

Contract Type: Permanent (CDI). At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description: We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response.

Requirements: Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 7-12 years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills.

Preferred Qualifications: Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux).

Employee Type: Permanent. UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
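The Prometheus and Grafana monitoring this listing mentions generally starts with services exposing metrics for Prometheus to scrape. A minimal sketch using the prometheus_client library follows; the metric names, values, and port are illustrative only.

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical application metrics that a Prometheus server could scrape.
REQUESTS_TOTAL = Counter("demo_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics on port 8000
    while True:
        REQUESTS_TOTAL.inc()                  # count one simulated request
        QUEUE_DEPTH.set(random.randint(0, 50))  # simulate a fluctuating queue
        time.sleep(1)
```

A Grafana dashboard would then query these series (e.g., a rate over demo_requests_total) from the Prometheus data source.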

Posted 1 month ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About SAI Group: SAI Group is a private investment firm that has committed $1 billion to incubate and scale revolutionary AI-powered enterprise software application companies. Our portfolio, a testament to our success, comprises rapidly growing AI companies that collectively cater to over 2,000 major global customers, approach $600 million in annual revenue, and employ a global workforce of over 4,000 individuals. SAI Group invests in new ventures based on breakthrough AI-based products that have the potential to disrupt existing enterprise software markets. SAI Group’s latest investment, JazzX AI, is a pioneering technology company building a platform that will not only shape the future of enterprise AI applications but also offer practical solutions to real-world challenges.

Job Summary: We are seeking a seasoned DevOps / Cloud Architect with deep expertise in Azure and/or AWS to define and implement cloud architecture, DevOps automation, and operational-excellence strategies for enterprise-scale applications. The ideal candidate will lead cloud infrastructure design and CI/CD frameworks, and guide best practices across reliability, security, and cost-efficiency.

Essential Skills: Terraform, Azure, CI/CD pipelines, IaC, Docker, Kubernetes. Very strong Linux knowledge and troubleshooting skills; scripting using Bash, Python, PowerShell, Ansible; Windows Terminal Services, AD, LDAP; change, problem, and incident management; implementation awareness of vulnerability/penetration testing and security; tools and frameworks used for monitoring, performance management, and logging; CI/CD pipelines; SRE, including Datadog.

Desired Skills: Hands-on experience in cloud technology (Azure or AWS; Azure preferred); strong networking skills.

Key Responsibilities: Provide technical expertise and leadership when needed to SaaS Operations and Production Operations teams. Help implement the Cloud Operations team's goals and deliverables as determined by JazzX leadership. Ensure smooth operation of JazzX SaaS products. Take complete ownership of customer implementations, including SLAs and SLOs. Automate, enhance, and maintain critical processes in Cloud Operations, such as change control and monitoring & alerting. Drive critical processes in SaaS Operations such as change control, problem and incident management, and reporting, as well as key tools for monitoring and alerting. Drive disaster recovery and failover procedures, training, testing, and team readiness. Coordinate focus groups across all teams on process improvements and technical improvements that lead to better stability and reliability. Lead and mentor a high-performing team of DevOps engineers across Azure and AWS cloud platforms. Design and manage CI/CD pipelines using Azure DevOps, GitHub Actions, or AWS CodePipeline/CodeBuild. Automate infrastructure using Terraform, CloudFormation, or Bicep/ARM templates. Manage container orchestration using Kubernetes (AKS/EKS) and implement GitOps workflows. Define and implement monitoring, alerting, and logging solutions using CloudWatch, Azure Monitor, Prometheus, Grafana, or Datadog. Optimize cloud costs and resource usage through governance policies, tagging strategies, and FinOps practices. Implement cloud security best practices, identity and access management, secrets management, and policy-as-code. Drive operational excellence by setting up proactive alerting, incident response, RCA, and continuous improvement. Collaborate cross-functionally to embed DevOps and SRE principles into the software development lifecycle. Stay current with cloud platform enhancements and recommend strategic improvements.

Required Skills & Qualifications: Bachelor's or master's degree in Computer Science, Engineering, or a related discipline. 12+ years of experience in DevOps, Cloud, or Platform Engineering roles, including 3+ years in a technical leadership or DevOps lead role. Strong expertise in either Azure or AWS, with working knowledge of the other. Azure: Azure DevOps, AKS, App Services, Azure Monitor, Key Vault, ARM/Bicep. AWS: EC2, ECS/EKS, S3, Lambda, CloudFormation, IAM, CloudWatch. Expertise in Infrastructure as Code using Terraform or native tools. Experience with containerization (Docker) and orchestration (Kubernetes). Proficiency in scripting languages (Bash, PowerShell, Python). Hands-on experience with observability, CI/CD automation, and deployment strategies (blue/green, canary). Deep knowledge of IAM, networking (VNet/VPC, DNS, firewalls), and secrets management. Strong understanding of DevSecOps and cloud compliance (SOC 2, HIPAA, ISO 27001).

Why Join Us: At JazzX AI, you have the opportunity to become an integral part of a pioneering team that is pushing the envelope of AI capabilities to create a future driven by autonomous intelligence. We champion bold innovation and continuous learning, and embrace the challenges and rewards of crafting something genuinely groundbreaking. Your work will directly contribute to pioneering solutions that have the potential to transform industries and redefine how we interact with technology. As an early member of our team, your voice will be pivotal in steering the direction of our projects and culture, offering a unique chance to leave your mark on the future of AI. We offer a competitive salary, equity options, and an attractive benefits package, including health, dental, and vision insurance, flexible working arrangements, and more. We are an equal opportunity employer and celebrate diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
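One concrete flavour of the tagging-governance and FinOps work mentioned above is auditing resources for required tags. The sketch below uses boto3 and is purely illustrative: the tag keys and region are made up, and AWS credentials are assumed to be configured in the environment.

```python
import boto3

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # illustrative policy, not the company's


def find_untagged_instances(region: str = "us-east-1") -> list[str]:
    """Return EC2 instance IDs missing any of the required governance tags."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {t["Key"].lower() for t in instance.get("Tags", [])}
                if not REQUIRED_TAGS <= tag_keys:  # subset check: all required keys present?
                    offenders.append(instance["InstanceId"])
    return offenders


if __name__ == "__main__":
    print("Instances missing required tags:", find_untagged_instances())
```

A similar check could be wired into a scheduled pipeline or policy-as-code tooling to report drift continuously.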

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 322342. NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer - Azure to join our team in Bangalore, Karnataka (IN-KA), India (IN). NTT DATA, Inc. is seeking a talented DevOps Engineer to join our dynamic team. As a leading solution provider, we are committed to delivering exceptional solutions to our clients. Our success is driven by the dedication and expertise of our employees, who play a vital role in shaping our growth and staying ahead of the competition. By joining our team, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our clients' success.

Primary Responsibilities of this role: As a DevOps Engineer you will be responsible for the smooth operation of our customers' infrastructure. You will collaborate closely with internal teams and client organizations, focusing on automation, continuous integration/delivery (CI/CD), infrastructure management, and collaboration to improve software delivery speed and quality for our clients.

What you will do: Support the GCP environment. Engage in Azure DevOps administration. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Respond to platform performance and availability issues. Open and follow tickets with vendor product owners. Manage license assignment and allocation. Install approved Azure Marketplace plugins. Provide general support to app teams for supported DevOps tools. Troubleshoot Azure DevOps issues related to DevOps toolsets and deployment capabilities. Work the general backlog of support tickets. Manage and support artifact management (JFrog). Manage and support SonarQube. Operate as a member of global, distributed teams that deliver quality services. Collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues. Rapidly gain knowledge of emerging cloud technologies and their potential application in the customer environment / impact analysis.

What you will bring: 5+ years of experience in IT. 2-3 years of experience in GCP, GKE, and Azure DevOps, as well as general DevOps toolsets. Azure DevOps administration. Experience with Agile and Scrum concepts. Solid working knowledge of GitOps concepts, CI/CD pipeline design, and tools including Azure DevOps, Git, Jenkins, JFrog, and SonarQube. Ability to work independently and as a productive team member, actively participating in agile ceremonies. Ability to identify potential issues and take ownership of resolving existing issues. Strong analytical skills, a curious nature, and strong written and verbal communication skills. Team-oriented, motivated, and enthusiastic, with the willingness to take initiative and maintain a positive approach. Ability to work with a virtual team. Excellent communication, presentation, and relationship-building skills. Solid understanding of enterprise operational processes, planning, and operations.

Preferred Skills - Good to have: Grafana experience is a plus. Jira (ticketing tool) is good to have.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 322341. NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer - Azure to join our team in Bangalore, Karnataka (IN-KA), India (IN). NTT DATA, Inc. is seeking a talented DevOps Engineer to join our dynamic team. As a leading solution provider, we are committed to delivering exceptional solutions to our clients. Our success is driven by the dedication and expertise of our employees, who play a vital role in shaping our growth and staying ahead of the competition. By joining our team, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our clients' success.

Primary Responsibilities of this role: As a DevOps Engineer you will be responsible for the smooth operation of our customers' infrastructure. You will collaborate closely with internal teams and client organizations, focusing on automation, continuous integration/delivery (CI/CD), infrastructure management, and collaboration to improve software delivery speed and quality for our clients.

What you will do: Support the GCP environment. Engage in Azure DevOps administration. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Respond to platform performance and availability issues. Open and follow tickets with vendor product owners. Manage license assignment and allocation. Install approved Azure Marketplace plugins. Provide general support to app teams for supported DevOps tools. Troubleshoot Azure DevOps issues related to DevOps toolsets and deployment capabilities. Work the general backlog of support tickets. Manage and support artifact management (JFrog). Manage and support SonarQube. Operate as a member of global, distributed teams that deliver quality services. Collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues. Rapidly gain knowledge of emerging cloud technologies and their potential application in the customer environment / impact analysis.

What you will bring: 5+ years of experience in IT. 2-3 years of experience in GCP, GKE, and Azure DevOps, as well as general DevOps toolsets. Azure DevOps administration. Experience with Agile and Scrum concepts. Solid working knowledge of GitOps concepts, CI/CD pipeline design, and tools including Azure DevOps, Git, Jenkins, JFrog, and SonarQube. Ability to work independently and as a productive team member, actively participating in agile ceremonies. Ability to identify potential issues and take ownership of resolving existing issues. Strong analytical skills, a curious nature, and strong written and verbal communication skills. Team-oriented, motivated, and enthusiastic, with the willingness to take initiative and maintain a positive approach. Ability to work with a virtual team. Excellent communication, presentation, and relationship-building skills. Solid understanding of enterprise operational processes, planning, and operations.

Preferred Skills - Good to have: Grafana experience is a plus. Jira (ticketing tool) is good to have.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 1 month ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description: We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response.

Requirements: Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 12+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills.

Preferred Qualifications: Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux).

Employee Type: Permanent. UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description: We are seeking a skilled and proactive DevOps Engineer to join our team. The ideal candidate will have hands-on experience with CI/CD pipelines, container orchestration using Kubernetes, and infrastructure monitoring using tools such as Grafana and Prometheus. You will play a key role in automating and optimizing our development and production environments to ensure high availability, scalability, and performance.

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Manage and maintain Kubernetes clusters in cloud (e.g., GCP GKE) and/or on-prem environments. Monitor system performance and availability using Grafana, Prometheus, and other observability tools. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like Terraform or Helm. Collaborate with development, QA, and operations teams to ensure smooth and reliable software releases. Troubleshoot and resolve issues in dev, test, and production environments. Enforce security and compliance best practices across infrastructure and deployment pipelines. Participate in on-call rotations and incident response.

Requirements: Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience). 3+ years of experience in a DevOps, Site Reliability Engineering (SRE), or related role. Proficient in building and maintaining CI/CD pipelines. Strong experience with Kubernetes and containerization (Docker). Hands-on experience with monitoring and alerting tools such as Grafana, Prometheus, Loki, or ELK Stack. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Solid scripting skills (e.g., Bash, Python, or Go). Familiarity with configuration management tools such as Ansible, Chef, or Puppet is a plus. Excellent problem-solving and communication skills.

Preferred Qualifications: Certifications in Kubernetes (CKA/CKAD) or cloud providers (AWS/GCP/Azure). Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux).

Employee Type: Permanent. UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Requisition ID # 25WD89379

Position Overview: Join Autodesk as a Senior Observability Engineer driving the architecture, scale, and evolution of our global observability platform engineering team. You will lead technical strategy, platform innovation, and cross-functional collaboration to elevate telemetry across the engineering org.

Responsibilities: Architect scalable, secure, and resilient logging platforms across hybrid/multi-cloud. Lead OpenTelemetry adoption with standardized instrumentation and deployment models. Define robust onboarding strategies for cloud, hybrid, and edge telemetry sources. Integrate observability tooling (Splunk, Dynatrace) to deliver unified insights. Evaluate emerging technologies and develop custom observability solutions as needed. Contribute to open-source or internal observability tooling and standards. Drive documentation, training, and cross-team knowledge sharing. Collaborate with app teams to embed observability into new service architectures. Work with security teams on logging compliance, threat detection, and governance. Partner with platform teams on CI/CD observability integration and enterprise telemetry architecture.

Minimum Qualifications: Bachelor's in CS, Engineering, or a related field. 5-8 years in Observability, SRE, or DevOps with deep logging platform expertise. Advanced skills in Splunk (admin, dev, architect) and OpenTelemetry in production. Strong Python or Go development background; expert in Linux and networking. Hands-on with AWS (EC2, ECS, Lambda, S3) and containerized environments (Kubernetes, service mesh). Proven experience in designing large-scale, secure observability systems.

Preferred Qualifications: Splunk Admin, Architect, or Developer certifications. Contributions to OpenTelemetry or other open-source observability projects. Experience with multiple platforms (Datadog, Elastic, Prometheus, New Relic). Exposure to machine learning in observability (e.g., predictive alerting). Familiarity with logging-related compliance standards (SOC 2, GDPR, PCI). Proficiency with IaC and GitOps-based deployments. Background in multi-cloud and hybrid cloud telemetry strategies. Technical leadership or project ownership experience. Strong communicator with writing/speaking experience in observability forums. #LI-MR2

Learn More About Autodesk: Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk - our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us!

Salary transparency: Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also place significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging: We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).

Posted 1 month ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Location: Bangalore / Hyderabad / Chennai / Pune / Gurgaon. Mode: Hybrid (3 days/week from office). Relevant Experience: 7+ years (must). Role Type: Individual Contributor. Client: US-based multinational banking institution.

Role Summary: We are hiring a seasoned DevOps Engineer (IC) to drive infrastructure automation, deployment reliability, and engineering velocity for AWS-hosted platforms. You’ll play a hands-on role in building robust CI/CD pipelines, managing Kubernetes (EKS or equivalent), and implementing GitOps, infrastructure as code, and monitoring systems.

Must-Have Skills & Required Depth:

AWS Cloud Infrastructure: Independently provisioned core AWS services (EC2, VPC, S3, RDS, Lambda, SNS, ECR) using the CLI and Terraform. Configured IAM roles, security groups, tagging standards, and cost monitoring dashboards. Familiar with basic networking and serverless deployment models.

Containerization (EKS / Kubernetes): Deployed containerized services to Amazon EKS or equivalent. Authored Helm charts; configured ingress controllers, pod autoscaling, resource quotas, and health probes. Troubleshot deployment rollouts, service routing, and network policies.

Infrastructure as Code (Terraform / Ansible / AWS SAM): Created modular Terraform configurations with remote state, reusable modules, and drift detection. Implemented Ansible playbooks for provisioning and patching. Used AWS SAM for packaging and deploying serverless workloads.

GitOps (Argo CD / equivalent): Built and managed GitOps pipelines using Argo CD or similar tools. Configured application sync policies, rollback strategies, and RBAC for deployment automation.

CI/CD (Bitbucket / Jenkins / Jira): Developed multi-stage pipelines covering build, test, scan, and deploy workflows. Used YAML-based pipeline-as-code and integrated Jira workflows for traceability.

Scripting (Bash / Python): Wrote scripts for log rotation, backups, service restarts, and automated validations. Experienced in handling conditional logic, error management, and parameterization.

Operating Systems (Linux): Proficient in Ubuntu/CentOS system management, package installation, and performance tuning. Configured Apache or NGINX for reverse proxying, SSL, and redirects.

Datastores (MySQL / PostgreSQL / Redis): Managed relational and in-memory databases for application integration, backup handling, and basic performance tuning.

Monitoring & Alerting (tool-agnostic): Configured metrics collection, alert rules, and dashboards using tools like CloudWatch, Prometheus, or equivalent. Experience designing actionable alerts and telemetry pipelines.

Incident Management & RCA: Participated in on-call rotations. Handled incident bridges, triaged failures, communicated status updates, and contributed to root cause analysis and postmortems.

Nice-to-Have Skills (skill and expected depth):
- Kustomize / FluxCD: Exposure to declarative deployment strategies using Kustomize overlays or FluxCD for GitOps workflows.
- Kafka: Familiarity with event-streaming architecture and basic integration/configuration of Kafka clusters in application environments.
- Datadog (or equivalent): Experience with Datadog for monitoring, logging, and alerting. Configured custom dashboards, monitors, and anomaly detection.
- Chaos Engineering: Participated in fault-injection or resilience testing exercises. Familiar with chaos tools or simulations for validating system durability.
- DevSecOps & Compliance: Exposure to integrating security scans in pipelines and secrets management, and contributing to compliance audit readiness.
- Build Tools (Maven / Gradle / NPM): Experience integrating build tools with CI systems. Managed dependency resolution, artifact versioning, and caching strategies.
- Backup / DR Tooling (Veeam / Commvault): Familiar with backup scheduling, data restore processes, and supporting DR drills or RPO/RTO planning.
- Certifications (AWS / Terraform): Certifications such as AWS Certified DevOps Engineer, AWS Certified Developer Associate, or HashiCorp Certified Terraform Associate are preferred.
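To illustrate the kind of parameterized validation scripting this listing calls out under Scripting (Bash / Python), here is a small, self-contained Python sketch. The health-check URLs are placeholders; the non-zero exit code is what a CI/CD stage would typically key off.

```python
import argparse
import sys
import urllib.error
import urllib.request


def check_endpoint(url: str, timeout: float) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError) as exc:
        print(f"FAIL {url}: {exc}", file=sys.stderr)
        return False


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Post-deploy endpoint validation")
    parser.add_argument("urls", nargs="+", help="Health-check URLs, e.g. https://example.internal/healthz")
    parser.add_argument("--timeout", type=float, default=5.0)
    args = parser.parse_args()
    ok = all(check_endpoint(u, args.timeout) for u in args.urls)
    sys.exit(0 if ok else 1)  # non-zero exit fails the pipeline stage
```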

Posted 1 month ago

Apply

1.0 - 6.0 years

2 - 7 Lacs

Gurugram

Work from Office

Lead Cloud Strategy

Job Summary: We are seeking a motivated Lead Cloud Strategy to support and drive cloud transformation initiatives within the telecommunications sector. This role involves formulating cloud strategies, supporting deployment architectures, and guiding cloud-native application modernization using OpenStack, OpenShift, and public cloud platforms (AWS, Azure, or GCP). You will work closely with cross-functional teams to enable scalable, secure, and cost-efficient cloud solutions tailored to telecom environments.

Key Responsibilities: Collaborate with senior architects and business stakeholders to define and refine the organization’s cloud strategy in the telecom domain. Lead small to mid-sized cloud transformation initiatives from a technical strategy perspective. Evaluate current telecom IT and network infrastructure for cloud migration opportunities. Support the design and deployment of hybrid/multi-cloud architectures integrating OpenStack, OpenShift, and public cloud solutions. Assist in building cloud-native platforms for telecom workloads, including VNF/CNF orchestration and management. Ensure cloud solutions meet compliance, security, and performance standards aligned with telecom regulations. Stay current on industry trends and emerging cloud technologies relevant to telco environments. Contribute to the development of proof-of-concepts (PoCs), solution demos, and customer proposals.

Required Skills & Qualifications: Bachelor's degree in Electronics and Communication, Engineering, or a related field. 5+ years of experience in cloud infrastructure and/or cloud strategy roles. Hands-on experience with OpenStack (deployment, orchestration, lifecycle management). Working knowledge of OpenShift for containerized workloads and DevOps pipelines. Experience with at least one major public cloud platform (AWS, Azure, GCP). Proficiency in networking technologies: IP networking, VPNs, DNS, load balancing, and firewalling. Understanding of telecom architecture (5G, 4G, NFV, SDN) and cloud-native evolution. Understanding of telco MPBN architecture (IP networking). Strong analytical and problem-solving skills. Excellent communication and stakeholder management abilities.

Preferred Qualifications: Certifications such as Red Hat Certified Specialist (OpenShift), Certified Kubernetes Administrator (CKA), and AWS/GCP/Azure cloud certifications. Experience in telco network transformation (e.g., 5G core, edge computing, MEC). Exposure to CI/CD, GitOps, and SRE practices in telecom environments.

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary: We are seeking a Senior Cloud Developer with 6 to 10 years of experience to join our team. The ideal candidate will have expertise in Azure IaC (ARM, Bicep), Azure Functions, Azure Policy, Azure DevOps Pipelines, GitOps, Azure AKS, Azure Firewall, and Azure DevOps. Experience in Retail Banking is a plus. This is a hybrid role with day shifts and no travel required.

Responsibilities: Develop and maintain cloud infrastructure using Azure IaC (ARM and Bicep) to ensure scalable and reliable solutions. Implement and manage Azure Functions to support serverless computing needs. Enforce and manage Azure policies to ensure compliance and governance across cloud resources. Design and maintain Azure DevOps pipelines to streamline CI/CD processes and improve deployment efficiency. Utilize GitOps practices to manage and deploy applications in a consistent and automated manner. Oversee the deployment and management of Azure AKS clusters to support containerized applications. Configure and manage Azure Firewall to ensure secure and controlled access to cloud resources. Collaborate with cross-functional teams to integrate Azure DevOps practices into development workflows. Provide technical guidance and support to team members on cloud development best practices. Monitor and optimize cloud infrastructure performance to ensure high availability and cost-effectiveness. Troubleshoot and resolve issues related to cloud infrastructure and services. Stay updated with the latest Azure technologies and industry trends to continuously improve cloud solutions. Contribute to the company's purpose by developing secure and efficient cloud solutions that enhance business operations and impact society positively.

Qualifications: Must have strong experience with Azure IaC (ARM, Bicep), Azure Functions, Azure Policy, Azure DevOps Pipelines, GitOps, Azure AKS, Azure Firewall, and Azure DevOps. Nice to have: experience in the Retail Banking domain to better understand industry-specific requirements. Must have excellent problem-solving skills and the ability to troubleshoot complex issues. Must have strong communication skills to collaborate effectively with cross-functional teams. Must have the ability to work independently and manage multiple tasks simultaneously. Nice to have: experience with hybrid work models and the ability to adapt to changing work environments.

Certifications Required: Azure Solutions Architect Expert, Azure DevOps Engineer Expert.
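Since the role calls for Azure Functions, a minimal HTTP-triggered function in Python (classic programming model) is sketched below purely for illustration; it assumes the usual function.json HTTP binding and is not part of the posting.

```python
import json
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP-triggered function: echoes a greeting for the supplied name (illustrative)."""
    logging.info("Processing request")
    name = req.params.get("name")
    if not name:
        try:
            name = req.get_json().get("name")  # fall back to a JSON body
        except ValueError:
            name = None
    if not name:
        return func.HttpResponse("Pass a 'name' query parameter or JSON field.", status_code=400)
    return func.HttpResponse(json.dumps({"message": f"Hello, {name}"}), mimetype="application/json")
```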

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better. With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds, and we collaborate closely to provide transformative software products, innovative thinking, and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting edge of engineering, infrastructure, and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers.

Job Summary: As a Cloud Platform Architect, you will work under the guidance of the Cloud Infrastructure Architect to design, enhance, and maintain our Azure-based platform architecture. You’ll contribute to the platform’s reliability, scalability, and security, and support a DevSecOps culture through strong collaboration, infrastructure-as-code practices, and continuous improvement of the tooling and automation landscape.

Key Responsibilities: Assist in the design and implementation of platform components within Azure, including AKS, network configurations, and supporting cloud-native services. Maintain and enhance Kubernetes infrastructure and GitOps tooling (e.g., Flux). Collaborate with DevOps, Site Reliability Engineers, and Cloud Operations teams to ensure seamless delivery and support of platform capabilities. Implement infrastructure-as-code (IaC) using tools such as Bicep or Terraform. Participate in architectural reviews and contribute to technical documentation and standards. Monitor platform performance and recommend optimizations for cost and reliability. Ensure that platform deployments align with security, governance, and compliance frameworks. Support incident response, troubleshooting, and root cause analysis for platform-related issues.

Skills and Qualifications: 3–5 years of experience in cloud engineering or architecture roles. Hands-on experience with Azure services, especially AKS, networking, and security configurations. Familiarity with GitOps practices using tools like Flux or ArgoCD. Experience with IaC tools like Terraform or Bicep. Proficiency in scripting languages such as PowerShell, Bash, or Python. Understanding of containerization and orchestration (Docker, Kubernetes). Basic familiarity with CI/CD pipelines and DevOps workflows. Strong problem-solving and communication skills. Able to work in a collaborative, fast-paced environment.

RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, and a credit check. Certain positions dealing with sensitive and/or third-party personal data may involve additional background check criteria. RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.
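GitOps with Flux, as this listing mentions, is often verified programmatically by reading Flux's Kustomization custom resources. The sketch below uses the kubernetes Python client's CustomObjectsApi; it assumes Flux is installed in the cluster and that the Kustomization API version is v1 (adjust for the installed Flux release), and the resource name is hypothetical.

```python
from kubernetes import client, config


def flux_kustomization_ready(name: str, namespace: str = "flux-system") -> bool:
    """Report whether a Flux Kustomization has a Ready=True condition (API version assumed)."""
    config.load_kube_config()
    api = client.CustomObjectsApi()
    obj = api.get_namespaced_custom_object(
        group="kustomize.toolkit.fluxcd.io",
        version="v1",  # adjust to match the Flux version installed in the cluster
        namespace=namespace,
        plural="kustomizations",
        name=name,
    )
    for cond in obj.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False


if __name__ == "__main__":
    print("apps ready:", flux_kustomization_ready("apps"))  # "apps" is a hypothetical name
```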

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better. With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds and we collaborate closely to provide transformative software products, innovative thinking and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting-edge of engineering, infrastructure and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers. Job Description Job Summary: As a Cloud Platform Architect, you will work under the guidance of the Cloud Infrastructure Architect to design, enhance, and maintain our Azure-based platform architecture. You’ll contribute to the platform’s reliability, scalability, and security, and support a DevSecOps culture through strong collaboration, infrastructure-as-code practices, and continuous improvement of the tooling and automation landscape. Key Responsibilities Assist in the design and implementation of platform components within Azure, including AKS, network configurations, and supporting cloud-native services. Maintain and enhance Kubernetes infrastructure and GitOps tooling (e.g., Flux). Collaborate with DevOps, Site Reliability Engineers, and Cloud Operations teams to ensure seamless delivery and support of platform capabilities. Implement infrastructure-as-code (IaC) using tools such as Bicep or Terraform. Participate in architectural reviews and contribute to technical documentation and standards. Monitor platform performance and recommend optimizations for cost and reliability. Ensure that platform deployments align with security, governance, and compliance frameworks. Support incident response, troubleshooting, and root cause analysis for platform-related issues. Skills And Qualifications 3–5 years of experience in cloud engineering or architecture roles. Hands-on experience with Azure services, especially AKS, networking, and security configurations. Familiarity with GitOps practices using tools like Flux or ArgoCD. Experience with IaC tools like Terraform or Bicep. Proficiency in scripting languages such as PowerShell, Bash, or Python. Understanding of containerization and orchestration (Docker, Kubernetes). Basic familiarity with CI/CD pipelines and DevOps workflows. Strong problem-solving and communication skills. Able to work in a collaborative, fast-paced environment. RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, credit check. 
Certain positions dealing with sensitive and/or third party personal data may involve additional background check criteria. RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.
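To give a concrete flavor of the GitOps and AKS work this role describes, here is a minimal Python sketch of a drift check between the images declared in Git and what is actually running in a cluster, the kind of reconciliation a tool like Flux automates. It assumes the `kubernetes` client library and a reachable kubeconfig; the namespace, deployment name, and registry path are hypothetical, not taken from the posting.

    # Illustrative only: GitOps-style drift detection against a Kubernetes cluster.
    # Assumes `pip install kubernetes` and a kubeconfig with access to the cluster.
    from kubernetes import client, config

    DESIRED = {  # what Git says should be running (hypothetical values)
        "platform-api": "myregistry.azurecr.io/platform-api:1.4.2",
    }

    def find_drift(namespace: str = "platform") -> list[str]:
        config.load_kube_config()            # or load_incluster_config() when run in-cluster
        apps = client.AppsV1Api()
        drift = []
        for dep in apps.list_namespaced_deployment(namespace).items:
            want = DESIRED.get(dep.metadata.name)
            if want is None:
                continue
            have = dep.spec.template.spec.containers[0].image
            if have != want:
                drift.append(f"{dep.metadata.name}: cluster={have} git={want}")
        return drift

    if __name__ == "__main__":
        for line in find_drift():
            print("DRIFT:", line)

In a real setup Flux or ArgoCD performs this comparison continuously and corrects the drift; the script only illustrates the underlying idea.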

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Cloud Platform Engineer Specialist, ACS Song
Management Level: Level 9 Specialist
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: AWS, Terraform
Good-to-have skills: Hybrid Cloud
Experience: 8-12 years of experience is required
Educational Qualification: Graduation

Job Summary
Within our Cloud Platforms & Managed Services Solution Line, we apply an agile approach to provide true on-demand cloud platforms. We implement and operate secure cloud and hybrid global infrastructures using automation techniques for our clients' business-critical application landscape. As a Cloud Platform Engineer, you are responsible for implementing cloud and hybrid global infrastructures using infrastructure-as-code.

Roles And Responsibilities
- Implement cloud and hybrid infrastructures using infrastructure-as-code.
- Automate provisioning and maintenance for streamlined operations.
- Design and estimate infrastructure with an emphasis on observability and security.
- Establish CI/CD pipelines for seamless application deployment.
- Ensure data integrity and security through robust mechanisms.
- Implement backup and recovery procedures for data protection.
- Build self-service systems for enhanced developer autonomy.
- Collaborate with development and operations teams for platform optimization.

Professional And Technical Skills
- Customer-focused communicator adept at engaging cross-functional teams.
- Cloud infrastructure expert in AWS, Azure, or GCP.
- Proficient in Infrastructure as Code with tools like Terraform.
- Experienced in container orchestration (Kubernetes, OpenShift, Docker Swarm).
- Skilled in observability tools like Prometheus and Grafana; competent in log aggregation tools (Loki, ELK, Graylog); familiar with tracing systems such as Tempo.
- CI/CD and GitOps savvy, ideally with knowledge of Argo CD or Flux.
- Automation proficiency in Bash and high-level languages (Python, Golang).
- Linux, networking, and database knowledge for robust infrastructure management.
- Hybrid cloud experience is a plus.

Additional Information
About Our Company | Accenture
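Since the posting above stresses observability with Prometheus and Grafana, here is a minimal Python sketch of a service exposing metrics for Prometheus to scrape. It assumes `pip install prometheus-client`; the metric names and port are hypothetical placeholders, not from the original text.

    # Illustrative only: exposing request-count and latency metrics for Prometheus.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
    LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

    @LATENCY.time()
    def handle_request() -> None:
        time.sleep(random.uniform(0.01, 0.2))   # stand-in for real work
        REQUESTS.labels(status="200").inc()

    if __name__ == "__main__":
        start_http_server(9100)                 # metrics served on :9100/metrics
        while True:
            handle_request()

A Prometheus scrape job pointed at port 9100 would then feed these series into Grafana dashboards and alert rules.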

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

As a Lead DevOps Engineer at Ameriprise India, you will have the opportunity to be an advocate for DevOps best practices and build scalable infrastructure to provide a world-class experience to our clients. In this role, you will also be responsible for influencing the DevOps roadmap to increase speed to market.

Key Responsibilities
- Implement and adopt best practices for DevSecOps, Continuous Integration, Continuous Deployment and Continuous Testing (both server side and client side).
- Design a NextGen application strategy using cloud-native architectures.
- Build scalable, efficient cloud and on-premise infrastructure.
- Implement monitoring for automated system health checks.
- Build CI/CD pipelines; train and guide teams on DevSecOps best practices.
- Work with engineers to drive issues to resolution during application instability.
- Maintain and implement change management control procedures and processes for UAT/QA and production releases.
- Implement test automation (UI/API) integration with CI/CD for extensive test coverage and enable metrics collection.
- Work with multiple/distributed teams following Agile practices.

Required Qualifications
- 8+ years of overall industry experience building infrastructure and release management activities.
- 4+ years of experience in DevOps (Continuous Integration, Continuous Deployment and Continuous Testing).
- Strong code/scripting skills for IaaS automation.
- Linux, Unix, and Windows operating systems.
- Configuration management tools such as Ansible, Terraform.
- Containerization tools such as Vagrant, Kubernetes, Docker.
- Container orchestration tools like Marathon, Kubernetes, EKS or ECS.
- Cloud/IaaS environments like AWS/GCP.
- Monitoring/alerting tools such as Sumo Logic, CloudWatch, Prometheus.
- SCM tools like Bitbucket/Git and any productivity plugins.
- Code quality and security tools like SonarQube/Black Duck/Veracode.
- Performance tools like PageSpeed, Google Lighthouse.
- Test and build systems such as Jenkins, Maven, JFrog Artifactory/Nexus.
- Understanding of network topologies and hardware.
- Knowledge of load balancers (F5, NGINX) and firewalls.

Preferred Qualifications
- CNCF experience with GitOps principles.
- Package and deploy (CI/CD) single-page apps as well as mono-repos following best practices.
- Knowledge of CDNs.
- Experience in cloud migration.
- Knowledge of load balancers and reverse proxies.

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law.
Full-Time/Part-Time: Full time
Timings: (2:00p-10:30p) India
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
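The role above calls for monitoring with automated system health checks. As an illustration only, here is a minimal Python health-check script using the `requests` library; the endpoint URLs are hypothetical placeholders, and a production version would feed results into the team's alerting tooling rather than just printing.

    # Illustrative only: a simple automated health check over HTTP endpoints.
    import sys
    import requests

    ENDPOINTS = [
        "https://example.internal/api/health",
        "https://example.internal/web/health",
    ]

    def check(url: str, timeout: float = 3.0) -> bool:
        try:
            return requests.get(url, timeout=timeout).status_code == 200
        except requests.RequestException:
            return False

    if __name__ == "__main__":
        failures = [u for u in ENDPOINTS if not check(u)]
        for u in failures:
            print(f"UNHEALTHY: {u}")     # a real pipeline would page or raise an alert here
        sys.exit(1 if failures else 0)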

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Cloud Platform Engineer Specialist, ACS Song
Management Level: Level 9 Specialist
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: AWS, Terraform
Good-to-have skills: Hybrid Cloud
Experience: 8-12 years of experience is required
Educational Qualification: Graduation

Job Summary
Within our Cloud Platforms & Managed Services Solution Line, we apply an agile approach to provide true on-demand cloud platforms. We implement and operate secure cloud and hybrid global infrastructures using automation techniques for our clients' business-critical application landscape. As a Cloud Platform Engineer, you are responsible for implementing cloud and hybrid global infrastructures using infrastructure-as-code.

Roles And Responsibilities
- Implement cloud and hybrid infrastructures using infrastructure-as-code.
- Automate provisioning and maintenance for streamlined operations.
- Design and estimate infrastructure with an emphasis on observability and security.
- Establish CI/CD pipelines for seamless application deployment.
- Ensure data integrity and security through robust mechanisms.
- Implement backup and recovery procedures for data protection.
- Build self-service systems for enhanced developer autonomy.
- Collaborate with development and operations teams for platform optimization.

Professional And Technical Skills
- Customer-focused communicator adept at engaging cross-functional teams.
- Cloud infrastructure expert in AWS, Azure, or GCP.
- Proficient in Infrastructure as Code with tools like Terraform.
- Experienced in container orchestration (Kubernetes, OpenShift, Docker Swarm).
- Skilled in observability tools like Prometheus and Grafana; competent in log aggregation tools (Loki, ELK, Graylog); familiar with tracing systems such as Tempo.
- CI/CD and GitOps savvy, ideally with knowledge of Argo CD or Flux.
- Automation proficiency in Bash and high-level languages (Python, Golang).
- Linux, networking, and database knowledge for robust infrastructure management.
- Hybrid cloud experience is a plus.

Additional Information
About Our Company | Accenture
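This listing also highlights backup and recovery procedures on AWS. As a small, purely illustrative Python sketch under that theme, the snippet below uses boto3 to flag S3 buckets without versioning enabled; it assumes AWS credentials in the environment, and nothing in it comes from the original posting.

    # Illustrative only: audit which S3 buckets lack versioning (a common backup safeguard).
    import boto3

    def buckets_without_versioning() -> list[str]:
        s3 = boto3.client("s3")
        missing = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            versioning = s3.get_bucket_versioning(Bucket=name)
            if versioning.get("Status") != "Enabled":
                missing.append(name)
        return missing

    if __name__ == "__main__":
        for name in buckets_without_versioning():
            print("versioning disabled:", name)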

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Who we are.
Newfold Digital (with over $1b in revenue) is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes: Bluehost, Crazy Domains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs.

We’re hiring for our Developer Platform team at Newfold Digital, a team focused on building the internal tools, infrastructure, and systems that improve how our engineers develop, test, and deploy software. In this role, you’ll help design and manage CI/CD pipelines, scale Kubernetes-based infrastructure, and drive adoption of modern DevOps and GitOps practices. You’ll work closely with engineering teams across the company to improve automation, deployment velocity, and overall developer experience. We’re looking for someone who can take ownership, move fast, and contribute to a platform that supports thousands of deployments across multiple environments.

What you'll do & how you'll make your mark.
- Build and maintain scalable CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Manage and improve Kubernetes clusters (Helm, Kustomize) used across environments.
- Implement GitOps workflows using Argo CD or Argo Workflows.
- Automate infrastructure provisioning and configuration with Terraform and Ansible.
- Develop scripts and tooling in Bash, Python, or Go to reduce manual effort and improve reliability.
- Work with engineering teams to streamline and secure the software delivery process.
- Deploy and manage services across cloud platforms (AWS, GCP, Azure, OCI).

Who you are & what you'll need to succeed.
- Strong understanding of core DevOps concepts including CI/CD, GitOps, and Infrastructure as Code.
- Hands-on experience with Docker, Kubernetes, and container orchestration.
- Proficiency with at least one major cloud provider (AWS, Azure, GCP, or OCI).
- Experience writing and managing Jenkins pipelines or similar CI/CD tools.
- Comfortable working with Terraform, Ansible, or other configuration management tools.
- Strong scripting skills (Bash, Python, Go) and a mindset for automation.
- Familiarity with Linux-based systems and cloud-native infrastructure.
- Ability to work independently and collaboratively across engineering and platform teams.

Good to Have
- Experience with build tools like Gradle or Maven.
- Familiarity with Bitbucket or Git-based workflows.
- Prior experience with Argo CD or other GitOps tooling.
- Understanding of internal developer platforms and shared libraries.
- Prior experience with agile development and project management.

Why you’ll love us.
- We’ve evolved; we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment.
- Work-life balance. Our work is thrilling and meaningful, but we know balance is key to living well.
- We celebrate one another’s differences. We’re proud of our culture of diversity and inclusion.
- We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees.
- We have 20+ affinity groups where you can network and connect with Newfolders globally.
- We care about you. At Newfold, taking care of our employees is our top priority, and we make sure cutting-edge benefits are in place for you. Some of the benefits you will have: we have partnered with some of the best insurance providers to offer you excellent health insurance options, education/certification sponsorships to give you a chance to further your knowledge, flexi-leaves to take personal time off, and much more.
- Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog as we sponsor the domain registration costs.

Where can we take you? We’re fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold!
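For a concrete sense of the CI/CD tooling work this role describes, here is a minimal Python sketch that triggers a GitHub Actions workflow run through the REST API. The repository, workflow file name, and token variable are hypothetical; it assumes the `requests` library and a token with workflow-dispatch permission.

    # Illustrative only: dispatching a GitHub Actions workflow from a script.
    import os
    import requests

    OWNER, REPO, WORKFLOW = "example-org", "example-service", "deploy.yml"

    def dispatch(ref: str = "main") -> None:
        url = (f"https://api.github.com/repos/{OWNER}/{REPO}"
               f"/actions/workflows/{WORKFLOW}/dispatches")
        resp = requests.post(
            url,
            headers={
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github+json",
            },
            json={"ref": ref},
            timeout=10,
        )
        resp.raise_for_status()    # GitHub answers 204 No Content on success

    if __name__ == "__main__":
        dispatch()

In a GitOps setup the same deployment would more often be driven by a commit to the environment repository, with Argo CD reconciling the change; this script only shows the API-driven variant.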

Posted 1 month ago

Apply

10.0 - 15.0 years

0 Lacs

India

On-site

Company Description
Extreme Compute is India's first secure cloud service provider that combines high-speed computing with banking-grade security, storage solutions, multi-cloud, disaster recovery, and APM. Our integrated solutions eliminate the need for separate procurement and integration of each element. At Extreme Compute, we address the growing need for enterprise-grade cloud solutions that offer flexibility, scalability, simplicity, and vendor independence.

Role Description
We are looking to hire a seasoned Ansible Automation Expert with 10-15 years of experience in Linux infrastructure and automation. The ideal candidate will have deep command over Ansible, strong scripting ability, and a practical understanding of OpenSCAP or related compliance automation tools. This is a strategic role driving automation-first infrastructure and ensuring secure, scalable deployment practices.

Key Responsibilities
- Design, build, and maintain Ansible playbooks, roles, and modules for infrastructure provisioning, configuration management, and application deployment.
- Lead infrastructure-as-code (IaC) practices across multi-environment setups (dev/test/prod).
- Collaborate with security teams to integrate OpenSCAP or other SCAP tools for compliance automation.
- Ensure idempotency, scalability, and reusability in all automation artifacts.
- Automate Linux system hardening, patching, and audit logging as per industry benchmarks.
- Provide technical leadership and mentorship to junior engineers on Ansible and automation best practices.
- Work closely with DevOps, Security, and Cloud teams for continuous integration and delivery.

Required Skills
- 10-15 years of overall IT infrastructure experience with a focus on Linux (RHEL/SLES/Ubuntu).
- Proven expertise in Ansible Core & Tower/AWX, including dynamic inventories, roles, conditionals, Jinja2 templating, and error handling.
- Experience in automating compliance/security policies using tools like OpenSCAP, SCAP Security Guide, or CIS Benchmarks.
- Strong knowledge of YAML, Bash/Shell scripting, and secure secrets handling (Ansible Vault, HashiCorp Vault).
- Deep understanding of CI/CD pipelines, GitOps workflows, and integration with Jenkins, GitLab, or similar tools.
- Solid grasp of RBAC, MFA, audit trails, and secure SSH practices in automation.

Preferred Skills (Good-to-have)
- Certifications like Red Hat Certified Specialist in Ansible Automation or RHCE.
- Hands-on with cloud platforms (AWS, Azure, GCP) and hybrid deployments.
- Familiarity with containerized environments (Docker, Kubernetes) and automating them with Ansible.
- Experience with monitoring, logging, and backup tools (Zabbix, ELK, Restic, etc.).

Why Join Us
- Work with cutting-edge automation and compliance tooling.
- Be part of a security-first, innovation-led infrastructure team.
- Opportunity to lead and shape automation practices in a high-impact role.
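As a small illustration of the automation-first workflow this posting describes, the Python sketch below wraps an Ansible dry run so a hardening playbook can be reviewed before it is applied. The playbook and inventory file names are hypothetical; it assumes `ansible-playbook` is installed and on PATH.

    # Illustrative only: run an Ansible playbook in check mode with diffs for review.
    import subprocess
    import sys

    def dry_run(playbook: str = "harden.yml", inventory: str = "inventory.ini") -> int:
        cmd = [
            "ansible-playbook", playbook,
            "-i", inventory,
            "--check",   # report what would change without changing anything
            "--diff",    # show file/template diffs alongside the report
        ]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        sys.exit(dry_run())

Running the same playbook without --check would then apply the hardening changes, which is how a CI pipeline might gate the real run behind a reviewed dry run.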

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Sr DevOps Engineer (Minimum Eight Years Of Total Experience)

EXL Digital is seeking a skilled DevOps Engineer dedicated to advancing operational excellence through automation and optimized engineering processes. In this role, you will ensure the seamless integration of development and operational processes, enabling highly available, scalable, and secure systems. Collaborating with application development teams, you will help define infrastructure requirements and foster efficient, reliable software lifecycle management. Your expertise in cloud platforms, automation, and infrastructure management will be pivotal in achieving our objectives.

As a Senior DevOps Engineer, you will:
- Design and build engineering platforms and tools to streamline application development, deployment, and operations.
- Develop automation for routine development and operational tasks to enhance productivity and reduce manual intervention.
- Monitor system performance and availability, proactively addressing any issues.
- Work closely with software and data engineering teams to implement operationally sustainable solutions to engineering challenges.
- Perform other duties as assigned.

Qualifications
- Over 8 years of professional experience in engineering roles.
- Extensive hands-on experience with AWS services, including Lambda, EC2, EKS, S3, CloudFront, RDS.
- Proficient in automation and infrastructure management using Terraform.
- Experience in operational roles (e.g., Cloud Operations, Site Reliability, DevOps).
- Proficiency in GitOps-based deployments and CI/CD pipelines (e.g., GitHub, Jenkins, Bitbucket Pipelines, etc.).
- Skilled in logging and observability technologies.
- Expertise in public cloud environments (AWS).
- Solid background in maintaining and troubleshooting Linux-based systems.
- Strong analytical and problem-solving skills, with excellent communication and collaboration abilities.
- Strong communication, consultation, analytical and leadership skills.
- Experience working in Agile (Scrum) environments and familiarity with iterative development cycles.
- Ability to work with various stakeholders across various geographies.
- Excellent team player as well as an individual contributor when required.
- Master's or bachelor's degree, preferably with an engineering background.
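For a taste of the AWS-focused operational work this role lists, here is a minimal Python sketch that uses boto3 to confirm an EKS cluster is healthy and report its Kubernetes version. The cluster name is hypothetical, and credentials are assumed to come from the environment or an instance profile.

    # Illustrative only: quick operational check of an EKS cluster with boto3.
    import boto3

    def cluster_status(name: str = "platform-eks") -> str:
        eks = boto3.client("eks")
        cluster = eks.describe_cluster(name=name)["cluster"]
        return f"{cluster['name']}: {cluster['status']} (k8s {cluster['version']})"

    if __name__ == "__main__":
        print(cluster_status())   # e.g. "platform-eks: ACTIVE (k8s 1.29)"

A real runbook would wire a check like this into the team's observability stack rather than running it ad hoc.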

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description
We are seeking a Java Developer with expertise in Prompt Engineering to join our AI-driven development team. The ideal candidate will combine robust Java backend development capabilities with hands-on experience in integrating and fine-tuning LLMs (e.g., OpenAI, Cohere, Mistral, or Anthropic), designing effective prompts, and embedding AI functionality into enterprise applications. This role is ideal for candidates passionate about merging traditional enterprise development with cutting-edge AI technologies.

Key Responsibilities
- Design, develop, and maintain scalable backend systems using Java (Spring Boot) and integrate AI/LLM services.
- Collaborate with AI/ML engineers and product teams to design prompt templates, test prompt effectiveness, and iterate for accuracy, performance, and safety.
- Build and manage RESTful APIs that interface with LLM services and microservices in production-grade environments.
- Fine-tune prompt formats for various AI tasks (e.g., summarization, extraction, Q&A, chatbots) and optimize for performance and cost.
- Apply RAG (Retrieval-Augmented Generation) patterns to retrieve relevant context from data stores for LLM input.
- Ensure secure, efficient, and scalable communication between LLM APIs (OpenAI, Google Gemini, Azure OpenAI, etc.) and internal systems.
- Develop reusable tools and frameworks to support prompt evaluation, logging, and improvement cycles.
- Write high-quality unit tests, conduct code reviews, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab.
- Work in Agile/Scrum teams and contribute to sprint planning, estimation, and retrospectives.

Must-Have Technical Skills

Java & Backend Development:
- Core Java 8/11/17
- Spring Boot, Spring MVC, Spring Data JPA
- RESTful APIs, JSON, Swagger/OpenAPI
- Hibernate or other ORM tools
- Microservices architecture

Prompt Engineering / LLM Integration:
- Experience working with OpenAI (GPT-4, GPT-3.5), Claude, Llama, Gemini, or Mistral models
- Designing effective prompts for various tasks (classification, summarization, Q&A, etc.)
- Familiarity with prompt chaining and zero-shot/few-shot learning
- Understanding of token limits, temperature, top_p, and stop sequences
- Prompt evaluation methods and frameworks (e.g., LangChain, LlamaIndex, Guidance, PromptLayer)

AI Integration Tools:
- LangChain or LlamaIndex for building LLM applications
- API integration with AI platforms (OpenAI, Azure AI, Hugging Face, etc.)
- Vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB)

DevOps / Deployment:
- Docker, Kubernetes (preferred)
- CI/CD tools (Jenkins, GitHub Actions)
- AWS/GCP/Azure cloud environments

Monitoring:
- Prometheus, Grafana, ELK Stack

Good-to-Have Skills
- Python for prototyping AI workflows
- Chatbot development using LLMs
- Experience with RAG pipelines and semantic search
- Hands-on with GitOps, IaC (Terraform), or serverless functions
- Experience integrating LLMs into enterprise SaaS products
- Knowledge of Responsible AI and bias mitigation strategies

Soft Skills
- Strong problem-solving and analytical thinking
- Excellent written and verbal communication skills
- Willingness to learn and adapt in a fast-paced, AI-evolving environment
- Ability to mentor junior developers and contribute to tech strategy

Education
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Preferred Certifications (Not Mandatory)
- OpenAI Developer or Azure AI Certification
- Oracle Certified Java Professional
- AWS/GCP Cloud Certifications

(ref:hirist.tech)
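To make the prompt-engineering knobs mentioned above (temperature, top_p, stop sequences) concrete, here is a minimal Python sketch of a summarization prompt using the OpenAI SDK; the posting itself targets Java, so treat this purely as a prototyping-style illustration. It assumes `pip install openai` and OPENAI_API_KEY in the environment, and the model name is an assumption, not something from the posting.

    # Illustrative only: a templated summarization prompt with explicit sampling controls.
    from openai import OpenAI

    client = OpenAI()

    PROMPT_TEMPLATE = (
        "Summarize the following support ticket in two sentences, "
        "then list the affected product on a new line prefixed with 'Product:'.\n\n"
        "Ticket:\n{ticket}"
    )

    def summarize(ticket: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",            # assumed model; substitute whatever you use
            messages=[
                {"role": "system", "content": "You are a concise support analyst."},
                {"role": "user", "content": PROMPT_TEMPLATE.format(ticket=ticket)},
            ],
            temperature=0.2,                # low randomness for extraction-style tasks
            top_p=1.0,
            stop=["\n\n"],                  # cut off trailing elaboration
        )
        return (response.choices[0].message.content or "").strip()

    if __name__ == "__main__":
        print(summarize("Checkout page times out for EU users since the 2.3 release."))

In a RAG variant, retrieved context from a vector store would be inserted into the template alongside the ticket before the call is made.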

Posted 1 month ago

Apply

15.0 years

0 Lacs

Chandigarh, India

On-site

Job Description

Job Summary
We are seeking a seasoned Observability Architect to define and lead our end-to-end observability strategy across highly distributed, cloud-native, and hybrid environments. This role requires a visionary leader with deep hands-on experience in New Relic and a strong working knowledge of other modern observability platforms like Datadog, Prometheus/Grafana, Splunk, OpenTelemetry, and more. You will design scalable, resilient, and intelligent observability solutions that empower engineering, SRE, and DevOps teams to proactively detect issues, optimize performance, and ensure system reliability. This is a senior leadership role with significant influence over platform architecture, monitoring practices, and cultural transformation across global teams.

Key Responsibilities
- Architect and implement full-stack observability platforms, covering metrics, logs, traces, synthetics, real user monitoring (RUM), and business-level telemetry using New Relic and other tools like Datadog, Prometheus, ELK, or AppDynamics.
- Design and enforce observability standards and instrumentation guidelines for microservices, APIs, front-end applications, and legacy systems across hybrid cloud environments.
- Drive OpenTelemetry adoption, ensuring vendor-neutral, portable observability implementations where appropriate.
- Build multi-tool dashboards, health scorecards, SLOs/SLIs, and integrated alerting systems tailored for engineering, operations, and executive consumption.
- Collaborate with engineering and DevOps teams to integrate observability into CI/CD pipelines, GitOps, and progressive delivery workflows.
- Partner with platform, cloud, and security teams to provide end-to-end visibility across AWS, Azure, GCP, and on-prem systems.
- Lead root cause analysis, system-wide incident reviews, and reliability engineering initiatives to reduce MTTR and improve MTBF.
- Evaluate, pilot, and implement new observability tools/technologies aligned with enterprise architecture and scalability requirements.
- Deliver technical mentorship and enablement, evangelizing observability best practices and nurturing a culture of ownership and data-driven decision-making.
- Drive observability governance and maturity models, ensuring compliance, consistency, and alignment with business SLAs and customer experience goals.

Required Qualifications
- 15+ years of overall IT experience, hands-on with application development, system architecture, and operations in complex distributed environments, including troubleshooting and integrating applications and other cloud technologies with observability tools.
- 5+ years of hands-on experience with observability tools such as New Relic, Datadog, Prometheus, etc., including APM, infrastructure monitoring, logs, synthetics, alerting, and dashboard creation.
- Proven experience and willingness to work with multiple observability stacks, such as Datadog, Dynatrace, AppDynamics; Prometheus, Grafana; Elasticsearch, Fluentd, Kibana (EFK/ELK); Splunk; and OpenTelemetry.
- Solid knowledge of Kubernetes, service mesh (e.g., Istio), containerization (Docker), and orchestration strategies.
- Strong experience with DevOps and SRE disciplines, including CI/CD, IaC (Terraform, Ansible), and incident response workflows.
- Fluency in one or more programming/scripting languages: Java, Python, Go, Node.js, Bash.
- Hands-on expertise in cloud-native observability services (e.g., CloudWatch, Azure Monitor, GCP Operations Suite).
- Excellent communication and stakeholder management skills, with the ability to align technical strategies with business goals.

Preferred Qualifications
- Architect-level certifications in New Relic, Datadog, Kubernetes, AWS/Azure/GCP, or SRE/DevOps practices.
- Experience with enterprise observability rollouts, including organizational change management.
- Understanding of ITIL, TOGAF, or COBIT frameworks as they relate to monitoring and service management.
- Familiarity with AI/ML-driven observability, anomaly detection, and predictive alerting.

Why Join Us?
- Lead enterprise-scale observability transformations impacting customer experience, reliability, and operational excellence.
- Work in a tool-diverse environment, solving complex monitoring challenges across multiple platforms.
- Collaborate with high-performing teams across development, SRE, platform engineering, and security.
- Influence strategy, tooling, and architecture decisions at the intersection of engineering, operations, and business.

Apply Now
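Because this role emphasizes vendor-neutral instrumentation, here is a minimal Python sketch of OpenTelemetry tracing with nested spans and a business-level attribute. It assumes `pip install opentelemetry-sdk`; the service, span, and attribute names are hypothetical, and a real deployment would export spans to New Relic, Datadog, or another backend via an OTLP exporter rather than the console.

    # Illustrative only: vendor-neutral tracing with the OpenTelemetry SDK.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")

    def place_order(order_id: str) -> None:
        with tracer.start_as_current_span("place_order") as span:
            span.set_attribute("order.id", order_id)   # business-level telemetry on the span
            with tracer.start_as_current_span("charge_card"):
                pass                                   # stand-in for the real payment call

    if __name__ == "__main__":
        place_order("A-1001")

Swapping ConsoleSpanExporter for an OTLP exporter is what keeps the instrumentation portable across the observability backends the posting lists.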

Posted 1 month ago

Apply

Featured Companies