0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the 500 largest companies in the world. Explore innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and your skills, both today and tomorrow.

Responsibilities

Job description:
- GCP experience
- Strong written and verbal communication skills
- Knowledge of Azure DevOps as well as general DevOps toolsets
- Ability to collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues
- Experience with Agile and Scrum concepts
- Solid working knowledge of GitOps concepts and CI/CD pipeline design and tools, including Azure DevOps, Git, Jenkins, JFrog, and SonarQube
- Engages in Azure DevOps administration
- Responds to platform performance and availability issues
- Opens and follows tickets with vendor product owners
- Provides general support to app teams for supported DevOps tools
- Troubleshoots Azure DevOps issues related to DevOps toolsets and deployment capabilities
- Works the general backlog of support tickets
- Manages and supports artifact management (JFrog)
- Manages and supports code quality analysis (SonarQube)

Qualifications
- Bachelor's degree or international equivalent in Computer Science or a related field - preferred
- Experience managing projects

Contract type: Permanent (CDI)

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
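Since the role centers on administering Azure DevOps pipelines and working a backlog of build-support tickets, a small triage sketch may help: it tallies build results per pipeline from records loosely modeled on the Azure DevOps Builds REST payload. The field names and record shape here are assumptions for illustration, not the exact API contract.

```python
from collections import Counter

def summarize_builds(builds):
    """Tally build results per pipeline definition.

    `builds` is a list of dicts with `definition.name` and `result`
    keys, loosely modeled on the Azure DevOps Builds API payload
    (field names assumed for this sketch).
    """
    summary = {}
    for build in builds:
        name = build["definition"]["name"]
        summary.setdefault(name, Counter())[build["result"]] += 1
    return summary

runs = [
    {"definition": {"name": "api-ci"}, "result": "succeeded"},
    {"definition": {"name": "api-ci"}, "result": "failed"},
    {"definition": {"name": "web-ci"}, "result": "succeeded"},
]
print(summarize_builds(runs))
```

A triage script along these lines would feed the support backlog: pipelines with rising failure counts get tickets opened and investigated first.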
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
Role Overview: Gruve, an innovative software services startup, is looking for an experienced Kubernetes Data Center Administrator to manage and maintain multiple infrastructure systems running Kubernetes across data centers. The ideal candidate will play a crucial role in creating, managing, and debugging Kubernetes clusters and services, ensuring operational excellence through collaboration with IT teams. This position requires deep technical expertise in Kubernetes, virtualization, and data center operations, as well as strong experience in ITSM platforms and compliance management.

Key Responsibilities:
- Design, deploy, and maintain multiple Kubernetes clusters across data center environments.
- Manage and troubleshoot Kubernetes services including MinIO (object storage), Prometheus (monitoring), Istio (service mesh), MongoDB, and PostgreSQL (databases).
- Collaborate with IT teams to support operational needs such as change management, patch and software update cycles, data protection, disaster recovery planning, DCIM systems, compliance audits, and reporting.
- Diagnose and resolve complex Kubernetes configuration issues.
- Modify platform components and scripts to enhance reliability and performance.
- Administer and integrate multiple ITSM platforms for asset management, change management, incident management, and problem management.
- Maintain detailed documentation of Kubernetes environments and operational procedures.
- Ensure systems meet regulatory and organizational compliance standards.

Qualifications:
- 8-10 years of experience in Kubernetes administration and virtualization technologies.
- Proven experience managing production-grade Kubernetes clusters and services.
- Strong understanding of data center operations and infrastructure systems.
- Hands-on experience with ITSM platforms (e.g., Jira Service Management).
- Proficiency in scripting (e.g., Bash, Python) and automation tools.
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana).
- Experience with disaster recovery planning and compliance audits.
- At least one CNCF Kubernetes certification (e.g., CKA, CKS, CKAD).
- Experience with container security and policy enforcement preferred.
- Familiarity with GitOps workflows and tools like ArgoCD or Flux preferred.
- Knowledge of infrastructure-as-code tools (e.g., Terraform, Ansible) preferred.

Note: The job description also includes information about the company, its culture, and the work environment, which has been omitted for brevity.
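For the cluster-debugging side of a role like this, a toy health check: it filters a pod list for pods outside the Running/Succeeded phases, assuming input shaped like `kubectl get pods -o json` output. This is a sketch of the idea, not a production readiness probe.

```python
import json

def unhealthy_pods(pod_list_json):
    """Name every pod whose phase is not Running or Succeeded.

    `pod_list_json` is a JSON string shaped like the output of
    `kubectl get pods -o json`; only `metadata.name` and
    `status.phase` are consulted in this sketch.
    """
    items = json.loads(pod_list_json)["items"]
    return [
        pod["metadata"]["name"]
        for pod in items
        if pod["status"]["phase"] not in ("Running", "Succeeded")
    ]

sample = json.dumps({"items": [
    {"metadata": {"name": "minio-0"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "mongodb-1"}, "status": {"phase": "Pending"}},
]})
print(unhealthy_pods(sample))  # the stuck Pending pod surfaces here
```

In practice a check like this would be the first pass of an on-call runbook: list the outliers, then drill into events and container statuses for each.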
Posted 3 days ago
0 years
0 Lacs
India
Remote
About Juniper Square
Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets and, as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you.

Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai, and Bangalore for employees who prefer to work in an office some or all of the time.

About Your Role
Juniper Square is industry-leading in the transformation of private capital markets through innovation and technology, and we are expanding globally to keep up with the demand for innovation! We are looking for a Staff Site Reliability Engineer to help us grow our domain expertise and provide support in a new global region to enable 24x7 development velocity as a global company. From AWS cloud provisioning as code, to improving the developer experience in your working timezone, to acting as a guide to best practices around building and delivering software globally, we need an SRE with the passion, motivation, and great ideas to make everything better.

What You'll Do
- Automate the provisioning of all of Juniper Square's infrastructure in code. Everything we do is in code!
- Partner with our Platform Engineering team on building developer tooling and improving developer experiences via joint initiatives and enhancements.
- Partner with our Data Engineering team on improving our data posture and driving operational excellence.
- Evolve our deployment pipelines to automate infrastructure deployments with the latest and greatest (and reliable) technologies.
- Improve metrics on our main services, and act as a subject matter expert for our global dev teams.
- Enable observability and SLO/SLI reporting, and respond to business-impacting incidents as they pertain to infrastructure.
- Adopt and drive solutions that align with the AWS Well-Architected Framework and Juniper Square's business objectives.
- Identify performance bottlenecks and provide recommendations for improvement.
- Proactively identify and solve problems that we didn't even know we had.
- Help build, deploy, and scale a load testing environment that is analogous to production.
- Enforce security and operational safety controls.
- Participate in technical roadmap planning and estimation.
- Participate and contribute in production readiness and architecture review board (ARB) meetings and forums.
- Train and mentor future engineers in the same region.
- Contribute to architectural improvements to meet future scaling and observability requirements.

Qualifications
- A profound love for solving hard problems and overcoming challenging obstacles.
- Putting your customers first, whether they be internal or external, and making them more productive, happy, and successful.
- Experience with AWS. Other public cloud providers are a bonus.
- Experience with PostgreSQL is a must. Additional experience with document databases is a nice-to-have.
- Experience with cloud security best practices (CSPM, CDR, CWPP, SIEM, etc.) to keep our customers and cloud posture secure.
- Experience with containers (builds, registries, vulnerability scanning, run-time with docker-compose, run-time with Tilt, run-time in schedulers/orchestration systems).
- Multi-year hands-on experience and fluency with Kubernetes and Helm charts are an absolute skill requirement. We live and breathe the k8s ecosystem.
- Experience with a CI/CD pipeline. We use a combination of GitHub Actions, ArgoCD, Helm, and GitOps in our deployment process, but again, any are fine.
- Some sort of infrastructure-as-code system: Ansible, Terraform, CloudFormation, CDK, etc.
- We use Python and TypeScript, so knowledge of either is a strong plus.
- Experience breaking up monolithic architectures into microservices.
- Experience with service meshes and service discovery solutions.
- Experience with an observability solution: New Relic, Prometheus, Datadog, etc.
- Experience with logging systems: CloudWatch, ELK, Splunk, etc.
- Bachelor's degree in Computer Science or similar, or equivalent experience.

At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don't meet all the qualifications.
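SLO/SLI reporting of the kind this role enables often reduces to error-budget arithmetic. A minimal sketch, assuming a request-based availability SLO; the function name and the clamp-at-zero behavior are illustrative choices, not a Juniper Square convention:

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Fraction of a request-based error budget still unspent.

    slo: availability target, e.g. 0.999. The budget is
    (1 - slo) * total_requests allowed failures; the result is
    clamped at 0 once the budget is exhausted.
    """
    budget = (1 - slo) * total_requests
    if budget == 0:
        return 0.0
    return max(0.0, 1 - failed_requests / budget)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures,
# so 250 failures spends a quarter of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 3))  # 0.75
```

Teams typically alert on the rate at which this number falls (burn rate) rather than on the raw value, so a fast-burning incident pages before the budget is gone.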
Posted 3 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: Acts under minimum guidance of a DevOps Architect to set up and manage DevOps tools and pipelines.

Outcomes:
- Interpret the DevOps tool/feature/component design and develop/support it in accordance with specifications
- Follow and contribute to existing SOPs to troubleshoot issues
- Adapt existing DevOps solutions for new contexts
- Code, debug, test, and document; communicate DevOps development stages/status of DevOps develop/support issues
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Support users, onboarding them on existing tools with guidance from DevOps leads
- Work with diverse teams using Agile methodologies
- Facilitate saving measures through automation
- Mentor A1 and A2 resources
- Involved in the code review of the team

Measures of Outcomes:
- Schedule adherence
- Quality of the code
- Defect injection at various stages of the lifecycle
- SLA related to level 1 and level 2 support
- Number of domain/product certifications obtained
- Facilitate saving measures through automation

Outputs Expected:
- Automated components: deliver components that automate parts of the installation/configuration of software/tools on-premises and in the cloud; deliver components that automate parts of the build/deploy for applications
- Configured components: configure a CI/CD pipeline that can be used by application development/support teams
- Scripts: develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation/configuration/build/deployment tasks
- Onboard users: onboard and extend existing tools to new app dev/support teams
- Mentoring: mentor and provide guidance to peers
- Stakeholder management: guide the team in preparing status updates; keep management updated regarding status
- Database: data insertion, update, delete, and view creation

Skill Examples:
- Install, configure, and troubleshoot CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Integrate with code/test quality analysis tools like SonarQube/Cobertura/Clover
- Integrate build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python/Linux Shell/Perl/Groovy/PowerShell)
- Repository management/migration automation: Git/Bitbucket/GitHub/ClearCase
- Build automation scripts: Maven/Ant
- Artifact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS/Azure/Google)
- Migration of applications from on-premises to cloud infrastructure
- Working on Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration)
- Strong debugging skills in C#/.NET
- Basic working knowledge of databases

Knowledge Examples:
- Installation/config/build/deploy tools and DevOps processes
- IaaS cloud providers (AWS/Azure/Google, etc.) and their toolsets
- The application development lifecycle
- Quality Assurance processes
- Quality automation processes and tools
- Agile methodologies
- Security policies and tools

Additional Comments:
- 5+ years of experience as an SRE, DevOps Engineer, or similar role.
- Proficiency in scripting and automation (Bash, Python, Go, etc.).
- Strong experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Solid understanding of Linux systems administration and networking fundamentals.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Experience with IaC tools like Terraform or CloudFormation.
- Familiarity with GitOps and modern deployment practices.
- Hands-on experience with observability tools (e.g., Prometheus, Grafana, Datadog).
- Strong troubleshooting and incident response skills.

Preferred:
- Experience in a high-traffic, microservices-based architecture.
- Exposure to service meshes (Istio, Linkerd).
- Certifications (AWS Certified DevOps Engineer, CKA, etc.).
- Experience with security automation and compliance (e.g., SOC 2, ISO 27001).

Soft Skills:
- Strong communication and collaboration abilities.
- Ability to thrive in a fast-paced, agile environment.
- Analytical mindset and proactive approach to problem-solving.
- A passion for automation, performance, and system design.

Responsibilities:
- Design, build, and maintain reliable, scalable, and secure cloud-based infrastructure (AWS, Azure, or GCP).
- Develop and improve observability using monitoring, alerting, logging, and tracing tools (e.g., Prometheus, Grafana, ELK, Datadog).
- Automate repetitive tasks and infrastructure using Infrastructure as Code (Terraform, CloudFormation, Pulumi).
- Create and maintain CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, ArgoCD, etc.) to support fast and safe delivery.
- Lead incident response, root cause analysis, and postmortems to ensure high uptime and rapid recovery.
- Optimize system performance, reliability, and cost-effectiveness through proactive monitoring and tuning.
- Collaborate with software engineering teams to define SLAs/SLOs and improve service reliability.
- Implement and maintain security best practices across environments (e.g., secrets management, IAM, firewalls).
- Maintain disaster recovery plans, backups, and high-availability strategies.

Skills: Kubernetes, Cloud Platform, Python Scripting, SRE
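The incident-response and automation themes in this role usually start from a retry-with-backoff primitive. A minimal, generic sketch; the function name, defaults, and the callable-based interface are illustrative, not from the posting:

```python
import time

def run_with_retries(action, max_attempts=4, base_delay=0.01):
    """Retry a flaky operation with exponential backoff.

    `action` is any zero-argument callable; delays grow as
    base_delay * 2**attempt. Returns the action's result, or
    re-raises the last exception once attempts are exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return action()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# A deliberately flaky operation that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "recovered"

result = run_with_retries(flaky)
print(result)  # recovered, on the third attempt
```

Production versions typically add jitter to the delay and retry only on error types known to be transient, so a genuine bug fails fast instead of being masked.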
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: Acts under guidance of a Lead II/Architect; understands customer requirements and translates them into designs for new DevOps (CI/CD) components. Capable of managing at least one Agile team.

Outcomes:
- Interprets the DevOps tool/feature/component design to develop/support it in accordance with specifications
- Adapts existing DevOps solutions and creates own DevOps solutions for new contexts
- Codes, debugs, tests, documents, and communicates DevOps development stages/status of DevOps develop/support issues
- Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Optimises efficiency, cost, and quality of DevOps process, tools, and technology development
- Validates results with user representatives; integrates and commissions the overall solution
- Helps Engineers troubleshoot issues that are novel/complex and are not covered by SOPs
- Designs, installs, configures, and troubleshoots CI/CD pipelines and software
- Able to automate infrastructure provisioning on cloud/on-premises with the guidance of architects
- Provides guidance to DevOps Engineers so that they can support existing components
- Works with diverse teams using Agile methodologies
- Facilitates saving measures through automation
- Mentors A1 and A2 resources
- Involved in the code review of the team

Measures of Outcomes:
- Quality of deliverables
- Error rate/completion rate at various stages of SDLC/PDLC
- Number of components reused
- Number of domain/technology/product certifications obtained
- SLA for onboarding and supporting users and tickets

Outputs Expected:
- Automated components: deliver components that automate parts of the installation/configuration of software/tools on-premises and in the cloud; deliver components that automate parts of the build/deploy for applications
- Configured components: configure a CI/CD pipeline that can be used by application development/support teams
- Scripts: develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation/configuration/build/deployment tasks
- Onboard users: onboard and extend existing tools to new app dev/support teams
- Mentoring: mentor and provide guidance to peers
- Stakeholder management: guide the team in preparing status updates, keeping management updated about status; share the status report with higher stakeholders
- Training/SOPs: create training plans/SOPs to help DevOps Engineers with DevOps activities and in onboarding users
- Measure process efficiency/effectiveness: measure the efficiency/effectiveness of the current process and make changes to make it more efficient and effective

Skill Examples:
- Experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Experience in integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
- Experience in integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python/Linux Shell/Perl/Groovy/PowerShell)
- Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell)
- Repository management/migration automation: Git/Bitbucket/GitHub/ClearCase
- Build automation scripts: Maven/Ant
- Artifact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS/Azure/Google)
- Migration of applications from on-premises to cloud infrastructure
- Working on Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration); strong debugging skills in C#/.NET
- Setting up and managing Jira projects and Git/Bitbucket repositories
- Skilled in containerization tools like Docker/Kubernetes

Knowledge Examples:
- Installation/config/build/deploy processes and tools
- IaaS cloud providers (AWS/Azure/Google, etc.) and their toolsets
- The application development lifecycle
- Quality Assurance processes
- Quality automation processes and tools
- Multiple tool stacks, not just one
- Build branching/merging
- Containerization
- Security policies and tools
- Agile methodologies

Additional Comments:
Automation Engineer
Relevant Experience: 3 to 5 years of hands-on experience with Kubernetes and cloud-native automation, focusing on eliminating repetitive tasks through scripting, IaC, and self-healing mechanisms.

Job Summary: The Automation Engineer will play a critical role in reducing operational toil within Kubernetes-based environments by designing, developing, and implementing automation solutions that streamline repetitive tasks and improve system reliability. This role involves close collaboration with SRE and platform engineering teams to build self-healing mechanisms, enhance observability, and integrate automation into CI/CD pipelines, ensuring faster, more resilient deployments and minimal manual intervention.

Key Responsibilities:
- Toil reduction and automation: identify repetitive, manual operational tasks and design automation solutions to eliminate them; develop scripts, tools, and pipelines to automate deployments, scaling, monitoring, and incident response.
- Kubernetes and cloud operations: manage and optimize Kubernetes clusters across multiple environments (dev, staging, production); implement automated cluster lifecycle management (provisioning, upgrades, scaling).
- Reliability and observability: build self-healing mechanisms for common failure scenarios; enhance observability by automating metrics, logging, and alerting integrations.
- CI/CD and Infrastructure as Code: implement and maintain CI/CD pipelines for application and infrastructure deployments; use Infrastructure as Code (IaC) tools for consistent environment management.
- Collaboration and best practices: work closely with SREs, developers, and platform teams to improve reliability and reduce MTTR; advocate for an automation-first culture and SRE principles across teams.

Required Skills:
- Automation and scripting: proficiency in Python or Bash for automation tasks.
- Kubernetes expertise: hands-on experience with Kubernetes (deployment, scaling, troubleshooting); CKA/CKAD certification preferred.
- Cloud platforms: experience with AWS.
- CI/CD tools: Jenkins, GitLab CI, or similar.
- IaC tools: Terraform.
- Observability: familiarity with Splunk.
- Version control: strong Git skills and experience with GitOps workflows.
- Problem-solving: ability to analyze operational pain points and design automation solutions.

Skills: Kubernetes, Cloud Platform, Python Scripting, SRE
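Automated cluster and deployment lifecycle checks like those described above often boil down to comparing desired state against observed state. A sketch against a Deployment-shaped dict; the field names mirror the Kubernetes Deployment spec/status, but this is an illustration, not a client library:

```python
def rollout_complete(deployment):
    """Check whether a Deployment has fully rolled out.

    `deployment` mimics the relevant slice of a Kubernetes
    Deployment object: the rollout is complete when
    spec.replicas equals updatedReplicas, readyReplicas, and
    availableReplicas in the status.
    """
    desired = deployment["spec"]["replicas"]
    status = deployment["status"]
    return all(
        status.get(field, 0) == desired
        for field in ("updatedReplicas", "readyReplicas", "availableReplicas")
    )

dep = {
    "spec": {"replicas": 3},
    "status": {"updatedReplicas": 3, "readyReplicas": 2, "availableReplicas": 2},
}
print(rollout_complete(dep))  # False: one replica is still not ready
```

A toil-reduction pipeline would poll a check like this after each deploy and automatically roll back or alert when the rollout stalls past a deadline.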
Posted 3 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: Acts under guidance of DevOps; leads more than one Agile team.

Outcomes:
- Interprets the DevOps tool/feature/component design to develop/support it in accordance with specifications
- Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts
- Codes, debugs, tests, documents, and communicates DevOps development stages/status of DevOps develop/support issues
- Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Optimises efficiency, cost, and quality of DevOps process, tools, and technology development
- Validates results with user representatives; integrates and commissions the overall solution
- Helps Engineers troubleshoot issues that are novel/complex and are not covered by SOPs
- Designs, installs, and troubleshoots CI/CD pipelines and software
- Able to automate infrastructure provisioning on cloud/on-premises with the guidance of architects
- Provides guidance to DevOps Engineers so that they can support existing components
- Good understanding of Agile methodologies; able to work with diverse teams
- Knowledge of more than one DevOps toolstack (AWS, Azure, GCP, open source)

Measures of Outcomes:
- Quality of deliverables
- Error rate/completion rate at various stages of SDLC/PDLC
- Number of components reused
- Number of domain/technology/product certifications obtained
- SLA/KPI for onboarding projects or applications
- Stakeholder management
- Percentage achievement of specification/completeness/on-time delivery

Outputs Expected:
- Automated components: deliver components that automate parts of the installation/configuration of software/tools on-premises and in the cloud; deliver components that automate parts of the build/deploy for applications
- Configured components: configure tools and automation frameworks into the overall DevOps design
- Scripts: develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation/configuration/build/deployment tasks
- Training/SOPs: create training plans/SOPs to help DevOps Engineers with DevOps activities and in onboarding users
- Measure process efficiency/effectiveness: deployment frequency, innovation and technology changes, operations change lead time/volume, failed deployments, defect volume and escape rate, mean time to detection and recovery

Skill Examples:
- Experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Experience in integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
- Experience in integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python/Linux Shell/Perl/Groovy/PowerShell)
- Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell)
- Repository management/migration automation: Git/Bitbucket/GitHub/ClearCase
- Build automation scripts: Maven/Ant
- Artifact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS/Azure/Google)
- Migration of applications from on-premises to cloud infrastructure
- Working on Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration); strong debugging skills in C#/.NET
- Setting up and managing Jira projects and Git/Bitbucket repositories
- Skilled in containerization tools like Docker/Kubernetes

Knowledge Examples:
- Installation/config/build/deploy processes and tools
- IaaS cloud providers (AWS/Azure/Google, etc.) and their toolsets
- The application development lifecycle
- Quality Assurance processes
- Quality automation processes and tools
- Multiple tool stacks, not just one
- Build and release branching/merging
- Containerization
- Agile methodologies
- Software security compliance (GDPR/OWASP) and tools (Black Duck/Veracode/Checkmarx)

Additional Comments:
- 5+ years of experience as an SRE, DevOps Engineer, or similar role.
- Proficiency in scripting and automation (Bash, Python, Go, etc.).
- Strong experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Solid understanding of Linux systems administration and networking fundamentals.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Experience with IaC tools like Terraform or CloudFormation.
- Familiarity with GitOps and modern deployment practices.
- Hands-on experience with observability tools (e.g., Prometheus, Grafana, Datadog).
- Strong troubleshooting and incident response skills.

Preferred:
- Experience in a high-traffic, microservices-based architecture.
- Exposure to service meshes (Istio, Linkerd).
- Certifications (AWS Certified DevOps Engineer, CKA, etc.).
- Experience with security automation and compliance (e.g., SOC 2, ISO 27001).

Soft Skills:
- Strong communication and collaboration abilities.
- Ability to thrive in a fast-paced, agile environment.
- Analytical mindset and proactive approach to problem-solving.
- A passion for automation, performance, and system design.

Responsibilities:
- Design, build, and maintain reliable, scalable, and secure cloud-based infrastructure (AWS, Azure, or GCP).
- Develop and improve observability using monitoring, alerting, logging, and tracing tools (e.g., Prometheus, Grafana, ELK, Datadog).
- Automate repetitive tasks and infrastructure using Infrastructure as Code (Terraform, CloudFormation, Pulumi).
- Create and maintain CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, ArgoCD, etc.) to support fast and safe delivery.
- Lead incident response, root cause analysis, and postmortems to ensure high uptime and rapid recovery.
- Optimize system performance, reliability, and cost-effectiveness through proactive monitoring and tuning.
- Implement and maintain security best practices across environments (e.g., secrets management, IAM, firewalls).
- Collaborate with software engineering teams to define SLAs/SLOs and improve service reliability.
- Maintain disaster recovery plans, backups, and high-availability strategies.

Skills: Kubernetes, Cloud Platform, Python Scripting, SRE
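The delivery measures named in this role (deployment frequency, change lead time, failed deployments, mean time to recovery) can all be derived from a deployment log. A toy sketch; the record shape, field names, and the whole-days window are assumptions for illustration:

```python
from datetime import datetime, timedelta

def deployment_metrics(deploys):
    """Compute simple delivery metrics from deployment records.

    Each record has: finished (datetime), failed (bool), and
    lead_time (timedelta from commit to deploy). Returns deploys
    per day over the observed window, the change failure rate,
    and the mean lead time.
    """
    days = (max(d["finished"] for d in deploys)
            - min(d["finished"] for d in deploys)).days or 1
    failures = sum(d["failed"] for d in deploys)
    mean_lead = sum((d["lead_time"] for d in deploys), timedelta()) / len(deploys)
    return {
        "deploys_per_day": len(deploys) / days,
        "change_failure_rate": failures / len(deploys),
        "mean_lead_time": mean_lead,
    }

log = [
    {"finished": datetime(2024, 5, 1), "failed": False, "lead_time": timedelta(hours=2)},
    {"finished": datetime(2024, 5, 3), "failed": True,  "lead_time": timedelta(hours=6)},
    {"finished": datetime(2024, 5, 5), "failed": False, "lead_time": timedelta(hours=4)},
]
metrics = deployment_metrics(log)
print(metrics["deploys_per_day"])  # 0.75
```

In practice these numbers would be emitted from the CI/CD system itself (pipeline completion events), so the dashboard stays honest without manual bookkeeping.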
Posted 3 days ago
5.0 years
15 - 20 Lacs
Hyderabad, Telangana, India
On-site
Job Title: DevOps / Kubernetes Engineer
Location: Onsite (Pune, Hyderabad, Mumbai, Mohali, Panchkula, Bangalore)
Shift: CST / PST
Cloud: Microsoft Azure
Experience: 5+ years
Budget: 18-20 LPA

Job Description
We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.

Key Responsibilities
- Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications.
- Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning.
- Deploy and manage applications using Helm and ArgoCD with GitOps best practices.
- Work with Podman and Docker as container runtimes for development and production environments.
- Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations.
- Optimize infrastructure for cost, performance, and reliability within Azure cloud.
- Troubleshoot, monitor, and maintain system health, scalability, and performance.

Required Skills & Experience
- Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration.
- Proficiency in Terraform or OpenTofu for infrastructure as code.
- Experience with Helm and ArgoCD for application deployment and GitOps.
- Solid understanding of Docker / Podman container runtimes.
- Cloud expertise in Azure with experience deploying and scaling workloads.
- Familiarity with CI/CD pipelines, monitoring, and logging frameworks.
- Knowledge of best practices around cloud security, scalability, and high availability.

Preferred Qualifications
- Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses.
- Experience working in global distributed teams across CST/PST time zones.
- Strong problem-solving skills and ability to work independently in a fast-paced environment.

Skills: ArgoCD, Podman, Apache 2.0 / MPL 2.0, Azure, Helm, DevOps, Kubernetes, Docker
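GitOps tooling like ArgoCD, called out in this role, is at heart a loop that diffs Git-declared state against live cluster state and reports (or corrects) the drift. A toy version of that diff over flat dicts; the flat key/value shape is an assumption for brevity, where real tools diff rendered manifests:

```python
def config_drift(desired, live):
    """Report fields whose live value differs from the declared state.

    `desired` comes from Git (the source of truth), `live` from the
    running system; both are flat {field: value} dicts in this
    sketch. Returns {field: {"desired": ..., "live": ...}}.
    """
    drift = {}
    for key, want in desired.items():
        have = live.get(key)
        if have != want:
            drift[key] = {"desired": want, "live": have}
    return drift

desired = {"replicas": 3, "image": "web:1.4.2"}
live = {"replicas": 2, "image": "web:1.4.2"}
print(config_drift(desired, live))  # only the replica count has drifted
```

A GitOps controller then either marks the app "OutOfSync" for review or, in auto-sync mode, reapplies the declared state so drift never persists.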
Posted 4 days ago
30.0 years
2 - 6 Lacs
Gurgaon
On-site
**About REA Group**
In 1995, in a garage in Melbourne, Australia, REA Group was born from a simple question: “Can we change the way the world experiences property?” Could we? Yes. Are we done? Never. Fast forward 30 years, and REA Group is a market leader in online real estate across three continents, continuing to grow rapidly around the globe. The secret to our growth is staying true to that ‘day one’ mindset; the hunger to innovate, the ambition to change the world, and the curiosity to reimagine the future. Our new Tech Center in Cyber City is dedicated to accelerating REA Group’s global technology delivery through relentless innovation. We’re looking for the best technologists, inventors and leaders in India to join us on this exciting new journey. If you’re excited by the prospect of creating something magical from scratch, then read on.

**What the role is all about:**
We’re seeking a Senior Engineer – Cloud (3-6 years’ experience) who will play a pivotal role in shaping the future of REA’s cutting-edge products. You’ll take a multifaceted approach to ensure technical excellence and operational efficiency within the infrastructure domain. By strategically integrating automation, monitoring and incident response, you facilitate the evolution from traditional operations to a more customer-focused and agile approach. This is your chance to work on impactful projects that drive customer satisfaction and company growth. You’ll work with cutting-edge technologies alongside talented individuals from diverse backgrounds, fostering a dynamic and collaborative environment. As a leader in the property portal space in Australia and India, REA is a large and challenging technical environment; we are multi-cloud at scale, with a best-in-class approach to managing at this scale. A place that is both supportive and exciting.
**While no two days are likely to be the same, your typical responsibilities will include:**

+ Design and implement K8s-native compute solutions to deploy applications and workloads.
+ Develop automation for Kubernetes cluster lifecycle management including zero-downtime upgrades and scaling operations.
+ Define and track SLIs/SLOs for critical platform components and implement strategies to meet them.
+ Collaborate with development teams to build platform capabilities.
+ Conduct capacity planning and performance testing to ensure platform scalability.
+ Actively participate in pairing, code reviews, unit testing, and secure deployments to deliver secure and quality code.
+ Stay updated on the latest Kubernetes and platform engineering trends and apply them to solve complex challenges.

**Who we are looking for:**

+ Proficient in Go (or another programming language) with a strong track record of building scalable applications.
+ Proven experience in creating and deploying custom Kubernetes operators and CRDs.
+ Deep understanding and hands-on experience with major cloud platforms (e.g. AWS/GCP/Azure).
+ Good experience in managing and deploying workloads on production-grade Kubernetes clusters.
+ Experience with Argo CD and GitOps methodologies to automate and streamline continuous delivery of applications deployed in Kubernetes environments.
+ Experience in using Kubernetes ecosystem tools like Cilium, Kyverno, and Keda to build and maintain robust, scalable, and secure platforms.
+ Experience with monitoring and incident management tools.
+ Ability to effectively communicate complex solutions to audiences with varying technical backgrounds, fostering consensus and collaboration.

**Bonus Points for:**

+ Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD) certification.

**What we offer:**

+ A hybrid and flexible approach to working.
+ Transport options to help you get to and from work, including home pick-up and drop-off.
+ Meals provided on site in our office.
+ Flexible leave options including parental leave, family care leave and celebration leave.
+ Insurances for you and your immediate family members.
+ Programs to support mental, emotional, financial and physical health & wellbeing.
+ Continuous learning and development opportunities to further your technical expertise.

**The values we live by:**

Our values are at the core of how we operate, treat each other, and make decisions. We believe that how we work is equally important as what we do to achieve our goals. This commitment is at the heart of everything we do, from the way we interact with colleagues to the way we serve our customers and communities.

**Our commitment to Diversity, Equity, and Inclusion:**

We are committed to providing a working environment that embraces and values diversity, equity and inclusion. We believe teams with diverse ideas and experiences are more creative, more effective and fuel disruptive thinking – be it cultural and ethnic backgrounds, gender identity, disability, age, sexual orientation, or any other identity or lived experience. We know diverse teams are critical to maintaining our success and driving new business opportunities. If you’ve got the skills, dedication and enthusiasm to learn but don’t necessarily meet every single point on the job description, please still get in touch.
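The SLI/SLO responsibility in the role above lends itself to a small worked example. This is a hedged sketch of error-budget arithmetic only, not REA code; the 99.9% target and 30-day window are assumed for illustration.

```python
# Illustrative error-budget math for an SLO (target and window are assumptions).

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Total allowed downtime, in minutes, for the window under the SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget
```

A 99.9% availability SLO over 30 days leaves roughly 43 minutes of budget; teams typically gate risky changes on how much of that remains.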
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
gurgaon
On-site
**About REA Group**

In 1995, in a garage in Melbourne, Australia, REA Group was born from a simple question: “Can we change the way the world experiences property?” Could we? Yes. Are we done? Never. Fast forward 30 years, REA Group is a market leader in online real estate across three continents and continuing to grow rapidly across the globe. The secret to our growth is staying true to that ‘day one’ mindset; the hunger to innovate, the ambition to change the world, and the curiosity to reimagine the future. Our new Tech Center in Cyber City is dedicated to accelerating REA Group’s global technology delivery through relentless innovation. We’re looking for the best technologists, inventors and leaders in India to join us on this exciting new journey. If you’re excited by the prospect of creating something magical from scratch, then read on.

**What the role is all about:**

We’re seeking a Lead Engineer (6-8 years’ experience) who will play a pivotal role in shaping the future of REA’s cutting-edge products. You’ll collaborate closely with cross-functional teams across the globe, leading the design, development, and optimization of our Kubernetes-based IDP. You’ll work with cutting-edge technologies alongside talented individuals from diverse backgrounds, fostering a dynamic and collaborative environment.

**While no two days are likely to be the same, your typical responsibilities will include:**

+ Enable teams to transition their applications to Kubernetes by architecting an automated, repeatable migration pipeline.
+ Develop automation for Kubernetes cluster lifecycle management including zero-downtime upgrades and scaling operations.
+ Define and track SLIs/SLOs for critical platform components and implement strategies to meet them.
+ Collaborate with development teams to build platform capabilities.
+ Conduct capacity planning and performance testing to ensure platform scalability.
+ Actively participate in pairing, code reviews, unit testing, and secure deployments to deliver secure and quality code.
+ Stay updated on the latest Kubernetes and platform engineering trends and apply them to solve complex challenges.
+ Take ownership and accountability of deliverables while mentoring other team members.

**Who we are looking for:**

+ Deep understanding and hands-on experience with major cloud platforms (e.g. AWS/GCP/Azure).
+ Experience writing developer tooling in a general-purpose programming language such as Go or Python with a focus on Kubernetes, migration automation, and improving developer user experience.
+ Extensive experience in managing and deploying workloads on production-grade Kubernetes clusters.
+ Experience with Argo CD and GitOps methodologies to automate and streamline continuous delivery of applications deployed in Kubernetes environments.
+ Experience in using Kubernetes ecosystem tools like Cilium, Kyverno, and Keda to build and maintain robust, scalable, and secure platforms.
+ Experience with monitoring and incident management tools.

**Bonus Points for:**

+ Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD) certification.

**What we offer:**

+ A hybrid and flexible approach to working.
+ Transport options to help you get to and from work, including home pick-up and drop-off.
+ Meals provided on site in our office.
+ Flexible leave options including parental leave, family care leave and celebration leave.
+ Insurances for you and your immediate family members.
+ Programs to support mental, emotional, financial and physical health & wellbeing.
+ Continuous learning and development opportunities to further your technical expertise.

**The values we live by:**

Our values are at the core of how we operate, treat each other, and make decisions. We believe that how we work is equally important as what we do to achieve our goals.
This commitment is at the heart of everything we do, from the way we interact with colleagues to the way we serve our customers and communities.

**Our commitment to Diversity, Equity, and Inclusion:**

We are committed to providing a working environment that embraces and values diversity, equity and inclusion. We believe teams with diverse ideas and experiences are more creative, more effective and fuel disruptive thinking – be it cultural and ethnic backgrounds, gender identity, disability, age, sexual orientation, or any other identity or lived experience. We know diverse teams are critical to maintaining our success and driving new business opportunities. If you’ve got the skills, dedication and enthusiasm to learn but don’t necessarily meet every single point on the job description, please still get in touch.
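The GitOps model this role references (Argo CD continuously syncing live state toward what Git declares) can be sketched in miniature. This is an illustrative reconcile-loop outline, not Argo CD's API; the dict-based "manifests" are assumptions for the example.

```python
# Hedged sketch of the GitOps reconcile idea: diff desired manifests
# (as declared in Git) against live cluster state and emit sync actions.

def reconcile(desired: dict, live: dict) -> list:
    """Return (action, name) pairs needed to make `live` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))      # missing from the cluster
        elif live[name] != spec:
            actions.append(("update", name))      # drifted from Git
    for name in live:
        if name not in desired:
            actions.append(("delete", name))      # prune: not declared in Git
    return actions
```

Running this in a loop against the repository's rendered manifests is, in spirit, what "continuous delivery via GitOps" means in the posting.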
Posted 4 days ago
5.0 - 10.0 years
5 - 10 Lacs
gurgaon
Remote
Lead Assistant Manager EXL/LAM/1476962 Services, Gurgaon Posted On 11 Sep 2025 End Date 26 Oct 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band B2 Band Name Lead Assistant Manager Cost Code D011774 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1000000.0000 - 2500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Retail Media & Hi-Tech Organization Services LOB Services SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill Minimum Qualification B.TECH/B.E Certification No data available Job Description Job Title: Junior/Mid Cloud Solutions Architect & DevOps Engineer Location: [Remote / On-site / Hybrid] About the Role: We are looking for a curious and driven Junior to Mid-level Cloud Solutions Architect & DevOps Engineer to join our growing team. This role is a unique hybrid that blends cloud architecture and hands-on DevOps, with a focus on building scalable data lakehouse solutions across major cloud platforms (AWS, GCP, Azure). You will architect and implement infrastructure from the ground up, leveraging cloud storage, databases, container orchestration, serverless services, and more. You’ll work individually initially but with clear pathways to grow into a leadership role, guiding and mentoring future team members. Responsibilities Design, build, and maintain cloud-based data lakehouse architectures using a combination of cloud storage, databases, VMs, Kubernetes, Docker, and serverless technologies. Implement SaaS-style application hosting via web and serverless platforms for end-user accessibility. Manage infrastructure configurations including secrets, environment variables, and secure access controls. Build and maintain CI/CD pipelines and adopt GitOps practices to streamline deployments. 
Optimize cloud resource usage for cost efficiency without compromising performance. Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud solutions. Continuously learn and stay updated on new cloud services, big data technologies, and best practices. Prepare to take on leadership responsibilities as the team grows. Required Skills & Experience: Practical experience with at least one major cloud platform (AWS, GCP, Azure); willingness and ability to learn others. Strong programming skills in Python and SQL. Experience with PySpark is a bonus. Familiarity with containerization (Docker) and orchestration (Kubernetes). Experience with version control systems (Git) and CI/CD tools. Understanding of cloud services pricing models to help design cost-effective solutions. Solid grasp of DevOps practices, including configuration management, secrets handling, and environment setup. Self-motivated, eager to learn, and able to work independently. Nice to Have (Bonus): Prior experience with big data platforms or streaming data solutions like Kafka. Knowledge of modern analytics and data stack tools (e.g., dbt, DuckDB, Cloudflare R2). Understanding of cloud networking, VPNs, security features, and firewall configurations. What We Offer: Opportunity to shape and lead a growing cloud architecture and DevOps team. Exposure to cutting-edge cloud technologies across multiple providers. Collaborative and supportive work environment that values curiosity and continuous learning. Competitive salary and benefits package. About EXL Sports Analytics: EXL Sports Analytics team provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach to clients in the sports industry. We work with some of the topmost sports organizations in the world. 
About EXL: EXL (NASDAQ:EXLS) is a leading data analytics and operations management company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Headquartered in New York, EXL has more than 40,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Sports Analytics team provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach to clients in the sports industry. We work with some of the topmost sports organizations in the world. Workflow Workflow Type L&S-DA-Consulting
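The posting's responsibility to "manage infrastructure configurations including secrets, environment variables, and secure access controls" is commonly handled twelve-factor style. A hedged sketch of that pattern (the variable names are made up, not from the posting): read settings from the environment, fail fast on missing secrets, and supply safe defaults for the rest.

```python
# Illustrative twelve-factor configuration loader. Secret names and
# defaults are assumptions for the example, not a real deployment.

def load_config(env: dict) -> dict:
    """Build app config from an environment mapping, failing fast on secrets."""
    required = ["DB_PASSWORD"]              # secrets: never defaulted
    missing = [key for key in required if key not in env]
    if missing:
        raise KeyError(f"missing required settings: {missing}")
    return {
        "db_password": env["DB_PASSWORD"],
        "region": env.get("CLOUD_REGION", "us-east-1"),   # safe default
        "debug": env.get("DEBUG", "false").lower() == "true",
    }
```

In practice `env` would be `os.environ`, with the secret values injected by the platform's secret store rather than committed to Git.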
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
noida
On-site
Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it. Job Description Required Skills Strong hands-on experience with AWS core services and infrastructure. a. Design, build, and manage cloud-native solutions using AWS services. b. Develop and deploy applications leveraging services such as EC2, ECS, EKS, S3, Lambda, API Gateway, DynamoDB, RDS, CloudFront, Route 53, and ALB/NLB. c. Automate infrastructure provisioning with CloudFormation, CDK, Serverless Framework, or Terraform. d. Implement and manage IAM roles, VPCs, subnets, and security groups for secure access control. e. Configure CloudWatch, CloudTrail, and X-Ray for monitoring, logging, and tracing. f. Troubleshoot AWS infrastructure and resolve performance, availability, or networking issues. Proficiency in Infrastructure as Code (IaC) frameworks such as CloudFormation, CDK, Terraform, or Serverless Framework. Solid understanding of compute, storage, networking, databases, and serverless services on AWS. Familiarity with DevOps practices, including CI/CD, automation, and observability. a. Automate build, release, and deployment processes for faster software delivery. b. Manage infrastructure version control and enforce GitOps practices where applicable. c. 
Manage and optimize container orchestration platforms (ECS, EKS, or Kubernetes). d. Ensure backup, disaster recovery, and high availability strategies are in place. Strong problem-solving, debugging, and troubleshooting abilities. Nice to Have High-level understanding of Node.js ecosystem (ready to learn). Knowledge of Python or JavaScript for AWS CDK. Groovy scripting and Ansible. Experience with microservices architecture and containerization (Docker, Kubernetes). Total Experience Expected: 6-8 years Qualifications BTech Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
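The IAM point above ("implement and manage IAM roles ... for secure access control") can be made concrete. This is a hedged sketch, not Sopra Steria tooling: building a minimal least-privilege S3 read-only policy document as JSON; the bucket name and action set are illustrative.

```python
import json

# Illustrative least-privilege IAM policy document. The actions and
# bucket name are assumptions for the example.

def s3_read_only_policy(bucket: str) -> str:
    """Return a JSON policy granting read-only access to one bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",    # objects within it (GetObject)
            ],
        }],
    }
    return json.dumps(policy, indent=2)
```

Scoping `Resource` to a single bucket ARN pair, rather than `"*"`, is the least-privilege habit the posting is asking for.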
Posted 4 days ago
5.0 years
15 - 20 Lacs
pune, maharashtra, india
On-site
Job Title: DevOps / Kubernetes Engineer Location: Onsite (Pune, Hyderabad, Mumbai, Mohali, Panchkula, Bangalore) Shift: CST / PST Cloud: Microsoft Azure Experience: 5+ years Budget: 18-20 LPA Job Description We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale. Key Responsibilities Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications. Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning. Deploy and manage applications using Helm and ArgoCD with GitOps best practices. Work with Podman and Docker as container runtimes for development and production environments. Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations. Optimize infrastructure for cost, performance, and reliability within Azure cloud. Troubleshoot, monitor, and maintain system health, scalability, and performance. Required Skills & Experience Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration. Proficiency in Terraform or OpenTofu for infrastructure as code. Experience with Helm and ArgoCD for application deployment and GitOps. Solid understanding of Docker / Podman container runtimes. Cloud expertise in Azure with experience deploying and scaling workloads. Familiarity with CI/CD pipelines, monitoring, and logging frameworks. Knowledge of best practices around cloud security, scalability, and high availability. Preferred Qualifications Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses. Experience working in global distributed teams across CST/PST time zones. 
Strong problem-solving skills and ability to work independently in a fast-paced environment. Skills: ArgoCD, Podman, Apache 2.0 / MPL 2.0, Azure, Helm, DevOps, Kubernetes, Docker
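The Helm/ArgoCD deployment duties above rest on how values overrides compose. A hedged sketch of that idea, not Helm's actual implementation: a later values file deep-merges over the chart defaults, with leaf values winning.

```python
# Illustrative deep-merge of Helm-style values: `override` wins over
# `base`, recursing into nested mappings. Keys below are assumptions.

def deep_merge(base: dict, override: dict) -> dict:
    """Return a new dict where override is layered over base, recursively."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)   # merge subtrees
        else:
            merged[key] = value                            # leaf: override wins
    return merged
```

This is why a production values file can pin only `image.tag` and `replicas` while inheriting everything else from the chart defaults.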
Posted 4 days ago
5.0 years
15 - 20 Lacs
mumbai metropolitan region
On-site
Job Title: DevOps / Kubernetes Engineer Location: Onsite (Pune, Hyderabad, Mumbai, Mohali, Panchkula, Bangalore) Shift: CST / PST Cloud: Microsoft Azure Experience: 5+ years Budget: 18-20 LPA Job Description We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale. Key Responsibilities Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications. Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning. Deploy and manage applications using Helm and ArgoCD with GitOps best practices. Work with Podman and Docker as container runtimes for development and production environments. Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations. Optimize infrastructure for cost, performance, and reliability within Azure cloud. Troubleshoot, monitor, and maintain system health, scalability, and performance. Required Skills & Experience Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration. Proficiency in Terraform or OpenTofu for infrastructure as code. Experience with Helm and ArgoCD for application deployment and GitOps. Solid understanding of Docker / Podman container runtimes. Cloud expertise in Azure with experience deploying and scaling workloads. Familiarity with CI/CD pipelines, monitoring, and logging frameworks. Knowledge of best practices around cloud security, scalability, and high availability. Preferred Qualifications Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses. Experience working in global distributed teams across CST/PST time zones. 
Strong problem-solving skills and ability to work independently in a fast-paced environment. Skills: ArgoCD, Podman, Apache 2.0 / MPL 2.0, Azure, Helm, DevOps, Kubernetes, Docker
Posted 4 days ago
5.0 years
0 Lacs
ahmedabad, gujarat, india
On-site
Experience: 5+ Years
Work Mode: Work from office

Job Description:

1. CI/CD & Release Management: Design, implement, and maintain robust CI/CD pipelines using Harness and Jenkins. Optimize build and release processes to reduce deployment time and improve reliability. Automate rollback, blue-green, and canary deployment strategies for safe releases.

2. Infrastructure as Code (IaC): Define, provision, and manage cloud infrastructure using Terraform and AWS CloudFormation. Ensure all infrastructure changes are version-controlled, tested, and compliant with security standards. Implement modular and reusable Terraform configurations.

3. Cloud & Platform Engineering: Architect and manage AWS environments (EC2, EKS, RDS, S3, VPC, IAM, Lambda, CloudWatch). Ensure cloud infrastructure is highly available, scalable, cost-optimized, and secure. Implement monitoring, alerting, and logging solutions using CloudWatch, Prometheus, Grafana, or similar.

4. Containerization & Orchestration: Deploy, manage, and scale workloads on Kubernetes clusters (EKS or self-managed). Package and deploy applications using Helm charts with proper versioning and dependency management. Implement Kubernetes best practices including RBAC, pod security, and network policies.

5. Security & Compliance: Integrate DevSecOps practices into CI/CD pipelines (static code analysis, vulnerability scans, secrets management). Apply least-privilege principles with AWS IAM roles and Kubernetes RBAC. Ensure compliance with organizational and industry standards.

6. Automation & Scripting: Automate repetitive operational tasks using Python, Bash, or Go scripting. Build self-service automation for developers (infrastructure provisioning, environment setup).

7. DevOps Best Practices: Promote GitOps principles with tools like ArgoCD or FluxCD (if applicable). Drive standardization of branching strategies, code reviews, and release processes. Foster a culture of observability, monitoring, and continuous improvement.

8.
Collaboration & Mentorship: Work closely with developers, QA, and SREs to accelerate delivery while maintaining stability. Mentor junior DevOps engineers on modern tooling and best practices. Participate in on-call rotations, incident response, and root cause analysis.

9. Performance & Reliability Engineering: Conduct capacity planning, performance tuning, and cost optimization. Implement resilience testing, chaos engineering, and disaster recovery drills.

10. Continuous Improvement: Evaluate emerging DevOps tools and practices (e.g., service mesh, progressive delivery). Contribute to documentation, runbooks, and knowledge sharing.
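The rollback and canary strategies the posting asks to automate can be sketched as a traffic-ramp loop. This is an illustrative outline, not Harness or Jenkins code; the 20% step size and the health-check callback are assumptions.

```python
# Hedged sketch of a canary rollout: ramp traffic to the new version in
# fixed steps, rolling back if a health check fails at any weight.

def canary_steps(increment: int = 20):
    """Yield canary traffic percentages up to and including 100."""
    weight = increment
    while weight < 100:
        yield weight
        weight += increment
    yield 100

def run_canary(healthy_at) -> int:
    """Advance through the schedule; return 100 on success, 0 on rollback."""
    for weight in canary_steps():
        if not healthy_at(weight):     # e.g. error-rate check at this weight
            return 0                   # rollback: all traffic back to stable
    return 100
```

A real pipeline would shift weights in the load balancer or mesh and query a metrics backend for `healthy_at`; the control flow, however, is exactly this.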
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. Job Description The world is how we shape it. Required Skills Strong hands-on experience with AWS core services and infrastructure. Design, build, and manage cloud-native solutions using AWS services. Develop and deploy applications leveraging services such as EC2, ECS, EKS, S3, Lambda, API Gateway, DynamoDB, RDS, CloudFront, Route 53, and ALB/NLB. Automate infrastructure provisioning with CloudFormation, CDK, Serverless Framework, or Terraform. Implement and manage IAM roles, VPCs, subnets, and security groups for secure access control. Configure CloudWatch, CloudTrail, and X-Ray for monitoring, logging, and tracing. Troubleshoot AWS infrastructure and resolve performance, availability, or networking issues. Proficiency in Infrastructure as Code (IaC) frameworks such as CloudFormation, CDK, Terraform, or Serverless Framework. Solid understanding of compute, storage, networking, databases, and serverless services on AWS. Familiarity with DevOps practices, including CI/CD, automation, and observability. Automate build, release, and deployment processes for faster software delivery. Manage infrastructure version control and enforce GitOps practices where applicable. 
Manage and optimize container orchestration platforms (ECS, EKS, or Kubernetes). Ensure backup, disaster recovery, and high availability strategies are in place. Strong problem-solving, debugging, and troubleshooting abilities. Nice to Have High-level understanding of Node.js ecosystem (ready to learn). Knowledge of Python or JavaScript for AWS CDK. Groovy scripting and Ansible. Experience with microservices architecture and containerization (Docker, Kubernetes). Total Experience Expected: 6-8 years Qualifications BTech Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an Observability Developer at GlobalLogic, you will play a crucial role in alert configuration, workflow automation, and AI-driven solutions within the observability stack. Your responsibilities will include designing and implementing alerting rules, configuring alert routing and escalation policies, building workflow integrations, developing AI-based solutions, collaborating with cross-functional teams, and automating alert lifecycle management. **Key Responsibilities:** - Design and implement alerting rules for metrics, logs, and traces using tools like Grafana, Prometheus, or similar. - Configure alert routing and escalation policies integrated with collaboration and incident management platforms (e.g., Slack, PagerDuty, ServiceNow, Opsgenie). - Build and maintain workflow integrations between observability platforms and ticketing systems, CMDBs, and automation tools. - Develop or integrate AI-based solutions for: - Mapping telemetry signals to service/application components. - Porting or translating existing configurations across environments/tools. - Reducing alert fatigue through intelligent correlation and suppression. - Collaborate with DevOps, SRE, and development teams to ensure alerts are meaningful and well-contextualized. - Automate alert lifecycle management via CI/CD and GitOps pipelines. - Maintain observability integration documentation and provide support to teams using alerting and workflows. In this role, you will be part of a culture of caring at GlobalLogic, where people come first. You will experience an inclusive environment that prioritizes learning and development, interesting and meaningful work, balance, flexibility, and a high-trust organization. Join GlobalLogic, a Hitachi Group Company, and be part of a team that is at the forefront of the digital revolution, collaborating with clients to transform businesses and redefine industries through intelligent products, platforms, and services.
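The alert-fatigue reduction this role describes usually starts with suppressing repeats. A hedged sketch of that correlation idea, not GlobalLogic's stack: drop alerts that share a fingerprint with one already kept inside a cooldown window; the 300-second window is an assumption.

```python
# Illustrative alert suppression: keep an alert only if no alert with the
# same fingerprint was kept within the cooldown window before it.

def filter_alerts(alerts, cooldown: float = 300.0):
    """alerts: (timestamp, fingerprint) pairs sorted by time. Returns kept pairs."""
    last_kept = {}
    kept = []
    for ts, fingerprint in alerts:
        previous = last_kept.get(fingerprint)
        if previous is None or ts - previous >= cooldown:
            kept.append((ts, fingerprint))   # new incident, or window expired
            last_kept[fingerprint] = ts
        # otherwise: duplicate within the window, suppressed
    return kept
```

Real systems layer smarter correlation (grouping by service, topology-aware rollups) on top, but this dedup-by-fingerprint step alone removes most pager noise.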
Posted 4 days ago
0 years
0 Lacs
gurgaon, haryana, india
Remote
Job Title Junior/Mid Cloud Solutions Architect & DevOps Engineer Location: [Remote / On-site / Hybrid] About The Role We are looking for a curious and driven Junior to Mid-level Cloud Solutions Architect & DevOps Engineer to join our growing team. This role is a unique hybrid that blends cloud architecture and hands-on DevOps, with a focus on building scalable data lakehouse solutions across major cloud platforms (AWS, GCP, Azure). You will architect and implement infrastructure from the ground up, leveraging cloud storage, databases, container orchestration, serverless services, and more. You’ll work individually initially but with clear pathways to grow into a leadership role, guiding and mentoring future team members. Responsibilities Design, build, and maintain cloud-based data lakehouse architectures using a combination of cloud storage, databases, VMs, Kubernetes, Docker, and serverless technologies. Implement SaaS-style application hosting via web and serverless platforms for end-user accessibility. Manage infrastructure configurations including secrets, environment variables, and secure access controls. Build and maintain CI/CD pipelines and adopt GitOps practices to streamline deployments. Optimize cloud resource usage for cost efficiency without compromising performance. Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud solutions. Continuously learn and stay updated on new cloud services, big data technologies, and best practices. Prepare to take on leadership responsibilities as the team grows. Required Skills & Experience Practical experience with at least one major cloud platform (AWS, GCP, Azure); willingness and ability to learn others. Strong programming skills in Python and SQL. Experience with PySpark is a bonus. Familiarity with containerization (Docker) and orchestration (Kubernetes). Experience with version control systems (Git) and CI/CD tools. 
Understanding of cloud services pricing models to help design cost-effective solutions. Solid grasp of DevOps practices, including configuration management, secrets handling, and environment setup. Self-motivated, eager to learn, and able to work independently. Nice To Have (Bonus) Prior experience with big data platforms or streaming data solutions like Kafka. Knowledge of modern analytics and data stack tools (e.g., dbt, DuckDB, Cloudflare R2). Understanding of cloud networking, VPNs, security features, and firewall configurations. What We Offer Opportunity to shape and lead a growing cloud architecture and DevOps team. Exposure to cutting-edge cloud technologies across multiple providers. Collaborative and supportive work environment that values curiosity and continuous learning. Competitive salary and benefits package. About EXL Sports Analytics EXL Sports Analytics team provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach to clients in the sports industry. We work with some of the topmost sports organizations in the world. About EXL EXL (NASDAQ:EXLS) is a leading data analytics and operations management company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Headquartered in New York, EXL has more than 40,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Sports Analytics team provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach to clients in the sports industry. We work with some of the topmost sports organizations in the world.
Posted 4 days ago
0 years
0 Lacs
pune, maharashtra, india
On-site
We at Innovecture are hiring a DevSecOps Engineer to expand our team; the role will be based in Pune/Mumbai. You will work across various Innovecture and client teams and apply your technical expertise to some of the most complex and challenging technology problems. About Innovecture: Founded in 2007 under the leadership of CEO Shreyas Kamat, Innovecture LLC began as a U.S.-based Information Technology and Management Consulting Company focusing on technology consulting and services. With international development centers located in Salt Lake City, USA, and Pune, India, Innovecture leverages its Global Agile Delivery Model to effectively deliver client projects within budget, scope, and project deadlines. The primary focus of Innovecture is to provide a unique wealth of expertise and experience to the IT and Management Consulting realm by utilizing various technologies across multiple industry domains. Innovecture uses best-in-class design processes and top-quality talent to ensure the highest quality deliverables. With innovation embedded in its consulting and services approach, Innovecture will continue to deliver outstanding results for its Fortune 500 clients and employees. Job Description Architect, design, and implement robust and scalable CI/CD pipelines using Jenkins and ArgoCD for continuous integration, delivery and deployment. Drive the adoption and implementation of GitOps principles using ArgoCD for managing Kubernetes deployments and infrastructure as code. Develop and maintain automation scripts (e.g., Python, Bash, Shell) for various DevOps tasks, including infrastructure provisioning, configuration management, and deployment. Manage and optimize Kubernetes clusters, including deployment, monitoring, troubleshooting, and resource management (pods, services, deployments, etc.). Work closely with development, QA, and operations teams to ensure smooth and efficient software delivery, troubleshoot issues, and enhance system reliability. 
- Ensure CI/CD pipelines and infrastructure adhere to security standards and compliance requirements.
- Proven experience in designing, implementing, and managing complex CI/CD pipelines and DevOps practices.
- In-depth knowledge and hands-on experience with Jenkins, including pipeline creation (Declarative Pipelines), plugin management, and job configuration.
- Strong understanding and practical experience with ArgoCD for GitOps-based continuous delivery to Kubernetes.
- Experience with IaC tools like Terraform or Ansible.
- Proficiency in scripting languages such as Python, Bash, or Shell scripting for automation.
- Familiarity with major cloud platforms (e.g., AWS, Azure) and their relevant services (e.g., Rancher, EKS, API Gateway, AWS KMS, ConfigMaps, etc.).
- Experience with monitoring and logging tools (e.g., Fluent Bit, CloudWatch, Prometheus, Grafana, ELK stack).
- Strong experience with Git and platforms like GitHub, GitLab, or Azure DevOps.
- Excellent problem-solving, communication, and collaboration skills.
- Experience configuring and using SAST and DAST tools like SonarQube, Snyk, GitGuardian, and Burp Suite.
- Experience with automation testing scripts in the pipeline.
- Artifactory integrations with the JFrog DevOps Platform, including JFrog Xray, for comprehensive security scanning, vulnerability detection, and policy enforcement.
- Familiarity with GitHub Actions.
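The posting above repeatedly asks for Python automation scripts for deployment tasks. As a minimal illustration of that kind of script, here is a sketch of a deployment health gate with retries and exponential backoff; the function name, retry counts, and the simulated probe are all illustrative and not part of the posting.

```python
import time

def wait_for_healthy(check, retries=5, delay=1.0, backoff=2.0):
    """Poll a health check until it passes or retries are exhausted.

    `check` is any zero-argument callable returning True when the
    service is healthy (e.g., wrapping an HTTP probe of a /healthz
    endpoint). Returns the attempt number that succeeded, or raises
    RuntimeError once retries are exhausted.
    """
    wait = delay
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            time.sleep(wait)
            wait *= backoff  # exponential backoff between probes
    raise RuntimeError(f"service not healthy after {retries} attempts")

# Simulated probe: fails twice, then succeeds.
results = iter([False, False, True])
attempts = wait_for_healthy(lambda: next(results), retries=5, delay=0.01)
print(attempts)  # 3
```

In a real pipeline, a gate like this would run between the deploy and promote stages, failing the job when the service never becomes healthy.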
Posted 4 days ago
5.0 years
0 Lacs
chennai, tamil nadu, india
On-site
TransUnion's Job Applicant Privacy Notice

What We'll Bring:
We are seeking a highly motivated and experienced Senior DevOps Engineer to lead the design and implementation of scalable infrastructure and automated deployment pipelines. This role emphasizes expertise in Infrastructure as Code (IaC), CI/CD, and Harness, with a strong focus on cloud-native automation and reliability engineering.

What You'll Bring:

Key Responsibilities:
- Architect and maintain CI/CD pipelines using Harness, GitLab CI, Jenkins, or similar tools.
- Develop and manage Infrastructure as Code using Terraform, ensuring modular, reusable, and secure configurations.
- Automate provisioning, configuration, and deployment processes across cloud environments (AWS, GCP, Azure).
- Collaborate with development teams to integrate DevOps practices into the software lifecycle.
- Implement monitoring, logging, and alerting solutions to ensure system reliability and performance.
- Optimize build and release processes to reduce deployment time and improve system uptime.
- Maintain documentation for infrastructure, automation scripts, and operational procedures.
- Drive continuous improvement in automation, scalability, and security across environments.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in DevOps or Site Reliability Engineering roles.
- Strong hands-on experience with Terraform, Python, and CI/CD tools (especially Harness).
- Experience with cloud platforms (AWS, GCP, or Azure).
- Familiarity with containerization (Docker, Kubernetes) and orchestration.
- Solid understanding of automation, configuration management, and version control systems (Git).
- Excellent problem-solving and communication skills.

Impact You'll Make:

Preferred Qualifications:
- Experience with GitOps workflows and policy-as-code tools (e.g., OPA).
- Exposure to security automation and DevSecOps practices.
- Certifications in cloud platforms or DevOps tools.
This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Sr Engineer, InfoSec Engineering
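The Terraform requirement above asks for "modular, reusable, and secure configurations"; a common automation around that is a tag-policy check over a plan. A minimal sketch, assuming the JSON shape produced by `terraform show -json` (only the `resource_changes` list is used here, and the required-tag set is an invented example policy):

```python
REQUIRED_TAGS = {"owner", "environment", "cost-center"}  # illustrative policy

def untagged_resources(plan: dict) -> list:
    """Return addresses of planned resources missing required tags.

    `plan` is the parsed output of `terraform show -json plan.out`;
    tags are read from each resource change's `change.after.tags`.
    """
    missing = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        tags = after.get("tags") or {}
        if not REQUIRED_TAGS.issubset(tags):
            missing.append(rc["address"])
    return missing

# Minimal hand-built plan fragment for illustration.
plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs",
         "change": {"after": {"tags": {"owner": "sre", "environment": "prod",
                                       "cost-center": "1234"}}}},
        {"address": "aws_instance.web",
         "change": {"after": {"tags": {"owner": "sre"}}}},
    ]
}
print(untagged_resources(plan))  # ['aws_instance.web']
```

A check like this typically runs as a pipeline step after `terraform plan`, failing the build before anything untagged is applied.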
Posted 5 days ago
12.0 years
0 Lacs
trivandrum, kerala, india
On-site
Senior Site Reliability Engineer (SRE II)

Own availability, latency, performance, and efficiency for Zafin’s SaaS on Azure. You’ll define and enforce reliability standards, lead high-impact projects, mentor engineers, and eliminate toil at scale. Reports to the Director of SRE.

What you’ll do
- SLIs/SLOs & contracts: Define customer-centric SLIs/SLOs for Tier-0/Tier-1 services. Publish, review quarterly, and align teams to them.
- Error budgeting (policy & tooling): Run the error-budget policy with multi-window, multi-burn-rate alerts and clear runbooks and paging thresholds. Gate changes by budget status (freeze/relax rules) wired into CI/CD. Maintain SLO/EB dashboards (Azure Monitor, Grafana/Prometheus, App Insights). Run weekly SLO reviews with engineering/product. Drive roadmap tradeoffs when budgets are at risk; land reliability epics.
- Incidents without drama: Lead SEV1/SEV2, own comms, run blameless postmortems, and make corrective actions stick.
- Engineer reliability in: Multi-AZ/region patterns (active-active/DR), PDBs/Pod Topology Spread, HPA/VPA/KEDA, resilient rollout/rollback.
- AKS at scale: Harden clusters (network, identity, policy), optimize node/pod density, ingress (AGIC/Nginx); mesh optional.
- Observability that works: Metrics/traces/logs with Azure Monitor/App Insights, Log Analytics, Prometheus/Grafana, OpenTelemetry. Alert on symptoms, not noise.
- IaC & policy: Terraform/Bicep modules, GitOps (Flux/Argo), policy-as-code (Azure Policy/OPA Gatekeeper). No snowflakes.
- CI/CD reliability: Azure DevOps/GitHub Actions with canary/blue-green, progressive delivery, auto-rollback, Key Vault-backed secrets.
- Capacity & performance: Load testing, right-sizing, autoscaling; partner with FinOps to reduce spend without hurting SLOs.
- DR you can trust: Define RTO/RPO, test backups/restore, run game days/chaos drills, validate ASR and multi-region failover.
- Secure by default: Entra ID (Azure AD), managed identities, Key Vault rotation, VNets/NSGs/Private Link, shift-left checks in CI.
- Reduce toil: Automate recurring ops, build self-service runbooks/chatops, publish golden paths for product teams.
- Customer escalations: Be the technical owner on calls; communicate tradeoffs and recovery plans with authority.
- Document to scale: Architectures, runbooks, postmortems, SLIs/SLOs, kept current and discoverable.
- (If applicable) Streaming/ETL reliability: Apply SRE practices (SLOs, backpressure, idempotency, replay) to NiFi/Flink/Kafka/Redpanda data flows.

Minimum qualifications
- Bachelor’s in CS/Engineering (or equivalent experience).
- 12+ years in production ops/platform/SRE, including 5+ years on Azure.
- PostgreSQL (must-have): Deep operational expertise incl. HA/DR, logical/physical replication, performance tuning (indexes/EXPLAIN/ANALYZE, pg_stat_statements), autovacuum strategy, partitioning, backup/restore testing, and connection pooling (pgBouncer). Experience with Azure Database for PostgreSQL – Flexible Server preferred.
- Azure core: AKS (must-have); Front Door/App Gateway, API Management, VNets/NSGs/Private Link, Storage, Key Vault, Redis, Service Bus/Event Hubs.
- Observability: Azure Monitor/App Insights, Log Analytics, Prometheus/Grafana; SLO design and error-budget operations.
- IaC/automation: Terraform and/or Bicep; PowerShell and Python; GitOps (Flux/Argo). Pipelines in Azure DevOps or GitHub Actions.
- Proven incident leadership at scale, blameless postmortems, and SLO/error-budget governance with change gating.
- Mentorship and crisp written/verbal communication.

Preferred (nice to have)
- Apache NiFi, Apache Flink, Apache Kafka or Redpanda (self-managed on AKS or managed equivalents); schema management, exactly-once semantics, backpressure, dead-letter/replay patterns.
- Azure Solutions Architect Expert, CKA/CKAD.
- ITSM (ServiceNow), on-call tooling (PagerDuty/Opsgenie).
- Compliance/SecOps (SOC 2, ISO 27001), policy-as-code, workload identity.
- OpenTelemetry, eBPF tooling, or service mesh.
- Multi-tenant SaaS and cost optimization at scale.
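The error-budget items in the posting above (multi-window, multi-burn-rate alerts; gating changes by budget status) can be sketched numerically. A minimal illustration, using the commonly cited 14.4x one-hour burn-rate threshold for a 30-day SLO window as an assumed example policy rather than Zafin's actual configuration:

```python
def burn_rate(error_ratio: float, slo_target: float) -> float:
    """How fast the error budget is being consumed.

    A burn rate of 1.0 exhausts the budget exactly over the SLO window;
    14.4 exhausts 2% of a 30-day budget in a single hour.
    """
    budget = 1.0 - slo_target
    return error_ratio / budget

def should_page(short_win: float, long_win: float,
                slo_target: float = 0.999, threshold: float = 14.4) -> bool:
    """Multi-window rule: page only when the long window shows the burn
    is sustained AND the short window shows it is still happening."""
    return (burn_rate(long_win, slo_target) >= threshold
            and burn_rate(short_win, slo_target) >= threshold)

# With a 99.9% SLO, a sustained 1.5% error ratio burns budget ~15x too fast.
print(should_page(short_win=0.015, long_win=0.015))   # True  -> page
print(should_page(short_win=0.0001, long_win=0.015))  # False -> recovered
```

The two-window condition is what keeps pages from firing on brief spikes or continuing after recovery, which is the "alert on symptoms, not noise" point made above.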
Posted 5 days ago
4.0 - 6.0 years
8 - 18 Lacs
pune
Hybrid
Responsibilities:
- Design, implement, and maintain AWS infrastructure
- Automate infrastructure creation with Terraform / Terragrunt
- Perform application configuration management and application deployment, enabling infrastructure as code
- Implement auto-scaling strategies and load balancing solutions
- Optimize CDN configurations with AWS CloudFront
- Harden infrastructure and application security across all layers
- Implement security best practices for AWS configurations
- Manage IAM policies, security groups, and network ACLs
- Build and maintain robust CI/CD pipelines with GitHub Actions
- Automate deployment, monitoring, and management processes
- Set up comprehensive monitoring, logging, and alerting systems
- Create and maintain operational best practices
- Manage and resolve DevSecOps-related incidents, service requests, and operational issues in a timely manner
- Monitor system health and performance of DevSecOps platforms, investigate anomalies, and mitigate risks

Technical Skills:
- At least 4 years of extensive relevant experience in AWS DevOps
- Expert knowledge of AWS Cloud services (e.g., VPC, EC2, S3, ELB, RDS, ECS/EKS, IAM, Vault, CloudFront, CloudWatch, SQS/SNS, Lambda, Load Balancer, Security Groups, Redis)
- Strong background in infrastructure and application security
- Proficiency with containerization and orchestration (Docker, Kubernetes, Helm, Istio, EKS, K9s)
- Hands-on experience with GitOps using tools like Flux CD, Argo CD
- Experience with Continuous Integration and Continuous Deployment pipelines and tooling (GitHub, Jenkins, Jira)
- Proficient in scripting languages (e.g., Python, Bash) and automation tools
- Good exposure to monitoring, logging, and alerting tools/systems: Prometheus, Grafana, AWS CloudWatch, New Relic, Splunk, Fluentd, Fluent Bit
- Solid understanding of networking concepts and protocols
- Hands-on experience with Linux system administration
- Proficient in using Terraform / Terragrunt for Infrastructure as Code (IaC) to manage and provision infrastructure
- Exposure to OpenSearch/Elasticsearch and Kibana (ELK)
- Methodological knowledge in SAFe, Agile product development with Scrum, ITIL processes, DevSecOps
- AWS certifications (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect)

Soft Skills:
- Strong analytical and logical thinking skills; ability to think and act rationally when faced with challenges
- Keen eye for detail
- Sense of ownership and accountability
- Fluent communication skills (verbal and written); able to present ideas and thoughts clearly
- Committed team player with good interpersonal skills
- Zeal and enthusiasm for learning and exploring new avenues
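A small example of the security-group hygiene the responsibilities above describe (managing security groups, hardening infrastructure): scanning ingress rules for world-open sensitive ports. The rule shape loosely mirrors the `IpPermissions` entries returned by the EC2 `DescribeSecurityGroups` API, simplified here to plain dicts, and the sensitive-port set is an illustrative policy:

```python
def risky_ingress(rules):
    """Flag ingress rules open to the whole internet on sensitive ports.

    `rules` is a list of simplified IpPermissions-style dicts with
    `FromPort` and `IpRanges` (each range carrying a `CidrIp`).
    Returns the offending port numbers.
    """
    SENSITIVE = {22, 3389}  # SSH, RDP; illustrative policy choice
    findings = []
    for rule in rules:
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in rule.get("IpRanges", []))
        if open_to_world and rule.get("FromPort") in SENSITIVE:
            findings.append(rule["FromPort"])
    return findings

rules = [
    {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # fine: HTTPS
    {"FromPort": 22,  "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # flagged
    {"FromPort": 22,  "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},  # internal only
]
print(risky_ingress(rules))  # [22]
```

In practice the rule data would come from a boto3 `describe_security_groups` call; the check itself stays pure so it can run in CI against exported snapshots.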
Posted 5 days ago
2.0 - 5.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Inviting applications for the role of Senior Principal Consultant - ML Engineers! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities (not limited to):
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.)
- Work on any cloud platform: Azure or AWS
- Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway
- Write Python scripts for automation, deployment, and monitoring
- Engage in the design, development, and maintenance of data pipelines for various AI use cases
- Contribute actively to key deliverables as part of an agile development team
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures)
- Ensure model governance, versioning, and traceability across environments
- Collaborate with others to source, analyse, test, and deploy data processes
- Experience in GenAI projects

Qualifications we seek in you!

Minimum Qualifications:
- Experience with MLOps practices
- Degree/qualification in Computer Science or a related field, or equivalent work experience
- Experience developing, testing, and deploying data pipelines
- Strong Python programming skills
- Hands-on experience deploying 2-3 AI/GenAI models in AWS
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases
- Clear and effective communication skills to interact with team members, stakeholders, and end users

Preferred Qualifications/Skills:
- Experience with Docker-based deployments
- Exposure to model monitoring tools (Evidently, CloudWatch)
- Familiarity with RAG stacks or fine-tuning LLMs
- Understanding of GitOps practices
- Knowledge of governance and compliance policies, standards, and procedures
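The model-monitoring responsibility above (drift, latency, failures) can be illustrated with a classic drift metric. Below is a sketch of the Population Stability Index over pre-binned score distributions; the bin values and the ~0.2 alert threshold are conventional illustrations, not anything specified by the posting:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are per-bin proportions summing to 1.
    A common rule of thumb treats PSI above ~0.2 as significant drift.
    """
    eps = 1e-6  # avoid log(0) on empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time score distribution
assert psi(baseline, baseline) < 1e-9  # identical distributions: no drift
shifted = [0.10, 0.20, 0.30, 0.40]     # production scores skewing high
print(round(psi(baseline, shifted), 3))  # 0.228 -> above the 0.2 rule of thumb
```

In a deployed pipeline, a scheduled job would compute this against fresh inference logs and push the value to CloudWatch (or a tool like Evidently) for alerting.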
Posted 5 days ago
10.0 years
2 - 10 Lacs
hyderābād
On-site
AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

OPENTEXT
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

YOUR IMPACT
The OpenText UFT Family of integrated functional testing solutions enables customers to test earlier and faster by delivering AI-driven test automation across an unparalleled range of technologies; on the most popular browsers, mobile devices, operating systems, and form factors; from the cloud or on-premises; to deliver the speed and resiliency required to achieve automation at scale that is tightly integrated with an organization’s current DevOps toolchain.

What The Role Offers
We are seeking a seasoned and proactive Lead DevOps Engineer to drive automation, deployment, and monitoring across multiple projects and teams. The ideal candidate will have deep expertise in cloud infrastructure, CI/CD pipelines, containerization, infrastructure as code, and security integrations. This role demands strong leadership, strategic thinking, and the ability to guide engineering teams in adopting DevOps best practices across the organization.
- Architect, implement, and maintain scalable CI/CD pipelines using GitLab and Jenkins
- Lead DevOps initiatives across multiple projects, ensuring consistency and quality
- Manage and optimize cloud infrastructure on AWS using best practices
- Develop infrastructure as code using Terraform and Terragrunt
- Package applications using Docker and Wix Installer
- Integrate and manage security tools such as Fortify, SentinelOne, CodeSeal, and Qualys
- Establish and enforce DevOps standards, including security and compliance practices
- Collaborate with cross-functional teams (Development, QA, Security) and company-wide governance or platform groups to align on strategic direction, ensure compliance with organizational standards, and streamline delivery across projects
- Mentor and guide junior DevOps engineers and developers on tooling and processes
- Monitor system performance and troubleshoot issues across environments
- Drive adoption of GitOps, automated testing, and release strategies
- Stay current with industry trends and continuously improve DevOps practices

Required Skills & Qualifications
- Strong experience with GitLab and Jenkins for CI/CD automation
- Deep understanding of AWS services and architecture
- Proficiency in Terraform/Terragrunt for infrastructure provisioning
- Experience with Docker containerization and orchestration (e.g., ECS, EKS)
- Familiarity with Wix Installer for packaging Windows applications
- Knowledge of security tools: Fortify, SentinelOne, CodeSeal, Qualys
- Solid scripting skills (e.g., Bash, Python, PowerShell)
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK)
- Strong understanding of DevOps and security best practices
- Excellent problem-solving, communication, and team leadership skills
- AWS certifications (e.g., Solutions Architect, DevOps Engineer)
- Exposure to Agile/Scrum methodologies
- Familiarity with GitOps practices and tools (e.g., ArgoCD, Flux)
- Experience with container orchestration platforms (e.g., Kubernetes)
- Knowledge of software development lifecycle and release management
- Experience working in regulated environments

Experience Level: 10+ years

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR.
Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
Posted 5 days ago
3.0 - 7.0 years
10 - 15 Lacs
gurugram
Work from Office
- 5-8 years of experience deploying, managing, and operating Red Hat OpenShift clusters in on-prem, hybrid, or public cloud environments
- Administer Persistent Volumes (PV) and Persistent Volume Claims (PVC) using CephFS, RBD, NFS, and cloud-native storage solutions
- Build and maintain CI/CD pipelines using OpenShift Pipelines (Tekton), Jenkins, or GitOps tools like Argo CD
- Design and troubleshoot persistent storage provisioning using StorageClasses and AccessModes (RWO, RWX)
- Manage Ingress Controllers, Routes, Services, and NetworkPolicies for secure traffic flow
- Implement and troubleshoot Service Mesh (Istio/OSM) and API Gateways (e.g., 3scale) where applicable
- Implement observability using Prometheus, Grafana, and Alertmanager; set up centralized logging via Loki
- Strong understanding of containerization technologies
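The PV/PVC administration items above can be illustrated with a small manifest sanity check. This sketch validates a claim's `accessModes` (note that RWO/RWX are shorthand for the `ReadWriteOnce`/`ReadWriteMany` values actually used in manifests) and its storage request; the specific checks are illustrative, not an OpenShift tool:

```python
# Valid PersistentVolumeClaim access mode values in Kubernetes/OpenShift.
ACCESS_MODES = {"ReadWriteOnce", "ReadWriteMany",
                "ReadOnlyMany", "ReadWriteOncePod"}

def validate_pvc(pvc: dict) -> list:
    """Sanity-check a PVC manifest (parsed YAML as a dict).

    Checks only the fields discussed above: accessModes and the
    presence of a storage request. Returns a list of problems.
    """
    problems = []
    spec = pvc.get("spec", {})
    modes = spec.get("accessModes", [])
    if not modes:
        problems.append("no accessModes set")
    for m in modes:
        if m not in ACCESS_MODES:
            problems.append(f"unknown access mode {m!r}")
    if not spec.get("resources", {}).get("requests", {}).get("storage"):
        problems.append("no storage request")
    return problems

pvc = {"spec": {"accessModes": ["RWX"],  # shorthand; should be ReadWriteMany
                "resources": {"requests": {"storage": "10Gi"}}}}
print(validate_pvc(pvc))  # ["unknown access mode 'RWX'"]
```

A lint step like this can run in the pipeline before `oc apply`, catching the shorthand-vs-full-name mistake early.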
Posted 5 days ago
3.0 years
6 - 7 Lacs
noida
Remote
We are seeking a talented individual to join our MMC Corporate team at GIS. This role will be based in Noida/Gurgaon. This is a hybrid role that requires working at least three days a week in the office.

Marsh McLennan is the industry leader in helping companies create dynamic solutions that make a difference in the moments that matter. We are seeking an experienced cloud engineer to join our growing IT team. The ideal candidate will have a strong background in cloud computing and architecture, with a focus on designing and deploying scalable, secure, and cost-effective cloud-based solutions.

What can you expect?
You will be involved in implementing innovative technologies to engineer cloud platform services as well as CI/CD automation. This is a great opportunity for someone who has the desire to work with teams that are passionate about innovation and creating amazing client experiences.

What is in it for you?
- A company with a strong brand and strong results to match.
- A competitive compensation package and a full benefits package starting on day one, including medical, dental, vision, STI, LTI, life insurance, and a generous employer-matched pension plan. We also allow staff to participate in our employee stock purchase plan.
- Generous time-off allowances, including vacation days, floating holidays, and personal days. We also allow colleagues to get a jump start on their long weekends with early close days.
- Access to our employee resource groups, which provide career growth and mentoring opportunities with counterparts in industry groups and client organizations. You can be assured that there is always an opportunity to learn and grow with Marsh.
- Hybrid positions, which provide the opportunity to work both in the office and from home.

We will count on you to:
- Design, implement, and maintain highly available, scalable, and secure cloud infrastructure primarily in AWS and/or Oracle Cloud Infrastructure (OCI).
Candidates with experience in both platforms are highly preferred.
- Develop and execute cloud migration strategies to transition existing on-premises workloads to cloud environments.
- Create and maintain comprehensive documentation of cloud architecture, including architecture diagrams, process workflows, and standard operating procedures.
- Collaborate with development teams to design, develop, and deploy cloud-native applications and services.
- Implement automation and Infrastructure as Code (IaC) solutions to streamline deployment and management of cloud resources.
- Work closely with security and compliance teams to ensure cloud solutions adhere to industry standards and regulatory requirements.
- Troubleshoot, diagnose, and resolve cloud infrastructure issues efficiently.
- Stay current with AWS and OCI service offerings, industry trends, and best practices to optimize cloud performance, cost-efficiency, and reliability.
- Mentor and train team members on OCI and AWS best practices, cloud architecture, and emerging technologies.

What you need to have:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience combined with relevant certifications.
- 3+ years of hands-on experience working with AWS and/or Oracle Cloud Infrastructure (OCI).
- At least an Associate-level cloud certification (e.g., AWS Certified Solutions Architect Associate, OCI Cloud Infrastructure Associate); professional or specialty certifications are a plus.
- Strong understanding of cloud computing concepts and principles, including Infrastructure as Code (IaC), DevOps practices, and serverless computing.
- Experience with IaaS and PaaS services across cloud platforms.
- Solid understanding of network technologies such as TCP/IP, DNS, routing, and security best practices.
- Security-focused mindset with experience implementing security controls and compliance measures.
- Excellent troubleshooting and problem-solving skills.
- Strong written and verbal communication abilities.
- Ability to collaborate effectively with global, cross-functional teams and adapt to evolving project requirements.

What makes you stand out?
- Strong experience designing cloud solutions within large enterprise environments.
- Knowledge and experience working in AWS is a plus.
- Coding and DevOps experience: Python, Infrastructure as Code, GitHub, GitOps, Pipelines.

Why join our team:
We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment.
We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.
Posted 5 days ago