5.0 - 7.0 years
0 Lacs
mumbai, maharashtra, india
On-site
12+ Years of Experience. Looking for an SRE Role.

• Help define, drive, and implement the SRE strategy
• Promote an automate-first culture in operating services, through the reduction of toil
• Develop methodologies and strategies for identifying toil-heavy and inefficient processes, and for automating and eliminating toil, delay, and redundancy in those processes
• Assist in developing engineering and operational service metrics with actionable plans to improve operational efficiency, enhance service quality/SLAs, and optimize delivery
• Working with all parties, develop and implement SLOs for critical services
• Define the monitoring strategy with Engineering and implement appropriate capabilities
• Design and implement reliability improvements
• Conduct capacity planning
• Perform chaos engineering exercises
• Lead architectural reviews for reliability
• Drive continuous improvement from incidents
• Contribute to the test and deployment processes, ensuring they are as reliable and automated as possible

Skills and qualifications
• A bachelor's degree or higher in computer science, information systems, or a related field, or equivalent work experience
• Hands-on SRE practitioner with 5+ years of working experience in an SRE role
• Practical experience defining and implementing Service Level Objectives, and operating to error budgets
• Has implemented and operated monitoring and observability technologies for a wide range of enterprise-grade production systems
• Experience in a corporate software development lifecycle methodology; some experience implementing GitOps is a plus
• Demonstrates a strong understanding of how technical systems work and interact
• Strong analytical skills and a solid understanding of all critical production support processes
• 2+ years of experience with one or more public/private cloud platforms (e.g. AWS, Azure)

Knowledge Required
• Comprehensive understanding of SRE principles and the ability to evangelise them
• Working knowledge of modern observability tooling, including OpenTelemetry, Prometheus, Grafana, and associated projects
• Experience with Infrastructure as Code (IaC) principles and design
• Extensive knowledge of configuration management solutions such as Ansible, Chef, or Puppet
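Since the role centres on Service Level Objectives and error budgets, a small illustrative calculation (not part of the posting; the SLO values below are hypothetical) shows how an availability SLO translates into an error budget over a rolling window:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget_minutes(slo, window_days)
    return 1 - downtime_minutes / budget
```

A 99.9% SLO leaves roughly 43 minutes of downtime in a 30-day window; burn-rate alerting builds on the same arithmetic.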
Posted 3 days ago
15.0 years
0 Lacs
chennai, tamil nadu, india
On-site
AKS DevOps Architect
Positions: 1
Experience: 15+ years
Shift Timing: Flexible (willing to work in shifts)
Notice Period: Immediate to 15 days

Job Description: DevOps Architect – Subject Matter Expert (SME) in AKS & Azure

Position Summary
We are seeking a DevOps Architect with Subject Matter Expertise in Azure Kubernetes Service (AKS) to design, implement, and optimize our enterprise-grade cloud-native infrastructure. This role will be hands-on with architecture and automation while also providing strategic direction on AKS adoption, DevOps tooling, and cloud-native best practices.

Key Responsibilities

Architecture & Strategy
• Define the end-to-end architecture for AKS-based workloads in Azure.
• Design multi-environment, multi-region AKS architectures for high availability and disaster recovery.
• Establish best practices for container orchestration, scaling, and governance in AKS.
• Advise on cost optimization strategies for AKS and related Azure resources.

Azure & AKS Expertise
• Architect and provision AKS clusters with features like:
  - Cluster and node pool autoscaling
  - Private cluster configuration
  - Network isolation using Azure CNI / Kubenet
  - Managed identities and Azure AD integration
• Integrate AKS with supporting Azure services:
  - Azure Monitor
  - Managed Prometheus & Grafana
  - Azure Container Registry (ACR)
  - Azure Key Vault

DevOps & Automation
• Design Infrastructure as Code solutions with Terraform, Bicep, or ARM templates for AKS provisioning.
• Implement GitOps workflows (e.g., ArgoCD, Flux) for automated deployments.
• Define CI/CD architecture using Azure DevOps, GitHub Actions, or Jenkins for AKS workloads.

Security & Compliance
• Architect RBAC and Azure AD security models for AKS access.
• Integrate security scanning for container images and Kubernetes workloads.
• Ensure alignment with compliance frameworks and governance policies.

Monitoring, Logging, & Observability
• Architect observability solutions using:
  - Azure Monitor
  - Managed Prometheus
  - Container Insights
  - Grafana
• Establish automated alerting, anomaly detection, and incident response playbooks.

Required Qualifications
• 10+ years in DevOps, Cloud Architecture, or related roles.
• 5+ years of hands-on Azure experience, with deep AKS expertise.
• Strong technical expertise in:
  - Kubernetes architecture & troubleshooting
  - Azure networking (VNet, NSG, Private Link)
  - IaC tools (Terraform, Bicep, ARM)
  - Container security and governance
• Proven ability to design and deliver scalable, secure, and cost-effective AKS solutions.

Preferred Qualifications
• Azure Solutions Architect Expert or Certified Kubernetes Administrator (CKA).
• Experience with service mesh architectures (Istio, Linkerd, OSM).
• Familiarity with multi-cluster management tools (Azure Arc, Fleet).
• Experience in workload migration to AKS.

Soft Skills
• Strong communication and presentation skills for both technical and business audiences.
• Ability to lead design workshops and architecture reviews.
• Strategic problem-solver with hands-on implementation skills.

Location
• Chennai
• Travel: Limited, as required for project delivery.
Posted 3 days ago
10.0 years
0 Lacs
hyderabad, telangana, india
On-site
About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations.

Client: LTIMINDTREE
Job Type: C2H
Role: Senior Infrastructure Security & Compliance Engineer
Experience: 8-12 years
Work Location: Bangalore
Payroll on: People Prime World Wide
Notice: 0-15 days

Job Description: Senior Infrastructure Security & Compliance Engineer (Zero-Touch GPU Cloud – GitOps-Driven Compliance & Resilience)

We are seeking a Senior Infrastructure Security & Compliance Engineer with 10+ years of experience in infrastructure and platform automation to drive the Zero-Touch Build, Upgrade, and Certification pipeline for our on-prem GPU cloud environment. This role is focused on integrating security scanning, policy enforcement, compliance validation, and backup automation into a fully GitOps-managed GPU cloud stack, spanning hardware → OS → Kubernetes → platform layers.

Key Responsibilities
• Design and implement GitOps-native workflows to automate security, compliance, and backup validation as part of the GPU cloud lifecycle.
• Integrate Trivy into CI/CD pipelines for container and system image vulnerability scanning.
• Automate kube-bench execution and remediation workflows to enforce Kubernetes security benchmarks (CIS/STIG).
• Define and enforce policy-as-code using OPA/Gatekeeper to validate cluster and workload configurations.
• Deploy and manage Velero to automate backup and disaster recovery operations for Kubernetes workloads.
• Ensure that all compliance, scanning, and backup logic is declarative and auditable through Git-backed repositories.
• Collaborate with infrastructure, platform, and security teams to define security baselines, enforce drift detection, and integrate automated guardrails.
• Drive remediation automation and post-validation gates across build, upgrade, and certification pipelines.
• Monitor evolving security threats and ensure tooling is regularly updated to detect vulnerabilities, misconfigurations, and compliance drift.

Required Skills & Experience
• 10+ years of hands-on experience in infrastructure, platform automation, and systems security.
• Primary key skills: Python/Go/Bash scripting, OPA Rego policy writing, CI integration for Trivy & kube-bench, GitOps.
• Strong knowledge and practical experience with:
  - Trivy for container, filesystem, and configuration scanning
  - kube-bench for Kubernetes CIS benchmark compliance
  - Velero for Kubernetes-native backup and disaster recovery
  - OPA/Gatekeeper for policy-as-code and admission control
• Deep understanding of GitOps workflows (e.g., Argo CD, Flux) and how to integrate security tools declaratively.
• Proven experience automating security, compliance, and backup validation in CI/CD pipelines.
• Solid foundation in Kubernetes internals, RBAC, pod security, and multi-tenant best practices.
• Familiarity with vulnerability management lifecycles and security risk remediation strategies.
• Experience with Linux systems administration, OS hardening, and secure bootstrapping.
• Proficiency in scripting languages such as Python, Go, or Bash for automation and tooling integration.
• Bonus:
  - Experience with SBOMs, image signing, or container supply chain security
  - Exposure to regulated environments (e.g., PCI-DSS, HIPAA, FedRAMP)
  - Contributions to open-source security/compliance projects

Seniority Level: Mid-Senior level
Industry: IT Services and IT Consulting; Software Development
Employment Type: Contract
Job Functions: Information Technology
Skills: Infrastructure Security; Compliance Engineering
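As a rough illustration of the "integrate Trivy into CI/CD" responsibility (a sketch, not this team's actual pipeline; the field names follow Trivy's JSON report format but should be verified against the Trivy version in use), a gate step might parse the scan report and fail the job when HIGH or CRITICAL findings are present:

```python
def gate_on_severity(report: dict, blocked=("HIGH", "CRITICAL")) -> list:
    """Return vulnerability IDs whose severity is in the blocked set.

    `report` is the parsed output of e.g. `trivy image --format json`.
    A non-empty return value would fail the pipeline stage.
    """
    findings = []
    for result in report.get("Results", []):
        # Trivy omits the Vulnerabilities key for clean targets
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in blocked:
                findings.append(vuln.get("VulnerabilityID"))
    return findings
```

In a GitOps setup the same check would run as a pre-merge job, keeping the gate logic itself versioned in Git alongside the policy it enforces.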
Posted 3 days ago
10.0 years
0 Lacs
hyderabad, telangana, india
Remote
About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations.

Client: LTIMINDTREE
Job Type: C2H
Role: Senior Infrastructure Automation Engineer
Experience: 8-15 years
Work Location: Bangalore
Payroll on: People Prime World Wide
Notice: 0-15 days

Job Description: Senior Infrastructure Automation Engineer (Zero-Touch GPU Cloud Build & Upgrade)

We are looking for a Senior Infrastructure Automation Engineer with 10+ years of hands-on experience in building and scaling infrastructure automation systems to lead the design and implementation of a Zero-Touch Build, Upgrade, and Certification framework for our on-prem GPU cloud environment. This role demands deep technical expertise across bare-metal provisioning, configuration management, and full-stack automation—from hardware to Kubernetes—built entirely on GitOps principles.

Key Responsibilities
• Architect, lead, and implement a fully automated, zero-touch deployment pipeline for GPU cloud infrastructure spanning hardware → OS → Kubernetes → platform layers.
• Build robust GitOps-based workflows to manage the end-to-end infrastructure lifecycle—from provisioning to continuous compliance.
• Design and maintain automation for:
  - Bare-metal control: power cycling, provisioning, remote installs
  - Firmware and configuration flashing: BIOS, NIC, RAID, etc.
  - Hardware inventory management
  - Configuration drift detection and remediation
• Develop and extend internal automation frameworks using Ansible, Python, and related infrastructure tooling.
• Serve as a technical authority and mentor, guiding junior engineers and collaborating cross-functionally with hardware, SRE, and platform engineering teams.
• Lead architectural and design reviews for infrastructure automation systems.
• Define and implement best practices for infrastructure as code, compliance, and operational resilience.
• Champion automation-driven operational models and reduce manual intervention to near-zero.
• Bonus: familiarity with Terraform, Chef, and cloud automation platforms.

Required Skills & Experience
• 10+ years of hands-on experience in infrastructure engineering, automation, and systems design, with a strong track record of delivering scalable and maintainable solutions.
• Primary key skills: Ansible, Python, ipmitool, firmware scripting, Linux shell scripting.
• Deep expertise in:
  - Ansible for automation and configuration management
  - Python for scripting, integration, and automation logic
  - ipmitool and related tools for low-level hardware management (e.g., IPMI, Redfish)
• Proven experience with bare-metal automation in data center environments, including:
  - Power control and PXE booting
  - BIOS/NIC/RAID firmware upgrades
  - Hardware and platform inventory systems
• Strong foundation in Linux systems, networking, and Kubernetes infrastructure.
• Fluency with GitOps workflows and tools.
• Experience with CI/CD systems and managing Git-based pipelines for infrastructure.
• Familiarity with infrastructure monitoring, logging, and drift detection.
• Strong cross-team collaboration and communication skills, especially across hardware, platform, and SRE teams.
• Bonus:
  - Prior leadership or mentorship roles
  - Experience contributing to or maintaining open-source infrastructure projects
  - Exposure to GPU-based compute stacks and high-performance workloads
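To make the bare-metal control responsibility concrete, here is a minimal sketch (a hypothetical helper, not from the posting) that builds an `ipmitool` chassis power command for a BMC over the lanplus interface; in practice such a helper would be invoked via subprocess or wrapped in an Ansible module:

```python
def ipmi_power_cmd(host: str, user: str, action: str) -> list:
    """Build an `ipmitool chassis power` command for a BMC over lanplus."""
    allowed = {"status", "on", "off", "cycle", "reset"}
    if action not in allowed:
        raise ValueError(f"unsupported power action: {action!r}")
    # -E reads the BMC password from the IPMI_PASSWORD environment
    # variable, keeping credentials out of the process argument list
    return ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-E",
            "chassis", "power", action]
```

Validating the action up front, rather than passing arbitrary strings through to the BMC, is a small guardrail of the kind a zero-touch pipeline relies on.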
Posted 3 days ago
0 years
0 Lacs
mumbai metropolitan region
On-site
We're looking for a passionate Cloud & DevOps Engineer who thrives on building secure, scalable, and resilient platforms. You'll be responsible for architecting cloud-native solutions, driving automation, and collaborating across product teams to deliver next-generation systems.

Primary Responsibilities
• Contribute to product development with a strong focus on distributed systems and microservices.
• Architect and deliver cloud solutions that are highly available, secure, and performance-driven.
• Hands-on experience working with at least two major cloud providers (AWS, Azure, or GCP); multi-cloud knowledge will be considered a big plus.
• In-depth understanding of cloud services such as networking, IAM, compute, storage, managed databases, orchestration services (EKS/GKE), and KMS.
• Set up and manage infrastructure using Infrastructure as Code (IaC) with tools like Terraform, Helm (mandatory), and Kustomize.
• Deploy, monitor, and scale applications across Kubernetes environments.
• Build and manage CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions, and leverage GitOps tools (ArgoCD, FluxCD) for deployment workflows.
• Establish observability with Prometheus, Grafana, and Elastic Stack.
• Deliver solutions for both PaaS and on-premises environments.
• Bonus points for experience with k3s or OpenShift.

What We're Looking For
• Someone who can quickly adapt, architect solutions, and execute hands-on.
• Strong technical foundation with proven experience in cloud-native ecosystems.
• A delivery-focused mindset that balances speed and quality.
• Ability to design, implement, and run cloud platforms and DevOps pipelines end-to-end.
• Strong inclination toward open-source technologies.
• Comfortable working with multiple product teams (5+) in a collaborative setting.
• Advocates for the GitOps philosophy and an automation-first approach.
• Familiar with DevSecOps practices, ensuring security is baked into every layer.

(ref:hirist.tech)
Posted 3 days ago
10.0 years
0 Lacs
pune, maharashtra, india
Remote
About Prismforce
Prismforce is a Vertical SaaS company revolutionizing the Talent Supply Chain for global Technology, R&D/Engineering, and IT Services companies. Our AI-powered product suite enhances business performance by enabling operational flexibility, accelerating decision-making, and boosting profitability. Our mission is to become the leading industry cloud/SaaS platform for tech services and talent organizations worldwide.

Description
We're hiring a DevOps Lead with 10+ years of experience to lead and scale our DevOps/Platform Engineering team (currently 5 engineers). This is a hands-on leadership role where you'll drive infrastructure strategy, build a secure and scalable platform, and enable rapid product delivery.

Role: DevOps Lead
Reporting to: Sr VP Technology
Location: Remote

What You'll Do
• Lead the design and evolution of our AWS-first cloud infrastructure (EKS, VPC, IAM, Terraform, etc.).
• Build and operate Kubernetes platforms, CI/CD pipelines, observability tooling, and internal developer workflows.
• Drive SRE practices: define SLAs/SLOs, set up alerting/on-call, manage incidents.
• Champion automation, security, and reliability across the stack.
• Collaborate with engineering, security, and product to support scale and delivery velocity.
• Contribute to our multi-cloud readiness as we grow.

What We're Looking For
• 10+ years in DevOps/SRE/Infra roles, with 3+ years leading teams or projects.
• Deep experience with AWS, Kubernetes, CI/CD systems, and Terraform.
• Strong focus on security, automation, observability, and scalability.
• Familiarity with other clouds (GCP, Azure), GitOps (ArgoCD), FinOps, or platform engineering is a big plus.
• Excellent communicator and mentor - you thrive in fast-paced, collaborative environments.

Why Join Us
• High-impact leadership role at a pivotal growth stage.
• Influence our infra and engineering culture from the ground up.
• Work with a sharp, driven team on real-world scaling challenges.
• Competitive salary, meaningful equity, remote-first culture.

What Makes Us Unique
• First-Mover Advantage: We are the only Vertical SaaS product company addressing Talent Supply Chain challenges in the IT services industry.
• Innovative Product Suite: Our solutions offer forward-thinking features that outshine traditional ERP systems.
• Strategic Expertise: Guided by an advisory board of ex-CXOs from top global IT firms, providing unmatched industry insights.
• Experienced Leadership: Our founding team brings deep expertise from leading firms like McKinsey, Deloitte, Amazon, Infosys, TCS, and Uber.
• Diverse and Growing Team: We have grown to 160+ employees across India, with hubs in Mumbai, Pune, Bangalore, and Kolkata.
• Strong Financial Backing: Series A-funded by Sequoia, with global IT companies using our product as a core solution.

Why Join Prismforce
• Competitive Compensation: We offer an attractive salary and benefits package that rewards your contributions.
• Innovative Projects: Work on pioneering projects with cutting-edge technologies transforming the Talent Supply Chain.
• Collaborative Environment: Thrive in a dynamic, inclusive culture that values teamwork and innovation.
• Growth Opportunities: Continuous learning and development are core to our philosophy, helping you advance your career.
• Flexible Work: Enjoy flexible work arrangements that balance your work-life needs.

By joining Prismforce, you'll become part of a rapidly expanding, innovative company that's reshaping the future of tech services and talent management.

Perks & Benefits
• Work with the best in the industry: a high-pedigree leadership team that will challenge you, build on your strengths, and invest in your personal development.
• Insurance coverage: Group Mediclaim cover for self, spouse, kids, and parents, plus a Group Term Life Insurance policy for self.
• Flexible policies.
• Retiral benefits.
• Hybrid work model.
• Self-driven career progression tool.

(ref:hirist.tech)
Posted 3 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Responsibilities

Job Description:
• GCP experience
• Strong written and verbal communication skills
• Knowledge of Azure DevOps as well as general DevOps toolsets
• Ability to collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues
• Experience with Agile and Scrum concepts
• Solid working knowledge of GitOps concepts and CI/CD pipeline design and tools, including Azure DevOps, Git, Jenkins, JFrog, and SonarQube
• Engages in Azure DevOps administration
• Responds to platform performance and availability issues
• Opens and follows tickets with vendor product owners
• Provides general support to app teams for supported DevOps tools
• Troubleshoots Azure DevOps issues related to DevOps toolsets and deployment capabilities
• Works the general backlog of support tickets
• Manages and supports artifact management (JFrog)
• Manages and supports code quality tooling (SonarQube)

Qualifications
• Bachelor's Degree or international equivalent in Computer Science or a related field - preferred
• Experience managing projects

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
Posted 3 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Responsibilities

Job Description:
• GCP experience
• Strong written and verbal communication skills
• Knowledge of Azure DevOps as well as general DevOps toolsets
• Ability to collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues
• Experience with Agile and Scrum concepts
• Solid working knowledge of GitOps concepts and CI/CD pipeline design and tools, including Azure DevOps, Git, Jenkins, JFrog, and SonarQube
• Engages in Azure DevOps administration
• Responds to platform performance and availability issues
• Opens and follows tickets with vendor product owners
• Provides general support to app teams for supported DevOps tools
• Troubleshoots Azure DevOps issues related to DevOps toolsets and deployment capabilities
• Works the general backlog of support tickets
• Manages and supports artifact management (JFrog)
• Manages and supports code quality tooling (SonarQube)

Qualifications
• Bachelor's Degree or international equivalent in Computer Science or a related field - preferred
• Experience managing projects

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
Posted 3 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that push you to develop every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Responsibilities

Job Description:
• GCP experience
• Strong written and verbal communication skills
• Knowledge of Azure DevOps as well as general DevOps toolsets
• Ability to collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues
• Experience with Agile and Scrum concepts
• Solid working knowledge of GitOps concepts and CI/CD pipeline design and tools, including Azure DevOps, Git, Jenkins, JFrog, and SonarQube
• Engages in Azure DevOps administration
• Responds to platform performance and availability issues
• Opens and follows tickets with vendor product owners
• Provides general support to app teams for supported DevOps tools
• Troubleshoots Azure DevOps issues related to DevOps toolsets and deployment capabilities
• Works the general backlog of support tickets
• Manages and supports artifact management (JFrog)
• Manages and supports code quality tooling (SonarQube)

Qualifications
• Bachelor's Degree or international equivalent in Computer Science or a related field - preferred
• Experience managing projects

Employment Type: Permanent

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
Role Overview: Gruve, an innovative software services startup, is looking for an experienced Kubernetes Data Center Administrator to manage and maintain multiple infrastructure systems running Kubernetes across data centers. The ideal candidate will play a crucial role in creating, managing, and debugging Kubernetes clusters and services, ensuring operational excellence through collaboration with IT teams. This position requires deep technical expertise in Kubernetes, virtualization, and data center operations, as well as strong experience in ITSM platforms and compliance management. Key Responsibilities: - Design, deploy, and maintain multiple Kubernetes clusters across data center environments. - Manage and troubleshoot Kubernetes services including MinIO (object storage), Prometheus (monitoring), Istio (service mesh), MongoDB, and PostgreSQL (databases). - Collaborate with IT teams to support operational needs such as change management, patch and software update cycles, data protection, disaster recovery planning, DCIM systems, compliance audits, and reporting. - Diagnose and resolve complex Kubernetes configuration issues. - Modify platform components and scripts to enhance reliability and performance. - Administer and integrate multiple ITSM platforms for asset management, change management, incident management, and problem management. - Maintain detailed documentation of Kubernetes environments and operational procedures. - Ensure systems meet regulatory and organizational compliance standards. Qualifications: - 8-10 years of experience in Kubernetes administration and virtualization technologies. - Proven experience managing production-grade Kubernetes clusters and services. - Strong understanding of data center operations and infrastructure systems. - Hands-on experience with ITSM platforms (e.g., Jira Service Management). - Proficiency in scripting (e.g., Bash, Python) and automation tools. 
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana).
- Experience with disaster recovery planning and compliance audits.
- At least one CNCF Kubernetes certification (e.g., CKA, CKS, CKAD).
- Experience with container security and policy enforcement preferred.
- Familiarity with GitOps workflows and tools like ArgoCD or Flux preferred.
- Knowledge of infrastructure-as-code tools (e.g., Terraform, Ansible) preferred.

Note: The job description also includes information about the company, its culture, and the work environment, which has been omitted for brevity.
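The GitOps and configuration-troubleshooting themes above can be sketched with a toy manifest comparison (illustrative only; real GitOps tools such as ArgoCD or Flux compute drift against live cluster state via the Kubernetes API):

```python
def find_drift(desired: dict, live: dict, prefix: str = "") -> list:
    """Recursively list dotted paths where the live object differs
    from the desired (Git-declared) manifest."""
    drift = []
    for key, want in desired.items():
        path = f"{prefix}{key}"
        have = live.get(key) if isinstance(live, dict) else None
        if isinstance(want, dict) and isinstance(have, dict):
            drift.extend(find_drift(want, have, path + "."))
        elif have != want:
            drift.append(path)
    return drift
```

A reconciler would then either alert on the returned paths or patch the live object back toward the declared state.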
Posted 3 days ago
0 years
0 Lacs
india
Remote
About Juniper Square
Our mission is to unlock the full potential of private markets. Privately owned assets like commercial real estate, private equity, and venture capital make up half of our financial ecosystem yet remain inaccessible to most people. We are digitizing these markets, and as a result, bringing efficiency, transparency, and access to one of the most productive corners of our financial ecosystem. If you care about making the world a better place by making markets work better through technology – all while contributing as a member of a values-driven organization – we want to hear from you.

Juniper Square offers employees a variety of ways to work, ranging from a fully remote experience to working full-time in one of our physical offices. We invest heavily in digital-first operations, allowing our teams to collaborate effectively across 27 U.S. states, 2 Canadian provinces, India, Luxembourg, and England. We also have physical offices in San Francisco, New York City, Mumbai, and Bangalore for employees who prefer to work in an office some or all of the time.

About Your Role
Juniper Square is industry-leading in the transformation of private capital markets through innovation and technology, and we are expanding globally to keep up with the demand for innovation! We are looking for a Staff Site Reliability Engineer to help us grow our domain expertise and provide support in a new global region to enable 24x7 development velocity as a global company. From AWS cloud provisioning as code, to improving the developer experience in your working timezone, to acting as a guide to best practices around building and delivering software globally, we need an SRE with the passion, motivation, and great ideas to make everything better.

What You'll Do
• Automate the provisioning of all of Juniper Square's infrastructure in code. Everything we do is in code!
• Partner with our Platform Engineering team on building developer tooling and improving developer experiences via joint initiatives and enhancements.
• Partner with our Data Engineering team on improving our data posture and driving operational excellence.
• Evolve our deployment pipelines to automate infrastructure deployments with the latest and greatest (and reliable) technologies.
• Improve metrics on our main services, and act as a subject matter expert for our global dev teams.
• Enable observability and SLO/SLI reporting, and respond to business-impacting incidents as they pertain to infrastructure.
• Adopt and drive solutions that align with AWS Well-Architected frameworks and Juniper Square's business objectives.
• Identify performance bottlenecks and provide recommendations for improvement.
• Proactively identify and solve problems that we didn't even know we had.
• Help build, deploy, and scale a load testing environment that is analogous to production.
• Enforce security and operational safety controls.
• Participate in technical roadmap planning and estimation.
• Participate and contribute in production readiness and architecture review board (ARB) meetings and forums.
• Train and mentor future engineers in the same region.
• Contribute to architectural improvements to meet future scaling and observability requirements.

Qualifications
• A profound love for solving hard problems and overcoming challenging obstacles.
• Putting your customers first, whether they be internal or external, and making them more productive, happy, and successful.
• Experience with AWS. Other public cloud providers are a bonus.
• Experience with PostgreSQL is a must. Additional experience with document databases is a nice-to-have.
• Experience with cloud security best practices (CSPM, CDR, CWPP, SIEM, etc.) to keep our customers and cloud posture secure.
• Experience with containers (builds, registries, vulnerability scanning, runtime with docker-compose, runtime with Tilt, runtime in schedulers/orchestration systems).
• Multi-year hands-on experience and fluency with Kubernetes and Helm charts are an absolute skill requirement. We live and breathe the k8s ecosystem.
• Experience with a CI/CD pipeline. We use a combination of GitHub Actions, ArgoCD, Helm, and GitOps in our deployment process, but again, any are fine.
• Some sort of infrastructure-as-code system: Ansible, Terraform, CloudFormation, CDK, etc.
• We use Python and TypeScript, so knowledge of and exposure to either is a strong plus.
• Experience breaking up monolithic architectures into microservices.
• Experience with service meshes and service discovery solutions.
• Experience with an observability solution: New Relic, Prometheus, DataDog, etc.
• Experience with logging systems: CloudWatch, ELK, Splunk, etc.
• Bachelor's degree in Computer Science or similar, or equivalent experience.

At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don't meet all the qualifications.
Posted 3 days ago
5.0 years
0 Lacs
trivandrum, kerala, india
On-site
Role Description
Role Proficiency: Acts under minimum guidance of DevOps Architect to set up and manage DevOps tools and pipelines.
Outcomes
Interpret the DevOps tool/feature/component design and develop/support the same in accordance with specifications. Follow and contribute to existing SOPs to troubleshoot issues. Adapt existing DevOps solutions for new contexts. Code, debug, test, and document; communicate DevOps development stages/status of DevOps develop/support issues. Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Support users, onboarding them on existing tools with guidance from DevOps leads. Work with diverse teams with Agile methodologies. Facilitate saving measures through automation. Mentor A1 and A2 resources. Involved in the code review of the team.
Measures Of Outcomes
Schedule adherence. Quality of the code. Defect injection at various stages of lifecycle. SLA related to level 1 and level 2 support. # of domain/product certifications obtained. Facilitate saving measures through automation.
Outputs Expected
Automated components: Deliver components that automate parts of the installation/configuration of software/tools on-premises and on cloud. Deliver components that automate parts of the build/deploy for applications.
Configured components: Configure a CI/CD pipeline that can be used by application development/support teams.
Scripts: Develop/support scripts (like PowerShell/Shell/Python scripts) that automate installation/configuration/build/deployment tasks.
Onboard users: Onboard and extend existing tools to new app dev/support teams.
Mentoring: Mentor and provide guidance to peers.
Stakeholder management: Guide the team in preparing status updates, keeping management updated regarding the status.
Database: data insertion, update, delete, and view creation.
Skill Examples
Install, configure, and troubleshoot CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes. Integrate with code/test quality analysis tools like SonarQube/Cobertura/Clover. Integrate build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit. Scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell). Repository management/migration automation – Git/Bitbucket/GitHub/ClearCase. Build automation scripts – Maven/Ant. Artefact repository management – Nexus/Artifactory. Dashboard management & automation – ELK/Splunk. Configuration of cloud infrastructure (AWS/Azure/Google). Migration of applications from on-premises to cloud infrastructures. Working on Azure DevOps/ARM (Azure Resource Manager)/DSC (Desired State Configuration). Strong debugging skill in C#/.NET. Basic working knowledge of databases.
Knowledge Examples
Knowledge of installation/config/build/deploy tools and knowledge of DevOps processes. Knowledge of IaaS cloud providers (AWS/Azure/Google etc.) and their tool sets. Knowledge of the application development lifecycle. Knowledge of Quality Assurance processes. Knowledge of quality automation processes & tools. Knowledge of Agile methodologies. Knowledge of security policies and tools.
Additional Comments
5+ years of experience as an SRE, DevOps Engineer, or similar role. Proficiency in scripting and automation (Bash, Python, Go, etc.). Strong experience with containerization and orchestration (Docker, Kubernetes, Helm). Solid understanding of Linux systems administration and networking fundamentals. Experience with cloud platforms (AWS, Azure, or GCP). Experience with IaC tools like Terraform or CloudFormation. Familiarity with GitOps and modern deployment practices. Hands-on experience with observability tools (e.g., Prometheus, Grafana, Datadog). Strong troubleshooting and incident response skills.
Preferred: Experience in a high-traffic, microservices-based architecture. Exposure to service meshes (Istio, Linkerd). Certifications (AWS Certified DevOps Engineer, CKA, etc.)
Experience with security automation and compliance (e.g., SOC2, ISO27001).
Soft Skills: Strong communication and collaboration abilities. Ability to thrive in a fast-paced, agile environment. Analytical mindset and proactive approach to problem-solving. A passion for automation, performance, and system design.
Design, build, and maintain reliable, scalable, and secure cloud-based infrastructure (AWS, Azure, or GCP). Develop and improve observability using monitoring, alerting, logging, and tracing tools (e.g., Prometheus, Grafana, ELK, Datadog, etc.). Automate repetitive tasks and infrastructure using Infrastructure-as-Code (Terraform, CloudFormation, Pulumi). Create and maintain CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, ArgoCD, etc.) to support fast and safe delivery. Lead incident response, root cause analysis, and postmortems to ensure high uptime and rapid recovery. Optimize system performance, reliability, and cost-effectiveness through proactive monitoring and tuning. Collaborate with software engineering teams to define SLAs/SLOs and improve service reliability. Implement and maintain security best practices across environments (e.g., secrets management, IAM, firewalls, etc.). Maintain disaster recovery plans, backups, and high-availability strategies.
Skills: Kubernetes, Cloud Platform, Python Scripting, SRE
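Automating repetitive operational tasks, as this role calls for, typically starts with handling transient failures without a human in the loop. A minimal, illustrative retry helper with exponential backoff (all names and parameters here are invented for the sketch, not taken from the posting):

```python
import time

def retry(func, attempts=3, base_delay=0.01, backoff=2.0, sleep=time.sleep):
    """Call func, retrying on exception with exponential backoff.

    Re-raises the last exception if every attempt fails. The sleep function
    is injectable so tests can run without real delays.
    """
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise
            sleep(delay)
            delay *= backoff

# Example: a flaky operation that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```

Real incident-response automation would cap total retry time and add jitter; this sketch only shows the core loop.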
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
trivandrum, kerala, india
On-site
Role Description
Role Proficiency: Acts under guidance of Lead II/Architect; understands customer requirements and translates them into the design of new DevOps (CI/CD) components. Capable of managing at least 1 Agile team.
Outcomes:
Interprets the DevOps tool/feature/component design to develop/support the same in accordance with specifications. Adapts existing DevOps solutions and creates own DevOps solutions for new contexts. Codes, debugs, tests, documents, and communicates DevOps development stages/status of DevOps develop/support issues. Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimises efficiency, cost, and quality of DevOps process, tools, and technology development. Validates results with user representatives; integrates and commissions the overall solution. Helps Engineers troubleshoot issues that are novel/complex and are not covered by SOPs. Designs, installs, configures, and troubleshoots CI/CD pipelines and software. Able to automate infrastructure provisioning on cloud/on-premises with the guidance of architects. Provides guidance to DevOps Engineers so that they can support existing components. Works with diverse teams with Agile methodologies. Facilitates saving measures through automation. Mentors A1 and A2 resources. Involved in the code review of the team.
Measures Of Outcomes:
Quality of deliverables. Error rate/completion rate at various stages of SDLC/PDLC. # of components reused. # of domain/technology/product certifications obtained. SLA for onboarding and supporting users and tickets.
Outputs Expected:
Automated components: Deliver components that automate parts of the installation/configuration of software/tools on-premises and on cloud. Deliver components that automate parts of the build/deploy for applications.
Configured components: Configure a CI/CD pipeline that can be used by application development/support teams.
Scripts: Develop/support scripts (like PowerShell/Shell/Python scripts) that automate installation/configuration/build/deployment tasks.
Onboard users: Onboard and extend existing tools to new app dev/support teams.
Mentoring: Mentor and provide guidance to peers.
Stakeholder management: Guide the team in preparing status updates, keeping management updated about the status. Share the status report with senior stakeholders.
Training/SOPs: Create training plans/SOPs to help DevOps Engineers with DevOps activities and in onboarding users.
Measure process efficiency/effectiveness: Measure and pay attention to the efficiency/effectiveness of current processes and make changes to make them more efficient and effective.
Skill Examples:
Experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes. Experience integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover. Experience integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit. Scripting skills (Python/Linux/Shell/Perl/Groovy/PowerShell). Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell). Repository management/migration automation – Git/Bitbucket/GitHub/ClearCase. Build automation scripts – Maven/Ant. Artefact repository management – Nexus/Artifactory. Dashboard management & automation – ELK/Splunk. Configuration of cloud infrastructure (AWS/Azure/Google). Migration of applications from on-premises to cloud infrastructures. Working on Azure DevOps/ARM (Azure Resource Manager)/DSC (Desired State Configuration). Strong debugging skill in C#/.NET. Setting up and managing Jira projects and Git/Bitbucket repositories. Skilled in containerization tools like Docker/Kubernetes.
Knowledge Examples:
Knowledge of installation/config/build/deploy processes and tools. Knowledge of IaaS cloud providers (AWS/Azure/Google etc.) and their tool sets. Knowledge of the application development lifecycle. Knowledge of Quality Assurance processes. Knowledge of quality automation processes and tools. Knowledge of multiple tool stacks, not just one. Knowledge of build branching/merging. Knowledge about containerization. Knowledge of security policies and tools. Knowledge of Agile methodologies.
Additional Comments:
Automation Engineer. Relevant Experience: 3 to 5 years of hands-on experience with Kubernetes and cloud-native automation, focusing on eliminating repetitive tasks through scripting, IaC, and self-healing mechanisms.
Job Summary: The Automation Engineer will play a critical role in reducing operational toil within Kubernetes-based environments by designing, developing, and implementing automation solutions that streamline repetitive tasks and improve system reliability. This role involves close collaboration with SRE and platform engineering teams to build self-healing mechanisms, enhance observability, and integrate automation into CI/CD pipelines, ensuring faster, more resilient deployments and minimal manual intervention.
Key Responsibilities:
Toil Reduction & Automation: Identify repetitive, manual operational tasks and design automation solutions to eliminate them. Develop scripts, tools, and pipelines to automate deployments, scaling, monitoring, and incident response.
Kubernetes & Cloud Operations: Manage and optimize Kubernetes clusters across multiple environments (dev, staging, production). Implement automated cluster lifecycle management (provisioning, upgrades, scaling).
Reliability & Observability: Build self-healing mechanisms for common failure scenarios. Enhance observability by automating metrics, logging, and alerting integrations.
CI/CD & Infrastructure as Code: Implement and maintain CI/CD pipelines for application and infrastructure deployments. Use Infrastructure as Code (IaC) tools for consistent environment management.
Collaboration & Best Practices: Work closely with SREs, developers, and platform teams to improve reliability and reduce MTTR. Advocate for an automation-first culture and SRE principles across teams.
Required Skills:
Automation & Scripting: Proficiency in Python or Bash for automation tasks. Kubernetes Expertise: Hands-on experience with Kubernetes (deployment, scaling, troubleshooting); CKA/CKAD certification preferred. Cloud Platforms: Experience with AWS. CI/CD Tools: Jenkins, GitLab CI, or similar. IaC Tools: Terraform. Observability: Familiarity with Splunk. Version Control: Strong Git skills and experience with GitOps workflows. Problem-Solving: Ability to analyze operational pain points and design automation solutions.
Skills: Kubernetes, Cloud Platform, Python Scripting, SRE
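The "identify repetitive, manual operational tasks" responsibility above is often made concrete by scoring toil as frequency times manual effort, then automating the biggest line items first. A hypothetical sketch (the task data is invented purely for illustration):

```python
def toil_minutes_per_month(tasks):
    """Rank manual tasks by monthly minutes consumed (runs per month x minutes per run)."""
    scored = [(name, runs * minutes) for name, runs, minutes in tasks]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# (task name, runs per month, manual minutes per run) -- illustrative numbers only
tasks = [
    ("certificate rotation", 4, 30),
    ("log disk cleanup", 60, 5),
    ("manual failover drill", 1, 120),
]
ranked = toil_minutes_per_month(tasks)
# The frequent small task (log disk cleanup, 300 min/month) outranks the rare big ones.
```

The point of the exercise is that high-frequency small tasks often dominate total toil even when each run feels trivial.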
Posted 3 days ago
5.0 years
0 Lacs
trivandrum, kerala, india
On-site
Role Description
Role Proficiency: Acts under guidance of DevOps; leads more than 1 Agile team.
Outcomes
Interprets the DevOps tool/feature/component design to develop/support the same in accordance with specifications. Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts. Codes, debugs, tests, documents, and communicates DevOps development stages/status of DevOps develop/support issues. Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimises efficiency, cost, and quality of DevOps process, tools, and technology development. Validates results with user representatives; integrates and commissions the overall solution. Helps Engineers troubleshoot issues that are novel/complex and are not covered by SOPs. Designs, installs, and troubleshoots CI/CD pipelines and software. Able to automate infrastructure provisioning on cloud/on-premises with the guidance of architects. Provides guidance to DevOps Engineers so that they can support existing components. Good understanding of Agile methodologies and able to work with diverse teams. Knowledge of more than 1 DevOps tool stack (AWS, Azure, GCP, open source).
Measures Of Outcomes
Quality of deliverables. Error rate/completion rate at various stages of SDLC/PDLC. # of components reused. # of domain/technology/product certifications obtained. SLA/KPI for onboarding projects or applications. Stakeholder management. Percentage achievement of specification/completeness/on-time delivery.
Outputs Expected
Automated components: Deliver components that automate parts of the installation/configuration of software/tools on-premises and on cloud. Deliver components that automate parts of the build/deploy for applications.
Configured components: Configure tools and automation framework into the overall DevOps design.
Scripts: Develop/support scripts (like PowerShell/Shell/Python scripts) that automate installation/configuration/build/deployment tasks.
Training/SOPs: Create training plans/SOPs to help DevOps Engineers with DevOps activities and in onboarding users.
Measure process efficiency/effectiveness: Deployment frequency; innovation and technology changes; operations change lead time/volume; failed deployments; defect volume and escape rate; mean time to detection and recovery.
Skill Examples
Experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes. Experience integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover. Experience integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit. Scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell). Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell). Repository management/migration automation – Git/Bitbucket/GitHub/ClearCase. Build automation scripts – Maven/Ant. Artefact repository management – Nexus/Artifactory. Dashboard management & automation – ELK/Splunk. Configuration of cloud infrastructure (AWS/Azure/Google). Migration of applications from on-premises to cloud infrastructures. Working on Azure DevOps, ARM (Azure Resource Manager) & DSC (Desired State Configuration). Strong debugging skill in C#/.NET. Setting up and managing Jira projects and Git/Bitbucket repositories. Skilled in containerization tools like Docker & Kubernetes.
Knowledge Examples
Knowledge of installation/config/build/deploy processes and tools. Knowledge of IaaS cloud providers (AWS, Azure, Google etc.) and their tool sets. Knowledge of the application development lifecycle. Knowledge of Quality Assurance processes. Knowledge of quality automation processes and tools. Knowledge of multiple tool stacks, not just one. Knowledge of build and release branching/merging. Knowledge about containerization. Knowledge of Agile methodologies. Knowledge of software security compliance (GDPR/OWASP) and tools (Black Duck/Veracode/Checkmarx).
Additional Comments
5+ years of experience as an SRE, DevOps Engineer, or similar role. Proficiency in scripting and automation (Bash, Python, Go, etc.). Strong experience with containerization and orchestration (Docker, Kubernetes, Helm). Solid understanding of Linux systems administration and networking fundamentals. Experience with cloud platforms (AWS, Azure, or GCP). Experience with IaC tools like Terraform or CloudFormation. Familiarity with GitOps and modern deployment practices. Hands-on experience with observability tools (e.g., Prometheus, Grafana, Datadog). Strong troubleshooting and incident response skills.
Preferred: Experience in a high-traffic, microservices-based architecture. Exposure to service meshes (Istio, Linkerd). Certifications (AWS Certified DevOps Engineer, CKA, etc.). Experience with security automation and compliance (e.g., SOC2, ISO27001).
Soft Skills: Strong communication and collaboration abilities. Ability to thrive in a fast-paced, agile environment. Analytical mindset and proactive approach to problem-solving. A passion for automation, performance, and system design.
Design, build, and maintain reliable, scalable, and secure cloud-based infrastructure (AWS, Azure, or GCP). Develop and improve observability using monitoring, alerting, logging, and tracing tools (e.g., Prometheus, Grafana, ELK, Datadog, etc.). Automate repetitive tasks and infrastructure using Infrastructure-as-Code (Terraform, CloudFormation, Pulumi). Create and maintain CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, ArgoCD, etc.) to support fast and safe delivery. Lead incident response, root cause analysis, and postmortems to ensure high uptime and rapid recovery. Optimize system performance, reliability, and cost-effectiveness through proactive monitoring and tuning. Collaborate with software engineering teams to define SLAs/SLOs and improve service reliability. Implement and maintain security best practices across environments (e.g., secrets management, IAM, firewalls, etc.). Maintain disaster recovery plans, backups, and high-availability strategies.
Skills: Kubernetes, Cloud Platform, Python Scripting, SRE
Posted 3 days ago
5.0 years
15 - 20 Lacs
hyderabad, telangana, india
On-site
Job Title: DevOps / Kubernetes Engineer
Location: Onsite (Pune, Hyderabad, Mumbai, Mohali, Panchkula, Bangalore)
Shift: CST / PST
Cloud: Microsoft Azure
Experience: 5+ years
Budget: 18-20 LPA
Job Description
We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.
Key Responsibilities
Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications. Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning. Deploy and manage applications using Helm and ArgoCD with GitOps best practices. Work with Podman and Docker as container runtimes for development and production environments. Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations. Optimize infrastructure for cost, performance, and reliability within Azure cloud. Troubleshoot, monitor, and maintain system health, scalability, and performance.
Required Skills & Experience
Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration. Proficiency in Terraform or OpenTofu for infrastructure as code. Experience with Helm and ArgoCD for application deployment and GitOps. Solid understanding of Docker / Podman container runtimes. Cloud expertise in Azure with experience deploying and scaling workloads. Familiarity with CI/CD pipelines, monitoring, and logging frameworks. Knowledge of best practices around cloud security, scalability, and high availability.
Preferred Qualifications
Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses. Experience working in global distributed teams across CST/PST time zones.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
Skills: ArgoCD, Podman, Apache 2.0 / MPL 2.0, Azure, Helm, DevOps, Kubernetes, Docker
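GitOps with ArgoCD, as this role describes, hinges on continuously comparing the desired state declared in Git against the live cluster state and reporting drift. A toy sketch of that comparison in Python (plain dictionaries stand in for manifests; purely illustrative, not ArgoCD's actual API):

```python
def detect_drift(desired: dict, live: dict) -> dict:
    """Compare desired (Git-declared) state with live (cluster) state.

    Returns resources missing from the cluster, unexpected resources present
    only in the cluster, and resources whose specs differ.
    """
    missing = sorted(set(desired) - set(live))
    unexpected = sorted(set(live) - set(desired))
    changed = sorted(k for k in set(desired) & set(live) if desired[k] != live[k])
    return {"missing": missing, "unexpected": unexpected, "changed": changed}

# Illustrative states: Git declares web (3 replicas) and api; the cluster is
# running web with 2 replicas and a stray worker deployment.
desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
live = {"web": {"replicas": 2}, "worker": {"replicas": 1}}
drift = detect_drift(desired, live)
```

A real GitOps controller then reconciles each category (create missing, prune unexpected, patch changed) rather than merely reporting it.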
Posted 3 days ago
30.0 years
2 - 6 Lacs
gurgaon
On-site
**About REA Group** In 1995, in a garage in Melbourne, Australia, REA Group was born from a simple question: “Can we change the way the world experiences property?” Could we? Yes. Are we done? Never. Fast forward 30 years, REA Group is a market leader in online real estate in three continents and continuing to grow rapidly across the globe. The secret to our growth is staying true to that ‘day one’ mindset; the hunger to innovate, the ambition to change the world, and the curiosity to reimagine the future. Our new Tech Center in Cyber City is dedicated to accelerating REA Group’s global technology delivery through relentless innovation. We’re looking for the best technologists, inventors and leaders in India to join us on this exciting new journey. If you’re excited by the prospect of creating something magical from scratch, then read on. **What the role is all about:** We’re seeking a Senior Engineer – Cloud (3-6 years’ experience) who will play a pivotal role in shaping the future of REA’s cutting-edge products. You’ll take a multifaceted approach to ensure technical excellence and operational efficiency within the infrastructure domain. By strategically integrating automation, monitoring and incident response, you facilitate the evolution from traditional operations to a more customer-focused and agile approach. This is your chance to work on impactful projects that drive customer satisfaction and company growth. You’ll work with cutting-edge technologies alongside talented individuals from diverse backgrounds, fostering a dynamic and collaborative environment. As a leader in the property portal space in Australia and India, REA is a large and challenging technical environment: we are multi-cloud at scale, with a best-in-class approach to managing at this scale. A place that is both supportive and exciting.
**While no two days are likely to be the same, your typical responsibilities will include:** + Design and implement K8s-native compute solutions to deploy applications and workloads. + Develop automation for Kubernetes cluster lifecycle management including zero-downtime upgrades and scaling operations. + Define and track SLIs/SLOs for critical platform components and implement strategies to meet them. + Collaborate with development teams to build platform capabilities. + Conduct capacity planning and performance testing to ensure platform scalability. + Actively participate in pairing, code reviews, unit testing, and secure deployments to deliver secure and quality code. + Stay updated on the latest Kubernetes and platform engineering trends and apply them to solve complex challenges. **Who we are looking for:** + Proficient in Go (or another programming language) with a strong track record of building scalable applications. + Proven experience in creating and deploying custom Kubernetes operators and CRDs. + Deep understanding and hands-on experience with major cloud platforms (e.g. AWS/GCP/Azure). + Good experience in managing and deploying workloads on production-grade Kubernetes clusters. + Experience with Argo CD and GitOps methodologies to automate and streamline continuous delivery of applications deployed in Kubernetes environments. + Experience in using Kubernetes ecosystem tools like Cilium, Kyverno, and Keda to build and maintain robust, scalable, and secure platforms. + Experience with monitoring and incident management tools + Effectively communicate complex solutions to audiences with varying technical backgrounds, fostering consensus and collaboration. **Bonus Points for:** + Certified Kubernetes Administrator (CKA) or Kubernetes Application Developer (CKAD) certification. **What we offer:** + A hybrid and flexible approach to working. + Transport options to help you get to and from work, including home pick-up and drop-off. 
+ Meals provided on site in our office. + Flexible leave options including parental leave, family care leave and celebration leave. + Insurances for you and your immediate family members. + Programs to support mental, emotional, financial and physical health & wellbeing. + Continuous learning and development opportunities to further your technical expertise. **The values we live by:** Our values are at the core of how we operate, treat each other, and make decisions. We believe that how we work is equally important as what we do to achieve our goals. This commitment is at the heart of everything we do, from the way we interact with colleagues to the way we serve our customers and communities. **Our commitment to Diversity, Equity, and Inclusion:** We are committed to providing a working environment that embraces and values diversity, equity and inclusion. We believe teams with diverse ideas and experiences are more creative, more effective and fuel disruptive thinking – be it cultural and ethnic backgrounds, gender identity, disability, age, sexual orientation, or any other identity or lived experience. We know diverse teams are critical to maintaining our success and driving new business opportunities. If you’ve got the skills, dedication and enthusiasm to learn but don’t necessarily meet every single point on the job description, please still get in touch.
Posted 3 days ago
30.0 years
0 Lacs
gurgaon
On-site
**About REA Group** In 1995, in a garage in Melbourne, Australia, REA Group was born from a simple question: “Can we change the way the world experiences property?” Could we? Yes. Are we done? Never. Fast forward 30 years, REA Group is a market leader in online real estate in three continents and continuing to grow rapidly across the globe. The secret to our growth is staying true to that ‘day one’ mindset; the hunger to innovate, the ambition to change the world, and the curiosity to reimagine the future. Our new Tech Center in Cyber City is dedicated to accelerating REA Group’s global technology delivery through relentless innovation. We’re looking for the best technologists, inventors and leaders in India to join us on this exciting new journey. If you’re excited by the prospect of creating something magical from scratch, then read on. **What the role is all about:** We’re seeking a Lead Engineer (6-8 years’ experience) who will play a pivotal role in shaping the future of REA’s cutting-edge products. You’ll collaborate closely with cross-functional teams across the globe, leading the design, development, and optimization of our Kubernetes-based IDP. You’ll work with cutting-edge technologies alongside talented individuals from diverse backgrounds, fostering a dynamic and collaborative environment. **While no two days are likely to be the same, your typical responsibilities will include:** + Enable teams to transition their applications to Kubernetes by architecting an automated, repeatable migration pipeline. + Develop automation for Kubernetes cluster lifecycle management including zero-downtime upgrades and scaling operations. + Define and track SLIs/SLOs for critical platform components and implement strategies to meet them. + Collaborate with development teams to build platform capabilities. + Conduct capacity planning and performance testing to ensure platform scalability.
+ Actively participate in pairing, code reviews, unit testing, and secure deployments to deliver secure and quality code. + Stay updated on the latest Kubernetes and platform engineering trends and apply them to solve complex challenges. + Take ownership and accountability of deliverables while mentoring other team members. **Who we are looking for:** + Deep understanding and hands-on experience with major cloud platforms (e.g. AWS / GCP / Azure). + Experience writing developer tooling in a general-purpose programming language such as Go or Python with a focus on Kubernetes, migration automation, and improving developer user experience. + Extensive experience in managing and deploying workloads on production-grade Kubernetes clusters. + Experience with Argo CD and GitOps methodologies to automate and streamline continuous delivery of applications deployed in Kubernetes environments. + Experience in using Kubernetes ecosystem tools like Cilium, Kyverno, and Keda to build and maintain robust, scalable, and secure platforms. + Experience with monitoring and incident management tools **Bonus Points for:** + Certified Kubernetes Administrator (CKA) or Kubernetes Application Developer (CKAD) certification. **What we offer:** + A hybrid and flexible approach to working. + Transport options to help you get to and from work, including home pick-up and drop-off. + Meals provided on site in our office. + Flexible leave options including parental leave, family care leave and celebration leave. + Insurances for you and your immediate family members. + Programs to support mental, emotional, financial and physical health & wellbeing. + Continuous learning and development opportunities to further your technical expertise. **The values we live by:** Our values are at the core of how we operate, treat each other, and make decisions. We believe that how we work is equally important as what we do to achieve our goals.
This commitment is at the heart of everything we do, from the way we interact with colleagues to the way we serve our customers and communities. **Our commitment to Diversity, Equity, and Inclusion:** We are committed to providing a working environment that embraces and values diversity, equity and inclusion. We believe teams with diverse ideas and experiences are more creative, more effective and fuel disruptive thinking – be it cultural and ethnic backgrounds, gender identity, disability, age, sexual orientation, or any other identity or lived experience. We know diverse teams are critical to maintaining our success and driving new business opportunities. If you’ve got the skills, dedication and enthusiasm to learn but don’t necessarily meet every single point on the job description, please still get in touch.
Posted 3 days ago
5.0 - 10.0 years
5 - 10 Lacs
gurgaon
Remote
Lead Assistant Manager EXL/LAM/1476962 Services, Gurgaon Posted On 11 Sep 2025 End Date 26 Oct 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band B2 Band Name Lead Assistant Manager Cost Code D011774 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1000000.0000 - 2500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Retail Media & Hi-Tech Organization Services LOB Services SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill Minimum Qualification B.TECH/B.E Certification No data available
Job Description
Job Title: Junior/Mid Cloud Solutions Architect & DevOps Engineer
Location: [Remote / On-site / Hybrid]
About the Role: We are looking for a curious and driven Junior to Mid-level Cloud Solutions Architect & DevOps Engineer to join our growing team. This role is a unique hybrid that blends cloud architecture and hands-on DevOps, with a focus on building scalable data lakehouse solutions across major cloud platforms (AWS, GCP, Azure). You will architect and implement infrastructure from the ground up, leveraging cloud storage, databases, container orchestration, serverless services, and more. You’ll work individually initially but with clear pathways to grow into a leadership role, guiding and mentoring future team members.
Responsibilities: Design, build, and maintain cloud-based data lakehouse architectures using a combination of cloud storage, databases, VMs, Kubernetes, Docker, and serverless technologies. Implement SaaS-style application hosting via web and serverless platforms for end-user accessibility. Manage infrastructure configurations including secrets, environment variables, and secure access controls. Build and maintain CI/CD pipelines and adopt GitOps practices to streamline deployments.
Optimize cloud resource usage for cost efficiency without compromising performance. Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud solutions. Continuously learn and stay updated on new cloud services, big data technologies, and best practices. Prepare to take on leadership responsibilities as the team grows. Required Skills & Experience: Practical experience with at least one major cloud platform (AWS, GCP, Azure); willingness and ability to learn others. Strong programming skills in Python and SQL. Experience with PySpark is a bonus. Familiarity with containerization (Docker) and orchestration (Kubernetes). Experience with version control systems (Git) and CI/CD tools. Understanding of cloud services pricing models to help design cost-effective solutions. Solid grasp of DevOps practices, including configuration management, secrets handling, and environment setup. Self-motivated, eager to learn, and able to work independently. Nice to Have (Bonus): Prior experience with big data platforms or streaming data solutions like Kafka. Knowledge of modern analytics and data stack tools (e.g., dbt, DuckDB, Cloudflare R2). Understanding of cloud networking, VPNs, security features, and firewall configurations. What We Offer: Opportunity to shape and lead a growing cloud architecture and DevOps team. Exposure to cutting-edge cloud technologies across multiple providers. Collaborative and supportive work environment that values curiosity and continuous learning. Competitive salary and benefits package. About EXL Sports Analytics: EXL Sports Analytics team provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach to clients in the sports industry. We work with some of the topmost sports organizations in the world. 
About EXL: EXL (NASDAQ:EXLS) is a leading data analytics and operations management company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Headquartered in New York, EXL has more than 40,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Sports Analytics team provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach to clients in the sports industry. We work with some of the topmost sports organizations in the world. Workflow Workflow Type L&S-DA-Consulting
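The responsibilities above include managing secrets, environment variables, and secure access controls. As a minimal, hedged sketch (not this team's actual setup; all variable names are hypothetical), a twelve-factor-style config loader in Python that fails fast on missing settings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AppConfig:
    """Immutable application configuration resolved from the environment."""
    db_url: str
    storage_bucket: str
    debug: bool

def load_config(env: dict) -> AppConfig:
    """Resolve required settings from an environment mapping, failing fast
    on anything missing so misconfiguration surfaces at startup, not mid-request."""
    missing = [k for k in ("DB_URL", "STORAGE_BUCKET") if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return AppConfig(
        db_url=env["DB_URL"],
        storage_bucket=env["STORAGE_BUCKET"],
        debug=env.get("DEBUG", "false").lower() == "true",
    )
```

In practice the mapping would be `os.environ`, with secret values injected by the platform (e.g. a cloud secrets manager) rather than committed to source control.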
Posted 3 days ago
6.0 - 8.0 years
0 Lacs
noida
On-site
Company Description

About Sopra Steria: Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description

Required Skills
- Strong hands-on experience with AWS core services and infrastructure:
  a. Design, build, and manage cloud-native solutions using AWS services.
  b. Develop and deploy applications leveraging services such as EC2, ECS, EKS, S3, Lambda, API Gateway, DynamoDB, RDS, CloudFront, Route 53, and ALB/NLB.
  c. Automate infrastructure provisioning with CloudFormation, CDK, Serverless Framework, or Terraform.
  d. Implement and manage IAM roles, VPCs, subnets, and security groups for secure access control.
  e. Configure CloudWatch, CloudTrail, and X-Ray for monitoring, logging, and tracing.
  f. Troubleshoot AWS infrastructure and resolve performance, availability, or networking issues.
- Proficiency in Infrastructure as Code (IaC) frameworks such as CloudFormation, CDK, Terraform, or Serverless Framework.
- Solid understanding of compute, storage, networking, databases, and serverless services on AWS.
- Familiarity with DevOps practices, including CI/CD, automation, and observability:
  a. Automate build, release, and deployment processes for faster software delivery.
  b. Manage infrastructure version control and enforce GitOps practices where applicable.
  c. Manage and optimize container orchestration platforms (ECS, EKS, or Kubernetes).
  d. Ensure backup, disaster recovery, and high availability strategies are in place.
- Strong problem-solving, debugging, and troubleshooting abilities.

Nice to Have
- High-level understanding of the Node.js ecosystem (ready to learn).
- Knowledge of Python or JavaScript for AWS CDK.
- Groovy scripting and Ansible.
- Experience with microservices architecture and containerization (Docker, Kubernetes).

Total Experience Expected: 6-8 years

Qualifications: BTech

Additional Information: At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
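The IAM work described above centres on least-privilege access control. A rough Python sketch of the idea (the bucket name is hypothetical, and a real role would be provisioned via CloudFormation, CDK, or Terraform rather than assembled by hand): build a policy document that grants read-only access to exactly one S3 bucket and nothing else.

```python
import json

def s3_read_only_policy(bucket: str) -> str:
    """Build a least-privilege IAM policy document granting read-only
    access to a single S3 bucket, returned as JSON ready to attach to a role."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # Bucket-level action: applies to the bucket ARN itself.
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Sid": "ReadObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                # Object-level action: applies to keys within the bucket.
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }
    return json.dumps(doc, indent=2)
```

Note the split between bucket-level (`s3:ListBucket`) and object-level (`s3:GetObject`) resources; conflating the two ARNs is a common source of `AccessDenied` errors.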
Posted 3 days ago
5.0 years
15 - 20 Lacs
pune, maharashtra, india
On-site
Job Title: DevOps / Kubernetes Engineer
Location: On-site (Pune, Hyderabad, Mumbai, Mohali, Panchkula, Bangalore)
Shift: CST / PST
Cloud: Microsoft Azure
Experience: 5+ years
Budget: 18-20 LPA

Job Description
We are seeking a highly skilled DevOps / Kubernetes Engineer. The ideal candidate will have strong expertise in container orchestration, infrastructure as code, and GitOps workflows, with hands-on experience in Azure cloud environments. You will be responsible for designing, deploying, and managing modern cloud-native infrastructure and applications at scale.

Key Responsibilities
- Manage and operate Kubernetes clusters (AKS / K3s) for large-scale applications.
- Implement infrastructure as code using Terraform or OpenTofu for scalable, reliable, and secure infrastructure provisioning.
- Deploy and manage applications using Helm and ArgoCD with GitOps best practices.
- Work with Podman and Docker as container runtimes for development and production environments.
- Collaborate with cross-functional teams to ensure smooth deployment pipelines and CI/CD integrations.
- Optimize infrastructure for cost, performance, and reliability within Azure cloud.
- Troubleshoot, monitor, and maintain system health, scalability, and performance.

Required Skills & Experience
- Strong hands-on experience with Kubernetes (AKS / K3s) cluster orchestration.
- Proficiency in Terraform or OpenTofu for infrastructure as code.
- Experience with Helm and ArgoCD for application deployment and GitOps.
- Solid understanding of Docker / Podman container runtimes.
- Cloud expertise in Azure with experience deploying and scaling workloads.
- Familiarity with CI/CD pipelines, monitoring, and logging frameworks.
- Knowledge of best practices around cloud security, scalability, and high availability.

Preferred Qualifications
- Contributions to open-source projects under Apache 2.0 / MPL 2.0 licenses.
- Experience working in global distributed teams across CST/PST time zones.
- Strong problem-solving skills and ability to work independently in a fast-paced environment.

Skills: ArgoCD, Podman, Apache 2.0 / MPL 2.0, Azure, Helm, DevOps, Kubernetes, Docker
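The GitOps workflow named above (Helm plus ArgoCD) rests on one core loop: compare the desired state declared in Git with the live cluster state and reconcile the difference. A toy Python sketch of that comparison step, not ArgoCD's actual implementation, with resources modelled as simple name-to-spec mappings:

```python
def diff_state(desired: dict, live: dict) -> dict:
    """Compare desired (Git) vs live (cluster) resource specs and classify
    each resource, the way a GitOps reconciler would before syncing."""
    drift = {"missing": [], "out_of_sync": [], "orphaned": []}
    for name, spec in desired.items():
        if name not in live:
            drift["missing"].append(name)        # declared in Git, not yet applied
        elif live[name] != spec:
            drift["out_of_sync"].append(name)    # applied but has diverged
    # Live resources absent from Git are candidates for pruning on sync.
    drift["orphaned"] = [n for n in live if n not in desired]
    return drift
```

A real controller diffs rendered manifests field by field and then applies or prunes; this sketch only shows why "Git as the single source of truth" makes drift mechanically detectable.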
Posted 3 days ago
5.0 years
0 Lacs
ahmedabad, gujarat, india
On-site
Experience: 5+ Years
Work Mode: Work from office

Job Description:

1. CI/CD & Release Management
- Design, implement, and maintain robust CI/CD pipelines using Harness and Jenkins.
- Optimize build and release processes to reduce deployment time and improve reliability.
- Automate rollback, blue-green, and canary deployment strategies for safe releases.

2. Infrastructure as Code (IaC)
- Define, provision, and manage cloud infrastructure using Terraform and AWS CloudFormation.
- Ensure all infrastructure changes are version-controlled, tested, and compliant with security standards.
- Implement modular and reusable Terraform configurations.

3. Cloud & Platform Engineering
- Architect and manage AWS environments (EC2, EKS, RDS, S3, VPC, IAM, Lambda, CloudWatch).
- Ensure cloud infrastructure is highly available, scalable, cost-optimized, and secure.
- Implement monitoring, alerting, and logging solutions using CloudWatch, Prometheus, Grafana, or similar.

4. Containerization & Orchestration
- Deploy, manage, and scale workloads on Kubernetes clusters (EKS or self-managed).
- Package and deploy applications using Helm charts with proper versioning and dependency management.
- Implement Kubernetes best practices including RBAC, pod security, and network policies.

5. Security & Compliance
- Integrate DevSecOps practices into CI/CD pipelines (static code analysis, vulnerability scans, secrets management).
- Apply least-privilege principles with AWS IAM roles and Kubernetes RBAC.
- Ensure compliance with organizational and industry standards.

6. Automation & Scripting
- Automate repetitive operational tasks using Python, Bash, or Go scripting.
- Build self-service automation for developers (infrastructure provisioning, environment setup).

7. DevOps Best Practices
- Promote GitOps principles with tools like ArgoCD or FluxCD (if applicable).
- Drive standardization of branching strategies, code reviews, and release processes.
- Foster a culture of observability, monitoring, and continuous improvement.

8. Collaboration & Mentorship
- Work closely with developers, QA, and SREs to accelerate delivery while maintaining stability.
- Mentor junior DevOps engineers on modern tooling and best practices.
- Participate in on-call rotations, incident response, and root cause analysis.

9. Performance & Reliability Engineering
- Conduct capacity planning, performance tuning, and cost optimization.
- Implement resilience testing, chaos engineering, and disaster recovery drills.

10. Continuous Improvement
- Evaluate emerging DevOps tools and practices (e.g., service mesh, progressive delivery).
- Contribute to documentation, runbooks, and knowledge sharing.
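The release-management duties above include automating canary deployments with rollback. A minimal sketch of the control logic behind a stepped canary (the step size and error threshold are illustrative assumptions, not values from this posting; real pipelines delegate this to tools like Harness or a progressive-delivery controller):

```python
def canary_step(current_weight: int, error_rate: float,
                threshold: float = 0.01, step: int = 20) -> int:
    """Advance a canary's traffic weight by `step` percent if its observed
    error rate stays under `threshold`; otherwise return 0, sending all
    traffic back to the stable release (automatic rollback)."""
    if error_rate > threshold:
        return 0                       # gate failed: roll back immediately
    return min(100, current_weight + step)  # gate passed: shift more traffic
```

Each pipeline stage would read the error rate from monitoring (CloudWatch, Prometheus) between steps, so a bad release is caught while it still serves only a small slice of traffic.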
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an Observability Developer at GlobalLogic, you will play a crucial role in alert configuration, workflow automation, and AI-driven solutions within the observability stack. Your responsibilities will include designing and implementing alerting rules, configuring alert routing and escalation policies, building workflow integrations, developing AI-based solutions, collaborating with cross-functional teams, and automating alert lifecycle management.

**Key Responsibilities:**
- Design and implement alerting rules for metrics, logs, and traces using tools like Grafana, Prometheus, or similar.
- Configure alert routing and escalation policies integrated with collaboration and incident management platforms (e.g., Slack, PagerDuty, ServiceNow, Opsgenie).
- Build and maintain workflow integrations between observability platforms and ticketing systems, CMDBs, and automation tools.
- Develop or integrate AI-based solutions for:
  - Mapping telemetry signals to service/application components.
  - Porting or translating existing configurations across environments/tools.
  - Reducing alert fatigue through intelligent correlation and suppression.
- Collaborate with DevOps, SRE, and development teams to ensure alerts are meaningful and well-contextualized.
- Automate alert lifecycle management via CI/CD and GitOps pipelines.
- Maintain observability integration documentation and provide support to teams using alerting and workflows.

In this role, you will be part of a culture of caring at GlobalLogic, where people come first. You will experience an inclusive environment that prioritizes learning and development, interesting and meaningful work, balance, flexibility, and a high-trust organization. Join GlobalLogic, a Hitachi Group Company, and be part of a team that is at the forefront of the digital revolution, collaborating with clients to transform businesses and redefine industries through intelligent products, platforms, and services.
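Reducing alert fatigue through correlation and suppression, as described above, can be illustrated with a simple sketch (an assumption-laden toy, not Prometheus's or PagerDuty's actual grouping logic): collapse repeat alerts that share a correlation key and fire within a suppression window.

```python
def suppress_duplicates(alerts, window_s=300):
    """Drop repeat alerts that share a correlation key (service + alert name)
    and fire within `window_s` seconds of the last kept occurrence, a basic
    form of the deduplication used to cut alert fatigue.
    Each alert is a (timestamp_s, service, name) tuple, assumed time-sorted."""
    last_fired = {}
    kept = []
    for ts, service, name in alerts:
        key = (service, name)
        if key not in last_fired or ts - last_fired[key] >= window_s:
            kept.append((ts, service, name))   # first firing, or window expired
            last_fired[key] = ts
    return kept
```

Production systems go further, correlating across keys (e.g. many pod alerts rolled up to one node alert), but window-based deduplication per key is the usual first step.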
Posted 4 days ago