
127 Kustomize Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job description

Senior DevOps Engineer
Location: [Chennai] - Work From Office
Employment Type: Full-time
Experience: 4+ years

About the Role
We are looking for a Senior DevOps Engineer who can drive our cloud infrastructure, automation, and platform engineering initiatives. The ideal candidate will have deep expertise across cloud platforms, containers, CI/CD, and modern DevSecOps practices, while also mentoring the team and influencing architecture and strategy.

Key Responsibilities
- Design, implement, and manage multi-cloud and hybrid infrastructures (AWS, Azure, GCP) with a focus on cost efficiency.
- Build, scale, and secure containerized environments using Docker, Podman, Kubernetes (EKS/AKS/GKE), Helm, Kustomize, and service meshes (Istio/Linkerd).
- Drive CI/CD pipelines with GitHub Actions, GitLab CI, ArgoCD, Tekton, and Jenkins X; implement progressive delivery (blue/green, canary, feature flags).
- Implement Infrastructure as Code (IaC) using Terraform, Pulumi, Crossplane, and Ansible with GitOps (ArgoCD/FluxCD).
- Establish strong observability and monitoring practices using Prometheus, Grafana, Loki, Tempo, ELK/EFK, OpenTelemetry, and enterprise tools (Datadog, Dynatrace, AIOps).
- Champion DevSecOps practices: IAM hardening, secrets management (Vault, SOPS), vulnerability scanning (Trivy, Checkov), runtime security (Falco, Kyverno, OPA), and compliance (CIS, SOC2, GDPR).
- Automate workflows using Python, Go, and Bash, and integrate REST/GraphQL APIs, serverless, and event-driven automation.
- Leverage AI/ML in DevOps for log analysis, anomaly detection, auto-remediation, and productivity tools (GitHub Copilot, ChatGPT).
- Contribute to architecture and strategy: scalable, resilient, and cost-efficient platforms; platform engineering (Backstage, Crossplane); GreenOps initiatives.
- Lead by example: drive blameless postmortems, mentor junior engineers, and evangelize DevOps best practices across teams.
Qualifications
- Proven experience in multi-cloud architectures (AWS/Azure/GCP).
- Strong background in the Kubernetes ecosystem and container orchestration at scale.
- Advanced knowledge of CI/CD pipelines and progressive delivery techniques.
- Hands-on expertise with Infrastructure as Code and GitOps workflows.
- Experience in observability, monitoring, and incident management.
- Solid understanding of security, compliance, and DevSecOps frameworks.
- Strong programming/scripting skills (Python, Go, Bash).
- Exposure to AI/ML applications in DevOps is a plus.
- Excellent communication, mentoring, and leadership skills.
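For candidates preparing for the Helm/Kustomize portions of roles like this one, the base-plus-overlay idea behind Kustomize can be sketched in a few lines of Python. This is a conceptual toy with made-up manifests, not Kustomize's actual implementation (which handles lists via merge keys, generators, transformers, and more):

```python
# Minimal sketch of Kustomize-style strategic merge: an overlay patch is
# merged onto a base manifest, dicts merging recursively and overlay
# scalars winning. The base is left unmodified, as with real overlays.
def strategic_merge(base, patch):
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = strategic_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {"replicas": 1, "template": {"spec": {"containers": []}}},
}
prod_overlay = {"spec": {"replicas": 5}}  # hypothetical prod overlay

prod = strategic_merge(base, prod_overlay)
print(prod["spec"]["replicas"])  # overlay value wins; base keeps 1
```

The design point worth articulating in an interview: overlays never mutate the base, so one base can serve any number of environments.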

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Kubernetes Platform Engineer/Consultant Specialist.

In this role, you will:
- Build and manage the HSBC GKE Kubernetes Platform to easily let application teams deploy to Kubernetes.
- Mentor and guide support engineers; represent the platform technically through talks, blog posts and discussions.
- Engineer solutions on the HSBC GKE Kubernetes Platform using coding, automation and Infrastructure as Code methods (e.g. Python, Tekton, Flux, Helm, Terraform).
- Manage a fleet of GKE clusters from a centrally provided solution.
- Ensure compliance with centrally defined security controls and with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency and containers).
- Ensure good change management practice is implemented as specified by central standards; provide impact assessments where requested for changes proposed on the HSBC GCP core platform.
- Build and support continuous integration (CI), continuous delivery (CD) and continuous testing activities.
- Carry out engineering activities to implement patches for VMs and containers provided centrally.
- Support non-functional testing.
- Update support and operational documentation as required.
- Fault-find and support application teams.
- On a rotational on-call basis, provide out-of-business-hours support as part of our 24x7 coverage.

Requirements
To be successful in this role, you should meet the following requirements:
- Demonstrable Kubernetes and Cloud Native experience: building, configuring and extending Kubernetes platforms.
- Automation scripting (using languages such as Terraform, Python, etc.).
- Experience of working with continuous integration (CI), continuous delivery (CD) and continuous testing tools.
- Experience of working with Kubernetes resource configuration tooling (Helm, Kustomize, kpt).
- Experience working within an Agile environment.
- Programming experience in one or more of the following languages: Python or Go.
- Ability to quickly acquire new skills and tools.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSBC Software Development India
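Roles that list "Kubernetes resource configuration tooling (Helm, Kustomize, kpt)" often probe how these tools transform manifests. A common Kustomize feature is the namePrefix transformer; this hedged Python sketch shows the idea on toy manifests (not the real transformer, which also rewrites references between resources):

```python
# Conceptual sketch of Kustomize's namePrefix transformer: apply a common
# prefix to metadata.name on every resource in a manifest list, without
# mutating the originals.
def apply_name_prefix(resources, prefix):
    out = []
    for res in resources:
        res = dict(res)
        meta = dict(res.get("metadata", {}))
        if "name" in meta:
            meta["name"] = prefix + meta["name"]
        res["metadata"] = meta
        out.append(res)
    return out

resources = [
    {"kind": "Service", "metadata": {"name": "api"}},
    {"kind": "Deployment", "metadata": {"name": "api"}},
]
for res in apply_name_prefix(resources, "dev-"):
    print(res["kind"], res["metadata"]["name"])
```

A good follow-up discussion point: the real transformer must also update cross-resource references (e.g. a Deployment's serviceAccountName), which is why it is a tool feature rather than a find-and-replace.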

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Delhi, India

On-site

Multiple roles at all levels:
- Sr. DevOps Engineer - 8+ years
- DevOps Engineer - 5+ years
- Jr. DevOps Engineer - 3+ years

Role overview
Own, operate, and continuously improve Linux-based platforms and the application runway on virtualisation and Kubernetes. You will independently execute complex work, delegate and monitor assignments across the team, and ensure time-bound completion with clear documentation and communication.

Key responsibilities

Ownership & leadership
- Perform tasks independently; delegate, track, and close team assignments in a time-bound manner with status reporting.
- Plan and define scope; activity sequencing; resource planning; schedule, time and cost estimation; budget inputs; stakeholder documentation.
- Drive Agile ceremonies and the Sprint/Release lifecycle (Scrum/Kanban).

Linux & virtualisation
- Administer Linux kernel-based OS (RHEL/CentOS); tune, harden and patch.
- Manage Red Hat and VMware virtualisation layers (provisioning, HA/DR, performance).

Storage & distribution
- Provision and manage block/object/file storage; implement storage distribution strategies and quotas.

Databases (SQL & NoSQL)
- Operate and tune databases; performance tuning, clustering, slow-query analysis, query optimisation, export/import and backup/restore.

Security & compliance
- Implement infrastructure security at the network, OS, container and application layers.
- Perform vulnerability assessment remediation, apply OS/app patches, and harden servers to baseline standards.

Web/CMS environments
- Configure and deploy Drupal, WordPress and similar CMSs across Dev/Staging/Prod, including pipelines, config management and rollbacks.

Open-source enablement
- Maintain existing open-source tools and evaluate/introduce new OSS where appropriate to improve reliability, cost and developer experience.

Containers & orchestration (mandatory)
- Administer Docker estates; design and run workloads on Kubernetes (install/upgrade, multi-node operations, ingress, storage, RBAC, network policies).
Documentation
- Produce clear documentation for architecture changes, installations, runbooks, and operational procedures.

Automation & scripting
- Write shell scripts (and related tooling) to automate routine tasks, checks and remediation.

Observability & performance
- Regularly monitor and analyse system logs and metrics (CPU, memory, disk, I/O, network).
- Identify bottlenecks and fine-tune systems for optimal performance; set alerts and SLOs.

Cloud & platforms
- Operational knowledge of OpenStack, Kubernetes and Ceph clusters (setup, scaling, upgrades, failure handling).

Quality & benchmarking
- Author and execute test cases for functional, performance and benchmark testing of existing applications and platforms.

Required skills & experience
- Deep Linux administration (RHEL/CentOS), kernel-level understanding, systemd, packaging, SELinux.
- Virtualisation on Red Hat/VMware (resource pools, templates, HA/DRS, vSwitch/vDS).
- Storage provisioning (LVM, iSCSI/NFS, object stores) and capacity planning.
- Databases: at least one SQL (MySQL/PostgreSQL/SQL Server) and one NoSQL (MongoDB/Redis/Cassandra) with hands-on tuning and clustering.
- Security: network segmentation, firewalls, TLS/PKI, CIS hardening, patching, VA/PT fix-through.
- Containers: Docker image hygiene, registries, Kubernetes operations (Helm/Kustomize, ingress, CNI and network policies, CSI storage).
- OpenStack & Ceph: day-2 operations, scaling and troubleshooting.
- CMS: deployment and operations for Drupal/WordPress across environments.
- Scripting: Bash (and optionally Python) for automation.
- Observability: metrics/logging/tracing; capacity and performance analysis.
- Ways of working: Agile/Scrum/Kanban, sprint planning, release management; strong written and verbal communication.

Qualifications & certifications
- B.Tech/B.E./MCA/M.Sc or equivalent.
- Highly preferred: RHCSA/RHCE, CKA/CKAD, VMware VCP, OpenStack, Ceph, MySQL/PostgreSQL admin, AWS/Azure admin, Security (e.g., CompTIA Security+, CISSP Associate).
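The "set alerts and SLOs" responsibility above usually reduces to error-budget arithmetic: an SLO target implies a fixed number of allowed failures, and alerting tracks how fast that budget burns. A minimal sketch with illustrative numbers only:

```python
# Error-budget arithmetic behind SLO alerting: given a target availability
# and observed request counts, compute what fraction of the allowed
# failures (the error budget) has been consumed.
def error_budget_consumed(slo_target, total_requests, failed_requests):
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return float("inf") if failed_requests else 0.0
    return failed_requests / allowed_failures

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures.
consumed = error_budget_consumed(0.999, 1_000_000, 250)
print(f"{consumed:.0%} of error budget consumed")  # 25%
```

Burn-rate alerts then fire when the consumption rate, extrapolated over the SLO window, would exhaust the budget early.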

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description
R25_0009780
At NIQ, we deliver the clearest understanding of consumer buying behavior, revealing new pathways for growth. Our Enterprise Platform Engineering team is crucial to this mission, ensuring our corporate technologies are best-in-class for over 30,000 global employees. We're seeking a skilled Platform Engineer to join our team in Madrid or Valladolid. As a Senior Engineer, you'll be a key player on a highly skilled team, designing, building, and maintaining the core frameworks and platforms that power NIQ. You'll work with a diverse and cutting-edge tech stack, including Kubernetes, GitHub, Terraform, Argo CD, Datadog, OpenTelemetry, CAST AI, and more.

Job Description
- Design and architect scalable, resilient platforms that empower other engineering teams to confidently deploy and run their services.
- Collaborate closely with Application Development and SRE teams to deliver effective solutions.
- Deepen your expertise in core platform technologies like Kubernetes, Helm, Kustomize, GitHub, Terraform, and various GitOps tools.
- Ensure seamless deployment and operation of platforms by working hand-in-hand with development teams.
- Proactively monitor, analyze, and optimize system performance and security.
- Continuously improve platform reliability, scalability, and availability.
- Create and maintain comprehensive documentation for all platforms and frameworks.

Qualifications
- 5+ years of experience in software development or DevOps, with at least 2 years specifically in platform engineering.
- Strong hands-on experience with Kubernetes, Helm, Kustomize, GitHub, Terraform, and GitOps tooling (e.g., Argo CD).
- Proven experience with Docker and Kubernetes.
- Familiarity with monitoring and observability tools like Datadog, Coralogix, or OpenTelemetry.
- Exposure to multiple cloud platforms (GCP, Azure, AWS).
- Proficiency in scripting languages like Go, Python, Bash, or JavaScript.
- Excellent communication skills, both verbal and written, capable of clearly articulating complex technical concepts.
- A team-oriented mindset and the ability to work effectively both collaboratively and independently.
- Strong attention to detail and a proven ability to prioritize tasks in a fast-paced environment.
- Familiarity with testing frameworks.
- Bachelor's degree in Computer Science, Computer Engineering, or equivalent practical work experience.

Additional Information
We offer a flexible working mode in Chennai.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
The Cloud Engineer will be part of the Engineering team and will require strong knowledge of application monitoring, infrastructure monitoring, automation, maintenance, and service reliability improvements. Specifically, we are searching for someone who brings fresh ideas, demonstrates a unique and informed viewpoint, and enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences at every interaction.

Responsibilities
- Design, automate and manage a highly available and scalable cloud deployment that allows development teams to deploy and run their services.
- Collaborate with Engineering and Architecture teams to evaluate and identify optimal cloud solutions, leveraging scalability, high performance and security.
- Modernise existing on-prem solutions and improve existing systems.
- Extensively automate deployments and manage applications in GCP.
- Develop and maintain cloud solutions in accordance with best practices.
- Ensure efficient functioning of data storage and processing functions in accordance with company security policies and best practices in cloud security.
- Collaborate with Engineering teams to identify optimization strategies and help develop self-healing capabilities.
- Design and architect middleware solutions that align with the overall system architecture and meet business requirements. This involves selecting the appropriate middleware technologies and patterns for seamless integration.
- Write code and configure middleware components to enable communication and data flow between various systems. This includes developing APIs, message queues, and other middleware services.
- Integrate different applications and services using middleware technologies, ensuring they can communicate effectively and exchange data in a standardized manner.
- Identify and resolve issues related to middleware, such as communication failures, performance bottlenecks, or data inconsistencies.
- Build strong observability capabilities.
- Identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues.
- Regularly review existing systems and make recommendations for improvements.

Qualifications
- Proven work experience in designing, deploying and operating mid- to large-scale public cloud environments.
- Proven work experience in provisioning Infrastructure as Code (IaC) using Terraform Enterprise or the community edition.
- Proven work experience in writing custom Terraform providers/plug-ins, with Sentinel policy as code.
- Proven work experience in containerisation via Docker.
- Strong working experience in orchestration via Kubernetes (image building, k8s scheduling) is good to have.
- Experience in package, config and deployment management via Helm, Kustomize, and ArgoCD.
- Strong knowledge of GitHub and DevOps (Tekton / GCP Cloud Build is an advantage).
- Proficient in scripting and coding, including traditional languages like Java, Python, GoLang, JS and Node.js.
- Proven working experience in messaging middleware: Apache Kafka, RabbitMQ, Apache ActiveMQ.
- Proven working experience with API gateways; Apigee is an advantage.
- Proven working experience in API development, REST.
- Proven working experience in security and IAM: SSL/TLS, OAuth and JWT.
- Extensive knowledge of and hands-on experience with Grafana and Prometheus client libraries.
- Experience in self-hosted private/public cloud setups.
- Exposure to cloud monitoring and logging.
- Experience with distributed storage technologies like NFS, HDFS, Ceph, and S3, as well as dynamic resource management frameworks (Mesos, Kubernetes, Yarn).
- Experience with automation tools is a priority.
- Professional certification is an advantage.
- Public cloud experience, ideally GCP, is good to have.

Preferred Qualifications
- Previous success in Cloud Engineering and DevSecOps.
- Must have 5+ years of experience in DevSecOps.
- Must have a minimum of 3 years of experience in Cloud Engineering.
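Since the qualifications mention SSL/TLS, OAuth and JWT: a JWT's payload is just base64url-encoded JSON, which a stdlib-only sketch can decode. The token here is a toy, unsigned one built inline; note the function does not verify the signature, so it is for claim inspection only, never for authentication decisions:

```python
import base64
import json

# Decode a JWT payload (the middle dot-separated segment). This inspects
# claims only; it does NOT verify the signature, so it must never be used
# on its own to authenticate a request.
def decode_jwt_payload(token):
    payload_b64 = token.split(".")[1]
    # base64url tokens omit padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token (hypothetical claims) to demonstrate.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"sub":"cloud-eng","iss":"example"}').rstrip(b"=").decode()
token = f"{header}.{payload}."
print(decode_jwt_payload(token))
```

In production, signature verification and claim validation (exp, iss, aud) belong to a vetted library, not hand-rolled code like this.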

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Mindera, we're building a world-class DevOps and Platform Engineering practice to power next-generation software delivery across GCP and Azure. We're looking for a Lead Engineer - DevOps & Platform Engineering who thrives on designing scalable, secure, and developer-friendly platforms. In this high-impact leadership role, you'll own end-to-end platform initiatives and help shape our cloud infrastructure strategy. You'll work closely with cross-functional squads to improve deployment speed, platform resilience, and engineering efficiency.

Responsibilities:
- Lead the vision and implementation of platform engineering strategy across GCP and Azure.
- Design and scale multi-tenant Kubernetes infrastructure with security and configurability at the core.
- Own and evolve CI/CD pipelines using ArgoCD, Crossplane, Terraform, Kustomize, and Helm.
- Implement GitOps workflows for consistent, repeatable deployments.
- Improve infrastructure reuse, reduce environment sprawl, and optimize cost efficiency.
- Build observability frameworks using Grafana, Prometheus, Loki, and InfluxDB.
- Champion DevSecOps by embedding compliance and security as code across all environments.
- Develop reusable templates, internal tools, and self-service capabilities to boost developer productivity.
- Collaborate with product leaders and architects to align infrastructure goals with business outcomes.
- Mentor engineers and promote a culture of automation, excellence, and continuous improvement.

Requirements
- 10+ years of experience in DevOps, SRE, or Platform Engineering roles.
- Hands-on experience with: GCP and Azure; Kubernetes; ArgoCD (must have); Crossplane (must have); Terraform, Helm, Kustomize.
- Strong understanding of: CI/CD automation; GitOps workflows; multi-tenant architecture; infrastructure standardization.
- Experience with observability tools: Grafana, Prometheus, Loki, InfluxDB.
- Deep knowledge of DevSecOps, security-by-design, and compliance automation.
- Proven leadership experience driving platform engineering initiatives.
- Excellent communication and collaboration skills across engineering and product teams.

💡 Nice to Have:
- Experience with GenAI/LLMs in engineering workflows.
- Exposure to policy-as-code and compliance frameworks.
- Experience in fast-paced, product-led engineering environments.

Benefits
- Fun, happy and politics-free work culture built on the principles of lean and self-organisation.
- Work with large-scale systems powering global businesses.
- Competitive salary and benefits.

About Mindera
At Mindera we use technology to build products we are proud of, with people we love. Software Engineering Applications, including Web and Mobile, are at the core of what we do at Mindera. We partner with our clients to understand their product and deliver high-performance, resilient and scalable software systems that create an impact for their users and businesses across the world. You get to work with a bunch of great people, where the whole team owns the project together. Our culture reflects our lean and self-organisation attitude. We encourage our colleagues to take risks, make decisions, work in a collaborative way and talk to everyone to enhance communication. We are proud of our work and we love to learn all and everything while navigating through an Agile, Lean and collaborative environment.

Check out our Blog: http://mindera.com/ and our Handbook: http://tinyurl.com/zc599tr

Our offices are located: Aveiro, Portugal | Porto, Portugal | Leicester, UK | San Diego, USA | San Francisco, USA | Chennai, India | Bengaluru, India
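"Compliance and security as code" (and the policy-as-code nice-to-have above) means encoding rules as data and evaluating them against resource specs. A toy Python evaluator conveys the shape of it; real systems such as OPA or Kyverno use dedicated policy languages, and the field names here are illustrative:

```python
# Toy policy-as-code evaluator: each policy is a (name, predicate) pair
# run against a Kubernetes-style resource dict; violations are collected
# rather than raised, mirroring an admission-report style of output.
POLICIES = [
    ("no-latest-tag", lambda r: not r.get("image", "").endswith(":latest")),
    ("run-as-non-root", lambda r: r.get("runAsNonRoot") is True),
]

def evaluate(resource):
    return [name for name, check in POLICIES if not check(resource)]

bad = {"image": "web:latest", "runAsNonRoot": False}
good = {"image": "web:1.4.2", "runAsNonRoot": True}
print(evaluate(bad))   # both policies violated
print(evaluate(good))  # []
```

The design point: because policies are data, they can live in Git, be reviewed like code, and be enforced identically in CI and at admission time.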

Posted 3 weeks ago

Apply

8.0 years

5 - 8 Lacs

Chennai

On-site

We are seeking a highly skilled and passionate GKE Platform Engineering Manager to join our growing team. This role is ideal for someone with deep experience in managing Google Kubernetes Engine (GKE) platforms at scale, particularly with enterprise-level workloads on Google Cloud Platform (GCP). As part of a dynamic team, you will design, develop, and optimize Kubernetes-based solutions, using tools like GitHub Actions, ACM, KCC, and workload identity to provide high-quality platform services to developers. You will drive CI/CD pipelines across multiple lifecycle stages, manage GKE environments at scale, and enhance the developer experience on the platform. You should have a strong mindset for developer experience, focused on creating reliable, scalable, and efficient infrastructure to support developer needs. This is a fast-paced environment where collaboration across teams is key to delivering impactful results.

Experience:
- 8+ years of overall experience in cloud platform engineering, infrastructure management, and enterprise-scale operations.
- 5+ years of hands-on experience with Google Cloud Platform (GCP), including designing, deploying, and managing cloud infrastructure and services.
- 5+ years of experience specifically with Google Kubernetes Engine (GKE), managing large-scale, production-grade clusters in enterprise environments.
- Experience with deploying, scaling, and maintaining GKE clusters in production environments.
- Hands-on experience with CI/CD practices and automation tools like GitHub Actions.
- Proven track record of building and managing GKE platforms in a fast-paced, dynamic environment.
- Experience developing custom Kubernetes operators and controllers for managing complex workloads.
- Deep Troubleshooting Knowledge: Strong ability to troubleshoot complex platform issues, with expertise in diagnosing problems across the entire GKE stack.
Technical Skills:

Must Have:
- Google Cloud Platform (GCP): Extensive hands-on experience with GCP, particularly Kubernetes Engine (GKE), Cloud Storage, Cloud Pub/Sub, Cloud Logging, and Cloud Monitoring.
- Kubernetes (GKE) at Scale: Expertise in managing large-scale GKE clusters, including security configurations, networking, and workload management.
- CI/CD Automation: Strong experience with CI/CD pipeline automation tools, particularly GitHub Actions, for building, testing, and deploying applications.
- Kubernetes Operators & Controllers: Ability to develop custom Kubernetes operators and controllers to automate and manage applications on GKE.
- Workload Identity & Security: Solid understanding of Kubernetes workload identity and access management (IAM) best practices, including integration with GCP Identity and Google Cloud IAM.
- Anthos & ACM: Hands-on experience with Anthos Config Management (ACM) and Kubernetes Config Connector (KCC) to manage and govern GKE clusters and workloads at scale.
- Infrastructure as Code (IaC): Experience with tools like Terraform to manage GKE infrastructure and cloud resources.
- Helm & Kustomize: Experience in using Helm and Kustomize for packaging, deploying, and managing Kubernetes resources efficiently. Ability to create reusable and scalable Kubernetes deployment templates.
- Observability & Logging Tools: Experience with observability tools such as Prometheus, Dynatrace, and Splunk to monitor and log GKE performance, providing developers with actionable insights for troubleshooting.

Nice to Have:
- Zero Trust Security Model: Strong understanding of implementing and maintaining security in a Zero Trust model for GKE, including workload authentication, identity management, and network security.
- Ingress Patterns: Experience with designing and managing multi-cluster and multi-regional ingress in Kubernetes to ensure fault tolerance, traffic management, and high availability.
- Familiarity with Open Policy Agent (OPA) for policy enforcement in Kubernetes environments.

Education & Certification:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Relevant GCP certifications, such as Google Cloud Certified Professional Cloud Architect or Google Cloud Certified Professional Cloud Developer.

Soft Skills:
- Collaboration: Strong ability to work with cross-functional teams to ensure platform solutions meet development and operational needs.
- Problem-Solving: Excellent problem-solving skills with a focus on troubleshooting and performance optimization.
- Communication: Strong written and verbal communication skills, able to communicate effectively with both technical and non-technical teams.
- Initiative & Ownership: Ability to take ownership of platform projects, driving them from conception to deployment with minimal supervision.
- Adaptability: Willingness to learn new technologies and adjust to evolving business needs.

Responsibilities:
- GKE Platform Management at Scale: Manage and optimize large-scale GKE environments in a multi-cloud and hybrid-cloud context, ensuring the platform is highly available, scalable, and secure.
- CI/CD Pipeline Development: Build and maintain CI/CD pipelines using tools like GitHub Actions to automate deployment workflows across the GKE platform. Ensure smooth integration and delivery of services throughout their lifecycle.
- Enterprise GKE Management: Leverage advanced features of GKE such as Anthos Config Management (ACM) and Kubernetes Config Connector (KCC) to manage GKE clusters efficiently at enterprise scale.
- Workload Identity & Security: Implement workload identity and security best practices to ensure secure access and management of GKE workloads.
- Custom Operators & Controllers: Develop custom operators and controllers for GKE, automating the deployment and management of custom services to enhance the developer experience on the platform.
- Developer Experience Focus: Maintain a developer-first mindset to create an intuitive, reliable, and easy-to-use platform for developers. Collaborate with development teams to ensure seamless integration with the GKE platform.
- GKE Deployment Pipelines: Provide guidelines and best practices for GKE deployment pipelines, leveraging tools like Kustomize and Helm to manage and deploy GKE configurations effectively. Ensure pipelines are optimized for scalability, security, and repeatability.
- Zero Trust Model: Ensure GKE clusters operate effectively within a Zero Trust security model. Maintain a strong understanding of the principles of Zero Trust security, including identity and access management, network segmentation, and workload authentication.
- Ingress Patterns: Design and manage multi-cluster and multi-regional ingress patterns to ensure seamless traffic management and high availability across geographically distributed Kubernetes clusters.
- Deep Troubleshooting & Support: Provide deep troubleshooting knowledge and support to help developers pinpoint issues across the GKE platform, focusing on debugging complex Kubernetes issues, application failures, and performance bottlenecks. Utilize diagnostic tools and debugging techniques to resolve critical platform-related issues.
- Observability & Logging Tools: Implement and maintain observability across GKE clusters, using monitoring, logging, and alerting tools like Prometheus, Dynatrace, and Splunk. Ensure proper logging and metrics are in place to enable developers to effectively monitor and diagnose issues within their applications.
- Platform Automation & Integration: Automate platform management tasks, such as scaling, upgrading, and patching, using tools like Terraform, Helm, and GKE APIs.
- Continuous Improvement & Learning: Stay up-to-date with the latest trends and advancements in Kubernetes, GKE, and Google Cloud services to continuously improve platform capabilities.
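The "custom operators and controllers" responsibility centres on the reconcile loop: compare desired state with observed state and act only on the difference. A framework-free Python sketch of one reconcile step (real operators build this on client-go, Kubebuilder, or similar, and issue API calls instead of returning tuples):

```python
# Skeleton of a controller's reconcile step: diff desired replica counts
# against observed ones and emit the actions a real operator would perform
# through the Kubernetes API (create / scale / delete).
def reconcile(desired, observed):
    actions = []
    for name, replicas in desired.items():
        have = observed.get(name)
        if have is None:
            actions.append(("create", name, replicas))
        elif have != replicas:
            actions.append(("scale", name, replicas))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, 0))
    return actions

desired = {"web": 3, "worker": 2}
observed = {"web": 1, "legacy": 4}
for action in reconcile(desired, observed):
    print(action)
```

Because the loop is level-triggered (it compares whole states rather than reacting to individual events), it converges correctly even after missed events or controller restarts.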

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

We are seeking a highly skilled and passionate GKE Platform Engineering Manager to join our growing team. This role is ideal for someone with deep experience in managing Google Kubernetes Engine (GKE) platforms at scale, particularly with enterprise-level workloads on Google Cloud Platform (GCP). As part of a dynamic team, you will design, develop, and optimize Kubernetes-based solutions, using tools like GitHub Actions, ACM, KCC, and workload identity to provide high-quality platform services to developers. You will drive CI/CD pipelines across multiple lifecycle stages, manage GKE environments at scale, and enhance the developer experience on the platform. You should have a strong mindset for developer experience, focused on creating reliable, scalable, and efficient infrastructure to support developer needs. This is a fast-paced environment where collaboration across teams is key to delivering impactful results.

Responsibilities
- GKE Platform Management at Scale: Manage and optimize large-scale GKE environments in a multi-cloud and hybrid-cloud context, ensuring the platform is highly available, scalable, and secure.
- CI/CD Pipeline Development: Build and maintain CI/CD pipelines using tools like GitHub Actions to automate deployment workflows across the GKE platform. Ensure smooth integration and delivery of services throughout their lifecycle.
- Enterprise GKE Management: Leverage advanced features of GKE such as Anthos Config Management (ACM) and Kubernetes Config Connector (KCC) to manage GKE clusters efficiently at enterprise scale.
- Workload Identity & Security: Implement workload identity and security best practices to ensure secure access and management of GKE workloads.
- Custom Operators & Controllers: Develop custom operators and controllers for GKE, automating the deployment and management of custom services to enhance the developer experience on the platform.
- Developer Experience Focus: Maintain a developer-first mindset to create an intuitive, reliable, and easy-to-use platform for developers. Collaborate with development teams to ensure seamless integration with the GKE platform.
- GKE Deployment Pipelines: Provide guidelines and best practices for GKE deployment pipelines, leveraging tools like Kustomize and Helm to manage and deploy GKE configurations effectively. Ensure pipelines are optimized for scalability, security, and repeatability.
- Zero Trust Model: Ensure GKE clusters operate effectively within a Zero Trust security model. Maintain a strong understanding of the principles of Zero Trust security, including identity and access management, network segmentation, and workload authentication.
- Ingress Patterns: Design and manage multi-cluster and multi-regional ingress patterns to ensure seamless traffic management and high availability across geographically distributed Kubernetes clusters.
- Deep Troubleshooting & Support: Provide deep troubleshooting knowledge and support to help developers pinpoint issues across the GKE platform, focusing on debugging complex Kubernetes issues, application failures, and performance bottlenecks. Utilize diagnostic tools and debugging techniques to resolve critical platform-related issues.
- Observability & Logging Tools: Implement and maintain observability across GKE clusters, using monitoring, logging, and alerting tools like Prometheus, Dynatrace, and Splunk. Ensure proper logging and metrics are in place to enable developers to effectively monitor and diagnose issues within their applications.
- Platform Automation & Integration: Automate platform management tasks, such as scaling, upgrading, and patching, using tools like Terraform, Helm, and GKE APIs.
- Continuous Improvement & Learning: Stay up-to-date with the latest trends and advancements in Kubernetes, GKE, and Google Cloud services to continuously improve platform capabilities.
Qualifications Experience: 8+ years of overall experience in cloud platform engineering, infrastructure management, and enterprise-scale operations. 5+ years of hands-on experience with Google Cloud Platform (GCP), including designing, deploying, and managing cloud infrastructure and services. 5+ years of experience specifically with Google Kubernetes Engine (GKE), managing large-scale, production-grade clusters in enterprise environments. Experience with deploying, scaling, and maintaining GKE clusters in production environments. Hands-on experience with CI/CD practices and automation tools like GitHub Actions. Proven track record of building and managing GKE platforms in a fast-paced, dynamic environment. Experience developing custom Kubernetes operators and controllers for managing complex workloads. Deep Troubleshooting Knowledge: Strong ability to troubleshoot complex platform issues, with expertise in diagnosing problems across the entire GKE stack. Technical Skills: Must Have: Google Cloud Platform (GCP): Extensive hands-on experience with GCP, particularly Kubernetes Engine (GKE), Cloud Storage, Cloud Pub/Sub, Cloud Logging, and Cloud Monitoring. Kubernetes (GKE) at Scale: Expertise in managing large-scale GKE clusters, including security configurations, networking, and workload management. CI/CD Automation: Strong experience with CI/CD pipeline automation tools, particularly GitHub Actions, for building, testing, and deploying applications. Kubernetes Operators & Controllers: Ability to develop custom Kubernetes operators and controllers to automate and manage applications on GKE. Workload Identity & Security: Solid understanding of Kubernetes workload identity and access management (IAM) best practices, including integration with GCP Identity and Google Cloud IAM. Anthos & ACM: Hands-on experience with Anthos Config Management (ACM) and Kubernetes Config Connector (KCC) to manage and govern GKE clusters and workloads at scale.
Infrastructure as Code (IaC): Experience with tools like Terraform to manage GKE infrastructure and cloud resources. Helm & Kustomize: Experience in using Helm and Kustomize for packaging, deploying, and managing Kubernetes resources efficiently. Ability to create reusable and scalable Kubernetes deployment templates. Observability & Logging Tools: Experience with observability tools such as Prometheus, Dynatrace, and Splunk to monitor and log GKE performance, providing developers with actionable insights for troubleshooting. Nice to Have: Zero Trust Security Model: Strong understanding of implementing and maintaining security in a Zero Trust model for GKE, including workload authentication, identity management, and network security. Ingress Patterns: Experience with designing and managing multi-cluster and multi-regional ingress in Kubernetes to ensure fault tolerance, traffic management, and high availability. Familiarity with Open Policy Agent (OPA) for policy enforcement in Kubernetes environments. Education & Certification: Bachelor’s degree in Computer Science, Engineering, or a related field. Relevant GCP certifications, such as Google Cloud Certified Professional Cloud Architect or Google Cloud Certified Professional Cloud Developer. Soft Skills: Collaboration: Strong ability to work with cross-functional teams to ensure platform solutions meet development and operational needs. Problem-Solving: Excellent problem-solving skills with a focus on troubleshooting and performance optimization. Communication: Strong written and verbal communication skills, able to communicate effectively with both technical and non-technical teams. Initiative & Ownership: Ability to take ownership of platform projects, driving them from conception to deployment with minimal supervision. Adaptability: Willingness to learn new technologies and adjust to evolving business needs.
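The custom operators and controllers called for above all share the same core shape: a reconcile loop that compares desired state with observed state and issues corrective actions. A minimal sketch of that logic, reduced to pure Python so it runs without a cluster (the names `DesiredState` and `reconcile` are illustrative, not any real operator SDK):

```python
# Minimal sketch of the reconcile step at the heart of a Kubernetes
# controller/operator. Real controllers watch the API server; here the
# observed state is passed in as a plain dict for illustration.

from dataclasses import dataclass

@dataclass
class DesiredState:
    replicas: int
    image: str

def reconcile(desired: DesiredState, observed: dict) -> list[str]:
    """Compare the desired spec with observed cluster state and return
    the corrective actions a controller would issue to converge them."""
    actions = []
    if observed.get("image") != desired.image:
        actions.append(f"update image to {desired.image}")
    diff = desired.replicas - observed.get("replicas", 0)
    if diff > 0:
        actions.append(f"scale up by {diff}")
    elif diff < 0:
        actions.append(f"scale down by {-diff}")
    return actions

print(reconcile(DesiredState(replicas=3, image="v2"),
                {"replicas": 1, "image": "v1"}))
# ['update image to v2', 'scale up by 2']
```

In a real operator this function would be invoked by a watch/requeue loop and the returned actions would become API calls; the level-triggered "observe, diff, act" structure is the part that carries over.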

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

pune, maharashtra, india

On-site

SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job – no more, no less. Want to be on a team full of results-driven individuals who are constantly seeking to innovate? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Data Engineer to help build robust data ingestion and processing systems to power our data platform. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base. Responsibilities Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases and data warehouses. Develop and maintain scalable data pipelines for both stream and batch processing leveraging JVM-based languages and frameworks. Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem. Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration and data streaming problems. Develop and maintain workflow orchestration using tools like Apache Airflow. Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes. Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills. Qualifications BS in computer science or a related field. 4+ years of experience in data engineering or a related field.
Demonstrated system-design experience orchestrating ELT processes targeting data endpoints. Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark. Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes. Proficiency in the AWS service stack. Familiarity with workflow orchestration tools such as Airflow. Experience with DBT, Kafka, Jenkins and Snowflake. Experience leveraging tools such as Kustomize, Helm and Terraform for implementing infrastructure as code. Strong interest in staying ahead of new technologies in the data engineering space. Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space. What success looks like in the role Within the first 30 days you will: Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment. Seek to deeply understand business problems or common engineering challenges and propose software architecture designs to solve them elegantly by abstracting useful common patterns. By 90 days: Proactively collaborate on, discuss, debate and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects and members of your team. Take a committed approach to prototyping and co-implementing systems alongside less experienced engineers on your team—there’s no room for ivory towers here. By 6 months: Collaborate with Product Management and the Engineering Lead to estimate and deliver small to medium complexity features more independently. Occasionally serve as a debugging and implementation expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner.
Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements. SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact applicationassistance@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations. NOTE: Any unsolicited resumes sent by candidates or agencies to this email will not be considered for current openings at SailPoint.
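The ELT work this role describes follows a common extract, transform, load shape. A minimal, hedged sketch of that shape in Python, with in-memory stand-ins for the source endpoint and the warehouse (all names are illustrative, not SailPoint's actual stack):

```python
# Illustrative extract -> transform -> load pipeline. Real pipelines would
# read from RDBMS/NoSQL endpoints and write to a warehouse; here both are
# stubbed with plain Python structures so the flow is easy to follow.

def extract(source: list[dict]) -> list[dict]:
    """Pull raw records from a source endpoint (stubbed as a list)."""
    return [dict(r) for r in source]

def transform(records: list[dict]) -> list[dict]:
    """Normalize field names and drop records missing a primary key."""
    out = []
    for r in records:
        if "id" not in r:
            continue  # reject records without a primary key
        out.append({"id": r["id"], "email": r.get("email", "").lower()})
    return out

def load(records: list[dict], warehouse: dict) -> int:
    """Upsert records into the warehouse keyed by id; return row count."""
    for r in records:
        warehouse[r["id"]] = r
    return len(warehouse)

warehouse: dict = {}
raw = [{"id": 1, "email": "A@X.COM"}, {"email": "orphan@x.com"}]
load(transform(extract(raw)), warehouse)
print(warehouse)  # {1: {'id': 1, 'email': 'a@x.com'}}
```

In practice each stage would be an Airflow task (or a Flink/Spark job for the streaming case); the point here is the staged, idempotent upsert structure, not the storage layer.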

Posted 3 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

karnataka, india

On-site

Who You’ll Work With You’ll be joining a dynamic, fast-paced Global FPE (Foundational Platforms Engineering) team within Nike. Our team is responsible for building innovative cloud-native platforms that scale with the growing demands of the business. Collaboration and creativity are at the core of our culture, and we’re passionate about pushing boundaries and setting new standards in platform development. Who We Are Looking For We are looking for an ambitious Software Engineer II – Platforms with a passion for cloud-native development and platform ownership. You are someone who thrives in a collaborative environment, is excited by cutting-edge technology, and excels at problem-solving. You have a strong understanding of AWS Cloud Services, Kubernetes, DevOps, Terraform, Node.js, Go and other cloud-native platforms. You should be an excellent communicator, able to explain technical details to both technical and non-technical stakeholders, and operate with urgency and integrity. Key Skills & Traits Bachelor's degree in Computer Science, Engineering, or a related field. 2-5 years of experience in designing and building production-grade platforms. Deep expertise in AWS services and full-stack development (Node.js, Golang, TypeScript); working experience designing and building production-grade microservices, preferably in Node.js, Golang or Python. Experience building end-to-end CI/CD pipelines to build, test and deploy to different AWS environments such as Lambda, EC2, ECS and EKS. Technical expertise in APIs, AWS Cloud Services and cloud-native architectures. Strong understanding of PaaS architecture and DevOps tools like Kubernetes, Jenkins, Terraform and Docker. Knowledge of software engineering best practices, including version control, code reviews, unit testing and production monitoring.
A proactive approach with the ability to work independently in a fast-paced, agile environment. Familiarity with governance, security features, and performance optimization. Keen attention to detail with a growth mindset and the desire to explore new technologies. Strong collaboration and problem-solving skills. What You’ll Work On You will play a key role in shaping and delivering Nike’s next-generation platforms. As a Software Engineer II, you’ll leverage your technical expertise to build resilient, scalable solutions, manage platform performance, and ensure high standards of code quality. You’ll also be responsible for leading the adoption of open-source and agile methodologies within the organization. You Will Have Deep working experience with AWS services, Node.js, Golang and TypeScript. Experience with IoT and AWS Greengrass. Experience building APIs and AWS Lambda functions. Working experience with infrastructure-as-code tools, such as Helm, Kustomize, or Terraform. Implementation of open-source projects with Docker and Kubernetes.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for planning and designing new software and web applications. Analyzes, tests and assists with the integration of new applications. Oversees the documentation of all development activity. Trains non-technical personnel. Assists with tracking performance metrics. Integrates knowledge of business and functional priorities. Acts as a key contributor in a complex and crucial environment. May lead teams or projects and shares expertise. Job Description Position: Cloud DevOps Engineer 4 Experience: 8 years to 12 years Skills required: Must Have: Terraform, Docker and Kubernetes, CI/CD, AWS, Python, Linux/Unix, Git, DBMS (e.g. MySQL), NoSQL (e.g. MongoDB) Good to have: Ansible, Helm, Prometheus, ELK stack, R, GCP/Azure Key Responsibilities Develop and maintain best-in-class infrastructure solutions Participate in the whole lifecycle of the product - design, documentation, coding, testing, and release Contribute to system-wide architecture discussions, and make decisions collaboratively Consistently produce clean, efficient code based on specifications Revise, update, refactor, and debug code Qualifications & Requirements Bachelor’s degree in Computer Science, Engineering, or a related field. 5+ years of experience in a scripting language (e.g.
Bash, Python) 2+ years of experience in a programming language (e.g. JavaScript, Golang, Java) 3+ years of experience with microservices 5+ years of hands-on experience with Docker and Kubernetes 5+ years of hands-on experience with CI tools (e.g. Jenkins, GitLab CI, GitHub Actions, Concourse CI, ...) 5+ years of hands-on experience with CD tools (ArgoCD, Helm, Kustomize, …) 5+ years of hands-on experience with Linux/Unix systems 5+ years of hands-on experience with cloud providers (e.g. AWS, GCP, Azure) 5+ years of hands-on experience with one IaC framework (e.g. Terraform, Pulumi, Ansible) 2+ years of hands-on experience with observability tools (e.g. Prometheus, ELK stack, OpenTelemetry, …) Good knowledge of virtualization technologies (e.g. VMware) is a plus Good knowledge of one database (MySQL, SQL Server, Couchbase, MongoDB, Redis, ...) is a plus Good knowledge of Git and one Git provider (e.g. GitLab, GitHub) Good knowledge of networking Experience writing technical documentation. Good communication and time management skills. Able to work independently and as part of a team. Analytical thinking and a problem-solving attitude. Core Responsibilities Collaborates with project stakeholders to identify product and technical requirements. Conducts analysis to determine integration needs. Designs new software and web applications, supports applications under development and customizes current applications. Develops software update process for existing applications. Assists in the roll-out of software releases. Trains junior Software Development Engineers on internally developed software applications. Oversees the researching, writing and editing of documentation and technical requirements, including evaluation plans, test results, technical manuals and formal recommendations and reports. Keeps current with technological developments within the industry. Monitors and evaluates competitive applications and products.
Reviews literature, patents and current practices relevant to the solution of assigned projects. Provides technical leadership throughout the design process and guidance with regards to practices, procedures and techniques. Serves as a guide and mentor for junior level Software Development Engineers. Assists in tracking and evaluating performance metrics. Ensures team delivers software on time, to specification and within budget. Works with Quality Assurance team to determine if applications fit specification and technical requirements. Displays expertise in knowledge of engineering methodologies, concepts and skills and their application in the area of specified engineering specialty. Displays expertise in process design and redesign skills. Presents and defends architectural, design and technical choices to internal audiences. Consistent exercise of independent judgment and discretion in matters of significance. Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) and overtime as necessary. Other duties and responsibilities as assigned. Employees At All Levels Are Expected To Understand our Operating Principles; make them the guidelines for how you do your job. Own the customer experience - think and act in ways that put our customers first, give them seamless digital options at every touchpoint, and make them promoters of our products and services. Know your stuff - be enthusiastic learners, users and advocates of our game-changing technology, products and services, especially our digital tools and experiences. Win as a team - make big things happen by working together and being open to new ideas. Be an active part of the Net Promoter System - a way of working that brings more employee and customer feedback into the company - by joining huddles, making call backs and helping us elevate opportunities to do better for our customers. Drive results and growth. Respect and promote inclusion & diversity. 
Do what's right for each other, our customers, investors and our communities. Disclaimer This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.
Relevant Work Experience 7-10 Years Job Details Role Level: Mid-Level Work Type: Full-Time Country: India City: Chennai, Tamil Nadu Company Website: https://corporate.comcast.com/ Job Function: Engineering Company Industry/Sector: IT Services and IT Consulting, Technology, Information and Internet, and Telecommunications

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

hyderābād

On-site

Do you love understanding every detail of how new technologies work? Join the team that serves as Apple’s nerve center, our Information Systems and Technology group. There are countless ways you’ll contribute here, whether you’re coordinating technology needs for product launches, designing music solutions for retail locations, or ensuring the strength of in-store Wi-Fi connections. From Apple Pay to the Apple website to our data centers around the globe, you’ll help design and manage the massive systems that countless employees and customers rely on every day. You’ll also build custom tools for employees, empowering them to solve complex problems on their own. Join our team, and together we’ll explore all the ways to improve how Apple operates, freeing our employees to do what they do best: craft magical experiences for our customers. Are you a passionate operations engineer who wants to work on solving large-scale problems? Join us in building best-in-class solutions and implementing sophisticated software applications across IS&T. At Apple, we support both open source and home-grown technologies to provide internal Apple developers with the best possible CI/CD solutions. In this role you will have the unique opportunity to own and improve tooling for best-in-class large-scale platform solutions to help build modern software systems! This role is primarily responsible for building and managing tools that enable software releases in a fast-paced enterprise environment. We operate with on-prem, private, and public cloud platforms. A DevOps Engineer would be partnering closely with global software development teams and infrastructure teams. Description As a virtue of being part of this team you would be exposed to a variety of challenges supporting and building highly available systems, working closely with U.S. and India based teams, and have the opportunity to expand the capabilities the team has to offer to the wider organization.
This may include: - Designing and implementing new solutions to streamline manual operations - Triaging security and production issues along with other operational team members. Conduct root cause analysis of critical issues - Expand the capacity and performance of current operational systems The ideal candidate will be a self-motivated, hands-on, dynamic and detail-oriented individual with a strong technical background. Minimum Qualifications 5+ years of proven experience in DevOps/SRE, systems engineering, build/release/deployment automation, etc. Good understanding of distributed systems, APIs, and cloud computing Experience with hosted services in a high-volume enterprise environment Experience implementing applications in private/public cloud infrastructure and container technologies, like Kubernetes and Docker Experience developing software tooling to deliver programmable infrastructure and environments, and building CI/CD pipelines with tools like Terraform, CloudFormation, Ansible, and the Kubernetes toolset (e.g., kubectl, kustomize) Strong foundation in computer science fundamentals: object-oriented programming, data structures, algorithms, operating systems, and distributed systems concepts Solid understanding of software design principles and patterns for building maintainable systems Able to visualise datasets and analytics using PL/SQL programming Ability to program in a high-level programming or scripting language, such as Java, Python, Shell, Golang, etc. Experience with observability tools like Grafana and Prometheus, along with logging infrastructure and tools like Splunk Operational experience with AWS or similar platforms through migrations, scaling operations, etc. Background building distributed, server-based infrastructure supporting high traffic in a critically important environment Preferred Qualifications In-depth knowledge of AWS services including VPC, IAM, EC2, EKS, CloudWatch, S3, RDS, Route53.
Experience with similar services on GCP. AWS Cloud Architect and/or Certified Kubernetes Administrator (CKA) certifications. Experience with large datasets and databases such as Cassandra, Solr and OpenSearch is an added advantage. Exposure to RDBMS and data warehouse concepts. Submit CV

Posted 4 weeks ago

Apply

3.0 years

20 - 28 Lacs

Hyderabad, Telangana, India

On-site

Role & Responsibilities Operate and improve platform reliability for cloud-native services: set SLIs/SLOs, define error budgets, and drive uptime and performance improvements. Design and maintain Infrastructure-as-Code and automated CI/CD pipelines (Terraform/CloudFormation, GitHub Actions/Jenkins) to ship safely and quickly. Build observability and alerting: instrument services with metrics, logs, and traces (Prometheus, Grafana, ELK/EFK, Jaeger) and manage alerting runbooks. Lead incident response and postmortems: triage, mitigate, automate remediation, and implement long-term fixes to reduce repeat incidents. Automate operational tasks and scaling (autoscaling policies, capacity planning, cost optimizations) to keep systems efficient and resilient. Collaborate with product and engineering teams to design reliable architectures, provide operational guidance, and embed reliability early in the delivery lifecycle. Skills & Qualifications Must-Have 3+ years of experience in an SRE/DevOps/Platform engineering or equivalent hands-on systems engineering role. Strong Linux administration skills and production troubleshooting experience. Proven experience with containerization and orchestration (Docker & Kubernetes). Hands-on with at least one major cloud provider (AWS, GCP or Azure) and IaC tools (Terraform or CloudFormation). Practical scripting or programming skills (Python, Go, or Bash) to automate operations and build reliability tooling. Experience implementing monitoring, alerting and distributed tracing (Prometheus/Grafana, ELK/EFK, Jaeger) and designing SLIs/SLOs. Preferred Experience with service meshes (Istio/Linkerd), Helm/Kustomize and chaos engineering tools. Familiarity with security hardening, cost-optimization practices and multi-cloud deployments. Knowledge of platform observability automation, canary releases, and progressive delivery patterns. Benefits & Culture Highlights Hybrid working model with flexible hours and focus on work-life balance (India).
Competitive compensation, health benefits and a learning & development allowance to upskill in cloud and SRE practices. Collaborative, blameless postmortem culture that rewards ownership, experimentation, and continuous improvement. Keywords: Site Reliability Engineer, SRE, Kubernetes, AWS, Terraform, CI/CD, Prometheus, Grafana, Observability, Incident Response, SLIs/SLOs, Linux, Cloud Infrastructure. Skills: AWS, EKS, Kubernetes, Python
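The SLI/SLO and error-budget work listed in this role reduces to a small calculation: measure the fraction of good requests, compare the failures against what the SLO allows, and track how much budget remains. A hedged sketch (the function names and the 99.9% target are illustrative, not a specific company's policy):

```python
# Availability SLI and error-budget bookkeeping over a request window.

def availability_sli(good: int, total: int) -> float:
    """Fraction of successful requests over the window."""
    return good / total if total else 1.0

def error_budget_remaining(slo: float, good: int, total: int) -> float:
    """Share of the error budget still unspent
    (1.0 = untouched, 0.0 = exhausted, negative = blown)."""
    allowed_failures = (1.0 - slo) * total
    actual_failures = total - good
    return 1.0 - (actual_failures / allowed_failures) if allowed_failures else 0.0

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 500 observed failures leaves half the budget.
sli = availability_sli(999_500, 1_000_000)
budget = error_budget_remaining(0.999, 999_500, 1_000_000)
print(f"SLI={sli:.4%}, budget remaining={budget:.0%}")  # SLI=99.9500%, budget remaining=50%
```

In practice the counts would come from Prometheus queries and the remaining budget would gate release velocity (freeze deploys when it goes negative); the arithmetic is the same.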

Posted 4 weeks ago

Apply

30.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Today’s world is crime-riddled. Criminals are everywhere, invisible, virtual and sophisticated. Traditional ways to prevent and investigate crime and terror are no longer enough… Technology is changing incredibly fast. The criminals know it, and they are taking advantage. We know it too. For nearly 30 years, the incredible minds at Cognyte around the world have worked closely together and put their expertise to work, to keep up with constantly evolving technological and criminal trends, and help make the world a safer place with leading investigative analytics software solutions. We are defined by our dedication to doing good and this translates to business success, meaningful work friendships, a can-do attitude, and deep curiosity. So, if you rock at DevSecOps and being a technical expert, and want in on the action, let’s talk! Role Overview: This role focuses on integrating security best practices into CI/CD pipelines and production system deployments, ensuring security is embedded throughout the software development lifecycle. As a DevSecOps Engineer, you will work closely with architecture, development, and operations teams to make security a shared responsibility across all stages of software development and deployment. Your primary responsibility will be implementing security best practices, testing, and automation tools into CI/CD pipelines and production environments using industry-standard tools such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and other security mechanisms. Key Responsibilities : Security Integration into DevOps: Collaborate with development and operations teams to integrate security practices into every stage of the software development lifecycle, from code creation to deployment. CI/CD Pipeline Security: Configure, implement, and manage security tools and automation in CI/CD pipelines to detect vulnerabilities early in the development process. 
Security Testing: Use SAST and DAST tools to automate security testing for code and applications. Continuously monitor security scans, report findings, and recommend remediation strategies. Automation & Process Improvement: Continuously enhance and automate security processes to deliver secure software efficiently while minimizing manual intervention. Requirements: Experience Required: 3+ years of experience in DevOps or a similar role focused on integrating security into CI/CD processes. Proven experience implementing and configuring security tools such as SAST, DAST, and other automation tools. Strong hands-on experience with CI/CD tools and languages (e.g., Jenkins, Groovy, Git, Python, Bash) for pipeline automation. Proficiency in cloud-native deployments and management (e.g., Helm, Kustomize), Kubernetes objects, and cluster debugging. Familiarity with Infrastructure as Code (IaC) tools like Terraform and Ansible. Knowledge of CIS benchmark recommendations and system hardening practices. Technical Skills : Proficiency in programming/scripting languages (e.g., Python, Bash, Groovy, Ansible, Helm) for automation. In-depth knowledge of security vulnerabilities (e.g., OWASP Top 10) and mitigation best practices. Experience with vulnerability scanning and static and dynamic application security testing tools (e.g., SonarQube, Checkmarx, OWASP ZAP, Coverity, Lint). Familiarity with on-premises cloud platforms (e.g., OpenShift, Tanzu) and public cloud platforms (AWS, Azure, GCP) and their security configurations. Soft Skills : Strong communication skills to effectively collaborate with cross-functional teams. A problem-solving mindset with the ability to quickly troubleshoot and resolve security issues. A proactive and collaborative approach to fostering a security-first mindset across the organization.
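The CI/CD security gating this role describes usually boils down to parsing scanner findings and failing the build when anything at or above a severity threshold appears. A minimal sketch of such a gate (the findings format is a made-up stand-in, not the output of SonarQube, Checkmarx, or any specific tool):

```python
# A severity-threshold gate for SAST/DAST findings in a CI pipeline.
# In a real pipeline the findings list would be parsed from a scanner
# report (e.g. JSON artifact) and a False result would fail the build.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings: list[dict], fail_at: str = "high") -> bool:
    """Return True if no finding meets or exceeds the failure threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

findings = [
    {"rule": "hardcoded-secret", "severity": "critical"},
    {"rule": "weak-hash", "severity": "medium"},
]
print(gate_passes(findings))  # False: the critical finding blocks the build
print(gate_passes([{"rule": "weak-hash", "severity": "medium"}]))  # True
```

Keeping the threshold configurable (`fail_at`) lets teams start by blocking only critical findings and ratchet the bar down as the backlog shrinks, which matches the "shift security left without stalling delivery" goal above.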

Posted 1 month ago

Apply

8.0 years

0 Lacs

India

On-site

Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement and cloud teams to gain deep insights into cost optimization, compliance and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint™ in hybrid environments—from on-premises to SaaS to containers to cloud. We’re transforming the software industry. We’re Flexera. With more than 50,000 customers across the world, we’re achieving that goal. But we know we can’t do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we’re consistently recognized by Gartner, Forrester and IDC as a category leader in the marketplace. Learn more at flexera.com Flexera is on a journey to transform its market-leading offering to cloud-native microservices delivered as Software as a service. This role will suit an experienced SRE who can support our drive to continuously improve our SaaS service. The MTS (Member of Technical Staff) SRE will work very tightly with the technology, product, and development teams to help define our path forward. It means, a lot of freedom and autonomy but also comes with a lot of responsibility. It also means you’re willing to share what you’ve learned by presenting new ideas to the team and the wider engineering organization. This is a unique opportunity to work with the leading cloud technologies and methodologies as well as being a key player in the definition and implementation of Flexera’s SaaS offering. What We Do We provide our developers with a stable and reliable platform as a product. 
Our aim is to abstract the complexities of Kubernetes away so that teams can easily create and deploy services into production by just specifying the configuration and resources that are required for the application to run. We believe that GitOps is the best way to realize this vision, using tools such as ArgoCD, Terraform, Helm, Kustomize, and Backstage. We are not afraid to evaluate new technologies if they can further improve the developer experience; current technologies we are assessing are Cue, Pulumi, and Crossplane. We also provide our development team with a monitoring stack so that they can effectively monitor metrics and logs from their applications in production. We believe in “You build it, you run it”. Our Challenge For You Support our initiatives aimed at improving the reliability of our services by providing guidance, engineering solutions and improving our processes. Drive reliability practices across our engineering organization. Provide improvements and best practices targeting observability and predictability. Experiment, learn new things and help grow those around you. Work in short iterations in a lightweight Kanban environment shaped by the team. Participation in an on-call rotation to support our 24x7 service availability. Technologies you’ll come in contact with: Microsoft Azure, Terraform, GitHub, Sumologic, Helm, Backstage, ArgoCD, Kubernetes, NATS. Your Profile & Skills 8+ years of experience managing production environments as SRE, DevOps Engineer or similar. 5+ years of hands-on Kubernetes experience with a proven track record of deploying and managing Kubernetes clusters running microservices in Azure on AKS. 5+ years of hands-on experience from previous jobs with infrastructure as code (IaC) and tools used to automate Kubernetes infrastructure in Azure. This includes experience creating Terraform modules, Helm Charts, and Kubernetes manifests from scratch.
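The “just specify the configuration” model described above is commonly realized with a Kustomize base plus per-environment overlays that a GitOps tool such as ArgoCD reconciles into the cluster. A minimal sketch, where the directory layout and the `my-service` name are purely hypothetical:

```yaml
# base/kustomization.yaml — shared manifests for the service
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml — environment-specific tuning
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
    target:
      kind: Deployment
      name: my-service   # hypothetical workload name
```

An ArgoCD Application pointed at `overlays/production` would then keep the cluster in sync with whatever is merged to Git, so teams only edit configuration, never run `kubectl apply` by hand.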
Proficient in Golang fundamentals. Experience working with SLOs, metrics, and incident management in a cloud environment. Passion for reliability engineering practices and automation. Curiosity to learn, explore and collaborate with those around you. Working hours India-based candidates would need to be available for 1.5 hours in the evening twice a week Monday – Thursday for meetings with US and/or EU based staff. March through October 8:30PM – 10:00PM IST November through March 8:00PM – 9:30PM IST Candidates can flex their hours to cover after-hours activities. Flexera is proud to be an equal opportunity employer. Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing careers@flexera.com.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Position: JRS Infrastructure Specialist – Red Hat Platform Location: Mumbai – Andheri East (Work from office) Experience Requirements Total Experience: 7+ years Relevant Experience: 7 years Mode of Interview: Face-to-Face Mandatory Skills OpenShift & Kubernetes Management Deploy, upgrade, and maintain OpenShift clusters (on-prem/cloud) Manage pods, nodes, operators, and namespaces Troubleshoot cluster issues (networking, storage, performance) Docker/Podman & Linux administration CI/CD pipelines: Jenkins, Tekton, ArgoCD Infrastructure as Code: Terraform, Ansible Cloud platforms: AWS, Azure, GCP Nice-to-Have Skills Strong understanding of containerized environments and scalability/security best practices Automation tools: Helm, Kustomize, Operators GitOps workflows (ArgoCD) Security compliance (SOC2, HIPAA) Monitoring tools: Prometheus, Grafana, ELK Key Responsibilities OpenShift & Kubernetes Management Deploy, upgrade, and maintain OpenShift clusters (on-prem/cloud) Manage pods, nodes, operators, and namespaces Troubleshoot cluster issues related to networking, storage, and performance CI/CD & Automation Integrate OpenShift with CI/CD tools (Jenkins, GitLab, ArgoCD) Automate deployments with Helm, Kustomize, or Operators Implement GitOps workflows (ArgoCD) Security & Compliance Configure RBAC, network policies, and image scanning Ensure compliance with industry security standards (SOC2, HIPAA) Monitoring & Optimization Set up logging and monitoring (Prometheus, Grafana, ELK) Optimize cluster performance and cost efficiency Collaboration & Support Assist developers with deployments and issue resolution Document processes and provide L3 support
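As a concrete illustration of the “network policies” responsibility above, a namespace-scoped Kubernetes NetworkPolicy can restrict which pods may reach a workload. This is a minimal sketch; the namespace, labels, and port are hypothetical:

```yaml
# Hypothetical policy: only pods labeled app=frontend may reach
# app=backend pods on TCP 8080; all other ingress to backend is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Pods not selected by any policy remain open by default, so a companion default-deny policy per namespace is a common hardening step in OpenShift and vanilla Kubernetes alike.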

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Maintenance Operation System-Maintenance Product Driven Organization (MOS-Maintenance PDO) has three large applications called Maximo For Maintenance (MFM), Fordland Maintenance Management Systems (FLMMS) and Bodyshop Information System (BIS). MFM and FLMMS applications are developed using a COTS Product Maximo from IBM. This Application is used by all Ford Manufacturing Plants and Fordland Sites. BIS is a COTS product which is used by Manufacturing Plants. The position's role and responsibilities are as follows. Incident, Problem Management. Enhancements Tech Refresh Projects Launch application in new sites / Plants. Application Software upgrades. Modernization New API Development and Deployment to APIGEE. Application integrations with other Corporate Applications. Requirement gathering and developing requirements to working software New Application Development. Data Analysis Develop Automation Scripts using Selenium. Develop JMeter Scripts for Load, Performance and Duration Testing. Creating CI/CD pipelines using Tekton. Application migration to Google Cloud Platform or equivalent. Application monitoring using Dynatrace Application development using Agile Principles and JIRA Incident management using ServiceNow Application Regression, Integration Testing. Application Source code management using Git Proof of concepts using Maximo Application Suite Products like Monitor and Predict New Reports development using Maximo's BIRT (Business Intelligence Reporting Tool) Develop and Enhance applications using Maximo Application Designer, Database Configuration, Maximo Integration Framework, Maximo Automation scripts using Python. Develop Applications using Full Stack (Core Java, Angular and SpringBoot) Develop UI Screens using HTML, CSS, JavaScript, FIGMA (UX Designing tool) Application development on CaaS (OpenShift and Kubernetes) Database Upgrades, Creating Complex queries, Procedures and Triggers if needed.
Responsibilities As described above. Qualifications Overall IT Experience of minimum 3 years in Software Development using Agile Methodology, Full Stack Development using Core Java, Angular, SpringBoot. Hands-on coding experience of minimum 2 years using any programming language. 2 years or more experience with OpenShift, Kubernetes, CaaS and Cloud Platforms like GCP. Experience in developing Ansible and Kustomize scripts. Experience in developing APIs and hosting APIs in APIGEE or similar gateways. Developing CI/CD pipelines using Tekton.
Knowledge of source code management using Git. Knowledge of Terraform. Good knowledge of Oracle, SQL Server and MongoDB databases. Experience developing applications using Maximo and related functions of Maximo is an added plus. Experience in developing automation scripts using Selenium, load test scripts with JMeter, and application code quality analysis using SonarQube. Experience with AI and ML tools is an added plus. Good communication (written and oral) in English is a must.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Software Architecture and Developer for Kubernetes Location: Remote Effort: 5 days per week during India day-time Duration: 4 to 6 months Role and responsibilities The external contractor takes on the following tasks within the project, which are carried out independently: Design and implement Kubernetes Operators using Go to automate complex application lifecycle management. Design, develop, and maintain APIs that are efficient, scalable, and easy to use. Build internal tools and CLIs in Python and Go to support deployment, monitoring, and debugging of Kubernetes workloads. Integrate Kubernetes with CI/CD pipelines. Collaborate with platform and application teams to define and implement best practices for Kubernetes usage. Stay up to date with the latest Kubernetes features and CNCF ecosystem tools. Participate in incident response and root cause analysis for Kubernetes-related issues. Implementation of automated quality checks. Documentation and support with knowledge transfer to technical teams. Required skills Strong proficiency in software architecture and design patterns. Proven experience in designing and developing APIs. Strong proficiency in Go and Python. Solid understanding of Kubernetes architecture, CRDs, controllers, and Operators. Experience with Kubernetes Operator SDK, Kubebuilder, or similar frameworks. Familiarity with containerization (Docker) and container orchestration. Experience with Helm, Kustomize, and GitOps workflows. Knowledge of cloud platforms (AWS, GCP, Azure). Experience with monitoring and logging tools in Kubernetes environments. Excellent communication and collaboration skills to work effectively with cross-functional teams. Qualifications and education requirements Bachelor's degree in computer science or related field. CKA: Certified Kubernetes Administrator. CKAD: Certified Kubernetes Application Developer. CKS: Certified Kubernetes Security Specialist.
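The CRD-and-controller pattern this role centers on starts from a CustomResourceDefinition that the Operator watches. A minimal sketch, where the group, kind, and fields are entirely hypothetical:

```yaml
# Hypothetical CRD an operator might own: a "CacheCluster" resource whose
# spec a Go controller reconciles into Deployments and Services.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: cacheclusters.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: cacheclusters
    singular: cachecluster
    kind: CacheCluster
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```

A controller scaffolded with Operator SDK or Kubebuilder would then watch `CacheCluster` objects and drive the cluster toward each object's declared spec in its reconcile loop.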

Posted 1 month ago

Apply

0 years

4 - 8 Lacs

Hyderābād

On-site

Job Description: Ability to write Kubernetes YAML files from scratch to manage infrastructure on EKS. Experience writing Jenkins pipelines to set up new pipelines or extend existing ones. Create Docker images for new applications (e.g., Java, NodeJS). Ability to set up backups for storage services on AWS and EKS. Set up Splunk log aggregation for all existing applications. Set up integration of our EKS, Lambda, and CloudWatch with Grafana, Splunk, etc. Manage and set up DevOps/SRE tools independently for the existing stack and review with the CORE engineering teams. Independently manage the work stream for new DevOps and SRE features with minimal day-to-day oversight of tasks and activities. Deploy and leverage existing public-domain Helm charts for repetitive work and orchestration, and Terraform/Pulumi creation. Site Reliability Engineer (SRE) Cloud Infrastructure Data: Ensure reliable, scalable, and secure cloud-based data infrastructure. Design, implement, and maintain AWS infrastructure with a focus on data products. Automate infrastructure management using Pulumi, Terraform, and policy as code. Monitor system health, optimize performance, and manage Kubernetes (EKS) clusters. Implement security measures, ensure compliance, and mitigate risks. Collaborate with development teams on deployment and operation of data applications. Optimize data pipelines for efficiency and cost effectiveness. Troubleshoot issues, participate in incident response, and drive continuous improvement. Experience with Kubernetes administration, data pipelines, and monitoring and observability tools. In-depth coding and debugging skills in Python and Unix scripting. Excellent communication and problem-solving skills. Self-driven, highly motivated, and able to work both independently and within a team. Operate optimally in a fast-paced development environment with dynamic changes, tight deadlines, and limited resources. Key Responsibilities: Set up sensible permission defaults for seamless access management for cloud resources using
services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc. Understanding of best practices for security, access management, hybrid cloud, etc. Technical Requirements: Should be able to write bash scripts for monitoring existing running infrastructure and reporting out. Should be able to extend existing IaC code in Pulumi (TypeScript). Ability to debug and fix Kubernetes deployment failures (network connectivity, ingress, volume issues, etc.) with kubectl. Good knowledge of networking basics to debug basic networking and connectivity issues with tools like dig, bash, ping, curl, ssh, etc. Knowledge of monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and of creating dashboards and alerts when and where needed. Knowledge of AWS VPC, subnetting, ALB, NLB, egress, ingress. Knowledge of performing disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager). Additional Responsibilities: Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc. Templatise infra IaC creation with Pulumi and Terraform using advanced techniques for modularisation. Extend existing Helm charts for repetitive work and orchestration and write Terraform/Pulumi creation. Use Ansible, Chef, etc. for complicated manual infrastructure setup. Certifications: AWS Certified Advanced Networking Specialty; AWS Certified DevOps Engineer Professional (DOP-C02). Preferred Skills: Technology->Cloud Platform->Amazon Webservices DevOps->AWS DevOps
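For the "bash scripts for monitoring existing running infrastructure and report out" requirement, one common shape is a small filter over kubectl output. This is a hedged sketch: the column layout assumed is that of `kubectl get pods --no-headers` (NAME READY STATUS RESTARTS AGE), and the threshold and alert format are arbitrary choices:

```shell
#!/usr/bin/env bash
# Hypothetical monitoring helper: report pods whose restart count exceeds
# a threshold. Reads kubectl-style lines on stdin so it can be piped from
# a live cluster or exercised with canned text.
report_restarts() {
  local threshold="${1:-3}"
  # $1=name $2=ready $3=status $4=restarts; force numeric comparison
  awk -v t="$threshold" '$4 ~ /^[0-9]+$/ && $4+0 > t+0 {
    printf "ALERT %s restarts=%s status=%s\n", $1, $4, $3
  }'
}

# Example live usage (requires cluster access):
#   kubectl get pods --no-headers | report_restarts 3
```

Because the script is a pure text filter, it can be tested without a cluster and later wired into a cron job or CloudWatch/Splunk alert pipeline.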

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description R25_0009780 At NIQ, we deliver the clearest understanding of consumer buying behavior, revealing new pathways for growth. Our Enterprise Platform Engineering team is crucial to this mission, ensuring our corporate technologies are best-in-class for over 30,000 global employees. We're seeking a skilled Platform Engineer to join our team in Madrid or Valladolid. As a Platform Engineer , you'll be a key player on a highly skilled team, designing, building, and maintaining the core frameworks and platforms that power NIQ. You'll work with a diverse and cutting-edge tech stack, including Kubernetes, GitHub, Terraform, Argo CD, Datadog, OpenTelemetry, CAST AI, and more. Job Description Design and architect scalable, resilient platforms that empower other engineering teams to confidently deploy and run their services. Collaborate closely with Application Development and SRE teams to deliver effective solutions. Deepen your expertise in core platform technologies like Kubernetes, Helm, Kustomize, GitHub, Terraform, and various GitOps tools. Ensure seamless deployment and operation of platforms by working hand-in-hand with development teams. Proactively monitor, analyze, and optimize system performance and security. Continuously improve platform reliability, scalability, and availability. Create and maintain comprehensive documentation for all platforms and frameworks. Qualifications 5+ years of experience in software development or DevOps, with at least 2 years specifically in platform engineering. Strong hands-on experience with Kubernetes, Helm, Kustomize, GitHub, Terraform, and GitOps tooling (e.g., Argo CD). Proven experience with Docker and Kubernetes. Familiarity with monitoring and observability tools like Datadog, Coralogix, or OpenTelemetry. Exposure to multiple cloud platforms (GCP, Azure, AWS). Proficiency in scripting languages like Go, Python, Bash, or JavaScript. 
Excellent communication skills, both verbal and written, capable of clearly articulating complex technical concepts. A team-oriented mindset and the ability to work effectively both collaboratively and independently. Strong attention to detail and a proven ability to prioritize tasks in a fast-paced environment. Familiarity with testing frameworks. Bachelor's degree in Computer Science, Computer Engineering, or equivalent practical work experience. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Application Management Services AMS’s mission is to maximize the contributions of MMC Technology as a business-driven, future-ready and competitive function by reducing the time and cost spent managing applications. AMS, a business unit of Marsh McLennan, is seeking candidates for the following position based in the Gurgaon/Noida office: Principal Engineer Kubernetes Platform Engineer Position overview: We are seeking a skilled Kubernetes Platform Engineer with a strong background in Cloud technologies (AWS, Azure) to manage, configure, and support Kubernetes infrastructure in a dynamic, high-availability environment. The Engineer collaborates with Development, DevOps and other technology teams to ensure that the Kubernetes platform ecosystem is reliable, scalable and efficient. The ideal candidate must possess hands-on experience in Kubernetes cluster operations management and container orchestration, along with strong problem-solving skills. Experience in infrastructure platform management is required. Responsibilities: Implement and maintain platform services in Kubernetes infrastructure. Upgrades and patch management for Kubernetes and its associated components (not limited to the API management system) are expected and required. Monitor and optimize Kubernetes resources, such as pods, nodes, and namespaces. Implement and enforce Kubernetes security best practices, including RBAC, network policies, and secrets management. Work with the security team to ensure container and cluster compliance with organizational policies. Troubleshoot and resolve issues related to Kubernetes infrastructure in a timely manner. Provide technical guidance and support to developers and DevOps teams. Maintain detailed documentation of Kubernetes configurations and operational processes. Maintenance and support of CI/CD pipelines are not part of the support scope of this position.
Preferred skills and experience: At least 3 years of experience in managing and supporting Kubernetes clusters at the platform operations layer, and its ecosystem. At least 2 years of infrastructure management and support, not limited to SSL certificates and virtual IPs. Proficiency in managing Kubernetes clusters using tools such as `kubectl`, Helm, or Kustomize. In-depth knowledge and experience of container technologies, including Docker. Experience with cloud platforms (AWS, GCP, Azure) and Kubernetes services (EKS, GKE, AKS). Understanding of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of centralized logging systems like Fluentd, Logstash, or Loki. Proficiency in scripting languages (e.g., Bash, Python, or Go). Experience in supporting Public Cloud or hybrid cloud environments. Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s 85,000 colleagues advise clients in 130 countries. With annual revenue of over $20 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh advises individual and commercial clients of all sizes on insurance broking and innovative risk management solutions. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and wellbeing for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients.
For more information, visit marshmclennan.com, or follow us on LinkedIn and Twitter Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person. Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. 
R_310034

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description We are hiring for Senior QA Engineer role at Noida location. Key Responsibilities Lead the design and implementation of scalable and maintainable test automation frameworks using Java, Cucumber, and Serenity. Review and optimize API test suites (functional, security, load) using REST Assured, Postman, and Gatling. Architect CI/CD-ready testing workflows within Jenkins pipelines, integrated with Docker, Kubernetes, and Cloud deployments (Azure/AWS). Define QA strategies and environment setups using Helm, Kustomize, and Kubernetes manifests. Validate digital payment journeys (tokenization, authorization, fallback) against EMV, APDU, and ISO 20022 specs. Drive technical discussions with cross-functional Dev/DevOps/R&D teams. Mentor junior QAs, conduct code/test reviews, and enforce test coverage and quality standards. IDEAL CANDIDATE PROFILE 4–8 years of hands-on experience in test automation and DevOps. Deep understanding of design patterns, OOP principles, and scalable system design. Experience working in cloud-native environments (Azure & AWS). Knowledge of APDU formats, EMV specs, ISO 20022, and tokenization flows is a strong plus. Exposure to secure payment authorization protocols and transaction validations. TECH STACK YOU’LL WORK WITH Languages & Frameworks: Java, JUnit/TestNG, Serenity, Cucumber, REST Assured Cloud Platforms: Azure (VMs, Functions, AKS), AWS (Lambda, EC2, S3, IAM) DevOps/Containerization: Jenkins, Docker, Kubernetes (AKS/EKS), Helm, Kustomize, Maven API & Performance Testing: Postman, Gatling Proficient in test environment provisioning and pipeline scripting Domain Knowledge Required Deep understanding of card tokenization, EMV standards, and APDU formats Experience with payment authorization flows across methods (credit, debit, wallets, NFC) Familiarity with ISO 20022 and other financial messaging standards

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a DevOps Engineer, you will play a crucial role in building and maintaining CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices. You will be responsible for managing Kubernetes infrastructure (AWS EKS), Helm charts, and service mesh configurations (ISTIO). You will use tools like kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting. Your main focus will include evaluating the security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software. You will support development and QA teams with code merge, build, install, and deployment environments. Additionally, you will ensure the continuous improvement of the software automation pipeline to enhance build and integration efficiency. Monitoring and maintaining the health of software repositories and build tools will also be part of your responsibilities. You will be required to verify final software release configurations, ensuring integrity against specifications, architecture, and documentation. Your role will involve performing fulfillment and release activities to ensure timely and reliable deployments. To be successful in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. You should have 8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms. Deep knowledge of AWS Cloud Services (EKS, IAM, CloudWatch, S3, Secrets Manager), including networking and security components, is essential. Strong experience with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize is required. You should have expertise in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools. Hands-on experience with infrastructure provisioning tools such as Docker and CloudFormation is preferred.
Familiarity with CI/CD pipeline tools and build systems including Jenkins and Maven is a plus. Experience administering software repositories such as Git or Bitbucket is beneficial. Proficiency in scripting/programming languages such as Ruby, Groovy, and Java is desired. You should have a proven ability to analyze and resolve issues related to performance, scalability, and reliability. A solid understanding of DNS, Load Balancing, SSL, TCP/IP, and general networking and security best practices will be advantageous in this role.

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Who We Are. Newfold Digital (with over $1b in revenue) is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes: Bluehost, Crazy Domains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs. We’re hiring for our Developer Platform team at Newfold Digital — a team focused on building the internal tools, infrastructure, and systems that improve how our engineers develop, test, and deploy software. In this role, you’ll help design and manage CI/CD pipelines, scale Kubernetes-based infrastructure, and drive adoption of modern DevOps and GitOps practices. You’ll work closely with engineering teams across the company to improve automation, deployment velocity, and overall developer experience. We’re looking for someone who can take ownership, move fast, and contribute to a platform that supports thousands of deployments across multiple environments. What You'll Do & How You'll Make Your Mark. Build and maintain scalable CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI Manage and improve Kubernetes clusters (Helm, Kustomize) used across environments Implement GitOps workflows using Argo CD or Argo Workflows Automate infrastructure provisioning and configuration with Terraform and Ansible Develop scripts and tooling in Bash, Python, or Go to reduce manual effort and improve reliability Work with engineering teams to streamline and secure the software delivery process Deploy and manage services across cloud platforms (AWS, GCP, Azure, OCI). Who You Are & What You'll Need To Succeed. 
Strong understanding of core DevOps concepts including CI/CD, GitOps, and Infrastructure as Code.
Hands-on experience with Docker, Kubernetes, and container orchestration.
Proficiency with at least one major cloud provider (AWS, Azure, GCP, or OCI).
Experience writing and managing Jenkins pipelines or similar CI/CD tools.
Comfortable working with Terraform, Ansible, or other configuration management tools.
Strong scripting skills (Bash, Python, Go) and a mindset for automation.
Familiarity with Linux-based systems and cloud-native infrastructure.
Ability to work independently and collaboratively across engineering and platform teams.

Good to Have.
Experience with build tools like Gradle or Maven.
Familiarity with Bitbucket or Git-based workflows.
Prior experience with Argo CD or other GitOps tooling.
Understanding of internal developer platforms and shared libraries.
Prior experience with agile development and project management.

Why You'll Love Us.
We've evolved: we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment.
Work-life balance. Our work is thrilling and meaningful, but we know balance is key to living well.
We celebrate one another's differences. We're proud of our culture of diversity and inclusion.
We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally.
We care about you. At Newfold, taking care of our employees is our top priority, and we make sure that cutting-edge benefits are in place for you.
Some of the benefits you will have: we have partnered with some of the best insurance providers to offer you excellent health insurance options, education/certification sponsorships to give you a chance to further your knowledge, flexi-leaves to take personal time off, and much more. Building a community one domain at a time, one employee at a time: all our employees are eligible for a free domain and WordPress blog, as we sponsor the domain registration costs.

Where can we take you? We're fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold.

This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.
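The Kustomize and GitOps skills called out in this posting typically come together in a base-plus-overlays repository layout. A minimal sketch, assuming a hypothetical `example-app` service and a `prod` overlay (file paths are shown as comments):

```yaml
# base/kustomization.yaml: manifests shared by every environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml: production tweaks layered on the base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: prod
images:
  - name: example-app      # hypothetical image name
    newTag: "1.4.2"        # pin the tag promoted to production
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: example-app
```

Running `kustomize build overlays/prod` (or `kubectl apply -k overlays/prod`) renders the merged manifests; in a GitOps setup, a tool such as Argo CD watches this directory and syncs the rendered output to the cluster.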

Posted 1 month ago


10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Lead/Architect Infrastructure (L5/L6)

Job Summary:
We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Google Cloud Platform (GCP) and/or Amazon Web Services (AWS).
Develop and maintain Infrastructure as Code (IaC) using Terraform for enterprise-scale deployments.
Utilize Kubernetes deployment tools such as Helm and Kustomize for container orchestration and management.
Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, and Harness, with a focus on rolling deployments, canaries, and blue/green deployments.
Ensure auditability and observability of pipeline states.
Implement security best practices, audit, and compliance requirements within the infrastructure.
Provide technical leadership, mentorship, and training to engineering staff.
Engage with clients to understand their technical and business requirements, and provide tailored solutions.
If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs.
Troubleshoot and resolve complex infrastructure issues.
Potentially participate in pre-sales activities and provide technical expertise to sales teams.

Qualifications:
10+ years of experience in an Infrastructure Engineer or similar role.
Extensive experience with Google Cloud Platform (GCP) and/or Amazon Web Services (AWS).
Proven ability to architect for scale, availability, and high-performance workloads.
Deep knowledge of Infrastructure as Code (IaC) with Terraform.
Strong experience with Kubernetes and related tools (Helm, Kustomize).
Solid understanding of CI/CD pipelines and deployment strategies.
Experience with security, audit, and compliance best practices.
Excellent problem-solving and analytical skills.
Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
Experience in technical leadership and mentoring.
Experience with client relationship management and project planning.

Certifications: relevant certifications preferred (e.g., Certified Kubernetes Administrator, Google Cloud Certified Professional Cloud Architect, Google Cloud networking and security certifications).
Software development experience (e.g., Terraform, Python).
Experience with machine learning infrastructure.

Education: Bachelor's degree in Computer Science, a related field, or equivalent experience.
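Of the deployment strategies named in this role, rolling deployments are the one Kubernetes supports natively in the Deployment spec itself. A minimal sketch, assuming a hypothetical `example-api` service and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api              # hypothetical service name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # at most one extra pod during rollout
      maxUnavailable: 0          # never drop below the desired replica count
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: example/api:1.4.2   # hypothetical image tag
          readinessProbe:            # gate traffic until the new pod is healthy
            httpGet:
              path: /healthz
              port: 8080
```

With `maxUnavailable: 0`, pods are replaced one at a time and only after each new pod passes its readiness probe. Canary and blue/green strategies generally require additional machinery on top of this, such as service/label switching or a progressive-delivery controller driven from the CI/CD pipeline.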

Posted 1 month ago

