Jobs
Interviews

127 Kustomize Jobs - Page 4

Set up a job alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the employer's job portal.

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: We are hiring for a Senior QA Engineer role at our Noida location.

Key Responsibilities:
- Lead the design and implementation of scalable and maintainable test automation frameworks using Java, Cucumber, and Serenity.
- Review and optimize API test suites (functional, security, load) using REST Assured, Postman, and Gatling.
- Architect CI/CD-ready testing workflows within Jenkins pipelines, integrated with Docker, Kubernetes, and cloud deployments (Azure/AWS).
- Define QA strategies and environment setups using Helm, Kustomize, and Kubernetes manifests.
- Validate digital payment journeys (tokenization, authorization, fallback) against EMV, APDU, and ISO 20022 specs.
- Drive technical discussions with cross-functional Dev/DevOps/R&D teams.
- Mentor junior QAs, conduct code/test reviews, and enforce test coverage and quality standards.

Ideal Candidate Profile:
- 4–8 years of hands-on experience in test automation and DevOps.
- Deep understanding of design patterns, OOP principles, and scalable system design.
- Experience working in cloud-native environments (Azure & AWS).
- Knowledge of APDU formats, EMV specs, ISO 20022, and tokenization flows is a strong plus.
- Exposure to secure payment authorization protocols and transaction validations.

Tech Stack You'll Work With:
- Languages & frameworks: Java, JUnit/TestNG, Serenity, Cucumber, REST Assured
- Cloud platforms: Azure (VMs, Functions, AKS), AWS (Lambda, EC2, S3, IAM)
- DevOps/containerization: Jenkins, Docker, Kubernetes (AKS/EKS), Helm, Kustomize, Maven
- API & performance testing: Postman, Gatling
- Proficiency in test environment provisioning and pipeline scripting

Domain Knowledge Required:
- Deep understanding of card tokenization, EMV standards, and APDU formats
- Experience with payment authorization flows across methods (credit, debit, wallets, NFC)
- Familiarity with ISO 20022 and other financial messaging standards

Posted 2 months ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

DevOps Engineer - Bangalore
Job Description: DevOps Engineer, Qilin Lab, Bangalore, India

Role: We are seeking an experienced DevOps Engineer to deliver insights from massive-scale data in real time. Specifically, we're searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for everyone.

Responsibilities of this role:
- Work with DevOps to run the production environment by monitoring availability and taking a holistic view of system health
- Build software and systems to manage our Data Platform infrastructure
- Improve reliability, quality, and time-to-market of our Global Data Platform
- Measure and optimize system performance and innovate for continual improvement
- Provide operational support and engineering for a distributed platform at scale
- Define, publish and defend service-level objectives (SLOs)
- Partner with data engineers to improve services through rigorous testing and release procedures
- Participate in system design, platform management and capacity planning
- Create sustainable systems and services through automation and automated run-books
- Take a proactive approach to identifying problems and seeking areas for improvement
- Mentor the team in infrastructure best practices

Qualifications: Bachelor's degree in Computer Science or an IT-related field, or equivalent practical experience with a proven track record. The following hands-on working knowledge and experience is required:
- Kubernetes, EC2, RDS, ELK Stack
- Cloud platforms (AWS, Azure, GCP), preferably AWS
- Building and operating clusters
- Related technologies such as containers, Helm, Kustomize, Argo CD
- Ability to program (structured and OOP) using at least one high-level language such as Python, Java, Go, etc.
- Agile methodologies (Scrum, TDD, BDD, etc.)
- Continuous Integration and Continuous Delivery tools (GitOps)
- Terraform, Unix/Linux environments

Experience with several of the following tools/technologies is desirable:
- Big Data platforms (e.g. Apache Hadoop and Apache Spark)
- Streaming technologies (Kafka, Kinesis, etc.)
- ElasticSearch Service
- Mesh orchestration technologies, e.g., Argo

Knowledge of the following is a plus:
- Security (OWASP, SIEM, etc.)
- Infrastructure testing (Chaos, Load, Security)
- GitHub, microservices architectures

Notice period: Immediate to 15 days
Experience: 3 to 5 years
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: On site
Job Type: Payroll

Must Have Skills:
- Python - 3 Years - Intermediate
- DevOps - 3 Years - Intermediate
- AWS - 2 Years - Intermediate
- Agile Methodology - 3 Years - Intermediate
- Kubernetes - 3 Years - Intermediate
- ElasticSearch - 3 Years - Intermediate

(ref:hirist.tech)

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Cloud Engineering Specialist at BT, you will have the opportunity to be part of a team that is shaping the future of communication services and defining how people interact with these services. Your role will involve fulfilling various requirements in Voice platforms, ensuring timely delivery and integration with other platform components. Your responsibilities will include deploying infrastructure, networking, and software packages, as well as automating deployments. You will implement up-to-date security practices and manage issue diagnosis and resolution across infrastructure, software, and networking areas. Collaboration with development, design, ops, and test teams will be essential to ensure the reliable delivery of services. To excel in this role, you should possess in-depth knowledge of Linux, server management, and issue diagnosis, along with hands-on experience. Proficiency in TCP/IP, HTTP, SIP, DNS, and Linux tooling for debugging is required. Additionally, you should be comfortable with Bash/Python scripting, have a strong understanding of Git, and experience in automation through tools like Ansible and Terraform. Your expertise should also include a solid background in cloud technologies, preferably Azure, and familiarity with container technologies such as Docker, Kubernetes, and GitOps tooling like FluxCD/ArgoCD. Exposure to CI/CD frameworks, observability tooling, RDBMS, NoSQL databases, service discovery, message queues, and Agile methodologies will be beneficial. At BT, we value inclusivity, safety, integrity, and customer-centricity. Our leadership standards emphasize building trust, owning outcomes, delivering value to customers, and demonstrating a growth mindset. We are committed to building diverse, future-ready teams where individuals can thrive and contribute positively. BT, as part of BT Group, plays a vital role in connecting people, businesses, and public services. 
We embrace diversity and inclusion in everything we do, reflecting our core values of being Personal, Simple, and Brilliant. Join us in making a difference through digital transformation, and be part of a team that empowers lives and businesses through innovative communication solutions.

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Requirements:
- Should be able to write bash scripts for monitoring existing running infrastructure and reporting out.
- Should be able to extend existing IaC code in Pulumi (TypeScript).
- Ability to debug and fix Kubernetes deployment failures, network connectivity, ingress, volume issues, etc. with kubectl.
- Good knowledge of networking basics to debug basic networking and connectivity issues with tools like dig, bash, ping, curl, ssh, etc.
- Knowledge of monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and the ability to create dashboards and alerts when and where needed.
- Knowledge of AWS VPC, subnetting, ALB/NLB, egress/ingress.
- Knowledge of disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager).
- Set up sensible permission defaults for seamless access management of cloud resources using services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
- Understanding of best practices for security, access management, hybrid cloud, etc.
- Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
- Templatise infra IaC creation with Pulumi and Terraform, using advanced techniques for modularisation.
- Extend existing Helm charts for repetitive work and orchestration, and write the corresponding Terraform/Pulumi creation.
- Automate complicated manual infrastructure setup with Ansible, Chef, etc.

Certifications:
▪ AWS Certified Advanced Networking - Specialty
▪ AWS Certified DevOps Engineer - Professional (DOP-C02)
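The first requirement above, a bash script that monitors running infrastructure and reports out, can be sketched as follows. The threshold, `WARN` output format, and function name are illustrative choices for this example, not details from the posting.

```shell
#!/usr/bin/env bash
# Illustrative monitoring sketch: report any local filesystem whose
# usage is at or above a threshold. In practice the report would be
# shipped to Splunk or CloudWatch, per the posting.
set -u

report_disk_usage() {
  local threshold="${1:-80}"   # percent; 80 is an assumed default
  # `df -P` emits stable POSIX-format output; skip the header row.
  df -P | awk -v t="$threshold" 'NR > 1 {
    pct = $5
    sub(/%/, "", pct)                       # "45%" -> "45"
    if (pct + 0 >= t) printf "WARN %s at %s%%\n", $6, pct
  }'
}

report_disk_usage "${THRESHOLD:-80}"
```

A script like this would typically run from cron or a systemd timer; the same pattern (collect, filter against a threshold, emit a line per finding) extends to memory, process, and certificate-expiry checks.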

Posted 2 months ago

Apply

0 years

3 - 8 Lacs

Hyderabad

On-site

Job Description:
- Ability to write Kubernetes YAML files from scratch to manage infrastructure on EKS.
- Experience writing Jenkins pipelines, whether setting up new pipelines or extending existing ones.
- Create Docker images for new applications (e.g. Java, NodeJS).
- Ability to set up backups for storage services on AWS and EKS.
- Set up Splunk log aggregation for all existing applications.
- Set up integration of our EKS, Lambda, and CloudWatch with Grafana, Splunk, etc.
- Manage and set up DevOps/SRE tools independently for the existing stack and review with the core engineering teams.
- Independently manage the work stream for new DevOps and SRE features with minimal day-to-day oversight of tasks and activities.
- Deploy and leverage existing public-domain Helm charts for repetitive work and orchestration, and the corresponding Terraform/Pulumi creation.

Site Reliability Engineer (SRE) - Cloud Infrastructure & Data:
- Ensure reliable, scalable, and secure cloud-based data infrastructure.
- Design, implement, and maintain AWS infrastructure with a focus on data products.
- Automate infrastructure management using Pulumi, Terraform, and policy as code.
- Monitor system health, optimize performance, and manage Kubernetes (EKS) clusters.
- Implement security measures, ensure compliance, and mitigate risks.
- Collaborate with development teams on deployment and operation of data applications.
- Optimize data pipelines for efficiency and cost effectiveness.
- Troubleshoot issues, participate in incident response, and drive continuous improvement.
- Experience with Kubernetes administration, data pipelines, and monitoring and observability tools.
- In-depth coding and debugging skills in Python and Unix scripting.
- Excellent communication and problem-solving skills.
- Self-driven, highly motivated, and able to work both independently and within a team.
- Operate optimally in a fast-paced development environment with dynamic changes, tight deadlines, and limited resources.

Key Responsibilities:
- Set up sensible permission defaults for seamless access management of cloud resources using services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
- Understanding of best practices for security, access management, hybrid cloud, etc.

Technical Requirements:
- Should be able to write bash scripts for monitoring existing running infrastructure and reporting out.
- Should be able to extend existing IaC code in Pulumi (TypeScript).
- Ability to debug and fix Kubernetes deployment failures, network connectivity, ingress, volume issues, etc. with kubectl.
- Good knowledge of networking basics to debug basic networking and connectivity issues with tools like dig, bash, ping, curl, ssh, etc.
- Knowledge of monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and the ability to create dashboards and alerts when and where needed.
- Knowledge of AWS VPC, subnetting, ALB/NLB, egress/ingress.
- Knowledge of disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager).

Additional Responsibilities:
- Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
- Templatise infra IaC creation with Pulumi and Terraform, using advanced techniques for modularisation.
- Extend existing Helm charts for repetitive work and orchestration, and write the corresponding Terraform/Pulumi creation.
- Automate complicated manual infrastructure setup with Ansible, Chef, etc.

Certifications:
- AWS Certified Advanced Networking - Specialty
- AWS Certified DevOps Engineer - Professional (DOP-C02)

Preferred Skills: Technology -> Cloud Platform -> Amazon Web Services; DevOps -> AWS DevOps
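Writing "Kubernetes YAML from scratch", as the posting asks, amounts to being able to produce a manifest like the following minimal Deployment. The names, image, port, and resource figures are placeholders for illustration, not values from the posting.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # placeholder name
  labels:
    app: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: node:20-alpine   # placeholder image
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
```

Applied with `kubectl apply -f deployment.yaml`; the same manifest works unchanged on EKS.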

Posted 2 months ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Who We Are. Newfold Digital (with over $1B in revenue) is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes Bluehost, Crazy Domains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs.

We're hiring for our Developer Platform team at Newfold Digital, a team focused on building the internal tools, infrastructure, and systems that improve how our engineers develop, test, and deploy software. In this role, you'll help design and manage CI/CD pipelines, scale Kubernetes-based infrastructure, and drive adoption of modern DevOps and GitOps practices. You'll work closely with engineering teams across the company to improve automation, deployment velocity, and overall developer experience. We're looking for someone who can take ownership, move fast, and contribute to a platform that supports thousands of deployments across multiple environments.

What You'll Do & How You'll Make Your Mark.
- Build and maintain scalable CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage and improve Kubernetes clusters (Helm, Kustomize) used across environments
- Implement GitOps workflows using Argo CD or Argo Workflows
- Automate infrastructure provisioning and configuration with Terraform and Ansible
- Develop scripts and tooling in Bash, Python, or Go to reduce manual effort and improve reliability
- Work with engineering teams to streamline and secure the software delivery process
- Deploy and manage services across cloud platforms (AWS, GCP, Azure, OCI)

Who You Are & What You'll Need To Succeed.
- Strong understanding of core DevOps concepts including CI/CD, GitOps, and Infrastructure as Code
- Hands-on experience with Docker, Kubernetes, and container orchestration
- Proficiency with at least one major cloud provider (AWS, Azure, GCP, or OCI)
- Experience writing and managing Jenkins pipelines or similar CI/CD tools
- Comfortable working with Terraform, Ansible, or other configuration management tools
- Strong scripting skills (Bash, Python, Go) and a mindset for automation
- Familiarity with Linux-based systems and cloud-native infrastructure
- Ability to work independently and collaboratively across engineering and platform teams

Good to Have:
- Experience with build tools like Gradle or Maven
- Familiarity with Bitbucket or Git-based workflows
- Prior experience with Argo CD or other GitOps tooling
- Understanding of internal developer platforms and shared libraries
- Prior experience with agile development and project management

Why you'll love us. We've evolved; we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment. Work-life balance: our work is thrilling and meaningful, but we know balance is key to living well. We celebrate one another's differences. We're proud of our culture of diversity and inclusion. We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally. We care about you. At Newfold, taking care of our employees is our top priority, and we make sure that cutting-edge benefits are in place for you.
Some of the benefits you will have: We have partnered with some of the best insurance providers to provide you excellent Health Insurance options, Education/ Certification Sponsorships to give you a chance to further your knowledge, Flexi-leaves to take personal time off and much more. Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog as we sponsor the domain registration costs. Where can we take you? We’re fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.

Posted 2 months ago

Apply

0 years

0 Lacs

India

On-site

Core Stack/Keywords:
- GitHub – source control, GitHub Actions (CI/CD)
- GKE (Google Kubernetes Engine) – Kubernetes cluster management
- Terraform – Infrastructure as Code (IaC)
- GCP – cloud platform for compute, networking, IAM, etc.
- Networking – VPC, load balancers, firewall rules, peering
- Security – IAM, secrets management, workload identity, policies
- Argo CD – GitOps-based deployment to Kubernetes

📌 Typical Responsibilities:

CI/CD Pipelines
- Create and maintain GitHub Actions workflows
- Integrate Argo CD for GitOps-style continuous delivery

Infrastructure Automation
- Write and maintain Terraform code for GCP infrastructure
- Modularize Terraform for reusable networking, compute, and GKE clusters

Kubernetes Operations (GKE)
- Deploy and manage workloads
- Helm or Kustomize usage
- Auto-scaling, pod disruption budgets, affinity rules

Cloud Networking
- Configure VPCs, subnets, Cloud NAT, private Google access
- Manage internal/external load balancers
- Hybrid connectivity (if needed): VPN, Interconnect

Cloud Security
- Enforce least privilege using IAM
- Manage secrets with Secret Manager or Vault
- Use workload identity federation for Kubernetes-to-GCP auth
- Policy-as-code with OPA or GCP Org Policies

Monitoring & Logging
- Set up an observability stack (Cloud Monitoring, Prometheus/Grafana)
- Logging with Cloud Logging or Loki
- Alerting and SLOs

Thanks & Regards,
Prashant Awasthi
Vastika Technologies PVT LTD
9711189829
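The Helm-or-Kustomize item above typically means maintaining a base/overlay layout like the following. This is a minimal sketch: the `overlays/prod` path, the Deployment name `web`, and the replica count are placeholders, and it assumes a `base/` directory that already defines that Deployment.

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # shared manifests live in base/
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5          # production-only replica count
    target:
      kind: Deployment
      name: web
```

`kubectl apply -k overlays/prod` (or `kustomize build overlays/prod`) renders the base with the production patch applied, so each environment differs only by its overlay.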

Posted 2 months ago

Apply

8.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Position: Java with Kubernetes
Experience Required: 8-10 yrs
Notice: Immediate
Work Location: Hyderabad/Pune
Mode of Work: Hybrid
Type of Hiring: Contract to hire
Primary Skills: Java Developer, Kubernetes

Responsibilities:
• Development: Develop, code, test, and debug Java applications for insurance products and services.
• Application Design: Design and implement RESTful APIs and web services for insurance applications.
• Database Management: Work with relational databases (e.g., Oracle, SQL Server, MySQL) and potentially NoSQL databases (e.g., Elasticsearch) to manage insurance data.
• Frameworks: Utilize Java frameworks like Spring, Hibernate, and Java EE in application development.
• Collaboration: Collaborate with front-end developers, business analysts, and other stakeholders to integrate insurance applications.
• Testing: Conduct unit testing and code reviews to ensure code quality and reliability.
• Problem Solving: Troubleshoot and resolve technical issues in a timely manner.
• Insurance Knowledge: Possess a basic understanding of insurance principles, processes, and products.
• Agile Development: Participate in Agile/Scrum development processes and contribute to sprint planning.

Kubernetes:
• Extensive experience implementing Kubernetes concepts such as pods, services, deployments, and stateful sets.
• Experience with container runtimes like Docker.
• Familiarity with Kubernetes networking, including CNI plugins, ingress controllers, and service meshes.
• Knowledge of infrastructure as code tools, such as Helm, Kustomize, or Terraform.
• Experience with OpenShift is a plus.
• Ability to set up monitoring, logging, and alerting for Kubernetes clusters.
• Understanding of Kubernetes security best practices, like RBAC, network policies, and pod security policies.
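The RBAC best practice mentioned in the Kubernetes requirements is usually expressed as a namespaced Role plus a RoleBinding. A minimal sketch follows; the role name, namespace, and service account are hypothetical placeholders, not from the posting.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # placeholder name
  namespace: insurance-apps  # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: insurance-apps
subjects:
  - kind: ServiceAccount
    name: app-debugger       # placeholder service account
    namespace: insurance-apps
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The design point is least privilege: the binding grants read-only pod access in one namespace, rather than cluster-wide rights via a ClusterRoleBinding.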

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: We are seeking a highly skilled and experienced Senior Cloud Infrastructure Engineer (GCP) to join our dynamic team. The ideal candidate is passionate about building and maintaining complex systems, with a holistic approach to architecting resilient infrastructure that thrives in production. You will be responsible for designing, implementing, and managing cloud infrastructure with a strong focus on scalability, availability, security, and cost optimization. Additionally, you will provide technical leadership and mentorship to engineers while engaging with clients to understand their requirements and deliver effective solutions. Responsibilities: Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Google Cloud Platform (GCP). Develop and maintain Infrastructure as Code (IaC) using Terraform for enterprise-scale maintainability and repeatability. Utilize Kubernetes deployment tools such as Helm and Kustomize, along with GitOps tools like ArgoCD, for container orchestration and management. Design and implement robust CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canary releases, and blue/green deployments. Ensure pipeline auditability and observability throughout the deployment process. Implement security best practices and ensure compliance with audit requirements across the infrastructure. Provide technical leadership, mentorship, and training to engineering staff. Collaborate with clients to understand technical and business needs and provide tailored infrastructure solutions. When required, lead Agile ceremonies and project planning efforts, including backlog creation and board management, in collaboration with Service Delivery Leads. Troubleshoot and resolve complex infrastructure issues promptly. Potentially participate in pre-sales activities, offering technical expertise to support the sales team.
Qualifications: 5+ years of experience in an Infrastructure Engineer, DevOps, or similar role. Extensive hands-on experience with Google Cloud Platform (GCP). Proven ability to architect systems for scalability, high availability, and performance. Experience executing zero-downtime migrations. Deep expertise in Terraform and Infrastructure as Code practices. Strong experience with Kubernetes and ecosystem tools (Helm, Kustomize, ArgoCD). Solid understanding of Git workflows, branching strategies, and CI/CD pipeline automation. Experience implementing security, audit, and compliance standards within infrastructure. Excellent analytical and problem-solving skills. Strong communication skills with the ability to engage both technical and non-technical stakeholders. Demonstrated leadership in mentoring teams and fostering ownership and self-organization. Experience with client engagement, project planning, and delivery management. Certifications (Preferred): Google Cloud Certified - Professional Cloud Architect; Certified Kubernetes Administrator (CKA); Google Cloud Networking or Security certifications; additional certifications in related areas are a plus. Bonus Experience: Software development experience using Terraform, Python, or similar scripting languages. Experience with Machine Learning infrastructure or MLOps tools. Education: Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
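The Terraform-on-GCP work this role describes starts from resources like the following minimal GKE cluster definition. This is a hedged sketch: the project ID, region, and cluster name are placeholders, and a production module would add node pools, VPC networking, and remote state configuration.

```hcl
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = "my-project-id"   # placeholder project
  region  = "us-central1"     # placeholder region
}

resource "google_container_cluster" "primary" {
  name     = "demo-gke-cluster"   # placeholder name
  location = "us-central1"

  # Drop the default node pool and manage pools separately,
  # a common practice for enterprise-scale maintainability.
  remove_default_node_pool = true
  initial_node_count       = 1
}
```

Wrapping this in a reusable module with input variables for project, region, and node-pool shape is how the "enterprise-scale maintainability and repeatability" requirement is typically met.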

Posted 2 months ago

Apply

6.0 - 11.0 years

40 - 45 Lacs

Mohali, Chandigarh

Work from Office

We are looking for a Senior DevOps Engineer with strong Site Reliability Engineering (SRE) expertise to join our Cloud Engineering/DevOps team. This role is critical to ensuring the security, scalability, and reliability of our cloud infrastructure, software delivery pipelines, and business-critical systems, especially in a high-compliance environment like financial services. You'll lead the design and implementation of scalable, automated solutions across AWS-based environments, helping teams ship software faster and more reliably, while championing infrastructure as code, observability, and incident response.

Key Responsibilities:
- Serve as a primary owner for system health, security, performance, and capacity of business systems.
- Develop automation tools and monitoring solutions to support scalable and observable systems.
- Define and update SLAs, SLOs, and error budgets aligned with service goals.
- Lead incident response and root cause analysis for production issues, driving continuous improvement.
- Collaborate with software engineers to embed operability, resilience, and scalability into app architecture.
- Build, manage, and optimize Kubernetes-based infrastructure using tools like Terraform, Helm, Kustomize, and AWS CDK with TypeScript for SaaS infrastructure.
- Implement and manage deployment pipelines and CI/CD best practices.
- Support cloud security initiatives using tools such as AWS Inspector, Detective, or Lacework.

Required Skills & Experience:
- 6-8+ years in DevOps, SRE, or Cloud Infrastructure roles.
- Extensive hands-on experience with AWS and cloud-native infrastructure design.
- Deep experience with UNIX/Linux system administration in enterprise environments.
- Strong troubleshooting skills across infrastructure, networking, and application layers.
- Proven track record in monitoring & alerting systems (e.g., Prometheus, Grafana, CloudWatch).
- Proficient in at least one programming or scripting language: Python, Node.js, Java, Bash, or Shell.
- Solid experience with Kubernetes orchestration, Helm chart development, and GitOps.
- Familiarity with Terraform, AWS CDK, and Infrastructure as Code (IaC) principles.
- Knowledge of security best practices in cloud environments is a plus.

Nice to Have:
- Experience in high-compliance industries like FinTech or Banking.
- Familiarity with AWS security tools (Inspector, Detective, GuardDuty, etc.).
- Exposure to observability stacks (ELK, Loki, Datadog, or similar).

Why Join Us?
- Be part of a mission-critical team driving cloud transformation in the financial sector
- Work with cutting-edge DevOps and SRE tools
- Competitive compensation and benefits
- A collaborative, learning-driven engineering culture

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About SpotDraft: SpotDraft is an end-to-end CLM for high-growth companies. We are building a product to ensure convenient, fast and easy contracting for businesses. We know the potential to be unlocked if legal teams are equipped with the right kind of tools and systems. So here we are, building them. Currently, customers like PhonePe, Chargebee, Unacademy, Meesho and Cred use SpotDraft to streamline contracting within their organisations. On average, SpotDraft saves legal counsels within the company 10 hours per week and helps close deals 25% faster.

Job Summary: As a Jr. DevOps Engineer, you will be responsible for planning, building and optimizing the cloud infrastructure and CI/CD pipelines for the applications which power SpotDraft. You will work closely with product teams across the organization and help them ship code and reduce manual processes. You will work directly with engineering leaders, including the CTO, to deliver the best experience for users by ensuring high availability of all systems. We follow the GitOps pattern to deploy infrastructure using Terraform and ArgoCD. We leverage tools like Sentry, DataDog and Prometheus to efficiently monitor our Kubernetes cluster and workloads.

Key Responsibilities:
- Developing and maintaining CI/CD workflows on GitHub
- Provisioning and maintaining cloud infrastructure on GCP and AWS using Terraform
- Setting up logging, monitoring and alerting of applications and infrastructure using DataDog and GCP
- Automating deployment of applications to Kubernetes using ArgoCD, Helm, Kustomize and Terraform
- Designing and promoting efficient DevOps processes and practices
- Continuously optimizing infrastructure to reduce cloud costs

Requirements:
- Proficiency with Docker and Kubernetes
- Proficiency in git
- Proficiency in any scripting language (bash, python, etc.)
- Experience with any of the major clouds
- Experience working on Linux-based infrastructure
- Experience with open-source monitoring tools like Prometheus
- Experience with any ingress controller (nginx, traefik, etc.)

Working at SpotDraft: When you join SpotDraft, you will be joining an ambitious team that is passionate about creating a globally recognized legal tech company. We set each other up for success and encourage everyone in the team to play an active role in building the company.
- An opportunity to work alongside one of the most talent-dense teams.
- An opportunity to build your professional network through interacting with influential and highly sought-after founders, investors, venture capitalists and market leaders.
- Hands-on impact and space for complete ownership of end-to-end processes. We are an outcome-driven organisation and trust each other to drive outcomes whilst being audacious with our goals.

Our Core Values:
- Our business is to delight Customers
- Be Transparent. Be Direct
- Be Audacious
- Outcomes over everything else
- Be 1% better every day
- Elevate each other
- Be passionate. Take Ownership
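The GitOps pattern this posting describes, deploying to Kubernetes with ArgoCD, is driven by an Application manifest along these lines. The repo URL, path, and names below are hypothetical placeholders for illustration, not SpotDraft's actual configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app             # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/deploy-configs.git  # placeholder repo
    targetRevision: main
    path: apps/demo-app      # placeholder path; a Helm chart or Kustomize dir
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-app
  syncPolicy:
    automated:
      prune: true            # remove resources deleted from git
      selfHeal: true         # revert manual drift back to the git state
```

With automated sync enabled, merging a change to the git repo is what deploys it; the cluster state continuously converges to whatever the repo declares.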

Posted 2 months ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

In the minute it takes you to read this job description, Bluecore has launched over 100,000 individually personalized marketing campaigns for our retail ecommerce customers! Senior Software Engineer - Platform Engineering We are looking for a Senior Software Engineer - Platform Engineering to help our engineering teams build scalable, extensible, reliable, and performant systems. The role will be hands-on: optimizing our Kubernetes clusters, managing our GCP infrastructure, and improving our DevOps and SRE practices. Learn more about our automation with this blog post on Argo, Kustomize and Config Connector. Bluecore ingests 100's of millions of events per day, sends millions of personalized emails, and manages hundreds of terabytes of data. We use Google Cloud hosted infrastructure services including Google App Engine, Kubernetes/GKE, BigQuery, PubSub and Cloud SQL. Our stack consists primarily of Python and Golang on the backend with gRPC services, and JavaScript (React) on the frontend. We emphasize a culture of making good tradeoffs, working as a team, and leaving your ego at the door. Bluecore's Engineering team is made up of exemplary engineers who believe in working collaboratively to solve complex technical problems and build creative solutions that are as simple as possible, but as powerful as necessary. You will be hands-on managing hundreds of servers with Infrastructure-as-Code tools (Terraform, Config Connector) and optimizing our multi-zone and multi-region network topology. You will be responsible for designing, building and supporting automation tools to help developers safely manage release operations and highly-available systems. You will be maintaining Bluecore's shared libraries providing best practices for service development. You will be responsible for our system and application level security practices, from container scanning, to RBAC, to service and user level AuthN and AuthZ systems. 
Responsibilities Manage hundreds of servers and thousands of containers on our Google Kubernetes Engine clusters using various automation tools. Manage zero-trust networks using Kyverno, Cilium, and Istio. Develop and scale our Observability strategy with OpenTelemetry, Chronosphere, and other SaaS products. Build performant, reliable, high-quality systems at scale within one or more domains (Observability, Networking, Kubernetes, etc.). Comfort in providing significant contributions, even within domains with less expertise. Collaborate and build large and complex projects with difficult and intertwined dependencies that push for smart automation and innovation. Promote coding standards, styles, and design principles across the organization. Ad hoc and incident availability as part of a greater team rotation. Ad hoc requests are triaged to support the greater Engineering team's needs. Incident coverage involves troubleshooting and assisting during an incident, bringing it to resolution. Ad hoc/incident rotations are one week long on dayside hours. Proactively identify technology opportunities for the company, and push technical ideas, proposals, and plans to the entire organization and beyond. Provide perspective and advocate toward Platform Engineering's technical strategy and decisions. Advise and advocate for best tools, methods, and approaches for the entire engineering organization. Evangelize Bluecore Engineering internally and externally, including leading external initiatives to promote Bluecore Engineering in the wider community. Requirements 4-7 years of software engineering experience, primarily in systems and infrastructure management. Hands-on experience maintaining Kubernetes at scale and running various workloads on Kubernetes. Preferred - experience in designing and implementing progressive improvements within API Gateways (Gloo), Istio, and Continuity scopes. 
Designing and maintaining network infrastructure, including cloud-based load balancing and ingress/egress design. Experience with metric and alerting tools, and experience with distributed tracing systems. Capable of owning engineering and company level issues or gaps, and successfully planning and executing towards resolution, all while continuously identifying and exploring additional opportunities and proactively preventing upcoming risks. Experience developing technical roadmaps and estimates that have pushed product and business growth over several quarters. Has a proven track record of successfully completing work that spans multiple teams and quarters, creating large impacts to business success. Experience excelling within a high-growth, startup environment or building out a new team/function within a larger company preferred. Experience with technical team mentorship, including guiding other engineers to become more effective technical leaders and providing feedback on best practices in code and design. Able to identify your team's dependencies on other areas and across functions, communicate effectively to remove any immediate blockers, and propose and implement process changes that strengthen and maintain efficiency. Adept at effectively communicating with external customers about critical production issues, as well as working with internal stakeholders from Bluecore's customer success and support teams to propose and implement immediate fixes and resolution plans. More About Us: Bluecore is a multi-channel personalization platform that gives retailers a competitive advantage in a digital-first world. Unlike systems built for mass marketing and a physical-first world, Bluecore unifies shopper and product data in a single platform, and using easy-to-deploy predictive models, activates welcomed one-to-one experiences at the speed and scale of digital. 
Through Bluecore's dynamic shopper and product matching, brands can personalize 100% of communications delivered to consumers through their shopping experiences, anywhere. This comes to life in three core product lines: Bluecore Communicate™, a modern email service provider (ESP) + SMS; Bluecore Site™, an onsite capture and personalization product; and Bluecore Advertise™, a paid media product. At Bluecore we believe in fostering an inclusive environment in which employees feel encouraged to share their unique perspectives, demonstrate their strengths, and act authentically. We know that diverse teams are strong teams, and welcome those from all backgrounds and varying experiences. Bluecore is a proud equal opportunity employer. We are committed to fair hiring practices and to building a welcoming environment for all team members. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, familial status or veteran status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
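The RBAC and AuthN/AuthZ practices this role mentions can be illustrated with a minimal Kubernetes manifest. This is a generic sketch, not Bluecore's actual configuration; the namespace, role, and service-account names are invented for illustration:

```yaml
# Hypothetical read-only access to Deployments for a CI service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging          # placeholder namespace
  name: deploy-reader
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: deploy-reader-binding
subjects:
  - kind: ServiceAccount
    name: ci-bot              # placeholder service account
    namespace: staging
roleRef:
  kind: Role
  name: deploy-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping a Role to a namespace (rather than a ClusterRole) keeps the blast radius small, which is the usual starting point for least-privilege access.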

Posted 2 months ago

Apply

50.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client :- Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services. Job Details:- Location: Hyderabad Mode of Work: Hybrid Notice Period: Immediate Joiners Experience: 8-10 years Type of Hire: Contract to Hire Job Description:- Java + Kubernetes; the role works within the Claims domain with the ‘London Calling’ team. Java: Responsibilities: • Development: Develop, code, test, and debug Java applications for insurance products and services. • Application Design: Design and implement RESTful APIs and web services for insurance applications. • Database Management: Work with relational databases (e.g., Oracle, SQL Server, MySQL) and potentially NoSQL databases (e.g., Elasticsearch) to manage insurance data. • Frameworks: Utilize Java frameworks like Spring, Hibernate, and Java EE in application development. • Collaboration: Collaborate with front-end developers, business analysts, and other stakeholders to integrate insurance applications. • Testing: Conduct unit testing and code reviews to ensure code quality and reliability. • Problem Solving: Troubleshoot and resolve technical issues in a timely manner. • Insurance Knowledge: Possess a basic understanding of insurance principles, processes, and products. • Agile Development: Participate in Agile/Scrum development processes and contribute to sprint planning. 
Kubernetes: • Extensive experience implementing Kubernetes concepts, like pods, services, deployments, and stateful sets. • Experience with container runtimes like Docker. • Familiarity with Kubernetes networking, including CNI plugins, ingress controllers, and service meshes. • Knowledge of infrastructure as code tools, such as Helm, Kustomize, or Terraform. • Experience with OpenShift is a plus. • Ability to set up monitoring, logging, and alerting for Kubernetes clusters. • Understanding of Kubernetes security best practices, like RBAC, network policies, and pod security policies.
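The Kubernetes concepts the listing calls out (pods, deployments, services) fit together in a manifest like the following minimal sketch; the application name, image, and ports are placeholders, not details from the client's environment:

```yaml
# Hypothetical Deployment plus Service for a claims-domain API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claims-api            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: claims-api
  template:
    metadata:
      labels:
        app: claims-api
    spec:
      containers:
        - name: app
          image: registry.example.com/claims-api:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: claims-api
spec:
  selector:
    app: claims-api
  ports:
    - port: 80
      targetPort: 8080
```

The Deployment manages the pods; the Service gives them a stable virtual IP and load-balances across replicas via the label selector.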

Posted 2 months ago

Apply

3.0 years

6 - 8 Lacs

Noida

Remote

JOB DESCRIPTION Application Management Services (AMS)'s mission is to maximize the contributions of MMC Technology as a business-driven, future-ready and competitive function by reducing the time and cost spent managing applications. AMS, a business unit of Marsh McLennan, is seeking candidates for the following position based in the Gurgaon/Noida office: Principal Engineer Kubernetes Platform Engineer Position overview: We are seeking a skilled Kubernetes Platform Engineer with a strong background in Cloud technologies (AWS, Azure) to manage, configure, and support Kubernetes infrastructure in a dynamic, high-availability environment. The Engineer collaborates with Development, DevOps and other technology teams to ensure that the Kubernetes platform ecosystem is reliable, scalable and efficient. The ideal candidate must possess hands-on experience in Kubernetes cluster operations management and container orchestration, along with strong problem-solving skills. Experience in infrastructure platform management is required. Responsibilities: Implement and maintain platform services in Kubernetes infrastructure. Perform upgrades and patch management for Kubernetes and its associated components (not limited to the API management system). Monitor and optimize Kubernetes resources, such as pods, nodes, and namespaces. Implement and enforce Kubernetes security best practices, including RBAC, network policies, and secrets management. Work with the security team to ensure container and cluster compliance with organizational policies. Troubleshoot and resolve issues related to Kubernetes infrastructure in a timely manner. Provide technical guidance and support to developers and DevOps teams. Maintain detailed documentation of Kubernetes configurations and operational processes. Maintenance and support of CI/CD pipelines are not part of the support scope of this position. 
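One of the security practices listed above, network policies, can be sketched as a minimal manifest. This is an illustrative example only; the namespace, labels, and port are assumptions:

```yaml
# Hypothetical policy: only pods labeled app=frontend may reach the
# api pods, and only on TCP 8080; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend    # placeholder name
  namespace: prod             # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Once any NetworkPolicy selects a pod, that pod's ingress becomes default-deny, so policies like this one double as an allowlist.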
Preferred skills and experience: At least 3 years of experience in managing and supporting Kubernetes clusters at platform operation layer, and its ecosystem. At least 2 years of infrastructure management and support, not limited to SSL certificate, Virtual IP. Proficiency in managing Kubernetes clusters using tools such as 'kubectl', Helm, or Kustomize. In-depth knowledge and experience of container technologies, including Docker. Experience with cloud platforms (AWS, GCP, Azure) and Kubernetes services (EKS, GKE, AKS). Understanding of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of centralized logging systems like Fluentd, Logstash, or Loki. Proficiency in scripting languages (e.g., Bash, Python, or Go). Experience in supporting Public Cloud or hybrid cloud environments. Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s 85,000 colleagues advise clients in 130 countries. With annual revenue of over $20 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh advises individual and commercial clients of all sizes on insurance broking and innovative risk management solutions. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and wellbeing for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. 
For more information, visit marshmclennan.com, or follow us on LinkedIn and Twitter. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person. Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X.

Posted 2 months ago

Apply

7.0 - 10.0 years

0 Lacs

Karnataka, India

On-site

Who You’ll Work With You’ll be joining a dynamic, fast-paced Global EADP (Enterprise Architecture & Developer Platforms) team within Nike. Our team is responsible for building innovative cloud-native platforms that scale with the growing demands of the business. Collaboration and creativity are at the core of our culture, and we’re passionate about pushing boundaries and setting new standards in platform development. Who We Are Looking For We are looking for an ambitious Lead Software Engineer – Platforms with a passion for cloud-native development and platform ownership. You are someone who thrives in a collaborative environment, is excited by cutting-edge technology, and excels at problem-solving. You have a strong understanding of AWS Cloud Services, Kubernetes, DevOps, Databricks, Python and other cloud-native platforms. You should be an excellent communicator, able to explain technical details to both technical and non-technical stakeholders and operate with urgency and integrity. Key Skills & Traits Deep expertise in Kubernetes, AWS services, and full-stack development. Working experience in designing and building production-grade microservices in any programming language, preferably Python. Experience building end-to-end CI/CD pipelines to build, test and deploy to different AWS environments such as Lambda, EC2, ECS, EKS etc. Experience in AI/ML with proven knowledge of building chatbots using LLMs. Familiarity with software engineering best practices – including unit tests, code review, version control, production monitoring, etc. Strong experience with React and Node JS. Proficient in managing cloud-native platforms, with a strong PaaS (Platform as a Service) focus. Knowledge of software engineering best practices including version control, code reviews, and unit testing. A proactive approach with the ability to work independently in a fast-paced, agile environment. Strong collaboration and problem-solving skills. 
Mentoring the team through complex technical problems. What You’ll Work On You will play a key role in shaping and delivering Nike’s next-generation platforms. As a Lead Software Engineer, you’ll leverage your technical expertise to build resilient, scalable solutions, manage platform performance, and ensure high standards of code quality. You’ll also be responsible for leading the adoption of open-source and agile methodologies within the organization. Day-to-Day Activities: Deep working experience on Kubernetes, AWS services, Databricks, AI/ML, etc. Working experience with infrastructure as code tools such as Helm, Kustomize, or Terraform. Implementation of open-source projects in K8s. Ability to set up monitoring, logging, and alerting for Kubernetes clusters. Implementation of Kubernetes security best practices, like RBAC, network policies, and pod security policies. Experience with container runtimes like Docker. Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Design, implement, and maintain robust CI/CD pipelines using Jenkins for efficient software delivery. Manage and optimize Artifactory repositories for efficient artifact storage and distribution. Architect, deploy, and manage AWS EC2 instances, Lambda functions, Auto Scaling Groups (ASG), and Elastic Block Store (EBS) volumes. Collaborate with cross-functional teams to ensure seamless integration of DevOps practices into the software development lifecycle. Monitor, troubleshoot, and optimize AWS resources to ensure high availability, scalability, and performance. Implement security best practices and compliance standards in the AWS environment. Develop and maintain scripts in Python, Groovy, and Shell for automation and core engineering tasks. 
Deep expertise in at least one of Python, React, or NodeJS. Good knowledge of CI/CD pipelines and DevOps tools: Jenkins, Docker, Kubernetes, etc. Collaborate with product managers to scope new features and capabilities. Strong collaboration and problem-solving skills. 7-10 years of experience in designing and building production-grade platforms. Technical expertise in Kubernetes, AWS Cloud Services and cloud-native architectures. Proficiency in Python, Node JS, React, SQL, and AWS. Strong understanding of PaaS architecture and DevOps tools like Kubernetes, Jenkins, Terraform, Docker. Familiarity with governance, security features, and performance optimization. Keen attention to detail with a growth mindset and the desire to explore new technologies.
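A production-grade microservice of the kind this role describes typically exposes a health endpoint for Kubernetes readiness probes. The sketch below uses only the Python standard library (a real service would more likely use FastAPI or Flask, which the listing does not specify); all names are illustrative:

```python
# Minimal health-endpoint sketch using only the standard library.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # A readiness probe just needs a 200 and a small JSON body.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging; a real service would emit
        # structured logs instead.
        pass

def serve(port: int = 0) -> HTTPServer:
    """Start the server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A Kubernetes readinessProbe pointed at `/health` would then gate traffic to the pod until the endpoint answers 200.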

Posted 2 months ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Location: Bangalore / Hyderabad / Chennai / Pune / Gurgaon Mode: Hybrid (3 days/week from office) Relevant Experience: 7+ years must Role Type: Individual Contributor Client: US-based multinational banking institution Role Summary We are hiring a seasoned DevOps Engineer (IC) to drive infrastructure automation, deployment reliability, and engineering velocity for AWS-hosted platforms. You’ll play a hands-on role in building robust CI/CD pipelines, managing Kubernetes (EKS or equivalent), and implementing GitOps, infrastructure as code, and monitoring systems. Must-Have Skills & Required Depth AWS Cloud Infrastructure Independently provisioned core AWS services — EC2, VPC, S3, RDS, Lambda, SNS, ECR — using CLI and Terraform. Configured IAM roles, security groups, tagging standards, and cost monitoring dashboards. Familiar with basic networking and serverless deployment models. Containerization (EKS / Kubernetes) Deployed containerized services to Amazon EKS or equivalent. Authored Helm charts, configured ingress controllers, pod autoscaling, resource quotas, and health probes. Troubleshot deployment rollouts, service routing, and network policies. Infrastructure as Code (Terraform / Ansible / AWS SAM) Created modular Terraform configurations with remote state, reusable modules, and drift detection. Implemented Ansible playbooks for provisioning and patching. Used AWS SAM for packaging and deploying serverless workloads. GitOps (Argo CD / Equivalent) Built and managed GitOps pipelines using Argo CD or similar tools. Configured application sync policies, rollback strategies, and RBAC for deployment automation. CI/CD (Bitbucket / Jenkins / Jira) Developed multi-stage pipelines covering build, test, scan, and deploy workflows. Used YAML-based pipeline-as-code and integrated Jira workflows for traceability. Scripting (Bash / Python) Written scripts for log rotation, backups, service restarts, and automated validations. 
Experienced in handling conditional logic, error management, and parameterization. Operating Systems (Linux) Proficient in Ubuntu/CentOS system management, package installations, and performance tuning. Configured Apache or NGINX for reverse proxy, SSL, and redirects. Datastores (MySQL / PostgreSQL / Redis) Managed relational and in-memory databases for application integration, backup handling, and basic performance tuning. Monitoring & Alerting (Tool-Agnostic) Configured metrics collection, alert rules, and dashboards using tools like CloudWatch, Prometheus, or equivalent. Experience in designing actionable alerts and telemetry pipelines. Incident Management & RCA Participated in on-call rotations. Handled incident bridges, triaged failures, communicated status updates, and contributed to root cause analysis and postmortems. Nice-to-Have Skills: Kustomize / FluxCD: Exposure to declarative deployment strategies using Kustomize overlays or FluxCD for GitOps workflows. Kafka: Familiarity with event-streaming architecture and basic integration/configuration of Kafka clusters in application environments. Datadog (or equivalent): Experience with Datadog for monitoring, logging, and alerting. Configured custom dashboards, monitors, and anomaly detection. Chaos Engineering: Participated in fault-injection or resilience testing exercises. Familiar with chaos tools or simulations for validating system durability. DevSecOps & Compliance: Exposure to integrating security scans in pipelines, secrets management, and contributing to compliance audit readiness. Build Tools (Maven / Gradle / NPM): Experience integrating build tools with CI systems. Managed dependency resolution, artifact versioning, and caching strategies. Backup / DR Tooling (Veeam / Commvault): Familiar with backup scheduling, data restore processes, and supporting DR drills or RPO/RTO planning. 
Certifications (AWS / Terraform) Possession of certifications like AWS Certified DevOps Engineer, Developer Associate, or HashiCorp Certified Terraform Associate is preferred.
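The scripting depth described above (log rotation, backups, automated validations) might look like the following minimal Python sketch. The keep-count and file layout are assumptions for illustration, not a prescribed implementation:

```python
# Keep the newest N rotated copies of a log file, dropping older ones.
import os
import shutil

def rotate(log_path: str, keep: int = 3) -> None:
    """Rotate log_path -> log_path.1, shifting older copies up one
    generation and discarding anything beyond `keep` generations."""
    # Drop the oldest generation if it exists.
    oldest = f"{log_path}.{keep}"
    if os.path.exists(oldest):
        os.remove(oldest)
    # Shift .2 -> .3, then .1 -> .2, and so on.
    for i in range(keep - 1, 0, -1):
        src = f"{log_path}.{i}"
        if os.path.exists(src):
            shutil.move(src, f"{log_path}.{i + 1}")
    # The current log becomes .1; start a fresh empty log.
    if os.path.exists(log_path):
        shutil.move(log_path, f"{log_path}.1")
        open(log_path, "w").close()
```

In practice a script like this would be wired to cron or systemd timers, or replaced outright by logrotate; the point is the error-tolerant, parameterized style the listing asks for.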

Posted 2 months ago

Apply

0 years

0 Lacs

India

On-site

Who we are. Newfold Digital (with over $1b in revenue) is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes: Bluehost, Crazy Domains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs. We’re hiring for our Developer Platform team at Newfold Digital — a team focused on building the internal tools, infrastructure, and systems that improve how our engineers develop, test, and deploy software. In this role, you’ll help design and manage CI/CD pipelines, scale Kubernetes-based infrastructure, and drive adoption of modern DevOps and GitOps practices. You’ll work closely with engineering teams across the company to improve automation, deployment velocity, and overall developer experience. We’re looking for someone who can take ownership, move fast, and contribute to a platform that supports thousands of deployments across multiple environments. What you'll do & how you'll make your mark. Build and maintain scalable CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI Manage and improve Kubernetes clusters (Helm, Kustomize) used across environments Implement GitOps workflows using Argo CD or Argo Workflows Automate infrastructure provisioning and configuration with Terraform and Ansible Develop scripts and tooling in Bash, Python, or Go to reduce manual effort and improve reliability Work with engineering teams to streamline and secure the software delivery process Deploy and manage services across cloud platforms (AWS, GCP, Azure, OCI). Who you are & what you'll need to succeed. 
Strong understanding of core DevOps concepts including CI/CD, GitOps, and Infrastructure as Code Hands-on experience with Docker, Kubernetes, and container orchestration Proficiency with at least one major cloud provider (AWS, Azure, GCP, or OCI) Experience writing and managing Jenkins pipelines or similar CI/CD tools Comfortable working with Terraform, Ansible, or other configuration management tools Strong scripting skills (Bash, Python, Go) and a mindset for automation Familiarity with Linux-based systems and cloud-native infrastructure Ability to work independently and collaboratively across engineering and platform teams Good to Have Experience with build tools like Gradle or Maven Familiarity with Bitbucket or Git-based workflows Prior experience with Argo CD or other GitOps tooling Understanding of internal developer platforms and shared libraries Prior experience with agile development and project management. Why you’ll love us. We’ve evolved; we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment. Work-life balance. Our work is thrilling and meaningful, but we know balance is key to living well. We celebrate one another’s differences. We’re proud of our culture of diversity and inclusion. We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally. We care about you. At Newfold, taking care of our employees is our top priority. We make sure that cutting-edge benefits are in place for you. 
Some of the benefits you will have: We have partnered with some of the best insurance providers to provide you excellent Health Insurance options, Education/ Certification Sponsorships to give you a chance to further your knowledge, Flexi-leaves to take personal time off and much more. Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog as we sponsor the domain registration costs. Where can we take you? We’re fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold!
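The Kustomize-based cluster management this role mentions usually follows a base-plus-overlay layout, sketched below. The file paths, deployment name, and replica patch are hypothetical, not Newfold's actual setup:

```yaml
# base/kustomization.yaml -- shared resources for every environment.
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/staging/kustomization.yaml -- environment-specific tweaks.
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: web               # placeholder deployment name
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 2
```

Rendering with `kubectl apply -k overlays/staging` applies the base manifests with the staging patch on top, so environments diverge only where the overlays say they do, which is what makes the approach GitOps-friendly.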

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary The AI&E portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. ROLE - Edge CI/CD Specialist Level: Senior Consultant As Senior Consultant at Deloitte Consulting, you will be responsible for individually delivering high quality work products within due timelines in an agile framework. Need-basis consultants will be mentoring and/or directing junior team members/liaising with onsite/offshore teams to understand the functional requirements. Responsibilities: The work you will do includes Integrate tools like SonarQube, Snyk, or other widely used code quality analysis tools to detect issues such as bugs, vulnerabilities, and code smells. Develop and maintain infrastructure as code using tools like Terraform. Work closely with development, operations, and security teams to ensure seamless integration of CI/CD processes. Provide support for deployment issues and troubleshoot problems in the CI/CD pipeline. Maintain comprehensive documentation of CI/CD processes, tools, and best practices. Train and mentor junior team members on edge and cloud computing technologies and best practices. Qualifications Skills / Project Experience: Cloud Platform: Extensive experience with cloud platforms such as AWS, Azure, or Google Cloud Proficiency in CI/CD tools such as Jenkins, GitLab CI, CircleCI, Harness, GitHub Actions, Argo CD, or Travis CI. Proficiency in using Helm charts and Kustomize for templating and managing Kubernetes manifests. 
Strong skills in scripting languages such as Bash, Python, or Groovy for automation tasks. Security: Experience working with SonarQube, Snyk or similar tools. Containerization: Experience with containerization technologies like Docker. Containerization and Orchestration: Expertise in containerization technologies like Docker and proficiency in orchestration tools like Kubernetes. Programming Languages: Proficiency in languages such as Python, Java, C++, or similar. Infrastructure as Code: Extensive experience with Ansible, Terraform, Chef, Puppet or similar tools. Project Management: Proven track record in leading large-scale cloud infrastructure projects. Collaboration: Effective communicator with cross-functional teams. Must Have: Good interpersonal and communication skills Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries Knowledge and experience working with Microsoft Office tools Good to Have: Problem-Solving: Strong analytical and troubleshooting skills to address client-specific challenges. Adaptability: Ability to quickly adapt to changing client requirements and emerging technologies. Project Leadership: Demonstrated leadership in managing client projects, ensuring timely delivery and client satisfaction. Business Acumen: Understanding of business processes and the ability to align technical solutions with client business goals. Education: B.E./B. 
Tech/M.C.A./M.Sc (CS) degree or equivalent from accredited university Prior Experience: 6 - 10 years of experience working with DevOps, Terraform, Ansible, Jenkins, CI/CD, Edge Computing, monitoring, Infrastructure as Code (IaC), SonarQube, Snyk, Docker, Kustomize Location: Bengaluru/ Hyderabad/ Gurugram The team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. Core Business Operations Practice optimizes clients’ business operations and helps them take advantage of new technologies. Drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com . #HC&IE Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. 
Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 302210

Posted 2 months ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary The AI&E portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. ROLE – Edge CI/CD Specialist Level: Consultant As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. Responsibilities: The work you will do includes: Design, implement, and maintain CI/CD pipelines tailored for edge computing environments using tools such as Jenkins, Git, and Travis CI. Develop automation scripts to streamline the deployment process. Manage code repositories and version control systems (e.g., Git). Implement automated testing frameworks with tools such as JUnit, pytest, Selenium, and Cypress for test automation to ensure code quality and reliability. Integrate tools like SonarQube, Snyk, or other widely used code quality analysis tools to detect issues such as bugs, vulnerabilities, and code smells. Develop and maintain infrastructure as code using tools like Terraform. Work closely with development, operations, and security teams to ensure seamless integration of CI/CD processes. Provide support for deployment issues and troubleshoot problems in the CI/CD pipeline. Maintain comprehensive documentation of CI/CD processes, tools, and best practices.
Train and mentor junior team members on edge and cloud computing technologies and best practices. Qualifications Skills / Project Experience: Cloud Platform: Extensive experience with cloud platforms such as AWS, Azure, or Google Cloud Proficiency in CI/CD tools such as Jenkins, GitLab CI, CircleCI, Harness, GitHub Actions, Argo CD, or Travis CI. Proficiency in using Helm charts and Kustomize for templating and managing Kubernetes manifests. Strong skills in scripting languages such as Bash, Python, or Groovy for automation tasks. Security: Experience working with SonarQube, Snyk or similar tools. Containerization and Orchestration: Expertise in containerization technologies like Docker and proficiency in orchestration tools like Kubernetes. Programming Languages: Proficiency in languages such as Python, Java, C++, or similar. Infrastructure as Code: Extensive experience with Ansible, Terraform, Chef, Puppet or similar tools. Project Management: Proven track record in leading large-scale cloud infrastructure projects. Collaboration: Effective communicator with cross-functional teams. Must Have: Good interpersonal and communication skills Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries Knowledge and experience working with Microsoft Office tools Good to Have: Problem-Solving: Strong analytical and troubleshooting skills to address client-specific challenges. Adaptability: Ability to quickly adapt to changing client requirements and emerging technologies. Project Leadership: Demonstrated leadership in managing client projects, ensuring timely delivery and client satisfaction. Business Acumen: Understanding of business processes and the ability to align technical solutions with client business goals. Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university
Prior Experience: 3–7 years of experience working with DevOps, Terraform, Ansible, Jenkins, CI/CD, Edge Computing, monitoring, Infrastructure as Code (IaC), SonarQube, Snyk, Docker, Kustomize Location: Bengaluru/Hyderabad/Gurugram The team Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. #HC&IE Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303055
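Listings like this one pair Helm with Kustomize for templating and managing Kubernetes manifests. Kustomize's central idea is layering an environment-specific overlay on top of a shared base; a rough Python sketch of that overlay merge (illustrative only; this simplifies Kustomize's actual strategic-merge-patch behavior, and the manifest fields are made up):

```python
# Minimal illustration of the Kustomize "base + overlay" idea:
# an overlay recursively overrides fields in a shared base manifest.

def merge(base: dict, overlay: dict) -> dict:
    """Recursively apply overlay values on top of a base manifest."""
    result = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical base Deployment shared by all environments.
base = {
    "kind": "Deployment",
    "metadata": {"name": "api"},
    "spec": {"replicas": 1, "template": {"image": "api:latest"}},
}

# Production overlay: more replicas and a pinned image tag.
prod_patch = {"spec": {"replicas": 5, "template": {"image": "api:1.4.2"}}}

prod = merge(base, prod_patch)
print(prod["spec"]["replicas"])           # 5
print(prod["spec"]["template"]["image"])  # api:1.4.2
print(prod["metadata"]["name"])           # api (inherited from the base)
```

In real Kustomize, the base and each overlay live in separate directories with their own kustomization.yaml, and `kustomize build overlays/prod` produces the merged manifests.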

Posted 2 months ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Company Description Prioxis Technologies, formerly known as HypeTeq Software Solutions, is dedicated to delivering exceptional IT services and custom software solutions. With 5+ years of experience, we have successfully completed over 100 projects across various industries, serving clients in more than 8 countries. Our team comprises over 50 certified software developers. As a Microsoft Gold Partner, we are recognized for our innovative approach and technical excellence in technology outsourcing. Our services include custom software development, cloud consulting, front-end and back-end development, enterprise mobility, and DevOps. Founded in 2019, Prioxis Technologies aims to empower businesses with tailor-made technology solutions. 🛡️ Security & Compliance Lead Security Risk Assessments (SRA) and Data Classification Requests (DCR) Ensure compliance with Roche’s security standards Conduct security audits and implement remediation plans 💰 Financial Operations (FinOps) Optimize cloud infrastructure costs Manage and monitor MLOps budget plans Provide cost analysis and financial reporting 🏗️ Architecture & Engineering Design, document, and maintain MLOps infrastructure Contribute to architectural best practices Implement and deploy robust MLOps pipelines 🔍 Technical Evaluations Run Proofs of Concept (PoCs) for emerging tools Evaluate solutions and recommend technical direction 🧩 Task Management Break down technical epics into actionable tasks Identify dependencies and propose optimal approaches 5+ years in MLOps, DevOps, or related roles Proficient in Python Hands-on with AWS, Docker, Kubernetes (Helm, Kustomize) Experience with Terraform or CDK (Infrastructure as Code) Skilled in CI/CD tools: GitLab CI, ArgoCD Familiar with observability tools: Grafana, ELK, or Datadog Bonus: Experience with Kubeflow, KServe Solid understanding of system architecture and design patterns Work Timing: 12:30 PM – 9:30 PM IST Contract Duration: 6 Months Location: Remote
(India-based candidates preferred)

Posted 2 months ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Technical Operation for pRED-MLOps Job Profile Summary: Support technical operations for pRED-MLOps, focusing on security, financial operations, architecture, technical evaluations, and task breakdown. Key Responsibilities Security ● Drive security processes including Security Risk Assessment (SRA) and Data Classification Request (DCR). ● Ensure compliance with Roche security policies. ● Conduct security audits and lead implementation of remediation plans. Financial Operations (FinOps) ● Manage and optimize cloud infrastructure costs. ● Develop and monitor budget plans for MLOps operations. ● Provide regular cost analysis and reporting. Architecture and Engineering Support ● Contribute to the design and maintenance of the MLOps solutions and infrastructure. ● Contribute to architectural best practices. ● Support the team in documenting system architecture and configurations. ● Contribute to the hands-on implementation of MLOps solutions and infrastructure. Technical Explorations/Evaluations ● Conduct Proofs of Concept (PoCs) for new technologies. ● Evaluate technical solutions and make recommendations. Technical Task Breakdown ● Support the team in ○ breaking down tasks and epics into manageable components ○ identifying dependencies between tasks ○ proposing an optimal approach Qualifications Security Experience- Experience with security processes, preferably Roche SRA/DCR FinOps Experience- Experience managing and optimizing cloud costs Architecture- Understanding of system architecture principles and design patterns, preferably with previous experience in MLOps or a similar area of work Technical Skills- Proficient in Python ● Extensive hands-on experience with cloud technologies, preferably AWS ● Extensive hands-on experience in Docker and Kubernetes (incl. Helm, Kustomize)
● Familiar with Infrastructure-as-Code tools, such as Terraform/CDK ● Familiar with CI/CD tools, such as GitLab CI, ArgoCD ● Familiar with observability stacks, such as the Grafana Labs stack, ELK, or Datadog ● Preferably has previous experience in popular MLOps technologies, such as Kubeflow, KServe. Task Management- Experience in breaking down technical tasks and epics

Posted 2 months ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance vs spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions. Responsibilities: Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS) Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability Implement robust access control via IAM roles and policy orchestration, ensuring least-privilege and auditability across multi-environment deployments Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments Support deployment of infrastructure lambda functions Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments Ensure auditability and observability of pipeline states Implement security best practices, audit, and compliance requirements within the infrastructure Provide technical leadership, mentorship, and training to engineering staff Engage with clients to understand their technical 
and business requirements, and provide tailored solutions. If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads. Troubleshoot and resolve complex infrastructure issues. Potentially participate in pre-sales activities and provide technical expertise to sales teams. Qualifications: 10+ years of experience in an Infrastructure Engineer or similar role Extensive experience with Amazon Web Services (AWS) Proven ability to architect for scale, availability, and high-performance workloads Ability to plan and execute zero-disruption migrations Experience with enterprise IAM and familiarity with authentication technology such as OAuth2 and OIDC Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD) Solid understanding of git, branching models, CI/CD pipelines and deployment strategies Experience with security, audit, and compliance best practices Excellent problem-solving and analytical skills Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders Experience in technical leadership, mentoring, team-forming and fostering self-organization and ownership Experience with client relationship management and project planning Certifications: Relevant certifications (for example, Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.) Software development experience (for example, Terraform, Python) Experience with machine learning infrastructure Education: B.Tech/BE in computer science, a related field, or equivalent experience

Posted 2 months ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: We are seeking a highly skilled and experienced Senior Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions. Responsibilities: Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS) Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability Implement robust access control via IAM roles and policy orchestration, ensuring least-privilege and auditability across multi-environment deployments Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments Support deployment of infrastructure lambda functions Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments Ensure auditability and observability of pipeline states Implement security best practices, audit, and compliance requirements within the infrastructure Engage with clients to understand their technical and business requirements, and provide tailored solutions If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with 
support from our Service Delivery Leads. Troubleshoot and resolve complex infrastructure issues. Qualifications: 6+ years of experience in Infrastructure Engineering or a similar role Extensive experience with Amazon Web Services (AWS) Proven ability to architect for scale, availability, and high-performance workloads Deep knowledge of Infrastructure as Code (IaC) with Terraform Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD) Solid understanding of git, branching models, CI/CD pipelines and deployment strategies Experience with security, audit, and compliance best practices Excellent problem-solving and analytical skills Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders Experience in technical mentoring, team-forming and fostering self-organization and ownership Experience with client relationship management and project planning Certifications: Relevant certifications (e.g., Certified Kubernetes Administrator, AWS Certified Machine Learning Engineer - Associate, AWS Certified Data Engineer - Associate, AWS Certified Developer - Associate, etc.) Software development experience (e.g., Terraform, Python) Experience/exposure to machine learning infrastructure Education: B.Tech/BE in computer science, a related field, or equivalent experience
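Both infrastructure roles above call for canary and blue/green deployment strategies. A canary shifts a small, fixed fraction of traffic to the new release before full rollout; a deterministic sketch of that split (illustrative only; production setups put these weights in the ingress or service mesh, not in application code):

```python
def route(request_id: int, canary_percent: int) -> str:
    """Deterministically send canary_percent of requests to the canary.

    Real platforms push this into ingress or service-mesh traffic
    weights; this only illustrates the splitting idea.
    """
    return "canary" if request_id % 100 < canary_percent else "stable"

# Simulate 1,000 requests with a 10% canary weight.
targets = [route(i, 10) for i in range(1000)]
print(targets.count("canary"))  # 100
print(targets.count("stable"))  # 900
```

If error rates on the canary stay within tolerance, the weight is raised stepwise to 100%; blue/green is the degenerate case where the weight flips from 0 to 100 in one switch.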

Posted 2 months ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Application Management Services AMS’s mission is to maximize the contributions of MMC Technology as a business-driven, future-ready and competitive function by reducing the time and cost spent managing applications. AMS, a business unit of Marsh McLennan, is seeking candidates for the following position based in the Gurgaon/Noida office: Principal Engineer – Kubernetes Platform Engineer Position overview: We are seeking a skilled Kubernetes Platform Engineer with a strong background in cloud technologies (AWS, Azure) to manage, configure, and support Kubernetes infrastructure in a dynamic, high-availability environment. The Engineer collaborates with Development, DevOps and other technology teams to ensure that the Kubernetes platform ecosystem is reliable, scalable and efficient. The ideal candidate must possess hands-on experience in Kubernetes cluster operations management and container orchestration, along with strong problem-solving skills. Experience in infrastructure platform management is required. Responsibilities: Implement and maintain platform services in the Kubernetes infrastructure. Perform upgrades and patch management for Kubernetes and its associated components (including, but not limited to, the API management system). Monitor and optimize Kubernetes resources, such as pods, nodes, and namespaces. Implement and enforce Kubernetes security best practices, including RBAC, network policies, and secrets management. Work with the security team to ensure container and cluster compliance with organizational policies. Troubleshoot and resolve issues related to Kubernetes infrastructure in a timely manner. Provide technical guidance and support to developers and DevOps teams. Maintain detailed documentation of Kubernetes configurations and operational processes. Maintenance and support of CI/CD pipelines are not part of the scope of this position.
Preferred skills and experience: At least 3 years of experience in managing and supporting Kubernetes clusters at the platform operations layer, and its ecosystem. At least 2 years of infrastructure management and support, including, but not limited to, SSL certificates and virtual IPs. Proficiency in managing Kubernetes clusters using tools such as `kubectl`, Helm, or Kustomize. In-depth knowledge of and experience with container technologies, including Docker. Experience with cloud platforms (AWS, GCP, Azure) and Kubernetes services (EKS, GKE, AKS). Understanding of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of centralized logging systems like Fluentd, Logstash, or Loki. Proficiency in scripting languages (e.g., Bash, Python, or Go). Experience in supporting public cloud or hybrid cloud environments. Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s 85,000 colleagues advise clients in 130 countries. With annual revenue of over $20 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh advises individual and commercial clients of all sizes on insurance broking and innovative risk management solutions. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and wellbeing for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients.
For more information, visit marshmclennan.com, or follow us on LinkedIn and Twitter. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.
R_310034
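The Kubernetes Platform Engineer listing above asks for monitoring tools like Prometheus, Grafana, or Datadog; such tools typically fire an alert when a metric's average over a recent window crosses a threshold. A toy sketch of that rule evaluation (hypothetical metric values; real Prometheus rules are written in PromQL, not Python):

```python
from collections import deque

def should_alert(samples, window: int, threshold: float) -> bool:
    """Fire when the average of the last `window` samples exceeds threshold.

    Mimics a Prometheus-style avg_over_time(...) > threshold rule;
    purely illustrative, not the actual rule engine.
    """
    recent = deque(samples, maxlen=window)
    return len(recent) == window and sum(recent) / window > threshold

# Hypothetical CPU utilisation samples (fraction of a core).
cpu = [0.40, 0.55, 0.92, 0.95, 0.97]
print(should_alert(cpu, window=3, threshold=0.90))  # True: last three average ~0.947
print(should_alert(cpu, window=5, threshold=0.90))  # False: full window averages ~0.758
```

The `for` duration in real alerting rules plays the same role as the window here: it keeps a single spike from paging anyone.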

Posted 2 months ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description We are seeking a highly skilled and passionate GKE Platform Engineer to join our growing team. This role is ideal for someone with deep experience in managing Google Kubernetes Engine (GKE) platforms at scale, particularly with enterprise-level workloads on Google Cloud Platform (GCP). As part of a dynamic team, you will design, develop, and optimize Kubernetes-based solutions, using tools like GitHub Actions, ACM, KCC, and workload identity to provide high-quality platform services to developers. You will drive CI/CD pipelines across multiple lifecycle stages, manage GKE environments at scale, and enhance the developer experience on the platform. You should have a strong mindset for developer experience, focused on creating reliable, scalable, and efficient infrastructure to support developer needs. This is a fast-paced environment where collaboration across teams is key to delivering impactful results. Responsibilities: GKE Platform Management at Scale: Manage and optimize large-scale GKE environments in a multi-cloud and hybrid-cloud context, ensuring the platform is highly available, scalable, and secure. CI/CD Pipeline Development: Build and maintain CI/CD pipelines using tools like GitHub Actions to automate deployment workflows across the GKE platform. Ensure smooth integration and delivery of services throughout their lifecycle. Enterprise GKE Management: Leverage advanced features of GKE such as ACM (Anthos Config Management) and KCC (Kubernetes Config Connector) to manage GKE clusters efficiently at the enterprise scale. Workload Identity & Security: Implement workload identity and security best practices to ensure secure access and management of GKE workloads. Custom Operators & Controllers: Develop custom operators and controllers for GKE, automating the deployment and management of custom services to enhance the developer experience on the platform.
Developer Experience Focus: Maintain a developer-first mindset to create an intuitive, reliable, and easy-to-use platform for developers. Collaborate with development teams to ensure seamless integration with the GKE platform. GKE Deployment Pipelines: Provide guidelines and best practices for GKE deployment pipelines, leveraging tools like Kustomize and Helm to manage and deploy GKE configurations effectively. Ensure pipelines are optimized for scalability, security, and repeatability. Zero Trust Model: Ensure GKE clusters operate effectively within a Zero Trust security model. Maintain a strong understanding of the principles of Zero Trust security, including identity and access management, network segmentation, and workload authentication. Ingress Patterns: Design and manage multi-cluster and multi-regional ingress patterns to ensure seamless traffic management and high availability across geographically distributed Kubernetes clusters. Deep Troubleshooting & Support: Provide deep troubleshooting knowledge and support to help developers pinpoint issues across the GKE platform, focusing on debugging complex Kubernetes issues, application failures, and performance bottlenecks. Utilize diagnostic tools and debugging techniques to resolve critical platform-related issues. Observability & Logging Tools: Implement and maintain observability across GKE clusters, using monitoring, logging, and alerting tools like Prometheus, Dynatrace, and Splunk. Ensure proper logging and metrics are in place to enable developers to effectively monitor and diagnose issues within their applications. Platform Automation & Integration: Automate platform management tasks, such as scaling, upgrading, and patching, using tools like Terraform, Helm, and GKE APIs. Continuous Improvement & Learning: Stay up-to-date with the latest trends and advancements in Kubernetes, GKE, and Google Cloud services to continuously improve platform capabilities. 
Qualifications: Experience: 8+ years of overall experience in cloud platform engineering, infrastructure management, and enterprise-scale operations. 5+ years of hands-on experience with Google Cloud Platform (GCP), including designing, deploying, and managing cloud infrastructure and services. 5+ years of experience specifically with Google Kubernetes Engine (GKE), managing large-scale, production-grade clusters in enterprise environments. Experience with deploying, scaling, and maintaining GKE clusters in production environments. Hands-on experience with CI/CD practices and automation tools like GitHub Actions. Proven track record of building and managing GKE platforms in a fast-paced, dynamic environment. Experience developing custom Kubernetes operators and controllers for managing complex workloads. Deep Troubleshooting Knowledge: Strong ability to troubleshoot complex platform issues, with expertise in diagnosing problems across the entire GKE stack. Technical Skills: Must Have: Google Cloud Platform (GCP): Extensive hands-on experience with GCP, particularly Kubernetes Engine (GKE), Cloud Storage, Cloud Pub/Sub, Cloud Logging, and Cloud Monitoring. Kubernetes (GKE) at Scale: Expertise in managing large-scale GKE clusters, including security configurations, networking, and workload management. CI/CD Automation: Strong experience with CI/CD pipeline automation tools, particularly GitHub Actions, for building, testing, and deploying applications. Kubernetes Operators & Controllers: Ability to develop custom Kubernetes operators and controllers to automate and manage applications on GKE. Workload Identity & Security: Solid understanding of Kubernetes workload identity and access management (IAM) best practices, including integration with GCP Identity and Google Cloud IAM. Anthos & ACM: Hands-on experience with Anthos Config Management (ACM) and Kubernetes Config Connector (KCC) to manage and govern GKE clusters and workloads at scale.
Infrastructure as Code (IaC): Experience with tools like Terraform to manage GKE infrastructure and cloud resources.
Helm & Kustomize: Experience in using Helm and Kustomize for packaging, deploying, and managing Kubernetes resources efficiently. Ability to create reusable and scalable Kubernetes deployment templates.
Observability & Logging Tools: Experience with observability tools such as Prometheus, Dynatrace, and Splunk to monitor and log GKE performance, providing developers with actionable insights for troubleshooting.

Nice to Have:
Zero Trust Security Model: Strong understanding of implementing and maintaining security in a Zero Trust model for GKE, including workload authentication, identity management, and network security.
Ingress Patterns: Experience with designing and managing multi-cluster and multi-regional ingress in Kubernetes to ensure fault tolerance, traffic management, and high availability.
Familiarity with Open Policy Agent (OPA) for policy enforcement in Kubernetes environments.

Education & Certification:
Bachelor’s degree in Computer Science, Engineering, or a related field.
Relevant GCP certifications, such as Google Cloud Certified Professional Cloud Architect or Google Cloud Certified Professional Cloud Developer.

Soft Skills:
Collaboration: Strong ability to work with cross-functional teams to ensure platform solutions meet development and operational needs.
Problem-Solving: Excellent problem-solving skills with a focus on troubleshooting and performance optimization.
Communication: Strong written and verbal communication skills, able to communicate effectively with both technical and non-technical teams.
Initiative & Ownership: Ability to take ownership of platform projects, driving them from conception to deployment with minimal supervision.
Adaptability: Willingness to learn new technologies and adjust to evolving business needs.
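As a sketch of the IaC requirement above, a GKE cluster can be declared with the Terraform `google` provider. The cluster name, region, and project ID below are placeholders, and a production configuration would add node pools, networking, and release-channel settings:

```hcl
# Hypothetical example: a GKE cluster managed via the Terraform google provider.
resource "google_container_cluster" "platform" {
  name     = "platform-cluster"
  location = "us-central1"

  # Use separately managed node pools instead of the default one.
  remove_default_node_pool = true
  initial_node_count       = 1

  # Workload Identity lets pods authenticate to GCP IAM without node keys.
  workload_identity_config {
    workload_pool = "my-project.svc.id.goog"
  }
}
```

Keeping the cluster definition in Terraform means upgrades and configuration drift are handled through plan/apply reviews rather than ad hoc console changes.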

Posted 3 months ago
