Jobs
Interviews

183 GLM Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

7.0 - 12.0 years

13 - 17 Lacs

gurugram

Work from Office

About The Role
Project Role: Security Architect
Project Role Description: Define the cloud security framework and architecture, ensuring it meets the business requirements and performance goals. Document the implementation of the cloud security controls and transition to cloud security-managed operations.
Must-have skills: Cloud Security Architecture
Good-to-have skills: NA
Minimum 7.5 years of experience required
Educational Qualification: 15 years of full-time education

Summary: As a Cloud Security Architect, lead the design and implementation of secure, scalable cloud environments across GCP, Azure, and AWS. With deep hands-on expertise and strong stakeholder engagement, drive security strategy, define controls, and support teams in building cloud-native solutions aligned with business, compliance, and operational goals.

Roles & Responsibilities:
- Design and implement GCP security architecture.
- Define IAM, network, and data protection controls.
- Lead threat modeling and architecture risk assessments.
- Collaborate with engineering, DevOps, and compliance teams.
- Engage stakeholders on cloud security risks and solutions.
- Integrate and automate security tooling (SCC, DLP, Chronicle).
- Guide secure CI/CD practices and container hardening.
- Develop documentation, playbooks, and handover materials.

Professional & Technical Skills:
- Strong knowledge of GCP-native services: IAM, VPC Service Controls, KMS, GKE, SCC, Cloud Functions.
- Proficient in Terraform, Python, and shell scripting.
- Hands-on experience with CSPM, CNAPP, and CWPP tools (e.g., Prisma, Wiz).
- Familiar with CIS, NIST, PCI-DSS, and other compliance frameworks.
- Knowledge of Zero Trust principles and container security.
- Skilled in secure CI/CD pipelines and policy automation.
- Effective communicator with strong stakeholder and team leadership skills.
- Structured and detailed in documentation and architecture delivery.

Additional Information:
- 7+ years of experience in security architecture with strong GCP expertise.
- This role is based in our Gurugram office.
- 15 years of full-time education required.
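The IAM-controls and policy-automation duties above can be illustrated with a small sketch. The snippet below is a hypothetical, simplified audit over a GCP-style IAM policy document, flagging bindings that grant privileged roles or expose resources to public principals; the role and member strings follow GCP conventions, but the policy itself is made-up sample data, and a real audit would pull live policies via the IAM or Cloud Asset APIs.

```python
# Minimal sketch of an IAM policy audit: flag bindings that grant
# privileged roles, or that grant anything to public members.
PRIVILEGED_ROLES = {"roles/owner", "roles/editor", "roles/iam.securityAdmin"}
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def find_risky_bindings(policy):
    """Return (role, member) pairs that are privileged, public, or both."""
    findings = []
    for binding in policy.get("bindings", []):
        role = binding["role"]
        for member in binding.get("members", []):
            if role in PRIVILEGED_ROLES or member in PUBLIC_MEMBERS:
                findings.append((role, member))
    return findings

# Sample policy data (hypothetical project, not a real environment).
sample_policy = {
    "bindings": [
        {"role": "roles/viewer", "members": ["user:dev@example.com"]},
        {"role": "roles/owner", "members": ["user:admin@example.com"]},
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    ]
}
```

A check like this is the kind of rule a CSPM tool (Prisma, Wiz) evaluates continuously; the sketch just shows the shape of the logic.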

Posted 1 day ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

bengaluru

Work from Office

As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Comprehensive Feature Development and Issue Resolution: Work on end-to-end feature development and solve challenges faced in the implementation.
- Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand problems and issues with the product and features, and resolve them per the defined SLAs.
- Continuous Learning and Technology Integration: Be eager to learn new technologies and apply them in feature development.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Relevant experience: 5 years
- Proficient in Core Java (Java 17+), Spring Boot, Tomcat/JBoss/WebSphere, RDBMS (MySQL/Oracle), and developing RESTful APIs.
- Experienced in modernizing and migrating on-premise Java/J2EE applications to Google Cloud.
- Hands-on with Docker and Kubernetes (GKE) for containerizing and deploying workloads.
- Skilled in using GCP services like Cloud SQL, GCE, and GKE, and setting up CI/CD pipelines.
- Well-versed with discovery and intake tools such as Delivery Curator for cloud migration planning.

Preferred technical and professional experience: None

Posted 1 day ago

Apply

8.0 - 12.0 years

30 - 42 Lacs

hyderabad, pune, bengaluru

Work from Office

We are seeking a Technical Lead with strong application development expertise in Google Cloud Platform (GCP). The successful candidate will provide technical leadership in designing and implementing robust, scalable cloud-based solutions. If you are an experienced professional passionate about GCP technologies and committed to staying abreast of emerging trends, apply today.

Responsibilities
- Design, develop, and deploy cloud-based solutions using GCP, establishing and adhering to cloud architecture standards and best practices
- Hands-on coding experience building Java applications using GCP-native services like GKE, Cloud Run, Functions, Firestore, Cloud SQL, Pub/Sub, etc.
- Develop low-level application architecture designs based on enterprise standards
- Choose appropriate GCP services that meet functional and non-functional requirements
- Demonstrate comprehensive knowledge of GCP PaaS, serverless, and database services
- Provide technical leadership to development and infrastructure teams, guiding them throughout the project lifecycle
- Ensure all cloud-based solutions comply with security and regulatory standards
- Enhance cloud-based solutions to optimize performance, cost, and scalability
- Stay up to date with the latest cloud technologies and trends in the industry
- Familiarity with GCP GenAI solutions and models, including Vertex AI, Codebison, and Gemini models, is preferred but not required
- Hands-on experience with front-end technologies like Angular or React is an added advantage

Requirements
- Bachelor's or Master's degree in Computer Science, Information Technology, or a similar field
- Must have 8+ years of extensive experience designing, implementing, and maintaining applications on GCP
- Comprehensive expertise in GCP services such as GKE, Cloud Run, Functions, Cloud SQL, Firestore, Firebase, Apigee, App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring
- Solid understanding of cloud security best practices and experience implementing security controls in GCP
- Thorough understanding of cloud architecture principles and best practices
- Experience with automation and configuration management tools like Terraform and a sound understanding of DevOps principles
- Proven leadership skills and the ability to mentor and guide a technical team
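Messaging services like Pub/Sub (listed above) deliver at-least-once, so consumers are usually written to be idempotent. The sketch below is a hypothetical deduplicating handler, not tied to the actual Pub/Sub client library: it keys off a message ID and skips redeliveries. The in-memory set is an assumption for illustration; a real service would persist processed IDs in Firestore, Cloud SQL, or Redis.

```python
# Sketch of an idempotent message handler for at-least-once delivery
# (as with Pub/Sub). The dedupe store is in-memory for illustration.
class IdempotentConsumer:
    def __init__(self):
        self.seen = set()   # IDs of messages already processed
        self.results = []   # side effects, standing in for real work

    def handle(self, message_id, payload):
        """Process a message once; silently drop redeliveries."""
        if message_id in self.seen:
            return False    # duplicate delivery, already handled
        self.seen.add(message_id)
        self.results.append(payload.upper())  # stand-in for real work
        return True
```

Calling `handle("m1", "order")` twice performs the work only once, which is the property a redelivering broker requires of its subscribers.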

Posted 1 day ago

Apply

8.0 - 13.0 years

2 - 2 Lacs

hyderabad

Work from Office

SUMMARY
Experience: 10+ years
Location: Electronic City, Bangalore

Leadership and Strategy
- Develop container platform roadmaps and strategies for growth based on business needs
- Engage in and enhance the complete service lifecycle, from conceptualization and design to implementation and operation
- Identify the growth path and scalability options of a solution and include these in design activities

Solution Design, Architecture, and Planning
- Gather requirements, assess technical feasibility, and design integrated container solutions that align with business objectives
- Architect and optimize technical solutions to meet the customer's requirements
- Identify potential challenges and constraints that impact the solution and project plan

Opportunity Assessment
- Respond to the technical sections of RFIs/RFPs and lead proof-of-concept engagements to a successful conclusion
- Utilize an effective consultative approach to advance opportunities

Innovation and Research
- Stay abreast of emerging technologies, trends, and industry developments related to Kubernetes, containers, cloud computing, and security
- Develop best practices, accelerators, and show-and-tell sessions for container platform solutions and integrations

Customer-Centric Mindset
- Strong focus on understanding customer business requirements and solving complex cloud technology issues
- Be the trusted advisor, delight customers, and deliver exceptional customer experiences to drive customer success
- Communicate complex technical concepts and findings to non-technical stakeholders

Team Collaboration
- Collaborate with cross-functional teams, including system administrators, developers, data scientists, and project managers, to ensure successful project delivery
- Understand the roles of, and effectively engage, other teams and resources within the company
- Mentor and train new team members and lead participation in tech talks, forums, and innovation

Performance Optimization and Troubleshooting
- Troubleshoot and resolve technical issues related to complete solutions
- Identify performance bottlenecks and provide remediations

Project Delivery
- Ability to lead technical projects by gathering requirements, preparing the architecture/design, and executing end to end
- Must be able to bring clarity to, and drive, complex projects involving multiple stakeholders
- Solid business acumen and ability to converse with clients on issues and challenges

Qualifications
- Bachelor's/Master's degree in Computer Science, Information Technology, or a related field
- Proven experience as a Solutions Architect and container platform expert, or a similar role, with expertise in designing and implementing complex solutions
- Red Hat Certified Specialist in Containers and Kubernetes (RHCSA, RHCE) or CNCF certification (CKA, CKAD, CKS) is preferred
- Typically, 10 years of experience delivering complex container platform projects

Technical Skills
- Container technologies and orchestration platforms: in-depth knowledge and hands-on experience with containerization technologies like Docker or Podman, and with at least two container orchestration technologies such as CNCF Kubernetes, Red Hat OpenShift, SUSE Rancher RKE/K3s, Canonical Charmed Kubernetes, or HPE AI Essentials
- Linux: knowledge and experience with Linux system administration, package management, scheduling, boot procedures/troubleshooting, performance optimization, and networking concepts; good knowledge and hands-on experience with at least two Linux distributions such as RHEL, SLES, Ubuntu, or Debian
- Virtualization: good knowledge and hands-on experience with virtualization technologies like KVM and OpenShift Virtualization
- Programming languages: good experience with programming languages like Python and scripting languages like Bash
- Cloud platforms: good knowledge and hands-on experience with OpenStack cloud solutions; good knowledge of any of the public cloud container services (AKS, EKS, GKE); understanding of cloud infrastructure and services for scalable AI deployments; good understanding of cloud security and observability
- Storage: in-depth knowledge and hands-on experience with CSI drivers; good knowledge of storage concepts: block, file, and/or object storage (like MinIO)
- Networks: good knowledge of network protocols like TCP/IP, S3, FTP, NFS, and SMB/CIFS; good knowledge of DNS, routing, and load balancing
- GPU: knowledge of GPU technologies, NVIDIA GPU Operator, and NVIDIA vGPU technology
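One concrete piece of the orchestration work described above is the arithmetic behind rolling updates: Kubernetes resolves percentage values of `maxSurge` by rounding up and `maxUnavailable` by rounding down against the replica count, so a rollout can always make progress. A small sketch of that calculation, assuming percentage-string or integer inputs:

```python
import math

def resolve_rolling_update(replicas, max_surge="25%", max_unavailable="25%"):
    """Resolve Deployment rollingUpdate settings to absolute pod counts.

    Mirrors Kubernetes semantics: percentage maxSurge rounds up,
    percentage maxUnavailable rounds down; integers pass through.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = int(value[:-1]) / 100 * replicas
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)

    return {
        "max_surge_pods": resolve(max_surge, round_up=True),
        "max_unavailable_pods": resolve(max_unavailable, round_up=False),
    }
```

For a 10-replica Deployment with the 25%/25% defaults, this yields a surge of 3 pods and at most 2 unavailable, which is why a default rollout briefly runs up to 13 pods.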

Posted 4 days ago

Apply

12.0 - 16.0 years

0 Lacs

noida, uttar pradesh

On-site

As a leader in supporting AWM LFO Predictive Analytics Ameriprise India team, you will be responsible for leading advanced data science work and collaborating with cross-functional teams to recommend and implement analytical solutions for the Advise & Wealth Management business. **Key Responsibilities:** - Analyze, design, develop, and implement decision support models and data visualizations using tools like Python, Microsoft Excel/PowerBI. - Develop innovative analytical solutions such as segmentation, predictive modeling, simulation modeling, and optimization. - Maintain infrastructure systems connecting internal data sets and create new data collection frameworks. - Interact with business partners to analyze and interpret business needs, translating top-level goals into quantifiable analysis activities. - Develop presentation decks and effectively communicate with business leaders. - Demonstrate thought leadership in solving complex business problems. - Drive team performance and ensure strong people development for high performance. **Required Qualifications:** - Engineering (B.E./ B.Tech.) graduate or Masters in Stats, Quant, Mathematics, OR, etc. from a well-recognized institute; MBA degree preferred. - 12+ years of experience in Data Science. - Hands-on experience in complex analytics projects using advanced statistical methods. - Proficiency in Python, cloud computing (AWS), SQL, and Data Lake experience. - Ability to effectively present complex technical materials to business leaders. - Strategic thinker providing technical solutions for complex business problems. - Excellent knowledge of MS Office Suite. - In-depth knowledge of the financial services industry with a focus on risk. - People leadership experience with prior team management responsibilities. **Preferred Qualifications:** - Ability to engage business partners and finance personnel with clear, data-backed insights. - Experience working with AWS Cloud framework and associated tools. 
- Experience working in a data lake environment. - Knowledge of tools like DataIku, PowerBI, Tableau, and other BI and analytical tools. At Ameriprise India LLP, we have been providing client-based financial solutions for 125 years. Join our inclusive and collaborative culture that rewards contributions and offers opportunities for growth. Make your mark and a difference in your community by creating a career at Ameriprise India LLP.
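As a toy illustration of the predictive-modeling work described above, here is ordinary least squares for a single predictor in pure Python (a GLM with identity link). This is a sketch on made-up numbers; production work would use statsmodels or scikit-learn on real data.

```python
def fit_ols(xs, ys):
    """Closed-form simple linear regression: y ~ slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = Cov(x, y) / Var(x); intercept pins the line at the means
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

On data generated from y = 2x + 1 the fit recovers slope 2 and intercept 1 exactly, which is a quick sanity check before moving to multivariate models.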

Posted 4 days ago

Apply

15.0 - 20.0 years

40 - 45 Lacs

noida

Work from Office

15+ years of experience in SAP EHS and related modules. Proven track record in implementing MSDS, GLM, SVT. Hands-on experience with SAP S/4HANA Cloud for Environment Management.

Posted 4 days ago

Apply

10.0 - 15.0 years

13 - 17 Lacs

noida

Work from Office

About The Role
Qualifications and Experience
- A Bachelor's degree in Engineering and around 10+ years of professional technology experience
- Experience deploying and running enterprise-grade public cloud infrastructure, preferably on GCP
- Hands-on automation with Terraform and Groovy, and experience with CI/CD
- Hands-on experience in Linux/Unix environments and scripting languages (e.g., Shell, Perl, Python, JavaScript, Golang)
- Hands-on experience in two or more of the following areas: databases (NoSQL/SQL: Hadoop, Cassandra, MySQL); messaging system configuration and maintenance (Kafka + ZooKeeper, MQTT, RabbitMQ); WAF, Cloud Armor, NGINX; Apache/Tomcat/JBoss-based web applications and services (REST); observability stacks (e.g., ELK, Grafana Labs)
- Hands-on experience with Kubernetes (GKE, AKS)
- Hands-on experience with Jenkins
- GitOps experience is a plus
- Experience working with large enterprise-grade SaaS products
- Proven capability for critical thinking, problem solving, and the patience to see hard problems through to the end

Posted 5 days ago

Apply

2.0 - 6.0 years

6 - 9 Lacs

mumbai

Work from Office

About The Role
Cloud/Edge Engineer - GKE Specialist
Location:
Experience: 8-12 years

Choosing Capgemini means choosing a place where you'll be empowered to shape your career, supported by a collaborative global community, and inspired to reimagine what's possible. Join us in helping leading organizations unlock the value of cloud and edge technologies to drive scalable, intelligent, and sustainable digital transformation.

Your Role
As a Cloud/Edge Engineer specializing in Google Kubernetes Engine (GKE), you will play a key role in architecting and deploying scalable edge and cloud-native solutions across distributed environments. You will work on cutting-edge technologies like K3s, KubeEdge, and Kosmotron to enable seamless orchestration between edge clusters and centralized GCP infrastructure. In this role, you will:
- Architect and deploy Kubernetes-based edge clusters (K3s) across distributed environments such as retail or restaurant chains.
- Integrate shared storage solutions using Rook.io and manage persistent volumes across edge nodes.
- Implement Kosmotron for multi-cluster Kubernetes control-plane management.
- Extend Kubernetes capabilities to edge devices using KubeEdge for real-time device communication.
- Deploy and manage workloads on Google Cloud GKE for centralized analytics and cloud-native services.
- Leverage CloudCore for edge node coordination and data synchronization.

Your Profile
- 8-12 years of experience in cloud-native engineering, with a strong focus on GCP and Kubernetes.
- Deep expertise in GKE, Cloud Monitoring, Cloud Logging, and other GCP services.
- Proficiency in containerization technologies, including Docker and Kubernetes.
- Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, Helm, and Deployment Manager.
- Strong understanding of DevOps practices, including CI/CD, GitOps, automated testing, and release management.
- Solid grasp of cloud security principles, including IAM, VPC design, encryption, and vulnerability management.
- Programming proficiency in Python, Go, or Java for automation and cloud-native development.
- Experience working in distributed edge environments is a strong plus.

What You'll Love About Working Here
- Flexible work options and a remote-friendly culture to support work-life balance.
- A collaborative and inclusive environment that values innovation and continuous learning.
- Access to cutting-edge projects and certifications in cloud, edge, and DevOps technologies.
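Whatever the edge platform (GKE, K3s, KubeEdge), the workloads described above ship as standard Kubernetes objects. As a small, hypothetical sketch, the helper below builds a minimal apps/v1 Deployment manifest as a Python dict; the app name and image are made up, and the dict mirrors the YAML structure one-to-one, so it can be serialized and applied to any cluster.

```python
def deployment_manifest(name, image, replicas=3):
    """Build a minimal Kubernetes apps/v1 Deployment as a plain dict.

    The selector and the pod template labels must agree, or the API
    server rejects the object; generating both from one dict enforces it.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Generating manifests programmatically like this (or via Helm templates) is how per-site variations, such as replica counts per retail location, are kept out of hand-edited YAML.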

Posted 5 days ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

bengaluru

Work from Office

As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Comprehensive Feature Development and Issue Resolution: Work on end-to-end feature development and solve challenges faced in the implementation.
- Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand problems and issues with the product and features, and resolve them per the defined SLAs.
- Continuous Learning and Technology Integration: Be eager to learn new technologies and apply them in feature development.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Relevant experience: 5 years
- Proficient in Core Java (Java 17+), Spring Boot, Tomcat/JBoss/WebSphere, RDBMS (MySQL/Oracle), and developing RESTful APIs.
- Experienced in modernizing and migrating on-premise Java/J2EE applications to Google Cloud.
- Hands-on with Docker and Kubernetes (GKE) for containerizing and deploying workloads.
- Skilled in using GCP services like Cloud SQL, GCE, and GKE, and setting up CI/CD pipelines.
- Well-versed with discovery and intake tools such as Delivery Curator for cloud migration planning.

Preferred technical and professional experience: None

Posted 5 days ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

bengaluru

Work from Office

Field: Information Technology, Data Management, Data Analytics, Business, Supply Chain, Operations
Education: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field
Number of Years: 5+ years in relevant roles; Palantir experience/knowledge is a must
Other:
- At least five years of relevant project experience successfully launching, planning, and executing data science projects, including statistical analysis, data engineering, and data visualization
- Proven experience conducting statistical analysis and building models with advanced scripting languages
- Experience leading projects that apply ML and data science to business functions
- Specialization in text analytics, image recognition, graph analysis, or other specialized ML techniques, such as deep learning, is preferred

Skills:
- Fluency in multiple programming languages and statistical analysis tools such as Python, PySpark, C++, JavaScript, R, SAS, Excel, and SQL
- Knowledge of distributed data/computing tools such as MapReduce, Hadoop, Hive, or Kafka
- Knowledge of statistical and data mining techniques such as generalized linear models (GLM)/regression, random forests, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural networks (CNN), and recurrent neural networks (RNN)
- Strong understanding of AI, its potential roles in solving business problems, and the future trajectory of generative AI models
- Willingness and ability to learn new technologies on the job
- Ability to communicate complex projects, models, and results to a diverse audience with a wide range of understanding
- Ability to work in diverse, cross-functional teams in a dynamic business environment
- Superior presentation skills, including storytelling and other techniques to guide and inspire
- Familiarity with big data, versioning, and cloud technologies such as Apache Spark, Azure Data Lake Storage, Git, Jupyter Notebooks, Azure Machine Learning, and Azure Databricks
- Familiarity with data visualization tools (Power BI experience preferred)
- Knowledge of database systems and SQL
- Strong communication and collaboration abilities
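For the GLM family named in the skills list, a minimal sketch: logistic regression (a binomial GLM with logit link) fitted by plain gradient descent in pure Python. The toy data, learning rate, and step count are arbitrary choices for illustration; real work would use statsmodels, scikit-learn, or Spark ML.

```python
import math

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Fit y ~ sigmoid(w*x + b) on 1-D data by batch gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            grad_w += (p - y) * x / n                  # d(log-loss)/dw
            grad_b += (p - y) / n                      # d(log-loss)/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, x):
    """Predicted probability of the positive class."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

On linearly separable toy data the fitted weight is positive and the decision boundary sits near x = 0, so predictions on either side land on opposite sides of 0.5.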

Posted 6 days ago

Apply

0.0 - 5.0 years

0 - 0 Lacs

bengaluru

Work from Office

SUMMARY
Wissen Technology is Hiring for GCP Cloud Engineer

About Wissen Technology:
At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset, ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don't just meet expectations; we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact the first time, every time.

Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability. You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

Experience: 4-12 years
Location: Pune
Mode of Work: Full Time

Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts, Istio, and service meshes for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).

Requirements:
- 8+ years of experience in cloud engineering, with a strong focus on GCP.
- Hands-on experience with GKE, Compute Engine, IAM, VPC, Cloud Functions, and Cloud SQL.
- Solid expertise in Docker, Kubernetes networking, and Helm charts.
- Proficient with Azure DevOps for building automated CI/CD pipelines.
- Strong experience in Terraform and Infrastructure as Code (IaC).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Knowledge of cloud security, IAM, and compliance frameworks.
- Strong problem-solving skills and the ability to work independently in fast-paced environments.

Wissen Sites:
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
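Infrastructure-as-code work like the Terraform responsibilities above usually starts from a small resource block. As an illustrative sketch, the helper below renders a minimal `google_container_cluster` HCL snippet as a string; the resource type and argument names are standard Terraform google-provider names, but the cluster name and region are placeholders, and a real module would add node pools, networking, and provider configuration.

```python
def gke_cluster_hcl(name, location, node_count=3):
    """Render a minimal Terraform block for a GKE cluster as a string.

    Doubled braces produce literal { } in the f-string output.
    """
    return (
        f'resource "google_container_cluster" "{name}" {{\n'
        f'  name               = "{name}"\n'
        f'  location           = "{location}"\n'
        f'  initial_node_count = {node_count}\n'
        f'}}\n'
    )
```

Templating HCL from code like this is occasionally used for fleet generation, though for most cases plain Terraform modules with variables are the more idiomatic tool.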

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

bengaluru

Work from Office

Educational Requirements: Master of Engineering, MCA, MTech, BTech, BE, BCA, Bachelor of Engineering, Bachelor of Science, Master of Science
Service Line: Application Development and Maintenance

Responsibilities:
- Design and implement cloud-native solutions on Google Cloud Platform
- Deploy and manage infrastructure using Terraform, Cloud Deployment Manager, or similar IaC tools
- Manage GCP services such as Compute Engine, GKE (Kubernetes), Cloud Storage, Pub/Sub, Cloud Functions, BigQuery, etc.
- Optimize cloud performance, cost, and scalability
- Ensure security best practices and compliance across the GCP environment
- Monitor and troubleshoot issues using Stackdriver/Cloud Monitoring
- Collaborate with development, DevOps, and security teams
- Automate workflows and CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build

Additional Responsibilities:
- GCP professional certification (e.g., Professional Cloud Architect, Cloud Engineer)
- Experience with hybrid-cloud or multi-cloud architecture
- Exposure to other cloud platforms (AWS/Azure) is a plus
- Strong communication and teamwork skills

Technical and Professional Requirements:
- 3-5 years of hands-on experience with GCP
- Strong expertise in Terraform, GCP networking, and cloud security
- Proficient in container orchestration using Kubernetes (GKE)
- Experience with CI/CD, DevOps practices, and shell scripting or Python
- Good understanding of IAM, VPC, firewall rules, and service accounts
- Familiarity with monitoring/logging tools like Stackdriver or Prometheus
- Strong problem-solving and troubleshooting skills

Preferred Skills: .Net, Java, Python, Java->Springboot, Cloud Platform->Google Cloud Platform, Developer->GCP/Google Cloud
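To make the firewall-rule requirement above concrete, here is a hypothetical sketch of GCP-style priority evaluation: among matching rules, the one with the lowest priority number wins, and ingress falls back to deny when nothing matches. The rules are plain dicts with invented sample values; no real API is involved.

```python
def evaluate_ingress(rules, source_ip, port):
    """Pick the matching rule with the lowest priority number.

    Mirrors GCP VPC firewall semantics: a lower number means higher
    priority, and the implied default for ingress traffic is deny.
    """
    best = None
    for rule in rules:
        if source_ip in rule["sources"] and port in rule["ports"]:
            if best is None or rule["priority"] < best["priority"]:
                best = rule
    return best["action"] if best else "deny"

# Sample rule set: a broad deny at priority 1000, overridden for
# HTTPS by a more specific allow at priority 900.
rules = [
    {"priority": 1000, "action": "deny",  "sources": {"10.0.0.5"}, "ports": {443, 22}},
    {"priority": 900,  "action": "allow", "sources": {"10.0.0.5"}, "ports": {443}},
]
```

The same shape of check is useful in CI to assert that a proposed rule change does not accidentally open SSH, before Terraform applies it.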

Posted 1 week ago

Apply

10.0 - 15.0 years

20 - 35 Lacs

noida, delhi / ncr

Work from Office

SAP EHS Solution Architect
Location: Noida, India
Work Mode: Work From Office, 4 days a week
Experience: 15+ years
Immediate joiners or short-notice joiners preferred.

Key points for your reference:
- F2F interview required for 2 rounds.
- This is a new greenfield implementation project with a scope of 4+ years.
- The resource should be based out of Noida and ready to work from the office 4 days a week.
- Platform: RISE on SAP.

Role & Responsibilities
- 15+ years of experience in SAP EHS and related modules.
- Proven track record in implementing MSDS, GLM, and SVT.
- Hands-on experience with SAP S/4HANA Cloud for Environment Management.
- Strong understanding of GHG emissions tracking and sustainability footprint management.
- Excellent communication and stakeholder management skills.

Preferred Qualifications
- SAP EHS certification(s)
- Experience in global rollouts and multi-country implementations
- Knowledge of regulatory compliance standards

Posted 1 week ago

Apply

4.0 - 7.0 years

4 - 8 Lacs

pune

Work from Office

1. Education: B.E./B.Tech/MCA in Computer Science

2. Experience: 3 to 7 years of experience in Kubernetes/GKE/AKS/OpenShift administration

3. Mandatory Skills (Docker and Kubernetes):
- Good understanding of the components of various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Provisioning experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Upgrade and monitoring experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Good experience with container security
- Good experience with container storage
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins)
- Good experience with, or knowledge of, cloud platforms, preferably Azure/Google/OpenStack
- Good experience with container runtimes like Docker/containerd
- Basic understanding of application lifecycle management on container platforms
- Good understanding of container registries
- Good understanding of Helm and Helm charts
- Good understanding of container monitoring tools like Prometheus, Grafana, and ELK
- Good experience with the Linux operating system
- Basic understanding of enterprise networks and container networks
- Able to handle Severity 2 and Severity 3 incidents
- Good communication skills and the capability to provide support
- Analytical and problem-solving capabilities; ability to work with teams
- Experience with a 24x7 operations support framework
- Knowledge of the ITIL process

4. Preferred Skills/Knowledge: container platforms (Docker, Kubernetes, GKE, AKS, or OpenShift); automation platforms (shell scripts, Ansible, Jenkins); cloud platforms (GCP/Azure/OpenStack); operating systems (Linux/CentOS/Ubuntu); container storage and backup

5. Desired Skills: Certified Kubernetes Administrator OR Certified Red Hat OpenShift Administrator; certification in administration of any cloud platform is an added advantage

6. Soft Skills: good troubleshooting skills; ready to learn new technologies and acquire new skills; a team player; good spoken and written English

Posted 1 week ago

Apply

3.0 - 8.0 years

15 - 20 Lacs

chennai

Work from Office

Your Role
- Architect and implement Apigee Hybrid on Kubernetes (GKE or other platforms).
- Automate infrastructure provisioning using Terraform, Helm, and CI/CD pipelines.
- Monitor and optimize API gateway performance and reliability.
- Collaborate with software engineers and SREs to integrate APIs with microservices.
- Ensure compliance with security and governance standards.
- Troubleshoot and resolve issues across the API lifecycle.

Your Profile
- 3+ years of experience with Kubernetes (GKE preferred).
- Hands-on experience with Apigee (Hybrid or Edge).
- Proficiency in Terraform, Helm, and container orchestration.
- Strong understanding of RESTful APIs, OAuth, and API security.

What You'll Love About Working Here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
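The API-security skills listed above (OAuth, token handling) ultimately come down to verifying signatures at the gateway. As an illustrative sketch using only the standard library, this signs and verifies an HS256-style HMAC over a payload with a constant-time comparison; it is not a full JWT or OAuth implementation, and the payload and secret are made-up examples.

```python
import base64
import hashlib
import hmac

def sign(payload: bytes, secret: bytes) -> str:
    """HMAC-SHA256 the payload; return a URL-safe base64 signature."""
    digest = hmac.new(secret, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def verify(payload: bytes, signature: str, secret: bytes) -> bool:
    """Constant-time check that the signature matches the payload."""
    return hmac.compare_digest(sign(payload, secret), signature)
```

The `hmac.compare_digest` call matters: a naive `==` can leak timing information about how many leading bytes matched, which is exactly the class of issue an API gateway like Apigee guards against.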

Posted 1 week ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

bengaluru

Work from Office

Experience: 5+ Years Category: GCP+GKE Main location: Bangalore / Chennai / Hyderabad / Pune / Mumbai Position ID: J0425-1242 Employment Type: Full Time Job Description: We are seeking a skilled and proactive Google Cloud Engineer with strong DevOps experience and hands-on expertise in Google Kubernetes Engine (GKE) to design, implement, and manage cloud-native infrastructure. You will play a key role in automating deployments, maintaining scalable systems, and ensuring the availability and performance of our cloud services on Google Cloud Platform (GCP). Key Responsibilities and Required Skills 5+ years of experience in DevOps / Cloud Engineering roles. Design and manage cloud infrastructure using Google Cloud services such as Compute Engine, Cloud Storage, VPC, IAM, Cloud SQL, GKE, and more. Proficient in writing Infrastructure-as-Code using Terraform, Deployment Manager, or similar tools. Automate CI/CD pipelines using tools like Cloud Build, Jenkins, GitHub Actions, etc. Manage and optimize Kubernetes clusters for high availability, performance, and security. Collaborate with developers to containerize applications and streamline their deployment. Monitor cloud environments and troubleshoot performance, availability, or security issues. Implement best practices for cloud governance, security, cost management, and compliance. Participate in cloud migration and modernization projects. Ensure system reliability and high availability through redundancy, backup strategies, and proactive monitoring. Contribute to cost optimization and cloud governance practices. Strong hands-on experience with core GCP services including Compute, Networking, IAM, Storage, and optionally Kubernetes (GKE). Proven expertise in Kubernetes (GKE): managing clusters, deployments, services, autoscaling, etc. Experience configuring Kubernetes resources (Deployments, Services, Ingress, Helm charts, etc.) to support application lifecycles. Solid scripting knowledge (e.g., Python, Bash, Go).
Familiarity with GitOps and deployment tools like ArgoCD, Helm. Experience with CI/CD tools and setting up automated deployment pipelines. Should have Google Cloud certifications (e.g., Professional Cloud DevOps Engineer, Cloud Architect, or Cloud Engineer). Behavioural Competencies : Proven experience of delivering process efficiencies and improvements Clear and fluent English (both verbal and written) Ability to build and maintain efficient working relationships with remote teams Demonstrate ability to take ownership of and accountability for relevant products and services Ability to plan, prioritise and complete your own work, whilst remaining a team player Willingness to engage with and work in other technologies Your future duties and responsibilities Required qualifications to be successful in this role Skills: DevOps Google Cloud Platform Kubernetes Terraform Helm

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 16 Lacs

pune

Work from Office

Job Description: Job Title Lead Engineer Location Pune Corporate Title Director As a lead engineer within the Transaction Monitoring department, you will lead and drive forward critical engineering initiatives and improvements to our application landscape whilst supporting and leading the engineering teams to excel in their roles. You will be closely aligned to the architecture function and delivery leads, ensuring alignment with planning and that correct design and architecture governance is followed for all implementation work. You will lead by example and drive and contribute to automation and innovation initiatives with the engineering teams. Join the fight against financial crime with us! Your key responsibilities Experienced hands-on cloud and on-premise engineer, leading by example with engineering squads Thinking analytically, with a systematic and logical approach to solving complex problems, and high attention to detail Design & document complex technical solutions at varying levels in an inclusive and participatory manner with a range of stakeholders Liaise directly with senior stakeholders in technology, business and modelling areas Collaborate with application development teams to design and prototype solutions (both on-premises and on-cloud), supporting / presenting these via the Design Authority forum for approval and providing good practice and guidelines to the teams Ensure engineering & architecture compliance with bank-standard processes for deploying new applications, working directly with central functions such as Group Architecture, Chief Security Office and Data Governance Innovate and think creatively, showing willingness to apply new approaches to solving problems and to learn new methods, technologies and potentially outside-of-the-box solutions Your skills and experience Proven hands-on engineering and design experience in a delivery-focused (preferably agile) environment Solid technical/engineering background, preferably with at least
two high-level languages and multiple relational databases or big-data technologies Proven experience with cloud technologies, preferably GCP (GKE / DataProc / CloudSQL / BigQuery), GitHub & Terraform Competence / expertise in technical skills across a wide range of technology platforms and the ability to use and learn new frameworks, libraries and technologies A deep understanding of the software development life cycle and the waterfall and agile methodologies Experience leading complex engineering initiatives and engineering teams Excellent communication skills, with demonstrable ability to interface and converse at both junior and senior level and with non-IT staff Line management experience, including working in a matrix management configuration

Posted 1 week ago

Apply

8.0 - 13.0 years

10 - 14 Lacs

hyderabad

Work from Office

About The Role Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : SAP S/4HANA for Product Compliance Good to have skills : NA Minimum 12 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As an Application Lead, you will lead end-to-end SAP EHS Global Label Management (GLM) implementations within S/4HANA Product Compliance projects. You'll be responsible for project delivery, stakeholder engagement, and ensuring regulatory alignment. Roles & Responsibilities: - Manage full-cycle implementation of SAP GLM - Define labelling strategies and oversee WWI template delivery - Coordinate cross-functional teams across product safety, compliance, and regulatory domains - Collaborate with clients and business users to gather requirements and translate them into effective EHS solutions. - Configure and maintain the SAP EHS Product Safety module, including specifications, phrase management, and data architecture. - Design and validate WWI report templates and guide ABAP developers with symbol logic, layout, and enhancements. - Implement and support SAP GLM (Global Label Management), including label determination, print control, and output conditions. Professional & Technical Skills: - Must Have Skills: Proficiency in SAP EH&S GLM; end-to-end implementation experience. - Deep expertise in SAP GLM, label determination logic, and print control setup - Strong knowledge of S/4HANA Product Compliance architecture - Excellent communication and team management skills - 8+ years in SAP EHS with 2+ full-cycle GLM implementations Additional Information: - This position is based at our Hyderabad office. - A 15 years full time education is required. Qualification 15 years full time education

Posted 1 week ago

Apply

4.0 - 8.0 years

7 - 11 Lacs

chennai

Hybrid

Profile Requirement: Good knowledge of Terraform. As the client is moving to ISO 27001, they also want to uplift their security-related areas. Deploying applications (Docker, Kubernetes, debugging, open-source). Monitoring and operations (dashboards, DR, design for zero downtime). Experience with multiple deployments at huge scale. Should have hands-on experience in Azure. Should have hands-on experience building pipelines in Jenkins / Azure DevOps; the candidate should work on an end-to-end DevOps implementation. Should have good hands-on experience with Kubernetes/AKS/GKE/EKS. Should have good hands-on experience with infrastructure monitoring and log management. Should have hands-on experience in scripting languages like Ansible, Python, Shell, PowerShell. Should have working experience with Agile aspects of software development pipelines.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

pune

Work from Office

Required Skills: Strong proficiency in JavaScript/TypeScript and Node.js. Understanding and practical experience with Google Cloud Platform (GCP) services. Experience with front-end frameworks (React / Vue.js). Knowledge of databases (SQL/NoSQL). Familiarity with CI/CD tools, containerization (Docker), and orchestration (Kubernetes/GKE). Understanding of application architecture, including microservices and serverless concepts. Strong analytical, problem-solving, and communication skills. Relevant GCP certifications (e.g., Professional Cloud Developer) are often preferred.

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

bengaluru

Work from Office

About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 43,000 people worldwide. Additional information can be found at: www.chubb.com. About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Position Details: Function/Department : Advanced Analytics Location : Bangalore, India Employment Type: Full-time Role Overview Senior Data Scientist (Loss Cost modelling) We are seeking a highly motivated and talented Data Scientist to join our team and work on COG Accident & Health Actuarial Projects. 
The successful candidate will be responsible for loss cost modeling, streamlining data management processes, revenue maximization, and various AI/ML projects. The candidate will also work on other automation projects as required. As a member of our team, you will have the opportunity to work independently as well as collaborate with other team members and gain valuable experience in the industry. Responsibilities: Collaborate with business partners and peers in the organization to understand and scope the problem, gather business requirements, and plan project tasks and timelines. Build data management tools (ETL processes for modeling and dashboards, automating data files, etc.) to improve efficiency in portfolio management. Employ GLM, ML, or other statistical methods to improve rating sophistication across all lines of business; identify and implement new rating factors in actuarial pricing. Collaborate with actuarial and underwriting counterparts to develop and peer review pricing recommendations. Provide models and algorithms for price optimization initiatives to support topline growth. Support actuarial pricing frameworks that can be leveraged easily for multiple product lines and multiple regions. Focus on system connectivity, automation of core components, code reproducibility, and creating generalizable technology assets for the team. Support regional and home office initiatives. Provide training and support for other A&H data scientists and actuaries as necessary, acting as an internal resource for data analysis and actuarial work across regions. Skills and Qualifications: Knowledge in statistical analysis and multivariate procedures, including Regression, Classification, Clustering, Dimensionality Reduction, Ensemble Methods, GLMs, GBM, Decision Trees, Regularization, and Kernelling. Programming experience in Python, including experience using Python's libraries such as NumPy and Pandas.
The ability to write SQL, as well as a strong understanding of relational databases and approaches to extracting data. The ability to write pipelines to extract, load, and transform data from various data sources. Understanding of the connection of technical tools and applications such as Databricks and Power BI. Proficiency in Microsoft Suite applications. Knowledge of the Property and Casualty insurance industry in general. Knowledge of RADAR is a plus. Knowledge of A&H products, underwriting techniques, actuarial pricing, and GAAP Accounting is a plus. Benefits of being part of the COG A&H team: You will work on COG Accident & Health Actuarial Projects with a dynamic squad made up of data scientists, actuaries, underwriters and data engineers. You will make contributions to projects that directly impact business top and bottom lines. You will receive guidance and mentorship from experts in the A&H product line. You will gain product knowledge and experience to supplement and strengthen your analytic skills.
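To make the modelling requirement above concrete: a claim-frequency GLM of the kind the listing mentions is a Poisson regression with a log link and an exposure offset. The sketch below fits one by hand with IRLS (Fisher scoring) on simulated data. The rating factor, base rate, and relativity are invented for illustration; this is not an actual Chubb pricing model.

```python
import numpy as np

# Simulate hypothetical policy data: one binary rating factor, exposure offset.
rng = np.random.default_rng(42)
n = 2000
x = rng.integers(0, 2, n).astype(float)            # invented binary rating factor
exposure = rng.uniform(0.5, 1.0, n)                # fraction of a policy-year
true_beta = np.array([np.log(0.10), np.log(2.0)])  # base rate 0.10, relativity 2.0
mu_true = np.exp(true_beta[0] + true_beta[1] * x) * exposure
claims = rng.poisson(mu_true)

# Poisson GLM with log link, fitted by IRLS (Fisher scoring).
X = np.column_stack([np.ones(n), x])
offset = np.log(exposure)
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta + offset
    mu = np.exp(eta)                          # mean under the log link
    W = mu                                    # Poisson: Var(Y) = mu, so weight = mu
    z = eta - offset + (claims - mu) / mu     # working response (offset removed)
    XtW = X.T * W                             # X^T diag(W)
    beta = np.linalg.solve(XtW @ X, XtW @ z)

base_rate = np.exp(beta[0])   # fitted base claim frequency
relativity = np.exp(beta[1])  # fitted multiplicative rating factor
print(base_rate, relativity)
```

In practice a library such as statsmodels would fit this directly; the point is the structure: log link, exposure entering as an offset, and exponentiated coefficients read as multiplicative rating relativities.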

Posted 2 weeks ago

Apply

7.0 - 12.0 years

37 - 45 Lacs

pune

Work from Office

Job Title- Corporate Bank Technology Commercial Banking Senior Data Engineer, AVP Location- Pune, India Role Description Responsible for providing fast and reliable data solutions for warehousing, reporting, and customer and business intelligence solutions. Loading data from various systems of record into our platform and making it available for further use. Automate deployment and test processes to deliver fast, incremental improvements of our application and platform. Implement data governance and protection to adhere to regulatory requirements and policies. Transform and combine data into a data model which supports our data analysts or can easily be consumed by operational databases. Keep hygiene, risk and control, and stability at the core of every delivery. Be a role model for the team. Work in an agile setup, helping with feedback to improve our way of working. Commercial Banking Tribe You'll be joining the Commercial Bank Tribe, which focuses on the special needs of small and medium enterprise clients in Germany, a designated area for further growth and investment within Corporate Bank. We are responsible for the digital transformation of ~800,000 clients across 3 brands, i.e. the establishment of the BizBanking platform, including the development of digital sales and service processes as well as the automation of processes for this client segment. Our tribe is on a journey of extensive digitalisation of business processes and migration of our applications to the cloud. We are working jointly with our business colleagues in an agile setup and collaborating closely with stakeholders and engineers from other areas, striving to achieve a highly automated and adaptable process and application landscape. Your key responsibilities Design, develop, and deploy data processing pipelines and data-driven applications on GCP Write and maintain SQL queries and use data modeling tools like Dataform or dbt for data management.
Write clean, maintainable code in Java and/or Python, adhering to clean code principles. Apply concepts of deployments and configurations in GKE/OpenShift, and implement infrastructure as code using Terraform. Set up and maintain CI/CD pipelines using GitHub Actions, write and maintain unit and integration tests. Your skills and experience Bachelor's degree in Computer Science, Data Science, or related field, or equivalent work experience. Proven experience as a Data Engineer or Backend Engineer or similar role. Strong experience with Cloud, Terraform, and GitHub Actions. Proficiency in SQL and Java and/or Python, experience with tools and frameworks like Apache Beam, Spring Boot and Apache Airflow. Familiarity with data modeling tools like Dataform or dbt, and experience writing unit and integration tests. Understanding of clean code principles and commitment to writing maintainable code. Excellent problem-solving skills, attention to detail, and strong communication skills.
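In the clean-code spirit this role describes, pipeline transforms are easiest to unit-test when written as small pure functions over typed records. A minimal, hypothetical Python sketch; the record shape and field names are invented for illustration, not the bank's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type; fields are illustrative only.
@dataclass(frozen=True)
class Transaction:
    booking_date: date
    amount_cents: int
    currency: str

def total_by_currency(transactions):
    """Pure, easily unit-testable aggregation step of a pipeline."""
    totals = {}
    for tx in transactions:
        totals[tx.currency] = totals.get(tx.currency, 0) + tx.amount_cents
    return totals

txs = [
    Transaction(date(2024, 1, 2), 1500, "EUR"),
    Transaction(date(2024, 1, 3), -500, "EUR"),
    Transaction(date(2024, 1, 3), 2000, "USD"),
]
print(total_by_currency(txs))  # {'EUR': 1000, 'USD': 2000}
```

Because the aggregation does no I/O, a unit test is a one-line assertion, and the same function can later be wrapped in an Apache Beam or Airflow step unchanged.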

Posted 2 weeks ago

Apply

10.0 - 16.0 years

15 - 35 Lacs

noida, uttar pradesh, india

On-site

About us - Coders Brain is a global leader in its services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers. We achieved our success because of how successfully we integrate with our clients. Quick Implementation - We offer quick implementation for newly onboarding clients. Experienced Team - We've built an elite and diverse team that brings its unique blend of talent, expertise, and experience to make you more successful, ensuring our services are uniquely customized to your specific needs. One Stop Solution - Coders Brain provides end-to-end solutions for businesses at an affordable price with uninterrupted and effortless services. Ease of Use - All of our products are user-friendly and scalable across multiple platforms. Our dedicated team at Coders Brain implements with the interests of the enterprise and users in mind. Secure - We understand and treat your security with the utmost importance. Hence we blend security and scalability in our implementations, considering the long-term impact on business benefit. Exp - 10+ Yrs Role - SAP EHS Solution Architect Location - Noida Permanent - Codersbrain Technology Pvt Ltd Client - Coforge Job Description 15+ years of experience in SAP EHS and related modules. Proven track record in implementing MSDS, GLM, SVT. Hands-on experience with SAP S/4HANA Cloud for Environment Management. Strong understanding of GHG emissions tracking and sustainability footprint management. Excellent communication and stakeholder management skills.
Preferred Qualifications SAP EHS Certification(s) Experience in global rollouts and multi-country implementations Knowledge of regulatory compliance standards If you're interested, please share the below-mentioned details:
- Current CTC:
- Expected CTC:
- Current Company:
- Notice Period:
- Current Location:
- Preferred Location:
- Total experience:
- Relevant experience:
- Highest qualification:
- DOJ (if offer in hand from another company)

Posted 2 weeks ago

Apply

10.0 - 14.0 years

20 - 30 Lacs

hyderabad

Work from Office

Job Title: SAP EHS + GLM Consultant / Senior Consultant Experience: 10+ Years Location: Hyderabad - Hybrid Job Description We are seeking an experienced SAP EH&S + Global Label Management (GLM) Consultant with strong expertise in end-to-end implementations, global support, and configuration. The ideal candidate should have deep knowledge of SAP EH&S modules, label management, and regulatory compliance within the chemical/manufacturing industries. Key Responsibilities: - Manage SAP EH&S and Global Label Management (GLM) solutions. Configure systems and install WWI generation servers. Resolve issues in Product Safety, Dangerous Goods Management, SVT, Global Label Management, and WWI servers. Handle Specification Management, Dangerous Goods, Material Master data, Phrases, the Report Information System, Expert Rules, and EHS Data Migration tools. Support and configure SAP EHS reports (SDS, Report Shipping Order, Inbound Documents, Label Authoring). Manage label printing (transaction CBGL_MP01) and printer setup. Implement modifications as per legal and business requirements. Ensure compliance with label templates and legal and business requirements. Additional Skills Expertise in WWI technology (MS Word-based) for designing labels & SDS. Strong ABAP / ABAP OO coding skills. Experience with interfacing systems and cross-module processes (Sales, Logistics, Production). Experience in AS-IS & TO-BE documentation, gap analysis, FS/TS preparation, and testing (Unit, Integration, UAT). At least 2 full life-cycle implementations in SAP EHS & GLM plus support experience. Professional Experience: 10+ years of overall SAP consulting experience. Bachelor's or Master's degree with a background in Chemical / Manufacturing / Environmental Safety. Knowledge of Regulatory Requirements, SAP REACH Compliance, and Federal & Regional Standards for chemical compliance. Proven experience in ticket handling, SLA-based support, and enhancements using tools like ServiceNow and ALM.
Strong problem-solving ability with business-to-IT solution mapping. Excellent communication skills with the ability to work in a global, culturally diverse environment. Passionate about SAP, with a continuous-learning mindset.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

9 - 13 Lacs

hyderabad, pune

Work from Office

Key Responsibilities:
1. Cloud Infrastructure Management:
- Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
- Implement best practices for GCP IAM, VPCs, Cloud Storage, onboarding of tools such as ClickHouse and Apache Superset, and other GCP services.
2. Kubernetes and Containerization:
- Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
- Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines:
- Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
- Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
- Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
- Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
- Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
- Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
- Monitor and optimize GCP resource usage to ensure cost efficiency.
- Implement strategies to reduce cloud spend without compromising performance.

Required Skills and Qualifications:
1. Certifications:
- Must hold a Google Cloud Professional DevOps Engineer or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise:
- Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools:
- Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build.
- Experience with containerization tools like Docker.
4. Kubernetes Expertise:
- In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets.
- Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting:
- Strong scripting skills in Python, Bash, or Go.
- Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging:
- Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking:
- Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration abilities.
- Ability to work in an agile, fast-paced environment.
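As an example of the "programming and scripting" expectation in roles like the one above, here is a minimal Python sketch that flags pods not in the Running phase from `kubectl get pods -o json`-style output. The payload is a hand-written sample (real kubectl output carries many more fields), so this is illustrative only:

```python
import json

# Hand-written sample in the shape of `kubectl get pods -o json` output.
# Pod phases here are real Kubernetes ones: Pending/Running/Succeeded/Failed/Unknown.
sample = json.dumps({
    "items": [
        {"metadata": {"name": "web-1"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "web-2"}, "status": {"phase": "Failed"}},
        {"metadata": {"name": "job-9"}, "status": {"phase": "Pending"}},
    ]
})

def unhealthy_pods(kubectl_json: str) -> list[str]:
    """Return names of pods whose phase is anything other than Running."""
    pods = json.loads(kubectl_json)["items"]
    return [p["metadata"]["name"]
            for p in pods
            if p["status"].get("phase") != "Running"]

print(unhealthy_pods(sample))  # ['web-2', 'job-9']
```

In a live cluster the JSON would come from invoking kubectl via `subprocess` or from the official Kubernetes Python client; the parsing logic stays the same either way.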

Posted 2 weeks ago

Apply