84 GLM Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 6.0 years

27 - 42 Lacs

Chennai

Work from Office

Skill: AKS, Istio service mesh. Shift timing: Afternoon shift. Location: Chennai, Kolkata, Bangalore. Excellent AKS, GKE, or Kubernetes administration experience. Good troubleshooting experience with Istio service mesh and connectivity issues. Experience with GitHub Actions or a similar CI/CD tool to build pipelines. Working experience on any cloud, preferably Azure or Google Cloud, with good networking knowledge. Experience in Python or shell scripting. Experience building dashboards and configuring alerts using Prometheus and Grafana.
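
For illustration of the Prometheus/Grafana alerting skill mentioned above, here is a minimal Python sketch that queries a Prometheus server's HTTP API for the Istio 5xx request rate; the endpoint URL, metric labels, and threshold are assumptions, not details from the posting.

    import requests

    # Hypothetical Prometheus endpoint and Istio standard metric; adjust for a real cluster.
    PROM_URL = "http://prometheus.monitoring.svc:9090"
    QUERY = 'sum(rate(istio_requests_total{response_code=~"5.."}[5m]))'

    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()

    # The API returns an instant vector; each sample carries (timestamp, value).
    for sample in resp.json()["data"]["result"]:
        error_rate = float(sample["value"][1])
        print(f"istio 5xx rate: {error_rate:.2f} req/s")
        if error_rate > 1.0:  # assumed alert threshold
            print("would raise an alert here (e.g. via Alertmanager)")

The same query string could equally back a Grafana panel or a Prometheus alerting rule.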

Posted 2 weeks ago

Apply

6.0 - 8.0 years

35 - 50 Lacs

Chennai

Work from Office

Skill: AKS, Istio service mesh. Shift timing: Afternoon shift. Location: Chennai, Kolkata, Bangalore. Excellent AKS, GKE, or Kubernetes administration experience. Good troubleshooting experience with Istio service mesh and connectivity issues. Experience with GitHub Actions or a similar CI/CD tool to build pipelines. Working experience on any cloud, preferably Azure or Google Cloud, with good networking knowledge. Experience in Python or shell scripting. Experience building dashboards and configuring alerts using Prometheus and Grafana.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Cloud Infrastructure, Red Hat OpenShift, Kubernetes. Good-to-have skills: NA. Minimum 3 years of experience required. Educational Qualification: 15 years of full-time education.
Summary: You will deploy and manage Kubernetes clusters in Azure and GCP. You will act as a software detective, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot problems, analyzing system performance, and ensuring the smooth operation of applications. You will engage with stakeholders to understand their needs and provide timely solutions, all while maintaining a focus on enhancing system reliability and user satisfaction.
Roles & Responsibilities: 1. Hands-on experience managing Kubernetes clusters in the cloud (Azure/GCP), with the ability to upgrade Kubernetes and cluster versions as required. 2. Troubleshoot and fix issues related to pod deployment and connectivity. 3. Configure the cluster with a monitoring solution and manage it using security best practices. 4. Manage storage for pods and configure CI/CD tools for application deployment to the container cluster. 5. Experience with other container-management tools such as OpenShift and Rancher is a plus.
Professional & Technical Skills: 4+ years of hands-on experience in Kubernetes management (AKS/GKE/OpenShift). Strong knowledge of container management in cloud managed services (AKS/EKS/GKE). Good understanding of microservice architecture, with the ability to build and scale container clusters. Hands-on experience with Kubernetes and container cluster upgrades. Ability to troubleshoot and fix container and cluster service issues. Experience integrating with network, storage, monitoring, and firewall components. Configuration of Helm charts, ConfigMaps, automatic rollouts, and service discovery. Knowledge of application deployments to container clusters using CI/CD tools. Experience with container monitoring tools, access management, and security best practices. Hands-on experience with AKS, GKE, and OpenShift. Tools: Kubernetes Dashboard, Kube-monkey, Kube-hunter, Project Quay, Kube-burner, Kube-bench. Certifications: Kubernetes and OpenShift. Bridge the relationship between offshore and onshore/client/stakeholder/third-party vendor support teams. Maintain client confidence by adhering to SLAs and deliverables. Maximize the contribution of offshore teams for better, more effective support. Build a good understanding of client processes, architecture, and execution needs. Provide quick responses, timely follow-up, and ownership until closure.
Additional Information: The candidate should have a minimum of 3 years of experience with Kubernetes. High personal drive; results oriented; makes things happen. Excellent communication and interpersonal skills. Effective at building close working relationships with others. Innovative, creative, and adaptable to new environments. Strong analytical and teamwork skills. Good attitude toward learning and development. A 15-year full-time education is required.
Qualification: 15 years of full-time education
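
As a rough sketch of the kind of pod-level troubleshooting described above, the snippet below uses the official Kubernetes Python client to list pods that are not in a healthy phase; access to the AKS/GKE cluster via a local kubeconfig is an assumption.

    from kubernetes import client, config

    # Assumes a kubeconfig with access to the cluster; inside a pod,
    # config.load_incluster_config() would be used instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")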

Posted 2 weeks ago

Apply

6.0 - 8.0 years

30 - 35 Lacs

Pune

Work from Office

Job Title: Senior Engineer. Location: Pune, India. Corporate Title: AVP.
Role Description: Investment Banking is a technology-centric business, with an increasing move to real-time processing and a growing appetite from customers for integrated systems and access to supporting data. This means technology is more important than ever for the business. The IB CARE Platform aims to increase the productivity of both Google Cloud and on-prem application development by providing a frictionless build and deployment platform that offers service and data reusability. The platform provides the chassis and standard components of an application, ensuring reliability, usability, and safety, and gives on-demand access to the services needed to build, host, and manage applications on the cloud or on-prem. In addition to technology services, the platform aims to have compliance baked in, enforcing controls and security, reducing application-team involvement in SDLC and ORR controls, and enabling teams to focus more on application development and faster release to production. We are looking for a platform engineer to join a global team working across all aspects of the platform, from GCP/on-prem infrastructure and application deployment through to the development of CARE-based services. Deutsche Bank is one of the few banks with the scale and network to compete aggressively in this space, and the breadth of investment in this area is unmatched by our peers. Joining the team is a unique opportunity to help build a platform to support some of our most mission-critical processing systems.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for employees aged 35 and above.
Your key responsibilities: As a CARE platform engineer you will work across the board on activities to build and support the platform and to liaise with tenants. Key responsibility areas: manage and monitor cloud computing systems and provide technical support to ensure the systems' efficiency and security; work with platform leads and platform engineers at a technical level; liaise with tenants regarding onboarding and provide platform expertise; contribute to the platform offering as part of sprint deliverables; support the production platform as part of the wider team.
Your skills and experience: Understanding of GCP and services such as GKE, IAM, identity services, and Cloud SQL. Kubernetes/service mesh configuration. Experience with IaC tooling such as Terraform. Proficient in SDLC/DevOps best practices. GitHub experience, including Git workflow. Exposure to modern deployment tooling such as ArgoCD is desirable. Programming experience (such as Java/Python) is desirable. A strong team player comfortable in a cross-cultural and diverse operating environment. Results oriented, with the ability to deliver under tight timelines. Ability to successfully resolve conflicts in a globally matrixed organization. Excellent communication and collaboration skills. Must be comfortable navigating ambiguity to extract meaningful risk insights.
How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Pune, Bengaluru

Work from Office

We are seeking a talented and experienced Kafka Architect with migration experience to Google Cloud Platform (GCP) to join our team. As a Kafka Architect, you will be responsible for designing, implementing, and managing our Kafka infrastructure to support our data processing and messaging needs, while also leading the migration of our Kafka ecosystem to GCP. You will work closely with our engineering and data teams to ensure seamless integration and optimal performance of Kafka on GCP. Responsibilities: Discovery, analysis, planning, design, and implementation of Kafka deployments on GKE, with a specific focus on migrating Kafka from AWS to GCP. Design, architect, and implement scalable, high-performance Kafka architectures and clusters to meet our data processing and messaging requirements. Lead the migration of our Kafka infrastructure from on-premises or other cloud platforms to Google Cloud Platform (GCP). Conduct thorough discovery and analysis of existing Kafka deployments on AWS. Develop and implement best practices for Kafka deployment, configuration, and monitoring on GCP. Develop a comprehensive migration strategy for moving Kafka from AWS to GCP. Collaborate with engineering and data teams to integrate Kafka into our existing systems and applications on GCP. Optimize Kafka performance and scalability on GCP to handle large volumes of data and high throughput. Plan and execute the migration, ensuring minimal downtime and data integrity. Test and validate the migrated Kafka environment to ensure it meets performance and reliability standards. Ensure Kafka security on GCP by implementing authentication, authorization, and encryption mechanisms. Troubleshoot and resolve issues related to Kafka infrastructure and applications on GCP. Ensure seamless data flow between Kafka and other data sources/sinks. Implement monitoring and alerting mechanisms to ensure the health and performance of Kafka clusters. Stay up to date with Kafka developments and GCP services to recommend and implement new features and improvements. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred). Proven experience as a Kafka Architect or similar role, with a minimum of [5] years of experience. Deep knowledge of Kafka internals and ecosystem, including Kafka Connect, Kafka Streams, and KSQL. In-depth knowledge of Apache Kafka architecture, internals, and ecosystem components. Proficiency in scripting and automation for Kafka management and migration. Hands-on experience with Kafka administration, including cluster setup, configuration, and tuning. Proficiency in Kafka APIs, including Producer, Consumer, Streams, and Connect. Strong programming skills in Java, Scala, or Python. Experience with Kafka monitoring and management tools such as Confluent Control Center, Kafka Manager, or similar. Solid understanding of distributed systems, data pipelines, and stream processing. Experience leading migration projects to Google Cloud Platform (GCP), including migrating Kafka workloads. Familiarity with GCP services such as Google Kubernetes Engine (GKE), Google Cloud Storage, Google Cloud Pub/Sub, and BigQuery. Excellent communication and collaboration skills. Ability to work independently and manage multiple tasks in a fast-paced environment.
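
To make the Kafka skill set above concrete, here is a minimal Python producer sketch using the kafka-python library; the broker address and topic name are assumptions for illustration only, not details from the posting.

    import json
    from kafka import KafkaProducer

    # Hypothetical broker and topic; a GKE-hosted cluster would typically be reached
    # through an internal load balancer or bootstrap service.
    producer = KafkaProducer(
        bootstrap_servers=["kafka-bootstrap.internal:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",  # wait for full acknowledgement for durability
    )

    producer.send("orders", {"order_id": 42, "status": "CREATED"})
    producer.flush()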

Posted 2 weeks ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Rajkot

Work from Office

Technical Requirements: Excellent understanding of Linux commands. Thorough knowledge of CI/CD pipelines, automation, and debugging, particularly with Jenkins. Intermediate to advanced understanding of Docker and container orchestration platforms. Hands-on experience with web servers (Apache, Nginx), database servers (MongoDB, MySQL, PostgreSQL), and application servers (PHP, Node.js). Knowledge of proxies and reverse proxies is required. Good understanding and hands-on experience with site reliability tools such as Prometheus, Grafana, New Relic, Datadog, and Splunk. (Hands-on experience with at least one tool is highly desirable.) Ability to identify and fix security vulnerabilities at the OS, database, and application levels. Knowledge of cloud platforms, specifically AWS and DigitalOcean, and their commonly used services. Other Requirements: Good communication skills. Out-of-the-box problem-solving capabilities, especially in the context of technology automation and application architecture reviews. Hands-on experience with GKE, AKS, EKS, or ECS is a plus. Excellent understanding of how to craft effective AI prompts to solve specific issues.
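
In the spirit of the site-reliability tooling listed above, here is a small Python sketch that probes service endpoints and reports status and latency; the URLs are hypothetical placeholders.

    import time
    import requests

    ENDPOINTS = {  # hypothetical services fronted by Nginx/Apache
        "api": "https://api.example.com/healthz",
        "web": "https://www.example.com/",
    }

    for name, url in ENDPOINTS.items():
        start = time.monotonic()
        try:
            r = requests.get(url, timeout=5)
            latency_ms = (time.monotonic() - start) * 1000
            print(f"{name}: HTTP {r.status_code} in {latency_ms:.0f} ms")
        except requests.RequestException as exc:
            print(f"{name}: DOWN ({exc})")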

Posted 2 weeks ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Hyderabad

Work from Office

DevOps + GCP, 5+ years. Location: Hyderabad. Essential experience: Proficient with CI/CD toolchains (e.g. Azure DevOps, Jenkins, Git, Artifactory, etc.). Proficient in one or more scripting languages for automation (e.g. Linux Bash, PowerShell, Python). Proficient in provisioning platforms via Infrastructure-as-Code (IaC) techniques (e.g. Terraform, YAML, Azure Resource Manager (ARM)). Working experience configuring, securing, and administering platforms in Azure; knowledge of cloud infrastructure and networking principles (e.g. Azure PaaS, IaaS). Demonstrable knowledge of working with distributed data platforms (e.g. Azure ADLS, data lakes). Experience working with vulnerability management and code-inspection tooling (e.g. Snyk, SonarQube). Possess an automation-first mindset when building solutions, with consideration for self-healing and fault-tolerant methods to minimize manual intervention and downtime.

Posted 3 weeks ago

Apply

4.0 - 7.0 years

11 - 15 Lacs

Chennai

Work from Office

As a Systems Integration Specialist, you will be responsible for executing integration, configuration, and validation of network and software systems based on customer and project requirements. You will collaborate with cross-functional teams to ensure seamless deployment, support troubleshooting efforts, and contribute to successful end-to-end solution delivery in both remote and on-site environments.
You have: 2-6 years of experience in the telecommunications domain, with a strong academic background in the field. Proficiency in Python, JavaScript, and Java, and a good understanding of Linux environments. Practical knowledge of CI/CD pipelines and Agile/SCRUM methodologies, and experience working in virtual teams. Experience with containerization and orchestration tools such as Kubernetes and Docker/containerd. Hands-on understanding of Netconf/YANG, microservices architecture, and related cloud infrastructure technologies (AWS, GKE, RedHat, Azure, Rancher). Strong problem-solving, analytical, and troubleshooting skills, along with effective communication in English.
It would be nice if you also have: Familiarity with Nokia FN products (e.g., SDAN, Home Wi-Fi) and GPON domain knowledge. Exposure to open-source tools and protocols like Keycloak, RabbitMQ, Kafka, MQTT/USP, and Index Search components.
Responsibilities: Execute integration, configuration, and testing tasks as per the defined implementation plan. Understand customer requirements and ensure alignment during integration activities. Perform troubleshooting and support issue resolution in coordination with internal teams. Collaborate with architects and solution teams to support detailed design and integration activities. Analyze data, evaluate solutions, and make informed decisions to resolve technical problems. Integrate network elements into systems either remotely or on-site, ensuring seamless operation. Follow organizational policies and provide inputs to translate technical concepts into actionable tasks. Offer informal guidance and support to new team members to ensure effective onboarding and task execution.
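
As a toy illustration of the Netconf/YANG skill mentioned above, here is a short Python sketch using the ncclient library to pull a device's running configuration; the device address and credentials are placeholders, and real deployments would normally use key-based authentication.

    from ncclient import manager

    # Placeholder device details for illustration only.
    with manager.connect(
        host="192.0.2.10",
        port=830,
        username="admin",
        password="admin",
        hostkey_verify=False,
    ) as m:
        running = m.get_config(source="running")
        print(running)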

Posted 3 weeks ago

Apply

8.0 - 12.0 years

35 - 60 Lacs

Pune

Work from Office

About the Role: We are seeking a skilled Site Reliability Engineer (SRE) / DevOps Engineer to join our infrastructure team. In this role, you will design, build, and maintain scalable infrastructure, CI/CD pipelines, and observability systems to ensure high availability, reliability, and security of our services. You will work cross-functionally with development, QA, and security teams to automate operations, reduce toil, and enforce best practices in cloud-native environments. Key Responsibilities: Design, implement, and manage cloud infrastructure (GCP/AWS/Azure) using Infrastructure as Code (Terraform). Maintain and improve CI/CD pipelines using tools like CircleCI, GitLab CI, or ArgoCD. Ensure high availability and performance of services using Kubernetes (GKE/EKS/AKS) and container orchestration. Implement monitoring, logging, and alerting using Prometheus, Grafana, ELK, or similar tools. Collaborate with developers to optimize application performance and deployment processes. Manage and automate security controls such as IAM, RBAC, network policies, and vulnerability scanning. Basic Qualifications: Strong knowledge of Linux. Experience with scripting languages such as Python, Bash, or Go. Experience with cloud platforms (GCP preferred; AWS or Azure acceptable). Proficient in Kubernetes operations, including Helm, operators, and service meshes. Experience with Infrastructure as Code (Terraform). Solid experience with CI/CD pipelines (GitLab CI, CircleCI, ArgoCD, or similar). Familiarity with monitoring and observability tools (Prometheus, Grafana, ELK, etc.). Knowledge of networking concepts (TCP/IP, DNS, load balancers, firewalls). Preferred Qualifications: Experience with advanced networking solutions. Familiarity with SRE principles such as SLOs, SLIs, and error budgets. Exposure to multi-cluster or hybrid-cloud environments. Knowledge of service meshes (Istio). Experience participating in incident management and postmortem processes.
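
Since the posting mentions SLOs, SLIs, and error budgets, here is a tiny Python sketch of the underlying arithmetic; the objective and request counts are made-up example numbers, not figures from the role.

    # Example numbers only: a 99.9% availability SLO over a 30-day window.
    SLO_TARGET = 0.999
    total_requests = 12_500_000
    failed_requests = 9_800

    availability = 1 - failed_requests / total_requests
    error_budget = 1 - SLO_TARGET                       # allowed failure ratio
    budget_consumed = (failed_requests / total_requests) / error_budget

    print(f"availability = {availability:.5f}")
    print(f"error budget consumed = {budget_consumed:.1%}")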

Posted 3 weeks ago

Apply

9.0 - 13.0 years

8 - 13 Lacs

Hyderabad

Work from Office

Experience: 8+ years. Location: Knowledge City, Hyderabad. Work model: Hybrid, regular work hours. Number of rounds: 1 internal technical round and 2 client rounds.
About you: The GCP CloudOps Engineer is accountable for continuous, repeatable, secure, and automated deployment, integration, and test solutions utilizing Infrastructure as Code (IaC) and DevSecOps techniques. - 8+ years of hands-on experience in infrastructure design, implementation, and delivery. - 3+ years of hands-on experience with monitoring tools (Datadog, New Relic, or Splunk). - 4+ years of hands-on experience with container orchestration services, including Docker or Kubernetes and GKE. - Experience working across time zones and with different cultures. - 5+ years of hands-on experience in cloud technologies; GCP is preferred. - Maintain an outstanding level of documentation, including principles, standards, practices, and project plans. - Experience building a data warehouse using Databricks is a huge plus. - Hands-on experience with IaC patterns and practices and related automation tools such as Terraform, Jenkins, Spinnaker, CircleCI, etc.; has built automation and tools using Python, Go, Java, or Ruby. - Deep knowledge of CI/CD processes, tools, and platforms like GitHub workflows and Azure DevOps. - Proactive collaborator who can work on cross-team initiatives, with excellent written and verbal communication skills. - Experience automating long-term solutions to problems rather than applying a quick fix. - Extensive knowledge of improving platform observability and implementing optimizations to monitoring and alerting tools. - Experience measuring and modeling cost and performance metrics of cloud services and establishing a vision backed by data. - Develop tools and a CI/CD framework to make it easier for teams to build, configure, and deploy applications. - Contribute to cloud strategy discussions and decisions on overall cloud design and the best approach for implementing cloud solutions. - Follow and develop standards and procedures for all aspects of a digital platform in the cloud. - Identify system enhancements and automation opportunities for installing and maintaining digital platforms. - Adhere to best practices on incident, problem, and change management. - Implement automated procedures to handle issues and alerts proactively. - Experience with debugging applications and a deep understanding of deployment architectures.
Pluses: Databricks. Experience with a multi-cloud environment (GCP, AWS, Azure); GCP is the preferred cloud provider. Experience with GitHub and GitHub Actions.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About the Role: Job Title: DevOps Engineer, AS. Location: Bangalore, India.
Role Description: Deutsche Bank has set itself ambitious goals in the areas of Sustainable Finance, ESG Risk Mitigation, and Corporate Sustainability. As climate change brings new challenges and opportunities, the Bank has set out to invest in developing a Sustainability Technology Platform, sustainability data products, and various sustainability applications that will aid the Bank's goals. As part of this initiative, we are building an exciting global team of technologists who are passionate about climate change and want to contribute to the greater good by leveraging their technology skill set in cloud/hybrid architecture. In this role, we are seeking a highly skilled and experienced DevOps Engineer to join our growing team. You will play a pivotal role in managing and optimizing cloud infrastructure, facilitating continuous integration and delivery, and ensuring system reliability.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for employees aged 35 and above.
Your key responsibilities: Create, implement, and oversee scalable, secure, and cost-efficient cloud infrastructure on Google Cloud Platform (GCP). Utilize Infrastructure as Code (IaC) methodologies with tools such as Terraform, Deployment Manager, or alternatives. Implement robust security measures to ensure data access control and compliance with regulations. Adopt security best practices, establish IAM policies, and ensure adherence to both organizational and regulatory requirements. Set up and manage Virtual Private Clouds (VPCs), subnets, firewalls, VPNs, and interconnects to facilitate secure cloud networking. Establish continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitHub Actions, or comparable tools for automated application deployments. Implement monitoring and alerting solutions through Stackdriver (Cloud Operations), Prometheus, or other third-party applications. Evaluate and optimize cloud expenditure by utilizing committed-use discounts, autoscaling features, and resource rightsizing. Manage and deploy containerized applications through Google Kubernetes Engine (GKE) and Cloud Run. Deploy and manage GCP databases such as Cloud SQL and BigQuery.
Your skills and experience: Minimum of 5 years of experience in DevOps or similar roles, with hands-on experience in GCP. In-depth knowledge of Google Cloud services (e.g., GCE, GKE, Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud Storage) and the ability to architect, deploy, and manage cloud-native applications. Proficient in using tools like Jenkins, GitLab, Terraform, Ansible, Docker, and Kubernetes. Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or GCP-native Deployment Manager. Solid understanding of security protocols, IAM, networking, and compliance requirements within cloud environments. Strong problem-solving skills and ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus. How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
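
As a small illustration of the Pub/Sub skill listed above, here is a Python sketch that publishes a message with the google-cloud-pubsub client; the project and topic names are placeholders, and credentials are assumed to come from the active GCP environment.

    from google.cloud import pubsub_v1

    # Placeholder project and topic for illustration only.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("example-project", "deploy-events")

    future = publisher.publish(topic_path, b"rollout finished", env="prod")
    print(f"published message id: {future.result()}")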

Posted 3 weeks ago

Apply

4 - 9 years

15 - 19 Lacs

Pune

Work from Office

About the Role: Job Title: Technical Specialist, GCP Developer. Location: Pune, India.
Role Description: This role is for an engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background, have good working experience in Spark and GCP technology, be hands-on and able to work independently with minimal technical/tool guidance, and be able to technically guide and mentor junior resources in the team. As a developer you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for employees aged 35 and above.
Your key responsibilities: Design and discuss your own solution for addressing user stories and tasks. Develop, unit-test, integrate, deploy, maintain, and improve software. Perform peer code review. Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meeting, sprint planning, and retrospectives. Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management). Collaborate with other team members to achieve the sprint objectives. Report progress and update Agile team management tools (JIRA/Confluence). Manage individual task priorities and deliverables. Take responsibility for the quality of the solutions you provide. Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.
Your skills and experience: Engineer with good development experience on Google Cloud Platform for at least 4 years. Hands-on experience with BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions. Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps. Create and maintain fully automated CI build processes and write build and deployment scripts. Experience with development platforms (OpenShift/Kubernetes/Docker configuration and deployment) and DevOps tools, e.g., Git, TeamCity, Maven, SONAR. Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow. Knowledge of working with APIs and microservices, integrating external and internal web services including SOAP, XML, REST, and JSON. Strong analytical skills. Proficient communication skills. Fluent in English (written/verbal). Ability to work in virtual teams and in matrixed organizations. Excellent team player. Open-minded and willing to learn the business and technology. Keeps pace with technical innovation. Understands the relevant business area. Ability to share information and transfer knowledge and expertise to team members.
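
To give a flavor of the Spark-on-GCP work described above, here is a minimal PySpark sketch that reads CSV data from a GCS bucket and aggregates it; the bucket path and column name are assumptions, and a Dataproc cluster (which ships with the GCS connector) is assumed as the runtime.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gcs-aggregation-sketch").getOrCreate()

    # Hypothetical bucket and schema; on Dataproc the gs:// connector is preconfigured.
    df = spark.read.option("header", True).csv("gs://example-bucket/raw/transactions/*.csv")
    df.groupBy("status").count().show()

    spark.stop()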
How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

Posted 1 month ago

Apply

5 - 10 years

15 - 25 Lacs

Hyderabad

Work from Office

DevOps + GCP, 5+ years. Location: Hyderabad. Essential experience: Proficient with CI/CD toolchains (e.g. Azure DevOps, Jenkins, Git, Artifactory, etc.). Proficient in one or more scripting languages for automation (e.g. Linux Bash, PowerShell, Python). Proficient in provisioning platforms via Infrastructure-as-Code (IaC) techniques (e.g. Terraform, YAML, Azure Resource Manager (ARM)). Working experience configuring, securing, and administering platforms in Azure; knowledge of cloud infrastructure and networking principles (e.g. Azure PaaS, IaaS). Demonstrable knowledge of working with distributed data platforms (e.g. Azure ADLS, data lakes). Experience working with vulnerability management and code-inspection tooling (e.g. Snyk, SonarQube). Possess an automation-first mindset when building solutions, with consideration for self-healing and fault-tolerant methods to minimize manual intervention and downtime.

Posted 1 month ago

Apply

4 - 7 years

10 - 19 Lacs

Indore, Gurugram, Bengaluru

Work from Office

We need GCP engineers for capacity building. The candidate should have extensive production experience (1-2 years) in GCP; other cloud experience would be a strong bonus. Strong background in data engineering, with 2-3 years of experience in Big Data technologies including Hadoop, NoSQL, Spark, Kafka, etc. Exposure to enterprise application development is a must.
Roles and Responsibilities: 4-7 years of IT experience is preferred. Able to effectively use GCP managed services, e.g., Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS (at least 4 of these services). Good to have knowledge of Cloud Composer, Cloud SQL, Bigtable, and Cloud Functions. Strong experience in Big Data technologies (Hadoop, Sqoop, Hive, and Spark), including DevOps. Good hands-on expertise in either Python or Java programming. Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL, and Cloud IAM. Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build, and Anthos. Ability to drive the deployment of the customers' workloads into GCP and provide guidance, a cloud adoption model, service integrations, appropriate recommendations to overcome blockers, and technical roadmaps for GCP cloud implementations. Experience with technical solutions based on industry standards using GCP IaaS, PaaS, and SaaS capabilities. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies. Act as a subject-matter expert or developer around GCP and become a trusted advisor to multiple teams. Technical ability to become certified in required GCP technical certifications.
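
As a quick illustration of the BigQuery usage implied above, here is a Python sketch that runs a query with the google-cloud-bigquery client; the dataset and table names are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses the active GCP project and credentials

    sql = """
        SELECT status, COUNT(*) AS n
        FROM `example-project.analytics.events`  -- placeholder dataset/table
        GROUP BY status
        ORDER BY n DESC
    """
    for row in client.query(sql).result():
        print(row.status, row.n)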

Posted 1 month ago

Apply

3 - 8 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Application Support Engineer. Project Role Description: Act as software detectives, providing a dynamic service that identifies and solves issues within multiple components of critical business systems. Must-have skills: Google Kubernetes Engine. Good-to-have skills: Kubernetes, Google Cloud Compute Services. Minimum 3 years of experience is required. Educational Qualification: 15 years of full-time education.
Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).
Responsibilities: Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations. Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes. Knowledge of or experience with various Kubernetes tools like Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus. Proven track record in supporting and deploying various public cloud services. Experience in building or managing self-service platforms to boost developer productivity. Proficiency in using Infrastructure as Code (IaC) tools like Terraform. Skilled in diagnosing and resolving complex issues in automation and cloud environments. Advanced experience in architecting and managing highly available and high-performance multi-zonal or multi-regional systems. Strong understanding of infrastructure CI/CD pipelines and associated tools. Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions. Experience working in GKE and Edge/GDCE environments. Assist development teams in building and deploying microservices-based applications in public cloud environments.
Technical Skillset: Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions. At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies. Over 3 years of experience with GCP IaC, particularly Terraform, including writing and maintaining Terraform configurations and modules. Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE). Familiarity with CI/CD tools (e.g., GitHub) and processes.
Certifications: GCP ACE certification is mandatory. CKA certification is highly desirable. HashiCorp Terraform certification is a significant plus.

Posted 1 month ago

Apply

6 - 10 years

12 - 17 Lacs

Bengaluru

Work from Office

At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation. Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.
About the Role - Position Summary: The Senior Product Manager plays a pivotal role in product development for F5 Distributed Cloud App Delivery strategies. This position requires an in-depth understanding of market dynamics in Kubernetes platforms, multicloud networking, public cloud, and SaaS platforms, as well as strong leadership, partnering, and analytical abilities, to help build a shared vision and execute to establish a market-leading position.
Primary Responsibilities - Product Delivery: Drive product management activities for F5 Network Connect and F5 Distributed Apps. Build compelling technical marketing content to drive product awareness, including reference architectures and customer case studies. Deliver web content, whitepapers, and demonstrations to drive customer adoption, and ensure technical marketing alignment with key partners. Ensure accountability for product success and present data-backed findings during business reviews and QBRs.
Customer Engagement & Feedback: Engage with customers to understand their business goals, constraints, and requirements. Prioritize feature enhancements based on customer feedback and business value. Utilize the Digital Adoption Platform to identify areas of improvement, increase revenue, and reduce churn.
Market Analysis: Position F5 Network Connect and Distributed Apps with a competitive edge in the market. Validate market demand based on customer usage. Conduct in-depth research to stay abreast of developments in multicloud networking as well as the Kubernetes (CaaS/PaaS) ecosystem.
Team Collaboration: Collaborate with stakeholders to make informed decisions on product backlog prioritization. Foster strong relationships with engineering, sales, marketing, and customer support teams. Work with technical teams to ensure seamless product rollouts. Work with key decision makers in marketing and sales to ensure smooth product delivery to customers.
Knowledge, Skills, and Abilities - Technical Skills: Proficient with core networking technologies such as BGP, VPNs and tunneling, routing, NAT, etc. Proficient with core Kubernetes technologies and ecosystem such as CNIs, Ingress Controllers, etc.
Proficient with core Public Cloud networking services – especially with AWS, Azure and GCP Proficient with PaaS services such as OpenShift, EKS (AWS), GKE (GCP), AKS (Azure) Well versed with L4/L7 load balancing & proxy technologies and protocols Stakeholder Management: Demonstrate strong leadership, negotiation, and persuasion capabilities Effectively manage and navigate expectations from diverse stakeholder groups Uphold a data-driven approach amidst a fast-paced, changing environment Analytical Skills: Ability to generate data-driven reports and transform complex data into actionable insights Proven skills in data analytics and making data-backed decisions Strong awareness of technology trends and potential influence on F5’s business Qualifications BA/BS degree in a relevant field 4+ years in technical product management or a related domain 2+ years of product management in Multicloud Networking, PaaS or an adjacent area (exSSE/SD-WAN) Experience developing relationships with suppliers and co-marketing partners highly desirable. The About The Role is intended to be a general representation of the responsibilities and requirements of the job. However, the description may not be all-inclusive, and responsibilities and requirements are subject to change. Please note that F5 only contacts candidates through F5 email address (ending with @f5.com) or auto email notification from Workday (ending with f5.com or @myworkday.com ) . Equal Employment Opportunity It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination. F5 offers a variety of reasonable accommodations for candidates . Requesting an accommodation is completely voluntary. F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. Request by contacting accommodations@f5.com.

Posted 1 month ago

Apply

3 - 7 years

8 - 13 Lacs

Pune

Work from Office

About the Role: Job Title: Senior Full Stack Engineer. Corporate Title: Assistant Vice President. Location: Pune, India.
Role Description: The Enterprise SRE Team in CB is responsible for making production better by boosting observability and strengthening reliability across Corporate Banking. The team actively works on building common platforms, reference architectures, and tools for production engineering teams to standardize processes across CB. We work in an agile environment with a focus on customer centricity and an outstanding user experience, with high reliability and flexibility of technical solutions in mind. With our platform we want to be an enabler for the highest-quality cloud-based software solutions and processes at Deutsche Bank. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for employees aged 35 and above.
Your key responsibilities: Work on the SLO Dashboard, an application owned by the CB SRE team, ensuring its design (a highly scalable and performant solution), development, and maintenance. Participate in requirement workshops, analyze requirements, perform technical design, and take ownership of the development process. Identify and implement appropriate tools to support engineering automation, including test automation and CI/CD pipelines. Understand technical needs, prioritize requirements, and manage technical debt based on stakeholder urgency. Collaborate with the UI/UX designer while being mindful of backend changes and their impact on architecture or endpoint modifications during discussions. Produce detailed design documents and guide junior developers to align with the priorities and deliverables of the SLO Dashboard.
Your skills and experience: Several years of relevant experience in software architecture, design, development, and engineering, ideally in the banking/financial services industry. Strong engineering, solution, and domain architecture background and up-to-date knowledge of software engineering topics such as microservices, streaming architectures, high performance, horizontal scaling, API design, GraphQL, REST services, database systems, UI frameworks, distributed caching (e.g., Apache Ignite, Hazelcast, Redis), enterprise integration patterns, and modern SDLC practices. Good experience working in GCP (cloud-based technologies) using GKE, Cloud SQL (Postgres), Cloud Run, and Terraform. Good experience in DevOps using GitHub Actions for builds and Liquibase pipelines. Fluent in an application development stack such as Java/Spring Boot (3.0+), ReactJS, Python, JavaScript/TypeScript/NodeJS, and SQL (Postgres). Ability to work in a fast-paced environment with competing and alternating priorities and a constant focus on delivery, with strong interpersonal skills to manage relationships with a variety of partners and stakeholders, as well as facilitate group sessions.
AI Integration and Implementation (nice to have): Leverage AI tools like GitHub Copilot, Google Gemini, Llama, and other language models to optimize engineering analytics and workflows. Design and implement AI-driven dashboards and reporting tools for stakeholders. Apply AI tools to automate repetitive tasks, analyze complex engineering datasets, and derive trends and patterns.
How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

Posted 1 month ago

Apply

3 - 5 years

7 - 11 Lacs

Chennai

Work from Office

Job Title: Data Analyst. Location: Chennai, India. Education recommendations: Telecommunications engineering, systems engineering, software engineering, or related careers.
About the team: Our team is part of the Systems Integration (SIN) group, which plays a crucial role in integrating and customizing Nokia solutions and third-party platforms to meet customer requirements. We work on processing use case and feature requirements into conceptual models, operational scenarios, technical requirements, and functional descriptions. Our projects involve customer-specific solution and software design, development, implementation, unit testing, and verification of hardware, platform, software, and systems, including maintenance support.
Role overview: Join our Systems Integration team as a Systems Integration Specialist 1, where you'll integrate and customize Nokia solutions and third-party platforms to meet global customer needs. In our hybrid work environment, you'll collaborate remotely and in person, driving innovation through agile methodologies and open communication. Your skills in Python, JavaScript, Java, Linux, Kubernetes, Docker, and cloud platforms like AWS will be key as you deploy and troubleshoot new services for projects involving SDAN and Home WiFi. Apply now!
What Nokia is looking for: GPON domain knowledge: understanding of Gigabit Passive Optical Networks and related technologies. Hands-on expertise in cloud technologies: proficiency with AWS, GKE, and RedHat. Kubernetes and Docker: experience with container orchestration and management. Programming skills: proficiency in Python, JavaScript, and Java. Object-oriented programming: ability to design and implement solutions using OOP principles. Linux environment knowledge: familiarity with Linux operating systems and command-line tools. Problem-solving and analytical skills: ability to analyze issues and develop effective solutions. Team collaboration: ability to work collaboratively within a team, follow instructions, and contribute to team discussions. Adaptability and flexibility: ability to adapt to different schedules and global time zones as per project needs.
It would be nice if you also had: CI/CD methodologies: understanding and implementing Continuous Integration and Continuous Deployment practices. Agile and SCRUM methodologies: familiarity with Agile and SCRUM frameworks for project management. Knowledge of Nokia FN products: understanding of specific Nokia products such as SDAN and Home WiFi. Team collaboration tools: proficiency in using collaboration tools and platforms.
As part of the team, you will: Integrate and customize Nokia solutions and third-party platforms to meet customer requirements. Execute pre-defined integration tasks, configuration tasks, and test cases. Understand and analyze customer requirements to provide tailored solutions. Support troubleshooting activities and resolve technical issues. Collaborate with architects, solutions, and integration teams for detailed design and integration. Perform integration of network elements into systems either on-site or remotely. Provide informal guidance and support to new team members regarding procedures and tasks.
Join our team to work on cutting-edge technologies and innovative solutions in a collaborative and supportive environment. You'll have the opportunity to grow professionally, tackle challenging projects, and contribute to the future of telecommunications.
Nokia offers flexible working schemes, continuous learning opportunities, and well-being programs to support your career and personal development.

Posted 1 month ago

Apply

7 - 12 years

45 - 50 Lacs

Bengaluru

Work from Office

Management Level: 07 - I&F Decision Science Practitioner Manager. Location: Mumbai.
Must-have skills: Risk analytics; model development, validation, and auditing; performance evaluation, monitoring, and governance; statistical techniques: linear regression, logistic regression, GLM, GBM, XGBoost, CatBoost, neural networks; programming languages: SAS, R, Python, Spark, Scala; tools: Tableau, QlikView, PowerBI, SAS VA; regulatory knowledge: Basel/CCAR/DFAST/CECL/IFRS9; risk reporting and dashboard solutions.
Good-to-have skills: Advanced data science techniques, AML, operational risk modelling, cloud platform experience (AWS/Azure/GCP), machine learning interpretability and bias algorithms.
Job Summary: We are seeking a highly skilled I&F Decision Science Practitioner Manager to join the Accenture Strategy & Consulting team in the Global Network – Data & AI practice. You will be responsible for leading risk model development, validation, and auditing activities, ensuring performance evaluation, monitoring, governance, and documentation. This role also provides opportunities to work with top financial clients globally, utilizing cutting-edge technologies to drive business capabilities and foster innovation.
Roles & Responsibilities: Engagement execution: Lead the team in the development, validation, governance, strategy, transformation, implementation, and end-to-end delivery of risk solutions for clients. Manage workstreams for large and small projects, overseeing the quality of deliverables for junior team members. Develop and frame proofs of concept for key clients where applicable. Practice enablement: Mentor, guide, and counsel analysts and consultants. Support the development of the practice by driving innovations and initiatives. Support the sales team's efforts to identify and win potential opportunities by assisting with RFPs and RFIs. Assist in designing POVs and GTM collateral.
Professional & Technical Skills: 7-12 years of relevant risk analytics experience at one or more financial services firms, or in professional services/risk advisory, with significant exposure to: Credit risk: PD/LGD/EAD models, CCAR/DFAST loss forecasting, revenue forecasting models, IFRS9/CECL loss forecasting across retail and commercial portfolios. Credit acquisition/behavior: modeling, credit policies, limit management, acquisition fraud, collections agent matching/channel allocation across retail and commercial portfolios. Regulatory capital and economic capital models. Liquidity risk: liquidity models, stress testing models, Basel liquidity reporting standards. Anti-money laundering (AML): AML scenarios/alerts, network analysis. Operational risk: AMA modeling, operational risk reporting. Modeling techniques: linear regression, logistic regression, GLM, GBM, XGBoost, CatBoost, neural networks, time series (ARMA/ARIMA), ML interpretability and bias algorithms. Programming languages & tools: SAS, R, Python, Spark, Scala, Tableau, QlikView, PowerBI, SAS VA. Strong understanding of risk functions and their application in client discussions and project implementation.
Additional Information: Master's degree in a quantitative discipline (mathematics, statistics, economics, financial engineering, operations research) or MBA from a top-tier university. Industry certifications such as FRM, PRM, or CFA preferred. Excellent communication and interpersonal skills. About Our Company | Accenture
Qualification: Experience: minimum 7-12 years of relevant risk analytics experience, with exposure to financial services firms or professional services/risk advisory. Educational qualification: Master's degree in a quantitative discipline (mathematics, statistics, economics, financial engineering, operations research) or MBA from a top-tier university; industry certifications such as FRM, PRM, or CFA preferred.
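
To make the modelling techniques listed above concrete, here is a tiny Python sketch of a logistic-regression probability-of-default model on synthetic data; the features, coefficients, and split are invented purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic borrower features and default labels, for illustration only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5_000, 4))
    y = (X @ np.array([0.8, -0.5, 0.3, 0.0]) + rng.normal(size=5_000) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    pd_scores = model.predict_proba(X_te)[:, 1]  # estimated probability of default
    print("AUC:", round(roc_auc_score(y_te, pd_scores), 3))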

Posted 1 month ago

Apply

4 - 9 years

7 - 11 Lacs

Bengaluru

Work from Office

Primary Skills: Java (8/11/17+): strong proficiency in Core Java and Java EE. Spring Boot: experience building microservices using Spring Boot. Microservices architecture: deep understanding of designing and developing scalable microservices. Google Cloud Platform (GCP): hands-on experience with GCP services like GKE, Cloud Run, Cloud Functions, Cloud Pub/Sub, Firestore, etc. Google Kubernetes Engine (GKE): strong experience deploying and managing containerized applications on GKE. Containers & Docker: experience building, deploying, and managing containerized applications using Docker. Kubernetes: knowledge of Kubernetes concepts like Pods, Deployments, Services, ConfigMaps, Secrets, and Helm charts. RESTful APIs: experience designing and consuming RESTful APIs. CI/CD pipelines: experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Cloud Build. Cloud networking: understanding of networking concepts related to cloud deployments (VPCs, load balancers, etc.). SQL & NoSQL databases: experience with databases like PostgreSQL, MySQL, Firestore, or MongoDB. Monitoring & logging: experience with tools like Prometheus, Grafana, Google Cloud Logging, and Stackdriver.
Secondary Skills: Terraform / Infrastructure as Code (IaC): experience automating infrastructure using Terraform. Event-driven architecture: experience with Pub/Sub, Kafka, or RabbitMQ. Security best practices: knowledge of authentication/authorization mechanisms like OAuth2, JWT, and IAM roles. Testing frameworks: experience with JUnit, Mockito, and integration testing frameworks. GraphQL: familiarity with GraphQL APIs. Agile methodologies: experience working in Agile/Scrum environments. Performance tuning: ability to optimize application performance and troubleshoot memory leaks. Multi-cloud exposure: understanding of AWS or Azure along with GCP is a plus. DevSecOps practices: knowledge of security scanning tools like Snyk or SonarQube.

Posted 2 months ago

Apply

5 - 10 years

8 - 14 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

We are looking for candidates with 3-5 years of data analytics, ML and predictive modeling experience. The candidate will have the opportunity to work on real-life problems in sales analytics and to make a direct business impact.
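As a purely illustrative sketch of the kind of predictive-modelling work implied here, assuming a hypothetical sales-history file and placeholder feature names:

```python
# Illustrative only: a simple sales-forecasting baseline with scikit-learn.
# "sales_history.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("sales_history.csv")
X = df[["price", "promo_flag", "store_footfall", "month"]]   # hypothetical features
y = df["units_sold"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```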

Posted 2 months ago

Apply

3 - 8 years

37 - 45 Lacs

Pune

Work from Office

About The Role
Job Title: GCP DevOps/Platform Engineer
Corporate Title: AVP
Location: Pune, India
Role Description
We are seeking a highly motivated senior DevOps Engineer to join our team. The successful candidate will have at least 8-13 years of experience in the field and be proficient in Google Cloud Platform (GCP), GitHub Actions, Infrastructure as Code (IaC), Site Reliability Engineering (SRE), CI/CD using Helm charts, and platform engineering. The person in this role may lead the delivery work of other team members and oversee their work where applicable.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy
Gender-neutral parental leave
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for those aged 35 and above
Your key responsibilities
Develop and maintain infrastructure code using IaC tools such as Terraform and Ansible.
Design, implement, and optimize cloud-based applications and services on GCP.
Collaborate with cross-functional teams to ensure successful delivery of projects, including frontend development, backend development, and quality assurance.
Troubleshoot and resolve issues related to application performance, reliability, and security.
Optimize the deployment process using automation tools such as GitHub Actions.
Provide technical guidance and mentorship to junior team members.
Stay up to date with industry trends and best practices in DevOps engineering.
Design, deploy, manage and document CI/CD pipelines.
Carry out routine application maintenance as an ongoing responsibility, using strategy-building techniques.
Identify issues and optimization opportunities and implement solutions.
Your skills and experience
Understanding of industry-standard processes for build, deploy, release and support (CI/CD, incident/problem/change management, etc.)
Experience in building dashboards for billing, utilization and infrastructure monitoring.
Experience in optimizing infrastructure cost and reducing footprint.
Strong understanding of and working experience in managing GKE and GKE cluster services.
Experience in GKE node management, autoscaling, secrets management, config management, virtual services, gateways and Anthos Service Mesh.
Strong knowledge of Linux, Apache web server, Java application servers and load balancers.
Experience with any cloud-based infrastructure (GCP/AWS/Azure) and with highly available, fault-tolerant applications.
Our tech stack: GCP (GKE, Cloud Composer, BigQuery, GCS, etc.), Kubernetes, Terraform, Confluent Kafka, GitHub Actions, Helm; any other public cloud experience is also relevant.
Good understanding of infrastructure and platform components: shell scripting, Python, Linux, application-layer protocols (TLS/SSL, HTTP(S), DNS, etc.)
Experience in supporting/building continuous delivery pipelines.
Experience with deployment strategies (such as blue-green, canary, A/B).
Good understanding of various design and architectural patterns.
Good understanding of microservices and API management.
Experience with monitoring/reporting tools such as Splunk, Grafana, Prometheus and Google Cloud Operations.
Experience in Agile practices.
Collaboration skills: proactive, can-do attitude; a creative approach to solving technical problems; able to work efficiently with colleagues in multiple locations; willing to collaborate across domains for efficiency in technology sharing and reuse; excellent communication skills in English.
How we'll support you
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
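Purely as an illustrative sketch of how the monitoring and deployment-strategy points above can connect, the snippet below queries a Prometheus server's HTTP API for a canary error rate and prints a promote/rollback decision. The Prometheus URL, the PromQL expression and the 1% threshold are assumptions for the example, not details from the role description:

```python
# Illustrative sketch: a simple canary promotion gate driven by Prometheus.
# PROM_URL, the PromQL query and the threshold are hypothetical placeholders.
import requests

PROM_URL = "http://prometheus.example.internal:9090"   # assumed Prometheus endpoint
QUERY = ('sum(rate(http_requests_total{deployment="canary",status=~"5.."}[5m]))'
         ' / sum(rate(http_requests_total{deployment="canary"}[5m]))')
ERROR_BUDGET = 0.01                                    # promote only if the 5xx ratio is below 1%

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

error_ratio = float(result[0]["value"][1]) if result else 0.0
if error_ratio < ERROR_BUDGET:
    print(f"error ratio {error_ratio:.4f} within budget - promote canary")
else:
    print(f"error ratio {error_ratio:.4f} exceeds budget - roll back canary")
```

A gate like this is typically run as a step in a CI/CD pipeline (for example a GitHub Actions job) before shifting traffic.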

Posted 2 months ago

Apply

3 - 8 years

7 - 10 Lacs

Chennai

Remote

Role Overview:
We are seeking a skilled and experienced Data Scientist with a minimum of 3 years of experience, including strong exposure to DevOps tools, AI/ML, and MLOps frameworks. The ideal candidate will work closely with data engineers, machine learning engineers, and product teams to develop, deploy, and maintain machine learning models while ensuring seamless integration with CI/CD pipelines.
Key Responsibilities:
Design, build, and optimize machine learning models for large-scale datasets.
Collaborate with data engineers to preprocess, clean, and organize data for machine learning pipelines.
Implement and automate the deployment of machine learning models into production using MLOps practices.
Utilize DevOps tools to integrate ML models into continuous integration/continuous deployment (CI/CD) pipelines.
Conduct exploratory data analysis, feature engineering, and algorithm selection.
Develop and maintain model monitoring and retraining pipelines to ensure optimal model performance over time.
Provide data-driven insights and recommendations to product and business teams.
Collaborate with cross-functional teams including software developers, data engineers, and DevOps engineers.
Required Qualifications:
Experience: 3+ years in Data Science, with exposure to AI/ML model development and deployment.
Education: Bachelor's or Master's degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.
Technical Skills:
Proficiency in Python and libraries like Pandas, Scikit-learn, TensorFlow, PyTorch, etc.
Strong understanding of machine learning algorithms (supervised/unsupervised learning, reinforcement learning).
Experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
Familiarity with DevOps tools such as Jenkins, Git, GitLab CI, or similar for CI/CD integration.
Experience with MLOps tools such as MLflow, Kubeflow, or similar for model management and deployment.
Experience working with big data frameworks (Hadoop, Spark) is a plus.
Knowledge of REST APIs and microservices architecture is a bonus.
Soft Skills:
Strong problem-solving abilities and analytical thinking.
Excellent communication and collaboration skills.
Ability to work in a fast-paced environment and manage multiple projects.
Nice to Have:
Knowledge of NLP, Computer Vision, or deep learning techniques.
Experience with version control for models and data pipelines.
Prior experience in A/B testing and experiment tracking.
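As a minimal sketch of the MLOps practices mentioned (experiment tracking and model logging with MLflow), assuming a synthetic dataset and a local MLflow tracking store; in a real pipeline the tracking URI would point at a shared server:

```python
# Minimal sketch: tracking a model run with MLflow. The dataset is synthetic and the
# hyperparameter values are placeholders chosen for illustration only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    clf = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)                       # hyperparameters for this run
    mlflow.log_metric("test_auc", auc)              # evaluation metric
    mlflow.sklearn.log_model(clf, "model")          # serialized model artifact
```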

Posted 2 months ago

Apply

6 - 8 years

5 - 8 Lacs

Hyderabad

Work from Office

Senior Google Cloud Engineer
What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable Google Cloud infrastructure.
Architect, implement, and manage a highly available Google Cloud environment. Design VPC, Cloud DNS, VPN, Cloud Interconnect, Cloud CDN and IAM policies to enforce security standards. Implement robust security practices and enforce security policies using Identity and Access Management (IAM), VPC Service Controls, and Cloud Security Command Center. Architect solutions with cost optimization in mind using Google Cloud Billing and Cloud Cost Management tools.
Infrastructure as Code (IaC) & Automation
Deploy and maintain Infrastructure as Code (IaC) and apply Site Reliability Engineering (SRE) principles using tools like Terraform and Google Cloud Deployment Manager. Automate deployment, scaling, and monitoring using GCP-native tools and scripting. Implement and manage CI/CD pipelines for infrastructure and application deployments.
Cloud Security & Compliance
Enforce best practices in IAM, encryption, and network security. Ensure compliance with SOC 2, ISO 27001, and NIST standards. Implement Google Cloud Security Command Center, Cloud Armor, and Cloud IDS for threat detection and response.
Monitoring & Performance Optimization
Set up Google Cloud Monitoring, Cloud Logging, Cloud Trace, and Cloud Profiler to enable proactive monitoring, trace analysis, and performance tuning of GCP resources. Implement autoscaling, Cloud Load Balancing, and caching strategies for performance optimization. Troubleshoot cloud infrastructure issues and conduct root cause analysis.
Collaboration & DevOps Practices
Work closely with software engineers, SREs, and DevOps teams to support deployments. Maintain GitOps best practices for cloud infrastructure versioning. Support an on-call rotation for high-priority cloud incidents.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. Blending cloud engineering and operations expertise, the individual will ensure that our cloud environment runs efficiently and securely while also being responsible for the day-to-day operational management, support, and maintenance of the cloud infrastructure.
Must-Have Skills:
Deep hands-on experience with GCP (IAM, Compute Engine, Google Kubernetes Engine (GKE), Cloud Functions, Cloud Pub/Sub, BigQuery, Cloud SQL, Cloud Storage, Cloud Firestore, Cloud Load Balancing, VPC, etc.).
Expertise in Terraform for GCP infrastructure automation.
Strong knowledge of GCP networking (VPC, Cloud DNS, VPN, Cloud Interconnect, Cloud CDN).
Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, GitLab, etc.).
Strong troubleshooting and debugging skills in cloud networking, storage, and security.
Good-to-Have Skills:
Prior experience with containerization (Docker, Kubernetes) and serverless architectures is a plus.
Familiarity with Cloud CDK, Ansible, or Packer for cloud automation.
Exposure to hybrid and multi-cloud environments (AWS, Azure).
Familiarity with HPC and DGX Cloud.
Basic Qualifications:
Bachelor's degree in Computer Science, IT, or a related field with 6-8 years of hands-on cloud experience.
Professional Certifications (preferred):
Certifications in GCP (e.g., Google Cloud Certified Professional Cloud Architect and Cloud DevOps Engineer) are a plus.
Terraform Associate certification.
Preferred Qualifications – Soft Skills:
Strong analytical and problem-solving skills.
Ability to work effectively with global, virtual teams.
Effective communication and collaboration with cross-functional teams.
Ability to work in a fast-paced, cloud-first environment.
Shift Information:
This position is required to be onsite and to participate in 24/5 and weekend on-call rotations, and may require working a later shift. Candidates must be willing and able to work off hours as required by business needs.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
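As a small, hedged example of the GCP scripting this role calls for, the snippet below publishes a message to a Cloud Pub/Sub topic with the official google-cloud-pubsub client; the project and topic IDs are placeholders, and credentials are assumed to come from Application Default Credentials:

```python
# Illustrative sketch: publishing to Cloud Pub/Sub with the official Python client.
# PROJECT_ID and TOPIC_ID are hypothetical placeholders; authentication is assumed to
# come from Application Default Credentials (e.g. a service account on a VM or GKE node).
from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"      # placeholder
TOPIC_ID = "infra-events"          # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

future = publisher.publish(topic_path, b"cloud engineer on-call test message",
                           origin="ops-script")   # attributes are arbitrary key/value strings
print("published message id:", future.result())   # blocks until the publish is acknowledged
```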

Posted 2 months ago

Apply