5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

Requirements:
- Bachelor's/Master's degree in Computer Science, Information Technology, or a related field
- 5-7 years of experience in a DevOps role
- Strong understanding of the SDLC and experience working on fully Agile teams
- Proven coding and scripting experience: Ant/Maven, Groovy, Terraform, shell scripting, and Helm charts
- Working experience with IaC tools such as Terraform, CloudFormation, or ARM templates
- Strong experience with cloud computing platforms (e.g. Oracle Cloud (OCI), AWS, Azure, Google Cloud)
- Experience with containerization technologies (e.g. Docker, Kubernetes/EKS/AKS)
- Experience with continuous integration and delivery tools (e.g. Jenkins, GitLab CI/CD)
- Kubernetes: experience managing Kubernetes clusters and using kubectl for Helm chart deployments, ingress services, and troubleshooting pods
- OS services: basic knowledge of managing, configuring, and troubleshooting Linux operating system issues, storage (block and object), and networking (VPCs, proxies, and CDNs)
- Monitoring and instrumentation: implement metrics in Prometheus, Grafana, Elastic, log management and related systems, plus Slack/PagerDuty/Sentry integrations
- Strong know-how of modern distributed version control systems (e.g. Git, GitHub, GitLab)
- Strong troubleshooting and problem-solving skills, and the ability to work well under pressure
- Excellent communication and collaboration skills, and the ability to lead and mentor junior team members

Career Level - IC3

Responsibilities:
- Design, implement, and maintain automated build, deployment, and testing systems
- Take application code and third-party products and build fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud
- Containerize applications: create Docker containers and push them to an artifact repository for deployment on containerization solutions with OKE (Oracle Container Engine for Kubernetes) using Helm charts
- Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems
- Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues
- Support and troubleshoot cloud deployment and environment issues
- Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD
- Continuously improve the scalability and security of our systems, and lead efforts to implement best practices
- Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations
- Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats
- Keep up to date with emerging trends and technologies in DevOps, and make recommendations for improvement
- Lead and mentor junior DevOps engineers and collaborate with cross-functional teams to ensure successful delivery of projects
- Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications

As a member of the software engineering division, you will analyze and integrate external customer specifications; specify, design, and implement modest changes to existing software architecture; build new products and development tools; build and execute unit tests and unit test plans; review integration and regression test plans created by QA; and communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical/business skills in an area of specialization.
Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area. 6+ years of software engineering or related experience.

Qualifications: Career Level - IC3

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing [HIDDEN TEXT] or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
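The Kubernetes and Helm duties listed above center on templated manifests. As a purely illustrative sketch (the app name, image, registry, and port are hypothetical, not taken from the posting), a Helm chart for such a Java service ultimately renders a Deployment manifest along these lines:

```yaml
# Minimal Deployment of the kind a Helm chart would template;
# all names and values below are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      containers:
        - name: java-app
          # image pushed to the artifact repository by the CI pipeline
          image: registry.example.com/java-app:1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
```

A release of this kind would typically be rolled out with `helm upgrade --install` and inspected with `kubectl get pods`, matching the kubectl troubleshooting work the posting describes.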
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are a Senior Cloud Application Developer (AWS to Azure Migration) with 8+ years of experience. The role requires hands-on experience developing applications for both AWS and Azure platforms, and a strong understanding of Azure services for application development and deployment, including Azure IaaS and PaaS services. Your responsibilities include AWS-to-Azure cloud migration (service mapping and SDK/API conversion) as well as code refactoring and application remediation for cloud compatibility. You should have a minimum of 5 years of experience in application development using Java, Python, Node.js, or .NET. Additionally, you must possess a solid understanding of CI/CD pipelines, deployment automation, and Azure DevOps. Experience with containerized applications, AKS, Kubernetes, and Helm charts is also necessary. Your role will involve application troubleshooting, support, and testing in cloud environments. Experience with the following tech stack is highly preferred:
- Spring Boot REST API, NodeJS REST API
- Apigee config, Spring Server Config
- Confluent Kafka, AWS S3 Sync Connector
- Azure Blob Storage, Azure Files, Azure Functions
- Aurora PostgreSQL to Azure DB migration
- EKS to AKS migration, S3 to Azure Blob Storage
- AWS to Azure SDK Conversion

Location options for this role include Hyderabad, Bangalore, or Pune. You should have a notice period of 10-15 days.
Posted 3 days ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments both on AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities

Infrastructure Setup & Management
- Design and implement cloud-native architectures on AWS, and manage on-premise open source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Maintain scalable environments for dev, staging, and production.

CI/CD & Release Management
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secret management across environments.

Containerization & Orchestration
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s).
- Implement service mesh, auto-scaling, and rolling updates.

Monitoring, Security, and Reliability
- Implement observability (logging, metrics, tracing) using open source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.

Model DevOps / MLOps
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications

Cloud & Infra
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open source tools.

DevOps & Automation
- Hands-on experience with Terraform / CloudFormation.
- Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.

Containerization & Orchestration
- Expertise with Docker and Kubernetes (EKS or self-hosted).
- Familiarity with Helm charts and service mesh (Istio/Linkerd).

Monitoring / Observability Tools
- Experience with Prometheus, Grafana, ELK/EFK stack, CloudWatch.
- Knowledge of distributed tracing tools like Jaeger or OpenTelemetry.

Security & Compliance
- Understanding of cloud security best practices.
- Familiarity with tools like Vault and AWS Secrets Manager.

Model DevOps / MLOps Tools (Preferred)
- Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B).
- Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
- Knowledge of cost optimization for cloud and hybrid infrastructures.
- Exposure to infrastructure as code (IaC) best practices and GitOps workflows.
- Familiarity with serverless and event-driven architectures.

Education
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer
- Opportunity to work on modern cloud-native systems and AI-powered platforms.
- Exposure to hybrid environments (AWS and open source on-prem).
- Competitive salary, benefits, and growth-oriented culture.
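To make the CI/CD responsibilities above concrete: a minimal pipeline of the kind described (test, build, push an image) might look like the following GitHub Actions sketch. The repository path, image name, and `make test` target are assumptions for illustration, not details from the posting:

```yaml
# Hypothetical CI workflow: run tests, then build and push a container image.
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test
      - name: Build image
        run: docker build -t ghcr.io/example/app:${{ github.sha }} .
      - name: Push image
        run: docker push ghcr.io/example/app:${{ github.sha }}
```

A deploy stage (e.g. an ArgoCD sync or `helm upgrade`) would typically follow the push, depending on whether the team runs push-based or GitOps-style delivery.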
Posted 3 days ago
8.0 - 13.0 years
20 - 32 Lacs
Bengaluru
Work from Office
Bachelor of Computer Science or equivalent work experience. Azure and/or AWS. At least 7 years of experience in a relevant technical position in large organizations, with hands-on automation knowledge. Banking or financial background is a plus.
- Application server experience (JBoss, Tomcat, Apache, etc.) with the ability to deep-dive; knowledge of security areas such as SSL, experience with VA/compliance with respect to OS, and knowledge of GPOs.
- Strong understanding of CI/CD concepts and technologies like GitOps (Argo CD).
- Hands-on experience using DevOps tools (Jenkins, GitHub, SonarQube, and Checkmarx).
- Strong knowledge of Kubernetes, OpenShift, and Container Network Interface (CNI).
- Experience with programming and scripting languages/frameworks such as Spring Boot, NodeJS, and Python, and with microservice architecture.
- Database export/import across environments (Oracle/SQL).
- Knowledge of monitoring tools like ELK.
- Strong container image management experience using Docker and distroless concepts.
- Strong understanding of IIS (Windows platform), Linux, and Windows OS.
- Knowledge of and experience in writing automation using Terraform, Groovy, and Ansible.
- Experience with networks and firewalls.
- Knowledge of web-based application setup; able to help troubleshoot day-to-day infrastructure issues of any kind.
- Familiar with Agile methodologies and their ceremonies.
- Strategic thinking with a research and development mindset.
- Adequate understanding of project management methodologies.
- Excellent communication skills (verbal, written, and presentation).
Posted 3 days ago
7.0 - 8.0 years
9 - 12 Lacs
Pune
Work from Office
This role will involve managing and scaling Docker and Kubernetes infrastructure, designing and implementing cloud architectures, and leading containerization and infrastructure automation across various projects. You will work with a broader set of DevOps and CNCF tools, applying deep expertise in CI/CD, security, and infrastructure-as-code to support high-availability applications across diverse cloud environments.

What You'll Do:
- Design, implement, and manage Kubernetes clusters (EKS) across AWS environments, maintaining secure, scalable, and resilient solutions.
- Lead the development and automation of CI/CD pipelines using tools such as ArgoCD, Cilium, TeamCity, CodeBuild, CodeDeploy, and CodePipeline to streamline application deployment and configuration.
- Expertly manage cloud resources using Terraform, and develop reusable, version-controlled IaC modules, promoting modular, scalable infrastructure deployment.
- Apply a strong understanding of, and experience with, Helm charts and CNCF applications such as Cilium, Karpenter, and Prometheus.
- Configure and optimize Kubernetes clusters, ensuring compliance with container security standards.
- Oversee Docker image creation, tagging, and management, including maintaining secure, efficient Docker repositories (ECR, JFrog).
- Utilize monitoring tools (Prometheus, Grafana, Splunk, CloudWatch Container Insights) to ensure system performance, detect issues, and proactively address performance concerns.
- Act as an escalation point for critical issues, conduct root cause analyses, and maintain SOPs and documentation for efficient incident response and knowledge sharing.
- Provide training, mentorship, and technical guidance to junior team members, fostering a culture of continuous learning within the CCoE team.
What You'll Bring:
- BE/B.Tech or higher in CS, IT, or EE
- 7-8 years of hands-on experience delivering container-based deployments using Docker and Kubernetes
- Good exposure to writing Helm charts and configuring Kubernetes CI/CD pipelines
- Experience writing manifest files for Deployment, Service, Pod, DaemonSets, Persistent Volume (PV), Persistent Volume Claim (PVC), Storage, and Namespaces
- Experience in cloud, DevOps, and Linux, and with DevOps tools like Git, Helm, Terraform, Docker, and Kubernetes
- Strong hands-on experience in Python, YAML, or similar languages
- Ability to build and deploy Docker containers to break up monolithic apps into microservices, improving developer workflow
- Strong understanding of container security and relevant tool experience (e.g. Sysdig, CrowdStrike)
- Strong knowledge of container performance monitoring and scaling policies
- Deep Linux knowledge with an understanding of the container ecosystem
- Good experience developing images using Docker container technology
- Good communication skills and a can-do attitude
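For readers unfamiliar with the manifest kinds listed above (Deployment, Service, PV, PVC, and so on), here is a hedged sketch of a PersistentVolumeClaim and a Pod that mounts it; the names, image, and storage size are placeholders, not values from this posting:

```yaml
# Illustrative PVC plus the Pod volume mount that consumes it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.27        # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

In practice a cluster's StorageClass provisions the backing PV dynamically when such a claim is created, which is why PV, PVC, and Storage appear together in the requirement list.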
Posted 3 days ago
1.0 - 5.0 years
0 Lacs
Haryana
On-site
At EY, you will have the opportunity to shape a career that reflects your unique identity, supported by a global network, inclusive environment, and cutting-edge technology to empower you to reach your full potential. Your individual voice and perspective are key in contributing to EY's continuous improvement. By joining us, you will create a rewarding experience for yourself while playing a role in fostering a more productive working world for everyone. As a Container Security Engineer with 1-2 years of experience, your responsibilities will include designing, deploying, and troubleshooting container deployments for security scanning solutions utilizing Helm charts on Kubernetes platforms such as OpenShift and EKS. You will collaborate on integrating with CI/CD pipelines and automating processes to ensure seamless security testing within the code development lifecycle. Your role will involve designing security architectures and controls to protect container orchestration platforms, ensuring value delivery for both security and development teams. Additionally, you will be responsible for enforcing network policies to secure Kubernetes namespaces and pods, as well as providing API analysis and support for integrating security solutions with risk and reporting solutions to address and prioritize code vulnerabilities effectively. To excel in this role, you should possess a minimum of 1-2 years of IT experience, with at least 1 year specializing in container security. Proficiency in container technologies like Docker and Kubernetes platforms such as OpenShift, EKS, or GKE is required. Preferred qualifications include experience in container deployments using Helm charts and infrastructure code, particularly with Terraform, as well as familiarity with secure development pipelines like Jenkins or Electric Flow.
A strong understanding of relevant security standards (OWASP) and their application in the software development lifecycle within an agile environment is essential. Your expertise in conducting security analysis on web applications and APIs, along with knowledge of tools like Sysdig, will be beneficial in this role. EY is dedicated to building a better working world by creating sustainable value for clients, individuals, and communities while fostering trust in the global capital markets. Through the use of data and technology, diverse EY teams across 150 countries deliver assurance and support clients in growth, transformation, and operations. With a focus on assurance, consulting, law, strategy, tax, and transactions, EY teams strive to address complex challenges by asking innovative questions and providing new solutions to current global issues.
Posted 4 days ago
9.0 - 11.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: .Net Architect (.Net Core)
Experience: 9+ Years (Minimum 5 years in .NET Core Development)
Location: Coimbatore / Chennai
Mandatory Skills: .Net Core, .Net MVC, Web API, LINQ, SQL Server, Project Architecture & Documentation (HLD/LLD), Azure (App Services, AKS), CI/CD Pipelines.

Required Skills:
- .NET Core, ASP.NET MVC / ASPX, C#/VB.NET, Web API
- LINQ, Entity Framework / ADO.NET
- Strong in Object-Oriented Programming (OOP) and architectural design patterns (e.g. SOLID, Repository, DI)
- Deep expertise in SQL Server and database design
- Hands-on experience with Azure services: Azure App Services, AKS, Azure SQL, Blob Storage, Azure AD, Key Vault
- CI/CD automation using Azure DevOps, GitHub Actions, or TFS
- Strong documentation skills: HLD/LLD creation and architectural artifacts
- Front-end integration: HTML5, CSS3 (basic familiarity)

Good to Have / Preferred Skills:
- Experience collaborating directly with client technical teams
- Familiarity with third-party tools: Telerik, DevExpress
- Exposure to Agile management tools: JIRA, TFS
- Working knowledge of cloud-native architecture, containerization (Docker), Helm charts, YAML
- Knowledge of testing practices, static code analysis tools, and performance monitoring

Soft Skills & Attributes:
- Strong analytical, problem-solving, and communication skills
- Excellent email and professional communication etiquette
- Flexible and quick to learn new tools and frameworks
- Strong ownership mindset and a passion for delivering quality solutions
- Able to work independently or as part of a team in a dynamic environment

Mandatory Technologies: .NET Core, ASP.NET MVC, Web API, LINQ, SQL Server, Project Architecture & Documentation (HLD/LLD), Azure (App Services, AKS), CI/CD Pipeline
Posted 4 days ago
9.0 - 14.0 years
27 - 32 Lacs
Gurugram, Bengaluru
Hybrid
Perform the analysis of the existing solution across data platforms and its interfaces to customers. Perform the analysis of alternative technologies if needed. Conduct impact analysis on the product toolchain to integrate the deployment of each layer.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking a Senior Software Engineer with over 7 years of experience in Node JS and React JS development. You should have a minimum of 5 years of expertise in Node JS, JavaScript, CSS3, HTML5, and React JS. Your responsibilities will include designing and developing front-end and back-end services for different business processes. Experience in Python development, Docker, Kubernetes, Helm Charts, GitHub, AWS services (S3, EC2), and databases like MongoDB/DynamoDB, Redis, and Elasticsearch (Kibana) is preferred. You should also be proficient in unit testing and test frameworks like Nodeunit and Mocha. Your role will involve participating in platform requirements development, contributing to design reviews, and engaging in platform sprint activities. You will be responsible for developing assigned stories, creating unit test cases, participating in peer code reviews, and following Agile Scrum methodology. Excellent communication skills, both in articulating technical challenges and solutions and in collaborating with internal and external resources, are essential. A Bachelor of Engineering degree in a computer-related field is required. As a Senior Software Engineer, you will play a crucial part in the design and development of various business processes. Your technical expertise in Node JS, React JS, and other related technologies will be instrumental in creating efficient front-end and back-end services. Your contribution to platform requirements, design reviews, and sprint activities will be vital in delivering high-quality software solutions. Your proficiency in unit testing, version control, and AWS services will ensure the reliability and scalability of the developed applications. Your communication skills and ability to work effectively in a team will be key to successfully executing your responsibilities.
Posted 6 days ago
10.0 - 15.0 years
11 - 16 Lacs
Pune
Work from Office
BMC is looking for an experienced Backend Engineer with hands-on Golang and DevOps knowledge to join us and design, develop, and implement microservice-based edge applications using the latest technologies. Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Ideate, design, implement, and maintain an enterprise business software platform for edge and cloud, with a focus on backend development, using mainly Golang but also some Python
- Work with a globally distributed development team to perform requirements analysis, write design documents, and design, develop, and test software development projects
- Understand real-world deployment and usage scenarios from customers and product managers, and translate them into product features that drive the value of the product
- Work closely with product managers and architects to understand requirements, present options, and design solutions
- Analyze and clearly communicate, both verbally and in written form, the status of projects or issues, along with risks and options, to the stakeholders

To ensure you're set up for success, you will bring the following skillset & experience:
- You have 10+ years of experience in backend development (Java/Python) of an enterprise software product and at least 1 year of hands-on experience in Golang
- You have experience in the full product development lifecycle (whiteboard to production)
- You have experience with at least 2 types of databases among timeseries, RDBMS, document (e.g. OpenSearch), and key-value (e.g. Redis)
- You have experience with container technologies such as Docker, Kubernetes, Helm charts, Linux, etc.
- You have experience with one of the messaging technologies like MQTT, Kafka, or RabbitMQ
- You have experience in IoT products or monitoring solutions
- You have knowledge of and experience applying design patterns
- You are a team player who can also work independently; a self-unblocker with innovative thinking and a can-do attitude
- You have experience with Agile development methodology and best practices in unit testing
- You have basic knowledge of machine learning

Whilst these are nice to have, our team can help you develop the following skills:
- Knowledge of machine learning, natural language processing, and deep learning concepts & simulations
- Experience in message bus technologies, such as MQTT
- Experience in contributing to and maintaining open source projects
- Experience using public cloud services such as AWS, Google Cloud, or Azure
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Andhra Pradesh
On-site
As a DevOps Engineer at our company located in Visakhapatnam, Andhra Pradesh, you will be responsible for managing various aspects of cloud platforms, infrastructure automation, container orchestration, CI/CD pipelines, containerization, code quality assurance, monitoring, and logging. Your key responsibilities will include designing, implementing, and managing infrastructure as code using Terraform on AWS, deploying and managing Kubernetes clusters, automating application build and deployment processes using Jenkins, containerizing applications with Docker and managing deployments with Helm charts, integrating SonarQube for code quality assurance, implementing monitoring solutions using Datadog, and collaborating with cross-functional teams to streamline processes. To be successful in this role, you should have at least 5 years of experience in DevOps or cloud engineering roles, with a minimum of 3 years of relevant experience in AWS, Terraform, Kubernetes, Docker, Jenkins, Helm charts, SonarQube, and Datadog. You should also possess hands-on experience with GCP services, proficiency in scripting languages like Bash or Python, strong problem-solving abilities, excellent communication skills, and the ability to work collaboratively in a team environment. Your technical proficiency will be crucial in ensuring the efficient operation of containerized applications and the automation of various processes. If you are looking for a challenging role where you can utilize your expertise in cloud technologies, automation tools, and collaboration skills, this position is perfect for you. Join our team and contribute to the continuous improvement of our infrastructure and deployment processes. The closing date for applications is Apr-30-2025.
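As an illustration of the Helm-managed deployments mentioned above, a chart's `values.yaml` override might look like the sketch below. The keys follow common chart conventions, and every value (registry, tag, resource sizes, hostname) is a placeholder rather than anything from this posting:

```yaml
# Hypothetical values.yaml override for a Helm release.
replicaCount: 2
image:
  repository: 123456789012.dkr.ecr.us-east-1.amazonaws.com/app
  tag: "2.4.1"
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
ingress:
  enabled: true
  hosts:
    - host: app.example.com
```

A Jenkins stage would typically apply such a file with `helm upgrade --install app ./chart -f values.yaml` after the Docker image has been built and pushed.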
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
You will focus on Connectivity Network Engineering and develop competency in your area of expertise. You will share your expertise with others, provide guidance and support, and interpret clients' needs. You will be able to complete your tasks independently or with minimum supervision. Identifying problems and generating solutions in straightforward situations will be part of your responsibilities. Collaboration in teamwork and interaction with customers will also be crucial. You should have strong experience in Python development and scripting. A solid background in network automation and orchestration is essential. Awareness of or experience with any Network Services Orchestrator (e.g. Cisco NSO) would be a plus. Proficiency in YANG data modeling and XML for network configuration and management is necessary. Experience with cloud-native development practices, especially microservices/containers, on any public cloud platform is preferred. Familiarity with CI/CD tools and practices is also required.

Key Responsibilities:
- Experience with container orchestration like Kubernetes/Docker.
- Working knowledge of Helm charts.
- Proficiency with Ansible and Terraform.
- Familiarity with Continuous Integration tools like Jenkins and associated DevOps best practices.
- Background in wireline network automation (e.g. IP/MPLS, L3VPN, SD-WAN) and routers (e.g. Cisco/Juniper routers) is desirable.
- Experience with BPMN/workflow managers such as Camunda, N8N, Temporal.

For lead profiles, a background in Web/UI development with frameworks like React JS and an understanding of AI/agentic-AI-based development patterns would be useful (though not mandatory). Knowledge of security best practices and their implementation is also expected.
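The Ansible and wireline-automation items above can be illustrated with a short playbook sketch. The host group, interface details, and addresses are illustrative assumptions; the module is `cisco.ios.ios_config` from the Cisco IOS Ansible collection:

```yaml
# Hypothetical playbook pushing a small config change to IOS routers.
- name: Configure loopback on edge routers
  hosts: edge_routers
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Apply interface configuration
      cisco.ios.ios_config:
        parents: interface Loopback0
        lines:
          - description managed-by-ansible
          - ip address 10.0.0.1 255.255.255.255
```

An orchestrator such as Cisco NSO would cover the same ground with YANG-modeled service templates instead of per-device config lines, which is where the posting's YANG/XML requirement comes in.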
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
The role of a DevOps Engineer for Sensitive Data Detection involves responsibility for a variety of key tasks, including deploying cloud infrastructure/services, managing day-to-day operations, troubleshooting issues related to cloud infrastructure/services, deploying and managing AKS clusters, collaborating with development teams to integrate infrastructure and deployment pipelines into the SDLC, building and setting up new CI/CD tooling and pipelines for application build and release, and implementing migrations, upgrades, and patches in all environments. In this position, you will be part of the Sensitive Data Detection Services team based in the Pune EON-2 Office. The team focuses on analyzing, developing, and delivering global solutions to maintain or change IT systems in collaboration with business counterparts. The team culture emphasizes partnership with businesses, transparency, accountability, empowerment, and a passion for the future. As an experienced DevOps Engineer, you will have a significant role in constructing and maintaining Git and ADO CI/CD pipelines, establishing scalable AKS clusters, deploying applications in various environments, and more. You will work alongside a group of highly skilled engineers who excel in delivering scalable enterprise engineering solutions. To excel in this role, ideally, you should possess 8 to 12 years of experience in the DevOps field. You should have a strong grasp of DevOps principles and best practices, with practical experience implementing CI/CD pipelines, infrastructure as code, and automation solutions. Proficiency in Azure Kubernetes Service (AKS) and Linux is crucial, including monitoring, analyzing, configuring, deploying, enhancing, and managing containerized applications on AKS. Experience managing Helm charts and ADO and GitLab pipelines is essential.
Additionally, familiarity with cloud platforms like Azure, along with experience in cloud services, infrastructure provisioning, and management, is required. You will also be expected to help implement infrastructure as code (IaC) solutions using tools such as Terraform or Ansible to automate the provisioning and configuration of cloud and on-premises environments, maintain stability in non-prod environments, support GitLab pipelines across various MS Azure resources/services, research opportunities for automation, and interact efficiently with business and development teams at all levels. Flexibility in working hours to accommodate international project setups and collaboration with cross-functional teams (Database, UNIX, Cloud, etc.) are also key aspects of this role.
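A minimal sketch of the GitLab pipelines referenced above, building an image and deploying a Helm release to AKS: the resource group, cluster name, and chart path are hypothetical, while `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` are standard GitLab predefined variables:

```yaml
# Illustrative .gitlab-ci.yml fragment for an AKS deployment.
stages: [build, deploy]

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  script:
    # Fetch AKS credentials, then roll out the chart with the new tag.
    - az aks get-credentials --resource-group my-rg --name my-aks
    - helm upgrade --install app ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"
```

In practice the runner would need the Azure CLI, kubectl, and helm available, and authentication to Azure would come from CI variables or a managed identity rather than anything hard-coded here.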
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Punjab
On-site
The role of Principal Solution Architect involves designing end-to-end solution architecture for Telco & Enterprise modernization projects, incorporating solutions in areas such as RAN, Core, Orchestration, OSS, BSS, Cloud, and other telecommunications system components. You will be responsible for engaging and establishing advisory relationships with Customer CxOs to drive transformation and modernization initiatives by delivering compelling solution designs and presentations. Additionally, you will serve as an E2E Solution Architect for key accounts, identifying opportunities for business growth and product development. As a Principal Solution Architect, you will be accountable for supporting the complete implementation cycle, from incubation to successful closure, by collaborating and leading efforts across cross-functional business units. You will work closely with a large partner ecosystem and internal business units to define a suite of third-party solutions. Moreover, you will be involved in building repeatable assets such as Solution / Service kits for cutting-edge solutions to support sales, business development teams, and solutions teams. Furthermore, the role entails leading new and innovative technical & best practices strategies based on extensive market research and telecommunications ecosystem evolution to create new sales avenues and value propositions for customers. You will play a key role in educating prospects and customers on transformation/modernization offerings and collaborating with them to develop successful strategies for their organizations. Additionally, you will drive and contribute to research topics in different standards forums like ORAN, TIP, TMF, Nephio. To qualify for this role, candidates should possess 10+ years of demonstrated implementation level expertise in the design, development, and delivery of Telco modernization projects with a product mindset. 
You should have 8+ years of telecommunication architecture experience across RAN, Core, Cloud, OSS, and customer digital experience to manage and design solutions at scale. Proficiency in integration design, microservices design, and cloud infrastructure architecture is essential. Candidates should also have at least 5 years of experience in cloud-native technologies such as Kubernetes, CI/CD, Helm Charts, Terraform, and other CNF landscape solutions. Moreover, demonstrated in-depth knowledge of telecom industry frameworks such as ETSI/NFV, 3GPP, NGMN, TMF, ONAP, O-RAN/TIP, and others is required. Extensive knowledge of Public, Private, and Hybrid Cloud is also a key requirement for this role.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
As a Kubernetes Administrator/DevOps Senior Consultant, you will be responsible for designing, provisioning, and managing Kubernetes clusters for applications based on microservices and event-driven architectures. Your role will involve ensuring seamless integration of applications with Kubernetes orchestrated environments and configuring and managing Kubernetes resources such as pods, services, deployments, and namespaces. Monitoring and troubleshooting Kubernetes clusters to identify and resolve performance issues, system errors, and other operational challenges will be a key aspect of your responsibilities. You will also be required to implement infrastructure as code (IaC) using tools like Ansible and Terraform for configuration management. Furthermore, you will design and implement cluster and application monitoring using tools like Prometheus, Grafana, OpenTelemetry, and Datadog. Managing and optimizing AWS cloud resources and infrastructure for managed containerized environments (ECR, EKS, Fargate, EC2) will be a part of your daily tasks. Ensuring high availability, scalability, and security of all infrastructure components, monitoring system performance, identifying bottlenecks, and implementing necessary optimizations are also crucial responsibilities. Your role will involve troubleshooting and resolving complex issues related to the DevOps stack, developing and maintaining documentation for DevOps processes and best practices, and staying current with industry trends and emerging technologies to drive continuous improvement. Creating and managing DevOps pipelines, IaC, CI/CD, and cloud platforms will also be part of your duties. **Required Skills:** - 4-5 years of extensive hands-on experience in Kubernetes administration, Docker, Ansible/Terraform, AWS, EKS, and corresponding cloud environments. - Hands-on experience in designing and implementing Service Discovery, Service Mesh, and Load Balancers.
- Extensive experience in defining and creating declarative files in YAML for provisioning. - Experience in troubleshooting containerized environments using a combination of monitoring tools and logs. - Scripting and automation skills (e.g., Bash, Python) for managing Kubernetes configurations and deployments. - Hands-on experience with Helm charts, API gateways, ingress/egress gateways, and service meshes (Istio, etc.). - Hands-on experience in managing Kubernetes networking (Services, Endpoints, DNS, Load Balancers) and storage (PV, PVC, Storage Classes, Provisioners). - Design, enhance, and implement additional services for centralized Observability Platforms, ensuring efficient log management based on the Elastic Stack, and effective monitoring and alerting powered by Prometheus. - Design and implement CI/CD pipelines; hands-on experience in IaC, Git, and monitoring tools like Prometheus, Grafana, Kibana, etc. **Good to Have Skills:** - Relevant certifications (e.g., Certified Kubernetes Administrator CKA / CKAD) are a plus. - Experience with cloud platforms (e.g., AWS, Azure, GCP) and their managed Kubernetes services. - Perform capacity planning for Kubernetes clusters and optimize costs in on-prem and cloud environments. **Preferred Experience:** - 4-5 years of experience in Kubernetes, Docker/Containerization.
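For the troubleshooting skills listed above, a common first step is scripting against `kubectl`'s JSON output rather than eyeballing it. A minimal sketch that flags pods outside a healthy lifecycle phase (the pod names in the sample payload are made up):

```python
import json

def unhealthy_pods(kubectl_json: str) -> list:
    """Given `kubectl get pods -o json` output, return names of pods
    whose phase is not Running or Succeeded (per the pod lifecycle)."""
    doc = json.loads(kubectl_json)
    bad = []
    for item in doc.get("items", []):
        phase = item.get("status", {}).get("phase")
        if phase not in ("Running", "Succeeded"):
            bad.append(item["metadata"]["name"])
    return bad

# Illustrative payload; real kubectl output carries many more fields.
sample = json.dumps({"items": [
    {"metadata": {"name": "api-0"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "worker-1"}, "status": {"phase": "Pending"}},
]})
```

The same approach extends to deployments, events, or Helm release manifests: pipe `-o json` into a script and assert on the fields you care about.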
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
Join our team at Ericsson, a world-leading provider of telecommunications equipment and services to mobile and fixed network operators across more than 180 countries. Over 1,000 networks worldwide rely on Ericsson's innovative solutions, with more than 40% of the global mobile traffic passing through Ericsson networks. Our mission is to empower people, businesses, and society through connectivity, working towards a Networked Society where everything beneficial can be connected. We strive to apply our innovation to create market-based solutions that drive sustainability and positive change in the world. As a global company operating in 175 countries, we foster a diverse, performance-driven culture that encourages innovation and growth. Our employees embody our vision, values, and guiding principles, demonstrating a passion for success and a commitment to meeting customer needs. At Ericsson, we provide a stimulating work environment that offers continuous learning and growth opportunities to help you realize your career aspirations. We are seeking a skilled OpenShift Engineer to join our team and contribute to designing, implementing, and managing enterprise container platforms using Red Hat OpenShift. The ideal candidate will possess expertise in Kubernetes, DevOps practices, and cloud-native technologies to ensure the scalability, security, and high performance of deployments. Key Responsibilities: OpenShift Platform Management: - Deploy, configure, and manage OpenShift clusters both on-premises and in the cloud. - Ensure the health, performance, and security of OpenShift clusters. - Troubleshoot and resolve issues related to OpenShift and Kubernetes. - Integrate OpenShift with DevSecOps tools to enhance security and compliance. Containerization & Orchestration: - Develop and maintain containerized applications using Docker and Kubernetes. - Implement best practices for Pods, Deployments, Services, ConfigMaps, and Secrets. 
- Optimize resource utilization and auto-scaling strategies. Cloud & Hybrid Deployments: - Deploy OpenShift clusters on AWS, Azure, or Google Cloud platforms. - Configure networking, ingress, and load balancing in OpenShift environments. - Manage multi-cluster and hybrid cloud environments. Security & Compliance: - Implement RBAC, network policies, and pod security best practices. - Monitor and secure container images using tools like Red Hat Quay, Clair, or Aqua Security. - Enforce OpenShift policies to ensure compliance with enterprise standards. Monitoring & Logging: - Set up monitoring tools such as Prometheus, Grafana, and OpenShift Monitoring. - Configure centralized logging using ELK (Elasticsearch, Logstash, Kibana) or Loki. - Analyze performance metrics and optimize OpenShift workloads. Required Skills & Qualifications: Technical Expertise: - Hands-on experience with Red Hat OpenShift (OCP 4.x+). - Proficiency in Kubernetes, Docker, and Helm charts. - Experience with Cloud Platforms (AWS, Azure, GCP) and OpenShift deployments. - Strong scripting skills in Bash and Python. - Familiarity with GitOps tools like ArgoCD or FluxCD. Certifications (Preferred but not Mandatory): - Red Hat Certified Specialist in OpenShift Administration (EX280). - Certified Kubernetes Administrator (CKA). - AWS/Azure/GCP certifications related to Kubernetes/OpenShift.
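The monitoring stack described above exposes data through Prometheus's HTTP API, whose instant-query responses have a stable JSON shape (`data.result` is a list of `{metric, value}` pairs). A small sketch of flattening such a response (the metric and label names in the sample are illustrative):

```python
import json

def vector_values(prom_response: str) -> dict:
    """Flatten a Prometheus instant-query (vector) response into
    {label-string: float}. Each `value` is a [timestamp, "string"] pair."""
    doc = json.loads(prom_response)
    out = {}
    for series in doc["data"]["result"]:
        labels = ",".join(f'{k}="{v}"' for k, v in sorted(series["metric"].items()))
        _ts, val = series["value"]
        out[labels] = float(val)
    return out

# Illustrative response body; label values are made up.
sample = json.dumps({
    "status": "success",
    "data": {"resultType": "vector", "result": [
        {"metric": {"job": "api", "instance": "10.0.0.1:9100"},
         "value": [1700000000, "0.25"]},
    ]},
})
```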
Posted 1 week ago
0.0 - 1.0 years
1 Lacs
Bengaluru
Work from Office
Responsibilities: * Design, develop & maintain containerized applications using Docker & Helm charts. * Collaborate with cross-functional teams on CI/CD pipelines & deployment strategies. Cafeteria
Posted 1 week ago
5.0 - 10.0 years
14 - 20 Lacs
Hyderabad
Work from Office
About Position: We are looking for a DevOps Engineer for our Hyderabad location with experience in Azure DevOps, Helm charts, Kubernetes, Azure AKS, and PowerShell. Role: DevOps Engineer Location: Hyderabad Experience: 5 to 10 Years Job Type: Full Time Employment What You'll Do: Design and implement robust CI/CD pipelines using Azure DevOps, focusing on YAML for configuration and automation. Lead infrastructure automation efforts using Terraform and manage Azure resources efficiently. Implement Azure Cloud Adoption Framework (CAF) with Terraform. Architect and manage complex Azure cloud infrastructures, ensuring scalability, reliability, and performance. Collaborate closely with development, QA, and operations teams to align on project goals and provide technical leadership. Automate deployments and infrastructure management with Infrastructure as Code (IaC) using Terraform. Implement and manage container orchestration solutions using Docker and Kubernetes on Azure. Ensure adherence to security best practices, compliance standards, and governance policies in the cloud environment. Mentor and guide junior team members, fostering a culture of continuous improvement and knowledge sharing. Monitor and troubleshoot complex issues across development, test, and production environments. Expertise You'll Bring: 5+ years of experience in DevOps engineering, with a strong focus on Azure cloud services. Strong developer background with proficiency in programming languages such as C#, Java, .NET or Python, plus scripting in PowerShell, Bash, etc. Expert-level proficiency in Azure DevOps, including Azure Pipelines, Azure Repos, and Azure Artifacts, with extensive experience in YAML-based pipelines. Terraform for Infrastructure as Code, including module creation, Azure landing zones, the Cloud Adoption Framework, Azure policies, and Azure best practices. Good knowledge of containerization and orchestration technologies (Docker, Kubernetes).
Strong scripting skills in languages such as PowerShell and Bash for automation. Excellent problem-solving abilities, with a strong understanding of DevOps principles and practices. Exceptional communication and collaboration skills, with experience in leading cross-functional teams. Experience with Agile/Scrum methodologies and practices. Strong understanding of cloud security principles and best practices. Extensive experience with Azure. (Cloud Adoption Framework). Experience with automation, testing tools, and performance monitoring solutions. Proficiency in Azure DevOps and CI/CD pipeline creation. Strong Development experience. Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. 
Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry's best Let's unleash your full potential at Persistent "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 1 week ago
7.0 - 9.0 years
20 - 25 Lacs
Pune
Work from Office
Job Title: DevOps Engineer Experience: 7 to 9 Years Location: Pune Job Overview: We are looking for a highly skilled DevOps Engineer with deep expertise in Kubernetes, Helm Charts, GitOps, GitHub, and cloud platforms like AWS. The ideal candidate will have a strong background in CI/CD automation, infrastructure as code, and container orchestration, and will be responsible for managing and improving our deployment pipelines and cloud infrastructure. Key Responsibilities: Design, implement, and maintain CI/CD pipelines using GitHub Actions or other automation tools. Manage and optimize Kubernetes clusters for high availability and scalability. Use Helm Charts to define, install, and upgrade complex Kubernetes applications. Implement and maintain GitOps workflows (preferably using ArgoCD). Ensure infrastructure stability, scalability, and security across AWS. Collaborate with development, QA, and infrastructure teams to streamline delivery processes. Monitor system performance, troubleshoot issues, and ensure reliable deployments. Automate infrastructure provisioning using tools like Terraform, Pulumi, or ARM templates (optional but preferred). Maintain clear documentation and enforce best practices in DevOps processes. Key Skills & Qualifications: 7 to 9 years of hands-on experience in DevOps. Strong expertise in Kubernetes and managing production-grade clusters. Experience with Helm and writing custom Helm charts. In-depth knowledge of GitOps-based deployments (preferably using ArgoCD). Proficient in using GitHub, including GitHub Actions for CI/CD. Solid experience with AWS. Familiarity with Infrastructure as Code (IaC) tools (preferably Terraform). Strong scripting skills (e.g., Bash, Python, or PowerShell). Understanding of containerization technologies like Docker. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration abilities. Nice to Have: Experience with monitoring tools like Prometheus, Grafana, or ELK stack.
Knowledge of security practices in DevOps and cloud environments. Certification in AWS is a plus.
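A useful mental model for the Helm work described above: when several values files are passed with `-f`, Helm merges them left to right, with nested maps merged recursively and later scalars overriding earlier ones. A rough Python sketch of that merge behavior (the sample keys are illustrative):

```python
def merge_values(base: dict, override: dict) -> dict:
    """Recursively merge Helm-style values: nested maps merge key by key,
    while scalars and lists in `override` replace those in `base`
    (mirroring how later `-f` files override earlier ones)."""
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], val)
        else:
            merged[key] = val
    return merged
```

Knowing this layering makes it much easier to reason about why a chart rendered the values it did across environment-specific overrides.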
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You will be part of a dynamic team at Equifax, where we are seeking creative, high-energy, and driven software engineers with hands-on development skills to contribute to various significant projects. As a software engineer at Equifax, you will have the opportunity to work with cutting-edge technology alongside a talented group of engineers. This role is perfect for you if you are a forward-thinking, committed, and enthusiastic individual who is passionate about technology. Your responsibilities will include designing, developing, and operating high-scale applications across the entire engineering stack. You will be involved in all aspects of software development, from design and testing to deployment, maintenance, and continuous improvement. By utilizing modern software development practices such as serverless computing, microservices architecture, CI/CD, and infrastructure-as-code, you will contribute to the integration of our systems with existing internal systems and tools. Additionally, you will participate in technology roadmap discussions and architecture planning to translate business requirements and vision into actionable solutions. Working within a closely-knit, globally distributed engineering team, you will be responsible for triaging product or system issues and resolving them efficiently to ensure the smooth operation and quality of our services. Managing project priorities, deadlines, and deliverables will be a key part of your role, along with researching, creating, and enhancing software applications to advance Equifax Solutions. To excel in this position, you should have a Bachelor's degree or equivalent experience, along with at least 7 years of software engineering experience. Proficiency in mainstream Java, SpringBoot, TypeScript/JavaScript, as well as hands-on experience with Cloud technologies such as GCP, AWS, or Azure, is essential. 
You should also have a solid background in designing and developing cloud-native solutions and microservices using Java, SpringBoot, GCP SDKs, and GKE/Kubernetes. Experience in deploying and releasing software using Jenkins CI/CD pipelines, infrastructure-as-code concepts, Helm Charts, and Terraform constructs is highly valued. Moreover, being a self-starter who can adapt to changing priorities with minimal supervision could set you apart in this role. Additional advantageous skills include designing big data processing solutions, UI development, backend technologies like JAVA/J2EE and SpringBoot, source code control management systems, build tools, working in Agile environments, relational databases, and automated testing. If you are ready to take on this exciting opportunity and contribute to Equifax's innovative projects, apply now and be part of our team of forward-thinking software engineers.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
About Us: LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a dedication to excellence in delivering the services our customers expect from us. With extensive experience, deep knowledge and worldwide presence across financial markets, we enable businesses and economies around the world to fund innovation, manage risk and create jobs. It's how we've contributed to supporting the financial stability and growth of communities and economies globally for more than 300 years. Analytics group is part of London Stock Exchange Group's Data & Analytics Technology division. Analytics has established a very strong reputation for providing prudent and reliable analytic solutions to financial industries. With a strong presence in the North American financial markets and rapidly growing in other markets, the group is now looking to increase its market share globally by building new capabilities as Analytics as a Service - A one-stop-shop solution for all analytics needs through API and Cloud-first approach. Position Summary: Analytics DevOps group is looking for a highly motivated and skilled DevOps Engineer to join our dynamic team to help build, deploy, and maintain our cloud and on-prem infrastructure and applications. You will play a key role in driving automation, monitoring, and continuous improvement in our development, modernizations, and operational processes. Key Responsibilities & Accountabilities: Infrastructure as Code (IaC): Develop and manage infrastructure using tools like Terraform, Helm Charts, CloudFormation, or Ansible to ensure consistent and scalable environments. CI/CD Pipeline Development: Build, optimize, and maintain continuous integration and continuous deployment (CI/CD) pipelines using Jenkins, GitLab, GitHub, or similar tools. 
Cloud and on-prem infrastructure Management: Work with Cloud providers (Azure, AWS, GCP) and on-prem infrastructure (VMware, Linux servers) to deploy, manage, and monitor infrastructure and services. Automation: Automate repetitive tasks, improve operational efficiency, and reduce human intervention for building and deploying applications and services. Monitoring & Logging: Work with SRE team to set up monitoring and alerting systems using tools like Prometheus, Grafana, Datadog, or others to ensure high availability and performance of applications and infrastructure. Collaboration: Collaborate with architects, operations, and developers to ensure seamless integration between development, testing, and production environments. Security Best Practices: Implement and enforce security protocols/procedures, including access controls, encryption, and vulnerability scanning and remediation. Provide support for issue resolution related to application deployment and/or DevOps-related activities. Essential Skills, Qualifications & Experience: - Bachelor's or Master's degree in computer science, engineering, or a related field with experience (or equivalent 3-5 years of practical experience). - 5+ years of experience in practicing DevOps. - Proven experience as a DevOps Engineer or Software Engineer in an agile, cloud-based environment. - Strong understanding of Linux/Unix system management. - Hands-on experience with cloud platforms (AWS, Azure, GCP), Azure preferred. - Proficient in Infrastructure automation tools such as Terraform, Helm Charts, Ansible, etc. - Strong experience with CI/CD tools - GitLab, Jenkins. - Experience/knowledge of version control systems - Git, GitLab, GitHub. - Experience with containerization (Kubernetes, Docker) and orchestration. - Experience in modern monitoring & logging tools such as Grafana, Prometheus, Datadog. - Working experience in scripting languages such as Bash, Python, or Groovy. - Strong problem-solving and troubleshooting skills. 
- Excellent communication skills and ability to work in team environments. - Experience with serverless architecture and microservices is a plus. - Strong knowledge of networking concepts (DNS, Load Balancers, etc.) and security practices (Firewalls, encryptions). - Working in an Agile/Scrum environment is a plus. - Certifications in DevOps or Cloud Technologies (e.g., Azure DevOps Solutions, AWS Certified DevOps) are a plus. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence, and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision-making and everyday actions. Working with us means that you will be part of a dynamic organization of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy, and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives. 
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, and how it's obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 1 week ago
2.0 - 5.0 years
5 - 9 Lacs
Mumbai
Work from Office
Job Profile Description: To build and run the platform for Digital Applications, our customer is looking for an experienced DevOps Engineer. We need engineers with a solid software engineering background who are familiar with AWS EKS, Istio/service mesh/Tetrate, Terraform, Helm Charts, Kong API Gateway, Azure DevOps, SpringBoot, Ansible, Kafka, and on-call incident handling. Objectives of this Role: Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Working on ways to automate and improve development and release processes. Testing and examining code written by others and analyzing results. Identifying technical problems and developing software updates and fixes. Working with software developers and software engineers to ensure that development follows established processes and works as intended. Monitoring the systems and setting up the required tools. Attending on-call incidents. Daily and Monthly Responsibilities: Deploy updates and fixes. Provide Level 3 technical support. Build tools to reduce occurrences of errors and improve customer experience. Develop software to integrate with internal back-end systems. Perform root cause analysis for production errors. Investigate and resolve technical issues. Develop scripts to automate visualization. Design procedures for system troubleshooting and maintenance. Skills And Qualifications: BSc in Computer Science, IT/Engineering or a relevant field. Minimum 2.5 to 4.5 years of experience as a DevOps Engineer or in a similar software engineering role. Proficient with Git and Git workflows. Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS. Problem-solving attitude. Collaborative team spirit.
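For the on-call incident handling and root cause analysis mentioned above, a quick triage script that tallies recurring ERROR messages in application logs can narrow the search before deeper debugging. A minimal sketch (the log format is an assumption, not something prescribed by the role):

```python
import re
from collections import Counter

def top_errors(log_lines, n=3):
    """Count ERROR-level lines by message text -- a first pass when
    triaging an incident from raw application logs."""
    pattern = re.compile(r"\bERROR\b\s+(.*)")
    counts = Counter()
    for line in log_lines:
        m = pattern.search(line)
        if m:
            counts[m.group(1).strip()] += 1
    return counts.most_common(n)
```

In practice the same tally is usually done in the log platform itself (e.g. a Kibana or Grafana query), but a script like this works when all you have is a file from a pod.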
Posted 1 week ago
4.0 - 8.0 years
7 - 12 Lacs
Noida
Work from Office
Technical Expertise: Solid Python programming (OOP, REST, APIs), SQL and Linux experience, and handling ODBC/JDBC/Arrow Flight connections. Has delivered projects handling large volumes of data, implemented MMP tools, and has strong knowledge of storage solutions (NAS, S3, HDFS) and data formats (Parquet, Avro, Iceberg). Strong knowledge of Kubernetes, containerization, Helm charts, templates and overlays, Vault, SSL/TLS, and KB stores. Prior knowledge of key libraries (e.g., S3, MongoDB, Elastic, Trident NAS, Grafana) will be a big plus. Has used Git and built CI/CD pipelines in recent projects. Additional Criteria: Proactive in asking questions, engages with the team in group conversations, actively participates in issue discussions, and shares inputs/ideas/suggestions. The candidate needs to be responsive and pick up new tasks/issues independently without much mentoring/monitoring. No remotely located candidates. Mandatory Competencies: DevOps/Configuration Mgmt - Docker; Beh - Communication; Programming Language - Python - Django; Big Data - Flask; DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes); Data Science and Machine Learning - Python; Middleware - API (SOAP, REST); Cloud - AWS - AWS S3, S3 Glacier, AWS EBS; Cloud - AWS - Amazon Elastic Container Registry (ECR), AWS Elastic Kubernetes Service (EKS)
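Since the role combines Python with Parquet datasets on NAS/S3/HDFS, one relevant convention is Hive-style partitioning, where the directory path encodes partition columns so query engines can prune data at read time. A small sketch (the dataset name is hypothetical):

```python
from datetime import date

def partition_key(dt: date, dataset: str = "events") -> str:
    """Build a Hive-style partition path (as commonly used for Parquet
    datasets on S3/HDFS), e.g. events/year=2024/month=03/day=07.
    Engines that understand the layout skip non-matching partitions."""
    return f"{dataset}/year={dt.year}/month={dt.month:02d}/day={dt.day:02d}"
```

Writers (pyarrow, Spark, etc.) can emit this layout directly; a helper like this is mainly useful for building object-store keys or validating incoming paths.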
Posted 1 week ago
4.0 - 8.0 years
6 - 11 Lacs
Mumbai
Work from Office
Your Role We are hiring a GCP Kubernetes Engineer with 9-12 years of experience. Ideal candidates should have strong expertise in cloud-native technologies, container orchestration, and infrastructure automation. This is a Pan India opportunity offering flexibility and growth. Join us to build scalable, secure, and innovative cloud solutions across diverse industries. Design, implement, and manage scalable, highly available systems on Google Cloud Platform (GCP). Work with GCP IaaS components: Compute Engine, VPC, VPN, Cloud Interconnect, Load Balancing, Cloud CDN, Cloud Storage, and Backup/DR solutions. Utilize GCP PaaS services: Cloud SQL, App Engine, Cloud Functions, Pub/Sub, Firestore/Cloud Spanner, and Dataflow. Deploy and manage containerized applications using Google Kubernetes Engine (GKE), Helm charts, and Kubernetes tooling. Automate infrastructure provisioning using gcloud CLI, Deployment Manager, or Terraform. Implement CI/CD pipelines using Cloud Build for automated deployments. Monitor infrastructure and applications using Cloud Monitoring, Logging, and related tools. Manage IAM, VPC Service Controls, Cloud Armor, and Security Command Center. Troubleshoot and resolve complex infrastructure and application issues. Your Profile 6+ years of cloud engineering experience with a strong focus on GCP. Proven hands-on expertise in GCP IaaS, PaaS, and GKE. Experience with monitoring, logging, and automation tools in GCP. Strong problem-solving, analytical, and communication skills. What you'll love about working here You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work.
At Capgemini, you can work oncutting-edge projects in tech and engineering with industry leaders or createsolutions to overcome societal and environmental challenges.
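Troubleshooting cloud infrastructure, as the role above requires, often means tolerating transient API failures. A common pattern is retrying with exponential backoff; a minimal sketch (the attempt count and delays are illustrative defaults, and the retriable exception set would be tuned to the client library in use):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.5,
                 retriable=(TimeoutError, ConnectionError)):
    """Call fn(), retrying on transient errors with exponential backoff:
    base_delay, 2*base_delay, 4*base_delay, ... Re-raises after the
    final attempt fails."""
    for i in range(attempts):
        try:
            return fn()
        except retriable:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

Many GCP client libraries ship their own retry policies; a helper like this is mostly for wrapping ad-hoc calls (e.g. shelling out to `gcloud`) that lack one.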
Posted 1 week ago
4.0 - 8.0 years
15 - 30 Lacs
Noida
Work from Office
Role & responsibilities Drive microservices architecture design and evolution, owning the roadmap (service boundaries, integration, tech choices) for scalability, and defining Kubernetes container sizing and resource allocation best practices. Deep expertise in microservices architecture, designing RESTful/event-driven services, defining boundaries, optimizing communication, with experience in refactoring/greenfield and cloud patterns (Saga, Circuit Breaker). Lead platform improvements, overseeing technical enhancements for AI-driven features like our AI Mapping Tool for smarter capabilities. Architect comprehensive observability, deploying metrics, tracing, logging tools (OpenTelemetry, Prometheus, Grafana, Loki, Tempo) for real-time monitoring and high uptime. Define container sizing and lead Kubernetes performance benchmarking, analyzing bottlenecks to guide resource tuning and scaling for platform growth. Provide deployment/infrastructure expertise, guiding Helm for Kubernetes and collaborating on infrastructure needs (Terraform a plus). Lead tooling/automation enhancements, streamlining deployment via Helm improvements, simpler YAML, and pre-deployment validation to reduce errors. Lead evolution to event-driven, distributed workflows, decoupling orchestrators with RabbitMQ and patterns like Saga/pub-sub, integrating Redis for state/caching, improving fault tolerance/scalability. Collaborate across teams and stakeholders for architectural alignment, translating requirements into design and partnering for seamless implementation. Mentor engineers on coding, design, and architecture best practices, leading reviews and fostering engineering excellence. Responsible for documenting architecture decisions (diagrams, ADRs), clearly communicating complex technical concepts for roadmap transparency. 
Preferred candidate profile Required 5+ years in software engineering, significant experience in designing distributed systems, and a proven track record of improving scalability/maintainability. Extensive production experience with Kubernetes and Docker, proficient in deploying, scaling, and managing apps on clusters, including cluster management on major cloud platforms. Proficient in deployment automation/config management, required Helm charts experience, familiar with CI/CD/GitOps, and Terraform/IaC exposure is a plus. Strong experience implementing observability via monitoring/logging frameworks (Prometheus, Grafana, ELK/Loki, tracing), able to instrument applications, and proven in optimizing distributed system performance. Hands-on with message brokers (RabbitMQ/Kafka) and distributed data stores like Redis, skilled in asynchronous system design and solution selection. Excellent technical communication and leadership, proven ability to lead architectural discussions/build consensus, comfortable driving projects and collaborating with Agile, cross-functional teams. Adept at technical documentation/diagrams, with an analytical mindset for evaluating new technologies and foreseeing design impacts on scalability, security, and maintainability.
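The posting names the Circuit Breaker among the cloud patterns in scope. The idea: after a run of consecutive failures, stop calling the flaky dependency and fail fast until a cooldown elapses, then allow a trial call. A bare-bones sketch (the threshold and reset window are illustrative):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `threshold` consecutive
    failures, reject calls until `reset_after` seconds pass, then
    permit one trial call (half-open)."""

    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

Production systems typically use a library or a service mesh (e.g. Istio's outlier detection) for this, but the state machine is the same.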
Posted 1 week ago