1349 Helm Jobs - Page 46

JobPe aggregates listings for easy access; you apply directly on the original job portal.

6 - 10 years

10 - 17 Lacs

Bangalore Rural, Bengaluru

Work from Office

Naukri logo

Job Requirements
- Design, implement, and manage AWS cloud infrastructure and services.
- Develop and maintain CI/CD pipelines using tools like Jenkins.
- Automate deployment and configuration management using tools such as Terraform and Ansible.
- Monitor system performance and troubleshoot issues to ensure high availability and reliability.
- Collaborate with development teams to integrate DevOps practices into the software development lifecycle.
- Implement security best practices for cloud environments.
- Maintain documentation of systems, processes, and procedures.
- Stay updated on industry trends and emerging technologies in DevOps and cloud computing.

Work Experience
- Strong development background in any language with 2-3 years of experience.
- Expertise in cloud technologies such as AWS.
- Expertise in shell scripting.
- Expertise in developing and managing pipelines.
- Expertise in source code management using Git/Bitbucket.
- Experience with container technologies such as Docker.
- Experience in build/release management.
- Good understanding of Agile processes.
- Cloud platforms: extensive experience on AWS (EC2, EKS, ECS, S3, RDS, IAM), Kubernetes, Helm.
- Monitoring tools: knowledge of monitoring and observability tools such as Grafana.
- Automation: comfortable with infrastructure-as-code (e.g., Terraform, Ansible).
- Problem-solving: strong analytical skills to troubleshoot complex issues.
- Deployment know-how: CI/CD, pod management, SonarQube, Git, etc.
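The AWS and automation duties listed above lend themselves to small scripted helpers. Below is a minimal, hedged sketch in Python using boto3, assuming AWS credentials are already configured; the region and the idea of listing running EC2 instances are illustrative and not part of the original posting.

    import boto3

    def list_running_instances(region="ap-south-1"):
        """Print the ID and type of every running EC2 instance in one region."""
        ec2 = boto3.client("ec2", region_name=region)
        paginator = ec2.get_paginator("describe_instances")
        filters = [{"Name": "instance-state-name", "Values": ["running"]}]
        for page in paginator.paginate(Filters=filters):
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    print(instance["InstanceId"], instance["InstanceType"])

    if __name__ == "__main__":
        list_running_instances()

The same pattern (a boto3 client plus a paginator) extends to the S3, EKS, and IAM services the posting names.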

Posted 1 month ago

Apply

3 - 8 years

0 Lacs

Bengaluru, Karnataka

Work from Office

Indeed logo

Job Title: DevOps Engineer (3-8+ Years Experience)
Location: Bengaluru, India
Job Type: Full-time
Experience: 3-8+ years
Industry: Financial Technology / Software Development

About Us: We are a cutting-edge software development company specializing in ultra-low latency trading applications for brokers, proprietary trading firms, and institutional investors. Our solutions are designed for high-performance, real-time trading environments, and we are looking for a DevOps Engineer to enhance our deployment pipelines, infrastructure automation, and system reliability. For more info, please visit: https://tradelab.in

Responsibilities:
1. CI/CD & Infrastructure Automation: Design, implement, and manage CI/CD pipelines for rapid and reliable software releases. Automate deployments using Terraform, Helm, and Kubernetes. Optimize build and release processes to support high-frequency, low-latency trading applications. Good knowledge of Linux/Unix.
2. Cloud & On-Prem Infrastructure Management: Deploy and manage cloud-based (AWS, GCP) and on-premises infrastructure. Ensure high availability and fault tolerance of critical trading systems. Implement infrastructure as code (IaC) to standardize deployments.
3. Performance Optimization & Monitoring: Monitor system performance, network latency, and infrastructure health using tools like Prometheus, Grafana, and ELK. Implement automated alerting and anomaly detection for real-time issue resolution.
4. Security & Compliance: Implement DevSecOps best practices to ensure secure deployments. Maintain compliance with financial industry regulations (SEBI). Conduct vulnerability scanning, access control, and log monitoring.
5. Collaboration & Troubleshooting: Work closely with development, QA, and trading teams to ensure smooth deployments. Troubleshoot server, network, and application issues under tight SLAs.

Required Skills & Qualifications:
✅ 5+ years of experience as a DevOps Engineer in a software development or trading environment.
✅ Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD).
✅ Proficiency in cloud platforms (AWS, GCP) and containerization (Docker, Kubernetes).
✅ Experience with Infrastructure as Code (IaC) using Terraform or CloudFormation.
✅ Deep understanding of Linux system administration and networking (TCP/IP, DNS, firewalls).
✅ Knowledge of monitoring & logging tools (Prometheus, Grafana, ELK).
✅ Experience in scripting and automation using Python, Bash, or Go.
✅ Understanding of security best practices (IAM, firewalls, encryption).

Good to have but not mandatory:
➕ Experience with low-latency trading infrastructure and market data feeds.
➕ Knowledge of high-frequency trading (HFT) environments.
➕ Exposure to FIX protocol, FPGA, and network optimizations.
➕ Experience with Redis and Nginx for real-time data processing.

Perks & Benefits: Competitive salary & performance bonuses. Opportunity to work in the high-frequency trading and fintech industry. Flexible work environment with hybrid work options. Cutting-edge tech stack and infrastructure. Health insurance & wellness programs. Continuous learning & certification support.
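Since the role centres on monitoring latency with Prometheus and Grafana, here is a minimal sketch of querying Prometheus over its standard /api/v1/query HTTP API from Python. The server address, metric name, and job label are assumptions for illustration, not taken from the employer's stack.

    import requests

    PROM_URL = "http://prometheus.internal:9090"  # assumed address, replace with yours

    def p99_latency_seconds(job="order-gateway"):
        """Return the p99 request latency reported by Prometheus for one job."""
        promql = (
            "histogram_quantile(0.99, "
            f'sum(rate(http_request_duration_seconds_bucket{{job="{job}"}}[5m])) by (le))'
        )
        resp = requests.get(f"{PROM_URL}/api/v1/query",
                            params={"query": promql}, timeout=5)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else None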

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana

Work from Office

Indeed logo

Join our Team

Our Exciting Opportunity! We are hiring MSIP Cloud Operations Assurance for Managed Services.

You will:
- Take overall responsibility for the day-to-day operations of managing cloud services, including software deployment and upgrades, system setup, system administration, monitoring, incident resolution, problem management, configuration and change management, service desk, security management and monitoring, capacity planning, availability management, disaster recovery, and routine update of services.
- Work on scope, definition, and design of solution offerings; develop and drive end-to-end technical solutions.
- Create and maintain strategies related to cloud infrastructure and cloud services throughout the mobility network, ensuring all performance and cost curves are closely managed.
- Drive adoption of cloud infrastructure and articulate the risks and opportunities of leveraging it.
- Oversee and direct the timely evaluation, qualification, and implementation of new software products, tools, and related appliances as they pertain to the cloud environment.
- Ensure timely liaison with vendors regarding problems, fixes, and required enhancements.
- Interface with the customer's C-level as and when required for any critical security issues.

To be successful in the role you must have:
- Familiarity with industry standards such as ETSI, 3GPP, OpenStack, CNCF, ONF, MANO, OCP and others.
- Understanding of the hyperscale cloud providers' features and capabilities, specifically relating to the telecom industry and clients.
- Previous knowledge and hands-on experience with cloud and virtualization technologies (OpenStack, OpenShift, RHOSP, VMware, GCP, AWS, etc.).
- Hands-on working experience in Kubernetes deployments, Docker, and Helm charts.
- Scripting/automation experience (Bash, Python, Ansible, other).
- Excellent understanding of telecom networks (LTE, 5G, wireline), OSS (Operations Support Systems) platforms/tools, cloud, and DevOps.
- Experience managing interworking of cloud with IP networking and workloads such as Packet Core.
- Experience delivering or managing large, challenging customer projects and operations.
- Excellent interpersonal skills along with superb written and spoken communication in English.
- Experience handling a customer's senior leadership.
- Task oriented and able to work in multi-functional teams.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Gurgaon
Req ID: 766635
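A large part of the role above is scripted deployment with Helm charts. The sketch below shows one common pattern, driving the Helm CLI from Python; it assumes helm and a valid kubeconfig are available on the PATH, and the release, chart, and namespace names are placeholders rather than Ericsson's.

    import subprocess

    def helm_upgrade_install(release, chart, namespace, values_file=None):
        """Install or upgrade a Helm release idempotently and wait until it is ready."""
        cmd = [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace", "--wait",
        ]
        if values_file:
            cmd += ["--values", values_file]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Example: deploy a public nginx chart into a throwaway namespace.
        # Assumes the bitnami chart repo was added earlier with `helm repo add`.
        helm_upgrade_install("demo-web", "bitnami/nginx", "demo")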

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka

Work from Office

Indeed logo

About this opportunity: This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with Elasticsearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Generative AI & LLM development; 12-15 years of experience as an Enterprise Software Architect with strong hands-on experience.
- Strong hands-on experience in Python and in microservice architecture concepts and development.
- Expertise in crafting technical guides and architecture designs for an AI platform.
- Experience in the Elastic Stack, Cassandra, or any big data tool.
- Experience with advanced distributed systems and tooling, for example Prometheus, Terraform, Kubernetes, Helm, Vault, and CI/CD systems.
- Prior experience building multiple AI/ML models, deploying them into production environments, and creating the data pipelines.
- Experience guiding teams working on AI, ML, big data, and analytics.
- Strong understanding of development practices such as architecture design, coding, test, and verification.
- Experience delivering software products, for example release management and documentation.

What you will bring:
- Python development: write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
- Data pipeline development: design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies.
- Spark knowledge: apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
- ELK integration: utilize Elasticsearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and its related stack would be beneficial.
- Grafana and Kibana: create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance.
- Kubernetes deployment: deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766747
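The ELK integration work described above boils down to indexing documents and querying them back. A minimal sketch using the official Elasticsearch Python client (8.x) follows; the cluster address, index name, and document shape are illustrative assumptions, not Ericsson's data model.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed local cluster

    # Index one document describing a model run, then query it back by full text.
    es.index(index="ml-experiments", id="1",
             document={"model": "churn-xgb", "auc": 0.91, "stage": "staging"})
    es.indices.refresh(index="ml-experiments")

    hits = es.search(index="ml-experiments",
                     query={"match": {"model": "churn-xgb"}})["hits"]["hits"]
    for hit in hits:
        print(hit["_source"])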

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra

Work from Office

Indeed logo

Work location: Mumbai
Interview location: Pune
Interview date: 15th Feb 25

- Knowledge of Kubernetes concepts such as pods, services, deployments, and StatefulSets.
- Must have hands-on experience installing, managing, and upgrading Kubernetes clusters.
- Must have experience with Docker, Podman, containerd, and the CRI-O container runtime.
- Familiarity with Kubernetes networking, including CNI plugins, ingress controllers, and service meshes.
- Must have knowledge of Kubernetes deployment using Helm and manifests.
- Implement security measures and ensure compliance with security policies and procedures such as the CIS benchmark.
- Understanding of Kubernetes security best practices, such as RBAC, network policies, and pod security policies.
- Collaborate with other teams to ensure seamless integration of the environment with other systems.
- Create and maintain documentation related to the environment.

Job Location: Mumbai
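The day-to-day pod and deployment management described above can also be scripted. Here is a minimal sketch with the official Kubernetes Python client, assuming a working kubeconfig; the default namespace is just an example.

    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() inside a pod

    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # List pods and their phase in one namespace.
    for pod in core.list_namespaced_pod(namespace="default").items:
        print("pod:", pod.metadata.name, pod.status.phase)

    # List deployments with their ready/desired replica counts.
    for dep in apps.list_namespaced_deployment(namespace="default").items:
        ready = dep.status.ready_replicas or 0
        print("deployment:", dep.metadata.name, f"{ready}/{dep.spec.replicas} ready")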

Posted 1 month ago

Apply

10 - 15 years

12 - 17 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

We are looking for an experienced DevOps Architect with 10-15 years of expertise to work with us on a full-time basis. The ideal candidate should possess strong proficiency in Google Cloud Platform and Python scripting. You will be responsible for developing Infrastructure as Code modules, creating CI/CD pipelines, and implementing scalable and secure cloud-based solutions. The role involves working closely with product owners and engineering teams to set up monitoring, logging, and alerting frameworks, while ensuring timely delivery of infrastructure improvements. A hands-on approach, strong problem-solving skills, and the ability to troubleshoot independently are essential. Immediate joiners only. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 1 month ago

Apply

5 - 10 years

5 - 9 Lacs

Hyderabad

Work from Office

Naukri logo

Project Role: Integration Engineer
Project Role Description: Provide consultative business and system integration services to help clients implement effective solutions. Understand and translate customer needs into business and technology solutions. Drive discussions and consult on transformation, the customer journey, and functional/application designs, and ensure technology and business solutions represent business requirements.
Must have skills: Infrastructure As Code (IaC)
Good to have skills: Hitachi Data Systems (HDS), Google Cloud Storage, Microsoft Azure Databricks
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Integration Engineer, you will provide consultative business and system integration services to assist clients in implementing effective solutions. Your typical day will involve engaging with clients to understand their needs, facilitating discussions on transformation, and ensuring that the technology and business solutions align with their requirements. You will work collaboratively with various teams to translate customer needs into actionable plans, driving the customer journey and application designs to achieve optimal outcomes.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.
- Develop and maintain documentation related to integration processes and solutions.
- Infrastructure as Code (IaC): knowledge of tools like Terraform, Terraform linkage, Helm, Ansible, and Ansible Tower dependency and package management.
- Broad knowledge of operating systems.
- Network management knowledge and understanding of network protocols, configuration, and troubleshooting; proficiency in configuring and managing network settings within cloud platforms.
- Security: knowledge of cybersecurity principles and practices, implementing security frameworks that ensure secure workloads and data protection.
- Expert proficiency in the Linux CLI.
- Monitoring of the environment from a technical perspective, including monitoring the costs of the development environment.

Professional & Technical Skills:
- Must have: proficiency in Infrastructure As Code (IaC).
- Good to have: experience with Hitachi Data Systems (HDS), Google Cloud Storage, Microsoft Azure Databricks.
- Strong understanding of cloud infrastructure and deployment strategies.
- Experience with automation tools and frameworks for infrastructure management.
- Familiarity with version control systems and CI/CD pipelines.
- Solid understanding of data modelling, data warehousing, and data platform design.
- Working knowledge of databases and SQL.
- Proficient with version control such as Git, GitHub, or GitLab.
- Experience supporting BAT teams and BAT test environments.
- Experience with workflow and batch scheduling; Control-M and Informatica experience is an added advantage.
- Good knowledge of financial markets; knowledge of clearing, trading, and risk business processes is an added advantage.
- Knowledge of Java, Spark, and BI reporting is an added advantage.
- Knowledge of cloud platforms and an affinity for modern technology is an added advantage.
- Experience with CI/CD pipelines and exposure to DevOps methodologies is an added advantage.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Infrastructure As Code (IaC).
- This position is based in Hyderabad.
- A 15 years full time education is required.

Qualification: 15 years full time education

Posted 1 month ago

Apply

0 years

0 Lacs

Greater Kolkata Area

Linkedin logo

Join our Team

About this opportunity: Ericsson is presently seeking a proactive and skilled DevOps Engineer to join our dynamic, inclusive team. This role provides an exciting opportunity to streamline the delivery procedures of our software solutions. The successful candidate will collaborate with the DevOps Architect to enhance delivery processes and solution designs, ensuring an effective balance of time, cost, and quality. Key activities include tool installation, configuration, integration, code development, and unit and system verification tests. Continuous learning is valued here; hence the ideal candidate will be open to developing new skills, sharing innovative ideas, and fostering a culture of automation.

What you will do:
- Aid in presales activities and pre-studies.
- Analyze customer needs and existing network operations, recommending appropriate DevOps solutions and anticipating impacts on current workflows.
- Propose solutions, development needs, integration, testing, and acceptance strategies based on analyzed requirements.
- Design and develop software programs, data, and scripts, tracking and rectifying any defects.
- Integrate software components in the DevOps operative environment, ensuring alignment with target requirements, and supporting system tests.
- Drive continual improvement by analyzing and suggesting enhancements to development, performance, and quality procedures.
- Provide coaching and guidance to junior DevOps Engineers and steer the DevOps team in the right direction.

The skills you bring: Automation using Python. Ansible. Red Hat OpenStack. Cloud technologies (Kubernetes, Docker, AWS, containers, microservices, Spring Boot). Shell scripting. Spinnaker. Linux. CDD. CI/CD. Terraform. Tools for CI/CD (Git, Gerrit, Jenkins, Sonar, Helm). Private cloud (AWS, GCP, Azure). Bitbucket. Red Hat OpenShift. 8+ years of experience with DevOps is a must.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. We encourage you to consider applying to jobs where you might not meet all the criteria. We recognize that we all have transferable skills, and we can support you with the skills that you need to develop. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity and Affirmative Action employer.

Primary country and city: India (IN) || [[location_obj]]
Job details: DevOps Engineer

Posted 1 month ago

Apply

5 - 9 years

0 Lacs

Bengaluru

Work from Office

Naukri logo

Overview: TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client's service offerings below are used to create the Internet solutions that make networks possible, providing easy access to information anywhere, at any time.

Job Title: DevOps Engineer
Location: Bangalore
Duration: 5 Months
Work Type: Onsite

Job Description: 5+ years of experience are required.

Requirements (must-have qualifications):
- Solid cloud infrastructure background and operational, troubleshooting, and problem-solving experience.
- Strong software development experience in Python.
- Experience in building and maintaining code distribution through automated pipelines.
- Experience in deploying and managing (IaaS) infrastructure in private/public cloud using OpenStack.
- Experience with Ansible or Puppet for configuration management.
- IaC experience: Terraform, Ansible, Git, GitLab, Jenkins, Helm, ArgoCD, Conjur/Vault.

TekWissen Group is an equal opportunity employer supporting workforce diversity.
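For the OpenStack-based IaaS work above, one common starting point is the openstacksdk library. This is a minimal sketch assuming a clouds.yaml entry named "private-cloud" (a hypothetical name); it only lists servers and their status.

    import openstack

    # Connect using a named cloud from clouds.yaml or OS_* environment variables.
    conn = openstack.connect(cloud="private-cloud")

    for server in conn.compute.servers():
        print(server.name, server.status)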

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Gurugram, Haryana

Remote

Indeed logo

EAZY Business Solutions (www.eazyerp.com) was incorporated in 2007, in association with the Singhal Group, one of NCR's most reputed companies in Financial Services and Real Estate Solutions. The Singhal Group has been at the forefront of creating brands such as "Krish" in Real Estate and offers a wide range of financial consulting services in the Personal, Industrial, Business and Institutional segments. Today, with a team of dynamic and experienced professionals at its helm, EAZY Business Solutions has become one of the fastest growing ERP Product, Project Development and IT consulting companies. It has a veritable pan-India presence, allowing great reach and accessibility to companies across the nation.

Experience: 2-6 years
Qualification: B.Tech/MCA/BCA

Mandatory Skills:
- Technology: C# .NET, .NET Core API, HTML, CSS, MVC, Entity Framework, JavaScript.
- Good to have in database: strong experience with Microsoft LINQ or ADO.NET; must be highly proficient in SQL Server.

Responsibilities:
- Participate in requirements analysis.
- Collaborate with internal teams to produce software design and architecture.
- Test and deploy applications and systems.
- Develop documentation throughout the software development life cycle (SDLC).
- Excellent troubleshooting and communication skills.
- Serve as an expert on applications and provide technical support.

Job Types: Full-time, Permanent
Pay: ₹200,000.00 - ₹500,000.00 per year
Benefits: Health insurance, Provident Fund, Work from home
Location Type: In-person
Schedule: Day shift, Fixed shift, Monday to Friday
Application Question(s): What is your current CTC?
Experience: .NET Core: 2 years (Required)
Location: Gurugram, Haryana (Required)
Work Location: In person
Speak with the employer: +91 9370739286

Posted 1 month ago

Apply

0 - 2 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Initial Azure Setup
- Subscription and resource management: set up Azure subscriptions and resource groups; define naming conventions and organize resources systematically.
- Networking: design and configure virtual networks (VNets) for Azure resource connectivity; set up VPN gateways or ExpressRoute for hybrid networking with on-premises systems; configure DNS, subnetting, firewall rules, and Network Security Groups (NSGs) for secure communication.
- Operating system administration: deploy and manage virtual machines (VMs) running Windows and Linux; implement OS patch management and monitoring for Azure-hosted VMs; optimize VM performance and ensure proper resource utilization.

High Availability (HA) and Disaster Recovery (DR)
- High availability: configure Azure Availability Zones and sets to ensure service resiliency; implement load balancers for traffic distribution and fault tolerance; deploy redundant systems to minimize downtime and maintain service continuity.
- Disaster recovery: design and implement Azure Site Recovery (ASR) for automated failover and recovery of critical workloads; configure backup solutions using Azure Backup to secure data and applications; test DR plans periodically to validate recovery strategies and minimize risks.

Identity and Access Management
- Active Directory administration: manage and maintain on-premises Active Directory environments; implement Azure AD Connect to synchronize on-premises AD with Azure AD; configure hybrid identity solutions for seamless single sign-on (SSO) and authentication.
- OAuth and single sign-on: implement OAuth-based authentication for applications; configure SSO with Azure Active Directory to simplify user access across platforms; set up identity federation with third-party identity providers.
- Azure AD B2B and B2C configuration: set up Azure AD B2B for external partner collaboration; configure Azure AD B2C for customer identity and access management; customize user flows, policies, and multi-factor authentication for B2B and B2C scenarios.

Containerization and Orchestration
- Docker: create, manage, and optimize Docker containers for application deployment; utilize Docker Compose for orchestrating multi-container applications.
- Kubernetes: deploy and manage Azure Kubernetes Service (AKS) clusters for container workloads; configure resource scaling, fault tolerance, and monitoring within Kubernetes environments; implement Helm charts and manage applications on Kubernetes clusters.

Security and Compliance
- Security and access control: set up Azure Active Directory (AAD) for secure authentication and access control; configure role-based access control (RBAC) and security groups; ensure compliance with security standards using tools like Azure Security Center and Microsoft Defender.

Automation and Monitoring
- Automation: use Azure Automation for routine tasks and workflow management; write scripts using PowerShell or the Azure CLI for deployment and configuration.
- Monitoring and alerts: set up Azure Monitor for resource performance and health tracking; configure alerts for anomalies or performance issues.

Data Management
- Storage configuration: deploy and manage Azure Blob storage, file shares, and managed disks; implement redundancy and backup strategies.
- Database management: set up and manage Azure SQL databases and other database services; migrate data securely from on-premises to Azure.

Scaling and Optimization
- Documentation: maintain detailed records of configurations, policies, and deployments.
- Team training: train team members on Azure basics, tools, and processes.
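The subscription and resource-group setup step above can be scripted as well as clicked through. A minimal sketch with the Azure SDK for Python (azure-identity and azure-mgmt-resource) follows; the subscription ID environment variable, resource-group name, and region are assumptions for illustration.

    import os
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    credential = DefaultAzureCredential()
    subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed to be set

    client = ResourceManagementClient(credential, subscription_id)

    # Create (or update) a resource group that follows a simple naming convention.
    rg = client.resource_groups.create_or_update(
        "rg-prod-centralindia-web", {"location": "centralindia"}
    )
    print("provisioned:", rg.name, rg.location)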

Posted 1 month ago

Apply

5 years

0 Lacs

Trivandrum, Kerala, India

Linkedin logo

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do
- Design, develop, and operate high scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality.
- Manage sole project priorities, deadlines, and deliverables.
- Research, create, and develop software applications to extend and improve on Equifax Solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience.
- 5+ years of software engineering experience.
- 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS.
- 5+ years experience with cloud technology: GCP, AWS, or Azure.
- 5+ years experience designing and developing cloud-native solutions.
- 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes.
- 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs.

What could set you apart
- Self-starter that identifies/responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others.
- UI development (e.g. HTML, JavaScript, Angular and Bootstrap).
- Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and microservices.
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools.
- Agile environments (e.g. Scrum, XP).
- Relational databases.
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub).
- Developing with modern JDK (v1.7+).

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Kochi, Kerala, India

Hybrid

Linkedin logo

Job title: Lead Engineer - Java
Job family: Product & Technology
Location: Kochi (Hybrid)

Role Accountabilities and Responsibilities
In this role your key responsibilities will be:
- Take overall responsibility for the quality of the product area covered by your team.
- Provide leadership and day-to-day line management of a team of Java and test engineers, including the coaching and mentoring of individual team members.
- Promote best practice to continually raise standards and challenge team members to look for their own solution.
- Collaborate with other engineering teams and product owners to ensure the successful and robust execution of seamless integration of data between multiple systems.
- Role model the use of Azure DevOps (ADO), ensuring accurate completion to promote continuous improvement.
- Ensure the team follows the Scrum agile methodology, adhering to all ceremonies: planning, retrospectives and demos.
- Suggest and develop improvements to our current product offering.
- Respond appropriately and competently to the demands of work challenges when confronted with changes, ambiguity, adversity, and other pressures.
- Actively undertake research and development in improving areas of the ZIP application suite.
- Provide 2nd line support to our application and technical support teams.

Essential Knowledge / Skills / Behaviour
- At least three years of experience leading an engineering team, using Agile methodologies.
- Strong experience developing with the following technologies: Java (version 11 and later), Spring, Spring Boot and Spring Security, JPA, Kafka Connect, Azure Cloud, REST APIs, the OData protocol, OAuth authentication, JUnit, Mockito, Karate, Cucumber, Docker, Kubernetes, Postman.
- Good experience using Structured Query Language (SQL).
- Some experience using Helm and Terraform.
- Good analytical and proactive problem-solving skills, recommending solutions.
- Excellent verbal and written communication skills.
- Ability to work well and calmly under pressure with both internal and external stakeholders.
- Background in HR and payroll data validation (preferred, but not mandatory).

Posted 1 month ago

Apply

5 - 8 years

9 - 19 Lacs

Hyderabad, Ahmedabad

Work from Office

Naukri logo

JD: DevOps Engineer

Job Description - Roles and Responsibilities
- Responsible for managing capacity across public and private cloud resource pools, including automating scale-up/scale-down of environments.
- Improve cloud product reliability, availability, maintainability, and cost/benefit, including developing fault-tolerant tools to ensure the general robustness of the cloud infrastructure.
- Design and implement CI/CD pipeline elements to provide automated compilation, assembly, and testing of containerized and non-containerized components.
- Design and implement infrastructure solutions on GCP that are scalable, secure, and highly available.
- Automate infrastructure deployment and management using Terraform, Ansible, or equivalent tools.
- Create and maintain CI/CD pipelines for our applications.
- Monitor and troubleshoot system and application issues to ensure high availability and reliability.
- Work closely with development teams to identify and address infrastructure issues.
- Collaborate with security teams to ensure infrastructure is compliant with company policies and industry standards.
- Participate in on-call rotations to provide 24/7 support for production systems.
- Continuously evaluate and recommend new technologies and tools to improve infrastructure efficiency and performance.
- Mentor and guide junior DevOps engineers.
- Other duties as assigned.

Requirements (minimum special certifications or technical skills)
- Proficient in two or more software languages (e.g. Python, Java, Go) for designing, coding, testing, and software delivery.
- Strong knowledge of CI/CD, Jenkins, and GitHub Actions.
- This is more of an application DevOps role than an infrastructure DevOps role.
- Strong knowledge of Maven and SonarQube.
- Strong knowledge of scripting and some knowledge of Java.
- Strong knowledge of ArgoCD and Helm.
- Hands-on experience with Google Cloud Platform (GCP) and its services such as Compute Engine, Cloud Storage, Kubernetes Engine, Cloud SQL, Cloud Functions, etc.
- Strong understanding of infrastructure-as-code principles and tools such as Terraform, Ansible, or equivalent.
- Experience with CI/CD tools such as Jenkins, GitLab CI, or equivalent.
- Strong understanding of networking concepts such as DNS, TCP/IP, and load balancing.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka

Work from Office

Indeed logo

This is an internal document.

Job Title: DevOps 1

Roles and Responsibilities
- Work closely with the tech lead to identify and establish DevOps practices in the team.
- Engineer and own the CI/CD infrastructure and manage the CD tooling.
- Automate monitoring and alerting (dynamic monitoring) to ensure continuously highly available production systems.
- Build and update continuous deployment automation built on Docker and Kubernetes services.
- Write tools and services to manage build and test environments.
- Help developers with multiple SCM systems and custom build tools.
- Train and guide the team on DevOps practices.

Skills
- Strong knowledge of Linux systems.
- Understanding and practical experience with CI/CD process implementation.
- Experience in building and supporting highly available solutions (HAProxy, Nginx) on AWS.
- Knowledge of scripting and programming languages (Bash, Python, etc.) preferred.
- Experience in configuration management of medium and large environments with Ansible or equivalent.
- Experienced in Git, Bitbucket, GitLab, etc.
- Jenkins administration or job development for Jenkins.
- DB experience: Oracle, MongoDB.
- Good troubleshooting and performance tuning skills.
- Strong experience in Kubernetes, Docker, and Docker Compose.
- Good understanding of Helm.
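As one illustration of the "automate monitoring and alerting" duty above, here is a minimal, hedged sketch of a health-check script in Python; the health endpoint and Slack webhook URL are hypothetical placeholders, not the team's actual tooling.

    import requests

    HEALTH_URL = "http://haproxy.internal:8404/healthz"      # hypothetical endpoint
    SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"   # hypothetical webhook

    def check_and_alert():
        """Ping a service health endpoint and post an alert if it is unhealthy."""
        try:
            healthy = requests.get(HEALTH_URL, timeout=3).status_code == 200
        except requests.RequestException:
            healthy = False
        if not healthy:
            requests.post(SLACK_WEBHOOK,
                          json={"text": f"ALERT: {HEALTH_URL} is down"}, timeout=3)

    if __name__ == "__main__":
        check_and_alert()

In practice a check like this would run from cron or a Jenkins job and feed a proper alerting system; the point here is only the shape of the check.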

Posted 1 month ago

Apply

12 years

0 Lacs

Chennai, Tamil Nadu

Work from Office

Indeed logo

Skills: Java, Spring Boot, PL/SQL, Elasticsearch, Neo4j, ECS, OpenShift, Docker, Kubernetes, Helm, Spring Core, Reactive Programming, and RESTful APIs

We are seeking a Senior Search Domain Expert and Java Developer with 12+ years of experience and a strong background in Spring Boot, ECS, OpenShift, Docker, and Kubernetes (Helm). The successful candidate will be responsible for designing, developing, and implementing high-quality software solutions.

Responsibilities:
- Design, develop, and maintain efficient, reusable, and reliable Java code.
- Use Spring Boot to develop microservices and manage cross-cutting concerns.
- Use Docker for containerization and Kubernetes for orchestration of services.
- Identify bottlenecks and bugs, and devise solutions to these problems.
- Help maintain code quality, organization, and automatization.
- Collaborate with other team members and stakeholders.

Requirements:
- 12+ years of software development experience with Java.
- Strong experience with Spring Boot.
- Proficient in PL/SQL.
- Experience with Elasticsearch, including configuration, content ingestion and query building.
- Experience with Neo4j and familiarity with knowledge graph database concepts.
- Proven implementation of search indexing for efficient and accurate data retrieval.
- Experience with ECS, OpenShift, Docker and Kubernetes.
- Experience with Helm for managing Kubernetes deployments.
- Solid understanding of object-oriented programming.
- Familiarity with concepts of Spring Core, Reactive Programming, and RESTful API development.
- Understanding of code versioning tools, such as Git.
- Familiarity with build tools such as Maven or Gradle.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills and the ability to work as part of a team.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally who care about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana

Work from Office

Indeed logo

Job Requirements: Python, MySQL, Kubernetes, and Docker developers in Hyderabad for the PAC engagement team.

Work Experience

Position Overview: We are seeking a highly skilled and motivated Full-Stack Developer with expertise in Python (FastAPI), MySQL, Kubernetes, and Docker. The ideal candidate will play a key role in designing, developing, deploying, and maintaining robust applications that meet the needs of our fast-paced environment.

Responsibilities:
- Backend development: build and maintain RESTful APIs using FastAPI; optimize application performance, security, and scalability; ensure adherence to coding standards and best practices.
- Database management: design, develop, and maintain complex relational databases using MySQL; optimize SQL queries and database schemas for performance and scalability; perform database migrations and manage backup/restore processes.
- Containerization and orchestration: build and maintain Docker containers for application deployment; set up and manage container orchestration systems using Kubernetes; develop and maintain CI/CD pipelines for automated deployment.
- Application deployment and maintenance: deploy and monitor applications in cloud-based or on-premise Kubernetes clusters; troubleshoot and resolve application and deployment issues; monitor system performance and ensure system reliability and scalability.
- Collaboration and documentation: collaborate with cross-functional teams to gather requirements and deliver solutions; document system architecture, APIs, and processes for future reference; participate in code reviews to ensure code quality and shared knowledge.

Required Skills and Qualifications:
- Proficient in Python with experience in building APIs using FastAPI.
- Solid understanding of relational databases, particularly MySQL, including database design and optimization.
- Hands-on experience with Docker for containerization and Kubernetes for orchestration.
- Familiarity with building CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI/CD.
- Strong understanding of microservices architecture and REST API principles.
- Knowledge of security best practices in API development and deployment.
- Familiarity with version control systems, particularly Git.

Preferred Skills:
- Knowledge of Helm for Kubernetes application deployment.
- Familiarity with monitoring tools like Prometheus, Grafana, or the ELK stack.
- Basic knowledge of DevOps practices and principles.
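Since the posting is explicit about building RESTful APIs with FastAPI on top of MySQL, here is a minimal sketch of one endpoint. It assumes fastapi, uvicorn, SQLAlchemy, and the PyMySQL driver; the connection string and the items table are illustrative only.

    from fastapi import FastAPI, HTTPException
    from sqlalchemy import create_engine, text

    app = FastAPI()
    # Assumed connection string; point it at your own MySQL instance.
    engine = create_engine("mysql+pymysql://app:secret@db:3306/inventory")

    @app.get("/items/{item_id}")
    def read_item(item_id: int):
        """Fetch one row from an illustrative items table."""
        with engine.connect() as conn:
            row = conn.execute(
                text("SELECT id, name FROM items WHERE id = :id"), {"id": item_id}
            ).fetchone()
        if row is None:
            raise HTTPException(status_code=404, detail="item not found")
        return {"id": row.id, "name": row.name}

    # Run locally with: uvicorn main:app --reload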

Posted 1 month ago

Apply

0 - 10 years

0 Lacs

Noida, Uttar Pradesh

Work from Office

Indeed logo

Our Company: Changing the world through digital experiences is what Adobe's all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity: At Adobe, we offer an outstanding opportunity to work on new and emerging technology that crafts the digital experiences of millions. As the Infrastructure Engineering team of Developer Platforms in Adobe, we provide industry-leading application hosting capabilities. Our solutions support high-traffic, highly visible applications with immense amounts of data, numerous third-party integrations, and exciting scalability and performance problems. As a platform engineer on the Ethos team, you will work closely with our senior engineers to develop, deploy, and maintain our Kubernetes-based infrastructure. This role offers an excellent opportunity to grow your skills in cloud-native technologies and DevOps practices. We're on a mission to hire the very best and are committed to building exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new insights can come from everywhere in the organization, and we know the next big idea could be yours!

What you'll do:
- Contribute to the design, development, and maintenance of the Kubernetes platform for container orchestration.
- Partner with other development teams across Adobe to ensure applications are crafted to be cloud-native and scalable.
- Perform day-to-day operational tasks such as upgrades and patching of the Kubernetes platform.
- Develop and implement CI/CD pipelines for application deployment on Kubernetes.
- Handle tasks and projects with Agile methodologies such as Scrum.
- Supervise the health of the platform and applications using tools like Prometheus and Grafana.
- Solve issues within the platform and collaborate with development teams to resolve application issues.
- Opportunities to contribute to upstream CNCF projects: Cluster API, ACK, and ArgoCD, among several others.
- Stay updated with the latest industry trends and technologies in container orchestration and cloud-native development.
- Participate in the on-call rotation to resolve incidents and get to the root cause as part of incident and problem management.

What you need to succeed:
- B.Tech degree in Computer Science or equivalent practical experience.
- Minimum of 5-10 years of experience working with Kubernetes.
- Certified Kubernetes Administrator and/or Developer/Security certifications encouraged.
- Strong software development skills in Python, Node.js, Go, Bash or similar languages.
- Experienced with AWS, Azure, or other cloud platforms (AWS/Azure certifications encouraged).
- Understanding of cloud network architectures (VNET/VPC/NAT Gateway/Envoy etc.).
- A solid understanding of time-series monitoring tools (such as Prometheus, Grafana, etc.).
- Familiarity with the 12-factor principles and the software development lifecycle.
- Knowledge of GitOps, ArgoCD, and Helm, with equivalent experience, will be an advantage.

Adobe is proud to be an Equal Employment Opportunity employer.
We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

3 - 6 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Sia is a next-generation, global management consulting group. Founded in 1999, we were born digital. Today our strategy and management capabilities are augmented by data science, enhanced by creativity and driven by responsibility. We're optimists for change and we help clients initiate, navigate and benefit from transformation. We believe optimism is a force multiplier, helping clients to mitigate downside and maximize opportunity. With expertise across a broad range of sectors and services, our consultants serve clients worldwide. Our expertise delivers results. Our optimism transforms outcomes. Heka.ai is the independent brand of Sia Partners dedicated to AI solutions. We host many AI-powered SaaS solutions that can be combined with consulting services or used independently, to provide our customers with solutions at scale.

Job Description: We are looking for a skilled Senior Software Engineer to contribute to the development of AI and machine learning (ML) integrations and back-end solutions using Python. You will play a key role in developing our AI-powered SaaS solutions at Heka.ai, collaborating with cross-functional teams to solve data-centric problems. This position emphasizes Python back-end development, with additional involvement in AI and ML model integration and optimization.

Key Responsibilities:
- Back-end development: design, develop, and optimize back-end services using Python, focusing on microservices and data-centric applications.
- AI & ML models: work closely with data scientists to integrate AI and ML models into back-end systems and ensure seamless performance of the applications.
- Containerization & orchestration: deploy and manage containerized applications using Docker and Kubernetes.
- Database management: manage SQL (PostgreSQL) and NoSQL (MongoDB) databases, ensuring high performance and scalability.
- Infrastructure as Code (IaC): use Terraform and Helm to manage cloud infrastructure.
- Cloud infrastructure & CI: work with GCP / AWS / Azure for deploying and managing applications in the cloud; manage continuous software integration (test writing, artifact building, etc.).
- Cross-functional collaboration: collaborate with DevOps, data scientists, and data engineers to build scalable AI solutions; contribute to the back-end, front-end and software architecture of applications.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Experience: 3-6 years of experience in software development, with a focus on Python back-end development.
- Skills: strong proficiency in Python and experience with frameworks like Flask. Experience with C#, as well as with ReactJS for front-end development, is a plus. Extensive experience with cloud platforms (GCP, AWS) and microservices architecture. Working knowledge of Docker, Kubernetes, CI/CD pipelines (GitLab) and the ability to write unit tests. Database management with PostgreSQL / MongoDB. Experience mentoring and leading engineering teams.

Additional Information - What We Offer:
- Opportunity to lead cutting-edge AI projects in a global consulting environment.
- Leadership development programs and training sessions at our global centers.
- A dynamic and collaborative team environment with diverse projects.
- Position based in Mumbai (onsite).

Sia is an equal opportunity employer. All aspects of employment, including hiring, promotion, remuneration, or discipline, are based solely on performance, competence, conduct, or business needs.
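The Flask-based back-end work mentioned in the skills list typically looks something like the sketch below, which assumes Flask and psycopg2 for PostgreSQL; the connection details and the models table are illustrative assumptions, not Heka.ai internals.

    from flask import Flask, jsonify
    import psycopg2

    app = Flask(__name__)

    def get_conn():
        # Assumed connection details; replace with real configuration/secrets handling.
        return psycopg2.connect(host="db", dbname="heka", user="app", password="secret")

    @app.route("/models/<int:model_id>")
    def get_model(model_id):
        """Return one row from an illustrative models table as JSON."""
        with get_conn() as conn, conn.cursor() as cur:
            cur.execute("SELECT id, name FROM models WHERE id = %s", (model_id,))
            row = cur.fetchone()
        if row is None:
            return jsonify({"error": "not found"}), 404
        return jsonify({"id": row[0], "name": row[1]})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)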

Posted 1 month ago

Apply

5 years

0 Lacs

Pune, Maharashtra, India

Linkedin logo

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do
- Design, develop, and operate high scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality.
- Manage sole project priorities, deadlines, and deliverables.
- Research, create, and develop software applications to extend and improve on Equifax Solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint Retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience.
- 5+ years of software engineering experience.
- 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS.
- 5+ years experience with cloud technology: GCP, AWS, or Azure.
- 5+ years experience designing and developing cloud-native solutions.
- 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes.
- 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs.

What could set you apart
- Self-starter that identifies/responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others.
- UI development (e.g. HTML, JavaScript, Angular and Bootstrap).
- Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and microservices.
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle.
- Agile environments (e.g. Scrum, XP).
- Relational databases (e.g. SQL Server, MySQL).
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub).
- Developing with modern JDK (v1.7+).
- Automated testing: JUnit, Selenium, LoadRunner, SoapUI.

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax? At Equifax, we believe knowledge drives progress.
As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Project and Development Services

What this job involves:

Pillar of the team
Working closely with either the project manager or the senior project manager (or both), you'll play a pivotal role in driving project success. You'll take ownership of small projects and provide a boost to the major ones. Being one of the leaders at the helm, you'll explore ways to bring out the best in your team by walking the talk when it comes to ensuring optimal outcomes for all stakeholders. Likewise, you're in charge of the organisational structure for each project, making sure that all reporting, communication and working procedures are streamlined, and that every project has clear objectives in place.

Building strong client relationships
We live and breathe client satisfaction. Thus, we need someone who shares the same passion and dedication. You'll maintain a strong and positive relationship with our clients by identifying their needs, requirements and constraints.

Upholding excellence in project delivery
For over 200 years, JLL has become synonymous with project success, and you will help continue this history of excellence. You'll manage the professional consultants necessary for the design and documentation of the project, as well as carry out contract administration of all vendors professionally and in accordance with legal requirements to protect the commercial interests of the client and JLL. You will also help identify project risks and implement measures to mitigate them. Furthermore, you will create project-related reports, analyses and reviews.

Sound like you? To apply you need to be:

An expert in the field
Do you have a degree in any property-related discipline? How about at least five years of experience in planning, documentation, design, construction or project management? If yes, we're keen to discuss this with you.

An empowering colleague
In this role, you'll work with people of different ranks and responsibilities, which is why the ideal candidate is expected to promote open, constructive and collaborative relations with superiors, subordinates, peers and clients. Likewise, you'll strive to gain the respect of JLL staff, clients and the broader business community.

What we can do for you:
At JLL, we make sure that you become the best version of yourself by helping you realise your full potential in an entrepreneurial and inclusive work environment. We will empower your ambitions through our dedicated Total Rewards Program, competitive pay and benefits package. Apply today!

Posted 1 month ago

Apply

8 - 13 years

18 - 30 Lacs

Coimbatore

Remote

Naukri logo

We are seeking a highly skilled and experienced Senior DevOps Engineer to join our growing team. The ideal candidate will have a strong background in cloud infrastructure, CI/CD pipelines, automation, and containerization, and a passion for delivering scalable and reliable DevOps solutions in a dynamic environment.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for code deployment and automation.
- Manage infrastructure using tools like Terraform, Ansible, or CloudFormation.
- Deploy and manage applications in cloud environments (AWS, Azure, GCP).
- Monitor systems and resolve issues to ensure high availability and performance.
- Work closely with development, QA, and operations teams to ensure smooth software delivery.
- Implement security best practices and compliance standards in DevOps processes.
- Automate manual processes and identify areas for continuous improvement.
- Containerize applications using Docker and orchestrate using Kubernetes.
- Maintain and improve logging, monitoring, and alerting systems (e.g., Prometheus, Grafana, ELK).

Required Skills & Experience:
- 8+ years of experience in DevOps, SRE, or related roles.
- Strong knowledge of cloud platforms: AWS, Azure, or GCP.
- Proficiency in scripting languages like Bash, Python, or Go.
- Hands-on experience with Docker, Kubernetes, and Helm.
- Experience with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of CI/CD tools: Jenkins, GitLab CI, CircleCI, etc.
- Familiarity with infrastructure-as-code tools (Terraform/CloudFormation).
- Good knowledge of Git, GitOps, and version control practices.
- Strong problem-solving skills and attention to detail.

Preferred Qualifications:
- Relevant certifications (AWS Certified DevOps Engineer, Certified Kubernetes Administrator, etc.).
- Experience in Agile environments.
- Exposure to microservices architecture.
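For the containerization duties above, the Docker Python SDK is one way to script routine container management. This is a minimal sketch assuming a local Docker daemon; the image, container name, and port mapping are placeholders.

    import docker

    client = docker.from_env()

    # Start a throwaway nginx container mapped to a local port, inspect it, clean up.
    container = client.containers.run("nginx:alpine", name="smoke-test",
                                      detach=True, ports={"80/tcp": 8080})
    print("started:", container.name, container.status)

    for c in client.containers.list():
        print("running:", c.name, c.image.tags)

    container.stop()
    container.remove()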

Posted 1 month ago

Apply

12 - 17 years

10 - 14 Lacs

Bengaluru

Work from Office

Naukri logo

Project Role : Cloud Platform Engineer
Project Role Description : Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, and creates a proof of architecture to test architecture viability, security and performance.
Must have skills : Kubernetes
Good to have skills : Google Kubernetes Engine, Google Cloud Compute Services
Minimum 12 year(s) of experience is required
Educational Qualification : 15 years full time education

We are seeking a highly motivated and experienced DevOps Infra Engineer to join our team and manage Kubernetes (K8s) infrastructure. You will be responsible for implementing and maintaining Infrastructure as Code (IaC) using Terraform or equivalent tooling, and for ensuring the smooth deployment and management of our Kubernetes stack in on-prem environments. You will also be instrumental in troubleshooting issues, optimizing infrastructure, and implementing and managing monitoring tools for observability.

Primary Skills: Kubernetes, Kubegres, Kubekafka, Grafana, Redis, Prometheus
Secondary Skills: Keycloak, MetalLB, Ingress, ElasticSearch, Superset, OpenEBS, Istio, Secrets, Helm, NussKnacker, Valero, Druid

Responsibilities:
Containerization: Working experience with Kubernetes and Docker for containerized application deployments on-prem (GKE/K8s). Knowledge of Helm charts and their application in Kubernetes clusters.
Collaboration and Communication: Work effectively in a collaborative team environment with developers, operations, and other stakeholders. Communicate technical concepts clearly and concisely.
CI/CD: Design and implement CI/CD pipelines using Jenkins, including pipelines, stages, and jobs. Utilize Jenkins Pipeline and Groovy scripting for advanced pipeline automation. Integrate Terraform with Jenkins for IaC management and infrastructure provisioning.
Infrastructure as Code (IaC): Develop and manage infrastructure using Terraform, including writing Terraform tfvars and module code. Set up IaC pipelines using Terraform, Jenkins, and cloud environments like Azure and GCP. Troubleshoot issues in Terraform code and ensure smooth infrastructure deployments.
Cloud Platforms: Possess a deep understanding of both Google Cloud and Azure cloud platforms. Experience with managing and automating cloud resources in these environments.
Monitoring & Logging: Configure and manage monitoring tools like Splunk, Grafana, and ELK for application and infrastructure health insights.
GitOps: Implement GitOps practices for application and infrastructure configuration management.
Scripting and Automation: Proficient in scripting languages like Python and Bash for automating tasks. Utilize Ansible or Chef for configuration management.
Configuration Management: Experience with configuration management tools like Ansible and Chef.

Qualifications:
4-9 years of experience as a Kubernetes & DevOps Engineer or in a similar role, with 12+ years of total experience in cloud and infrastructure managed services.
Strong understanding of CI/CD principles and practices.
Proven experience with Jenkins or equivalent CI/CD tooling, including pipelines, scripting, and plugins.
Expertise in Terraform and IaC principles.
Experience with Kubernetes management on on-prem platforms is preferred.
Exposure to monitoring and logging tools like Splunk, Grafana, or ELK.
Experience with GitOps practices.
Proficiency in scripting languages like Python and Bash.
Experience with configuration management tools like Ansible or Chef.
Hands-on experience with Kubernetes and Docker. Knowledge of Helm charts and their application in Kubernetes clusters.
Must: Flexible to cover part of US working hours (24/7 business requirement). Excellent communication and collaboration skills. Fluent in English.
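As a rough illustration of the Terraform-plus-Jenkins workflow this role describes, the sketch below shows a small Python helper that a pipeline stage might call to run an IaC change: format check, init, plan to a saved plan file, then an optional apply. The working directory and var-file names are hypothetical, and the script assumes the terraform CLI is installed and that backend and provider credentials are already configured in the environment.

```python
"""Sketch of an IaC pipeline step: fmt-check, init, plan and apply with Terraform.

Hypothetical example -- the working directory and tfvars file are placeholders.
Assumes the `terraform` CLI is installed and backend/provider credentials are
already available in the environment (as they would be on a Jenkins agent).
"""
import subprocess

WORKDIR = "infra/k8s-platform"          # hypothetical Terraform root module
VAR_FILE = "environments/prod.tfvars"   # hypothetical variable file
PLAN_FILE = "tfplan"


def tf(*args: str) -> None:
    """Run a terraform subcommand against the module directory, failing on errors."""
    cmd = ["terraform", f"-chdir={WORKDIR}", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main(auto_apply: bool = False) -> None:
    tf("fmt", "-check", "-recursive")     # style gate
    tf("init", "-input=false")            # fetch providers and configure backend
    tf("validate")                        # static validation of the configuration
    tf("plan", "-input=false", f"-var-file={VAR_FILE}", f"-out={PLAN_FILE}")
    if auto_apply:
        # Applying the saved plan file guarantees exactly what was reviewed gets applied.
        tf("apply", "-input=false", PLAN_FILE)


if __name__ == "__main__":
    main(auto_apply=False)  # a deploy stage would flip this to True after approval
```

Splitting plan and apply like this mirrors the usual Jenkins pattern of a review or approval gate between the two stages.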

Posted 1 month ago

Apply

3 - 8 years

3 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

Project Role : Application Support Engineer
Project Role Description : Act as software detectives, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must have skills : Google Kubernetes Engine
Good to have skills : Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education

About The Role:
Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).

Responsibilities:
Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations.
Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes.
Knowledge of or experience with various Kubernetes tools such as Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus.
Proven track record in supporting and deploying various public cloud services.
Experience in building or managing self-service platforms to boost developer productivity.
Proficiency in using Infrastructure as Code (IaC) tools like Terraform.
Skilled in diagnosing and resolving complex issues in automation and cloud environments.
Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems.
Strong understanding of infrastructure CI/CD pipelines and associated tools.
Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions.
Experience working in GKE and Edge/GDCE environments.
Assist development teams in building and deploying microservices-based applications in public cloud environments.

Technical Skillset:
Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions.
At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies.
Over 3 years of experience with GCP IaC, particularly with Terraform, including writing and maintaining Terraform configurations and modules.
Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE).
Familiarity with CI/CD tools (e.g., GitHub) and processes.

Certifications:
GCP ACE certification is mandatory.
CKA certification is highly desirable.
HashiCorp Terraform certification is a significant plus.
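To give a flavour of the day-to-day diagnosis work described above, here is a small, hypothetical Python sketch that uses the official Kubernetes Python client to flag Deployments whose ready replica count has fallen below the desired count, the sort of check a GKE support engineer might script. The namespace is a placeholder, and the sketch assumes a kubeconfig that already targets the cluster (for example after gcloud container clusters get-credentials).

```python
"""Report Deployments that are not fully ready in a given namespace.

Hypothetical sketch -- the namespace is a placeholder. Assumes the `kubernetes`
Python client is installed and the local kubeconfig already targets the GKE
cluster (e.g. after `gcloud container clusters get-credentials`).
"""
from kubernetes import client, config

NAMESPACE = "payments"  # hypothetical namespace


def unhealthy_deployments(namespace: str) -> list[str]:
    """Return names of Deployments whose ready replicas are below the desired count."""
    config.load_kube_config()          # use the active kubeconfig context
    apps = client.AppsV1Api()
    problems = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            problems.append(f"{dep.metadata.name}: {ready}/{desired} ready")
    return problems


if __name__ == "__main__":
    issues = unhealthy_deployments(NAMESPACE)
    if issues:
        print("Deployments needing attention:")
        for line in issues:
            print(" -", line)
    else:
        print(f"All Deployments in '{NAMESPACE}' are fully ready.")
```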

Posted 1 month ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies