
1165 Helm Jobs - Page 18

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

9.0 years

0 Lacs

Thiruvananthapuram

On-site

Source: Glassdoor

9 - 12 Years | 1 Opening | Trivandrum. Role Description: We are seeking a highly skilled Application Consultant with expertise in Node.js and Python, and hands-on experience in developing and migrating applications across AWS and Azure. The ideal candidate will have a strong background in serverless computing, cloud-native development, and cloud migration, particularly from AWS to Azure. Key Responsibilities: Design, develop, and deploy applications using Node.js and Python on Azure and AWS platforms. Lead and support AWS to Azure migration efforts, including application and infrastructure components. Analyze source architecture and code to identify AWS service dependencies and remediation needs. Refactor and update codebases to align with Azure services, including Azure Functions, AKS, and Blob Storage. Develop and maintain deployment scripts and CI/CD pipelines for Azure environments. Migrate serverless applications from AWS Lambda to Azure Functions using Node.js or Python. Support unit testing, application testing, and troubleshooting in Azure environments. Work with containerized applications, Kubernetes, Helm charts, and Azure PaaS services. Handle AWS to Azure SDK conversions and data migration tasks (e.g., S3 to Azure Blob). Required Skills: 8+ years of experience in application development using Node.js and Python. Strong hands-on experience with Azure and AWS cloud platforms. Proficiency in Azure Functions, AKS, App Services, APIM, and Blob Storage. Experience with AWS Lambda to Azure Functions migration (Must Have). Solid understanding of Azure PaaS and serverless architecture. Experience with Kubernetes, Helm charts, and microservices. Strong troubleshooting and debugging skills in cloud environments. Experience with AWS to Azure SDK conversion (Must Have). Skills: Python, Node.js, Azure Cloud, AWS. About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
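As a rough illustration of the S3-to-Blob data migration work this posting describes (the bucket, container, and environment-variable names below are hypothetical placeholders, not from the listing), a minimal Python sketch might look like this:

```python
# Minimal sketch: copy objects from an S3 bucket into an Azure Blob container.
# Bucket, container, and connection-string names are hypothetical placeholders.
import os

import boto3
from azure.storage.blob import BlobServiceClient

s3 = boto3.client("s3")
blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = blob_service.get_container_client("migrated-data")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="legacy-app-bucket"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket="legacy-app-bucket", Key=key)["Body"]
        # Stream each object straight into Blob Storage under the same key.
        container.upload_blob(name=key, data=body, overwrite=True)
        print(f"copied {key}")
```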

Posted 1 week ago

Apply

8.0 years

15 - 25 Lacs

Cochin

Remote

Source: Glassdoor

Job Title: Senior DevOps Engineer Location: Remote Experience: 8+ Years Job Type: Full-time Job Summary: We are looking for a highly skilled Senior DevOps Engineer to join our team. The ideal candidate will have extensive experience in Jenkins, Octopus Cloud, Kubernetes, Terraform, CI/CD (Bitbucket Pipelines), and AWS environments. You will be responsible for migrating pipelines, designing, implementing, and managing infrastructure and automation solutions to support our development and operational goals. Key Responsibilities: Support the development teams with migration and deployment work. Kubernetes: Deploy, manage, and optimize Kubernetes clusters for scalability and performance. Terraform: Write and maintain Infrastructure as Code (IaC) using Terraform to provision cloud resources efficiently. CI/CD Pipelines: Design, build, and optimize Bitbucket Pipelines for automated build, test, and deployment processes. AWS Environment Management: Ensure robust AWS architecture, including networking, security, and cost optimization. Implement monitoring, logging, and alerting solutions for proactive infrastructure management. Work closely with development teams to streamline and enhance deployment workflows. Ensure security best practices and compliance standards are adhered to. Required Skills & Experience: Strong hands-on experience with Kubernetes in a production environment. Expertise in Terraform for infrastructure automation. Proficiency in CI/CD tools, specifically Bitbucket Pipelines. Solid understanding of AWS services (EC2, VPC, IAM, RDS, S3, etc.). Experience with scripting languages like Bash, Python, or Go. Familiarity with monitoring tools such as Prometheus, Grafana, CloudWatch. Strong problem-solving skills and ability to troubleshoot complex infrastructure issues. Nice to Have: Experience with Helm, ArgoCD, or Flux for Kubernetes application deployments. Knowledge of container security best practices. Exposure to GitOps methodologies. If you are passionate about DevOps and want to work in a dynamic, tech-driven environment, apply now! Job Type: Full-time Pay: ₹1,500,000.00 - ₹2,500,000.00 per year Schedule: Day shift Monday to Friday Work Location: In person
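One of the responsibilities above is implementing monitoring and alerting on AWS. A minimal, hedged sketch of that kind of task, using boto3 to create a CloudWatch CPU alarm (the instance ID, alarm name, and SNS topic ARN are hypothetical placeholders), could be:

```python
# Minimal sketch: create a CloudWatch CPU alarm for one EC2 instance with boto3.
# Instance ID, alarm name, and SNS topic ARN are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="web-prod-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=3,        # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
print("alarm created or updated")
```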

Posted 1 week ago

Apply

2.0 years

5 - 8 Lacs

Hyderābād

On-site

Source: Glassdoor

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. JD: Container Security Engineer – Experience: 2-5 years. Responsibilities: Design, deploy, and troubleshoot container deployments of the security scanning solution's containerized agents using Helm charts on Kubernetes platforms (OpenShift and EKS). Support integration with CI/CD pipelines and automation efforts to ensure that security testing is an integral and painless part of code development. Ensure these tools deliver maximum value for both security and developer stakeholders. Provide training, guidance, and JIRA story integration with security solutions so that developers can obtain remediation guidance and deliver secure code. Provide API analysis and support for the integration of security solutions with risk and reporting solutions to track, prioritize and drive remediation of code vulnerabilities. Develop and foster effective working relationships within both Security and IT teams to ensure that projects are delivered securely and on time. Configure and manage OpenSSL for cryptographic operations, including SSL/TLS certificates, key generation, and encryption protocols. Implement and maintain secure communication channels between services using OpenSSL. Design, build, and maintain highly scalable, reliable, and secure AWS cloud infrastructure using Terraform. Write and manage Terraform scripts for the provisioning of AWS resources (e.g., EC2, S3, VPC, RDS, Lambda, etc.). Required: Minimum of 2 years of IT experience. At least 2+ years of specialization in container security. At least 1+ years of application development experience, including backend development and containerized applications. At least 1+ years of experience with programming languages such as Java, JavaScript, or Python. At least 1+ years of experience working with container technologies such as Docker, and Kubernetes platforms such as OpenShift, EKS, or GKE. Experience using various container security tools and fixing the vulnerabilities they report. 1+ years of experience with OpenSSL, managing SSL/TLS certificates and encryption. 1+ years of hands-on experience with Terraform in AWS environments. Preferred: Experience with container deployments using Helm charts and Infrastructure as Code, preferably Terraform. Experience working with secure development pipelines such as Jenkins or Electric Flow. Strong knowledge of relevant security standards (OWASP) and how to apply them to the software development lifecycle in a large agile environment. Experience performing security analysis on web applications and APIs. Experience working in an Agile environment. AWS certifications (e.g., AWS Solutions Architect, AWS DevOps Engineer) are a plus. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
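The posting mentions managing SSL/TLS certificates; as a small, hedged illustration of routine certificate hygiene (the hostnames are hypothetical, and this uses only the Python standard library rather than the OpenSSL CLI), one might check certificate expiry like this:

```python
# Minimal sketch: report days until TLS certificate expiry for a set of endpoints.
# Hostnames are hypothetical; only the Python standard library is used.
import socket
import ssl
import time

ENDPOINTS = ["api.example.internal", "scanner.example.internal"]

def days_until_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])  # epoch seconds
    return int((expires - time.time()) // 86400)

for host in ENDPOINTS:
    try:
        print(f"{host}: certificate expires in {days_until_expiry(host)} days")
    except OSError as exc:
        print(f"{host}: check failed ({exc})")
```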

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

OPENTEXT OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. Your Impact We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLM), Java, Python, Kubernetes, Helm and cloud technologies like AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment. What The Role Offers Design, develop, troubleshoot and debug software programs for software enhancements and new products. Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience. Develop and maintain transformer-based models. Develop RESTful APIs and ensure seamless integration across services. Collaborate with cross-functional teams to gather requirements and translate them into technical solutions. Implement best practices for cloud-native development using AWS services like EC2, Lambda, SageMaker, S3 etc. Deploy, manage, and scale containerized applications using Kubernetes (K8S) and Helm. Designs enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools. Analyses design and determines coding, programming, and integration activities required based on general objectives and knowledge of overall architecture of product or solution. Collaborates and communicates with management, internal, and outsourced development partners regarding software systems design status, project progress, and issue resolution. Represents the software systems engineering team for all phases of larger and more-complex development projects. Ensure system reliability, security, and performance through effective monitoring and troubleshooting. Write clean, efficient, and maintainable code following industry standards. Participate in code reviews, mentorship, and knowledge-sharing within the team. What You Need To Succeed Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Typically, 3-5 years of experience Strong understanding of Large Language Models (LLM) and experience applying them in real-world applications. Expertise in Elastic Search or similar search and indexing technologies. Expertise in designing and implementing microservices architecture. Solid experience with AWS services like EC2, VPC, ECR, EKS, SageMaker etc. for cloud deployment and management. Proficiency in container orchestration tools such as Kubernetes (K8S) and packaging/deployment tools like Helm. Strong problem-solving skills and the ability to troubleshoot complex issues. Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE. Should have good hands-on experience in designing and writing modular object-oriented code. Good knowledge of REST APIs, Spring, Spring boot, Hibernate. Excellent analytical, troubleshooting and problem-solving skills. Ability to demonstrate effective teamwork both within the immediate team and across teams. 
Experience in working with version control and build tools like Git, GitLab, Maven, Jenkins, and GitLab CI. Excellent communication and collaboration skills. Familiarity with Python for LLM-related tasks. Working knowledge of RAG (retrieval-augmented generation). Experience working with NLP frameworks such as Hugging Face, OpenAI, or similar. Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB. Experience with observability tools like Prometheus, Grafana, or ELK Stack. Experience in working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ). Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation). Familiarity with Agile/Scrum development methodologies. One Last Thing OpenText is more than just a corporation, it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace. Job ID: 46999
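Since this role centers on integrating LLMs and RAG into applications, a toy sketch of the retrieval step may be useful context (the documents and prompt template are hypothetical; a real system would rank with vector embeddings rather than bag-of-words counts):

```python
# Minimal sketch of the retrieval step in a RAG pipeline: rank indexed documents
# against a query with bag-of-words cosine similarity, then build an LLM prompt.
import math
from collections import Counter

DOCS = {
    "runbook": "Restart the ingestion service with helm rollback if pods crash loop.",
    "faq": "SageMaker endpoints are redeployed nightly by the CI pipeline.",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    qv = vectorize(query)
    return max(DOCS, key=lambda name: cosine(qv, vectorize(DOCS[name])))

query = "how do I fix a crash looping pod?"
best = retrieve(query)
prompt = f"Answer using this context:\n{DOCS[best]}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM of choice
```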

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

OPENTEXT OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. Your Impact The ESM Platform R&D team is looking for a highly skilled UI developer with expertise in Angular and/or ReactJS, Web Components to join our Global R&D team. In this role, you will contribute to the development of cutting-edge enterprise software solutions within a fast-paced, challenging, and dynamic environment. Our rapidly growing business serves demanding enterprise-class customers worldwide, leveraging a Microservices-based architecture that runs on Kubernetes with Docker Containers. What The Role Offers As a UI Software Developer, you will: Create design documents and participate in design reviews. Develop product features, write unit tests to ensure adequate code coverage. Participate in technical discussions and release planning, and contribute to them to the full extent. Work with stakeholders, architects, and lead developers to define UI requirements. Work under general guidance with progress reviewed on a regular basis. Follow the UX guidelines, software development best practices and industry standards while implementing UI and integration. Contribute as an Agile team member, take responsibility for own work commitments, and take part in project/functional problem-solving based on established practices. Work with QA engineers to provide inputs on test plans, and write end-to-end tests to ensure the product quality and health of the CI/CD pipeline. Take part in quality initiatives and help deliver quality and continuous improvements. Handle customer incidents (CPE) by understanding customer use cases and troubleshooting and debugging software programs. Lead quality initiatives to drive continuous improvement. What You Need To Succeed Bachelor's or Master’s degree in Computer Science, Information Systems, or equivalent from a premier institute. 2-5 years of experience with at least 2+ recent years of experience in designing and developing software application User Interfaces (UI) for enterprise products and solutions. Working experience with designing and developing UI interfaces and components independently using recent versions of ReactJS, AngularJS and Web Components running in a large-scale environment. Produce high quality code according to design specifications. Strong proficiency in HTML5, CSS3, JavaScript (ES6+), and TypeScript. Experience with jQuery, SCSS, and CSS. Experience with NodeJS, npm, application packaging, deployment, and management. Hands-on experience with state management libraries (e.g., NgRx, Redux). Deep understanding of responsive design principles and frameworks (e.g., Bootstrap, Material Design). Strong grasp of cross-browser compatibility issues and solutions. Proficiency in Core Java, RESTful APIs and WebSockets. Experience with data visualization using various chart libraries like d3, nvd3, etc. Familiarity with UI mockups and design tools. Ability to follow UX standards and guidelines. Collaborate with Product Owners to plan and prioritize tasks efficiently. Strong knowledge of unit testing, UI testing frameworks (Cypress, Selenium), and test automation. Identify, debug, and fix product issues efficiently.
Implement software design/coding for functional requirements while ensuring quality and adherence to standards. Strong analytical skills to troubleshoot and resolve complex code defects. Participate in the Agile development process from design to release. Contribute to Current Product Engineering (CPE) efforts to resolve customer-submitted incidents. Drive innovation and integrate new technologies into the R&D organization. Deliver software design/coding for functional requirements, ensuring quality and adherence to company standards. Ability to work independently in a cross-functional, distributed team culture with a focus on teamwork. Work across teams and functional roles to ensure interoperability among other products, including training and consultation. Participate in the software development process from design to release in an Agile Development Framework. Excellent team player with a focus on collaboration activities. Ability to take up other duties as assigned. Desirable Skills Proficiency in Docker, Kubernetes, and Helm. Understanding of data interchange technologies (XML, JSON). Exposure to cloud technologies and deployments (AWS, GCP, Azure, etc.) and the SaaS model would be good. Working knowledge of Agile or Scaled Agile Framework (SAFe). Experience in Git source control. Familiarity with CI/CD tools like Maven, Gradle, Jenkins. Experience with Windows and Linux/Unix operating systems. Strong communication, analytical and problem-solving skills. Knowledge of vulnerability, compliance, and vendor patching of operating systems. User-level knowledge of Windows and Linux/Unix operating systems. OpenText is an equal opportunity employer that hires and attracts talent regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, marital status, sex, age, veteran status, or sexual orientation. At OpenText we acknowledge, value and respect diversity. We draw on diversity of thought and experience to reflect the rich array of cultures representing our broad global customer base. As a technology company, we can only be as good as the people who are part of our team. To that end, we seek talent with diversity of life experiences and perspectives from around the world! OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace. Job ID: 46583

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

OPENTEXT OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. Your Impact The ESM Platform R&D team is looking for a highly skilled UI developer with expertise in Angular and/or ReactJS, Web Components to join our Global R&D team. In this role, you will contribute to the development of cutting-edge enterprise software solutions within a fast-paced, challenging, and dynamic environment. Our rapidly growing business serves demanding enterprise-class customers worldwide, leveraging a Microservices-based architecture that runs on Kubernetes with Docker Containers. What The Role Offers As a UI Software Developer, you will: Create design documents and participate in design reviews. Develop product features, write unit tests to ensure adequate code coverage. Participate in technical discussions and release planning, and contribute to them to the full extent. Work with stakeholders, architects, and lead developers to define UI requirements. Work under general guidance with progress reviewed on a regular basis. Follow the UX guidelines, software development best practices and industry standards while implementing UI and integration. Contribute as an Agile team member, take responsibility for own work commitments, and take part in project/functional problem-solving based on established practices. Work with QA engineers to provide inputs on test plans, and write end-to-end tests to ensure the product quality and health of the CI/CD pipeline. Take part in quality initiatives and help deliver quality and continuous improvements. Handle customer incidents (CPE) by understanding customer use cases and troubleshooting and debugging software programs. Lead quality initiatives to drive continuous improvement. What You Need To Succeed Bachelor's or Master’s degree in Computer Science, Information Systems, or equivalent from a premier institute. 2-5 years of experience with at least 2+ recent years of experience in designing and developing software application User Interfaces (UI) for enterprise products and solutions. Working experience with designing and developing UI interfaces and components independently using recent versions of ReactJS, AngularJS and Web Components running in a large-scale environment. Produce high quality code according to design specifications. Strong proficiency in HTML5, CSS3, JavaScript (ES6+), and TypeScript. Experience with jQuery, SCSS, and CSS. Experience with NodeJS, npm, application packaging, deployment, and management. Hands-on experience with state management libraries (e.g., NgRx, Redux). Deep understanding of responsive design principles and frameworks (e.g., Bootstrap, Material Design). Strong grasp of cross-browser compatibility issues and solutions. Proficiency in Core Java, RESTful APIs and WebSockets. Experience with data visualization using various chart libraries like d3, nvd3, etc. Familiarity with UI mockups and design tools. Ability to follow UX standards and guidelines. Collaborate with Product Owners to plan and prioritize tasks efficiently. Strong knowledge of unit testing, UI testing frameworks (Cypress, Selenium), and test automation. Identify, debug, and fix product issues efficiently.
Implement software design/coding for functional requirements while ensuring quality and adherence to standards. Strong analytical skills to troubleshoot and resolve complex code defects. Participate in the Agile development process from design to release. Contribute to Current Product Engineering (CPE) efforts to resolve customer-submitted incidents. Drive innovation and integrate new technologies into the R&D organization. Deliver software design/coding for functional requirements, ensuring quality and adherence to company standards. Ability to work independently in a cross-functional, distributed team culture with a focus on teamwork. Work across teams and functional roles to ensure interoperability among other products, including training and consultation. Participate in the software development process from design to release in an Agile Development Framework. Excellent team player with a focus on collaboration activities. Ability to take up other duties as assigned. Desirable Skills Proficiency in Docker, Kubernetes, and Helm. Understanding of data interchange technologies (XML, JSON). Exposure to cloud technologies and deployments (AWS, GCP, Azure, etc.) and the SaaS model would be good. Working knowledge of Agile or Scaled Agile Framework (SAFe). Experience in Git source control. Familiarity with CI/CD tools like Maven, Gradle, Jenkins. Experience with Windows and Linux/Unix operating systems. Strong communication, analytical and problem-solving skills. Knowledge of vulnerability, compliance, and vendor patching of operating systems. User-level knowledge of Windows and Linux/Unix operating systems. OpenText is an equal opportunity employer that hires and attracts talent regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, marital status, sex, age, veteran status, or sexual orientation. At OpenText we acknowledge, value and respect diversity. We draw on diversity of thought and experience to reflect the rich array of cultures representing our broad global customer base. As a technology company, we can only be as good as the people who are part of our team. To that end, we seek talent with diversity of life experiences and perspectives from around the world! OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace. Job ID: 46584

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru

On-site

Source: Glassdoor

Imagine what you could do here. At Apple, we believe new insights have a way of becoming excellent products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The people here at Apple don’t just build products - they build the kind of wonder that’s revolutionized entire industries. It’s the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. Apple's Manufacturing Systems and Infrastructure (MSI) team is responsible for capturing, consolidating and tracking all manufacturing data for Apple’s products and modules worldwide. Our tools enable teams to confidently use data to shape the next generation of product manufacturing at Apple. We seek a practitioner with experience building large-scale data platforms, analytic tools, and solutions. If you are passionate about making data easily accessible, trusted, and available across the entire business at scale, we'd love to hear from you. As a Software Engineering Manager, you are an integral part of a data-centric team driving large- scale data infrastructure and processes development, implementation, and improvement. Our organization thrives on collaborative partnerships. Join and play a key role in developing and driving the adoption of Agentic AI, LLMs, Data Mesh and data-centric micro-services. Description As an Engineering Manager, you will lead a team of engineers responsible for the development and implementation of our cloud-based data infrastructure. You will work closely with cross-functional teams to understand data requirements, design scalable solutions, and ensure the integrity and availability of our data. The ideal candidate will have a deep understanding of cloud technologies, data engineering best practices, and a proven track record of successfully delivering complex data projects. Key Responsibilities include: - Hire, develop, and retain top engineering talent - Build and nurture self-sustained, high-performing teams - Provide mentorship and technical guidance to engineers, fostering continuous learning and development - Lead the design, development, and deployment of scalable cloud-based data infrastructure and applications - Drive end-to-end execution of complex data engineering projects - Partner with Data Scientists, ML Engineers, and business stakeholders to understand data needs and translate them into scalable engineering solutions - Align technical strategy with business goals through effective communication and collaboration - Implement and enforce best practices for data security, privacy, and compliance with regulatory standards - Optimize data storage, processing, and retrieval for improved performance and cost efficiency. - Continuously evaluate and improve the system architecture and workflows - Stay current with emerging trends and technologies in cloud data engineering - Recommend and adopt tools, frameworks, and platforms that enhance productivity and reliability Minimum Qualifications Bachelor’s degree in Computer Science or a related field Minimum 8 years of experience in software development with at least 2 years in a technical leadership or management role. Proven experience as a Full stack developer, with a focus on cloud platforms. Proficient in programming languages such as Python. 
Strong hands-on expertise with Python frameworks (Django, Flask, or FastAPI) and RESTful APIs, React.js, and modern JavaScript. Experience with authentication and authorization (OAuth, JWT). Strong understanding of cloud services, preferably AWS, and experience in building cloud-native platforms using containerization technologies like Kubernetes, Docker, and Helm. Preferred Qualifications Knowledge of data warehouse solutions (BigQuery, Snowflake, Druid) and Big Data technologies such as Spark, Kafka, Hive, Iceberg, Trino, Flink. Experience with big data technologies (Hadoop, Spark, etc.). Experience with streaming data technologies (Kafka, Kinesis). Experience building data streaming solutions using Apache Spark / Apache Storm / Flink / Flume. Familiarity with machine learning pipelines is an added advantage. Proven ability to deliver complex, high-scale systems in a production environment. Strong people management and cross-functional collaboration skills.
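The role calls for Python API frameworks plus JWT-based authentication; a minimal, hypothetical sketch combining FastAPI with PyJWT (the secret, claims, and endpoints are placeholders, not the team's actual implementation) could look like this:

```python
# Minimal sketch: a FastAPI service with one JWT-protected route.
# The secret and claims are hypothetical; a real service would also validate
# issuer, audience, and expiry, and load the secret from a secret store.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
SECRET = "replace-me"  # placeholder only

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> str:
    try:
        claims = jwt.decode(creds.credentials, SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="invalid token")
    return claims["sub"]

@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}

@app.get("/datasets")
def list_datasets(user: str = Depends(current_user)) -> dict:
    # Placeholder payload; a real endpoint would query the data platform.
    return {"user": user, "datasets": ["manufacturing_metrics", "yield_reports"]}
```

Run with `uvicorn app:app` (assuming the file is named app.py) and call the protected route with an `Authorization: Bearer <token>` header.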

Posted 1 week ago

Apply

7.0 years

5 - 10 Lacs

Chennai

On-site

Source: Glassdoor

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values everyday results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. Team Five9 is a leading provider of cloud software for the enterprise contact center market, bringing the power of the cloud to thousands of customers and facilitating more than three billion customer interactions annually. Since 2001, Five9 has led the cloud revolution in contact centers, helping organizations transition from legacy premise-based solutions to the cloud. Five9 provides businesses with cloud contact center software that is reliable, secure, compliant, and scalable, which is designed to create exceptional customer experiences, increase agent productivity, and deliver tangible business results. The Platform Infrastructure Team at Five9 is responsible for building and maintaining the Cloud Infrastructure that supports the development, deployment of software hosted by Five9. The platform infrastructure team provides critical Cloud infrastructure, tools and resources that enable software developers to build and deploy software more efficiently and effectively. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States Role purpose As part of the Cloud Platform Engineering team, you will be building Five9's Modern SaaS platform. An ideal candidate for us is an experienced engineer who is passionate about building high performance cloud platforms with automation first mindset and a brilliant problem solver and a creative self-starter. How you contribute Be part of Cloud Platform Infrastructure Team, focused on building the next generation Modern SaaS using public multi-cloud and hybrid-cloud solutions. Build automation capabilities towards common abstractions, tools, automation for CI/CD and progressive delivery of Cloud Native applications. Delivering mostly self-selected user stories efficiently and with testability and scalability - including complex tasks that span across adjacent areas. Leading engineering effort and collaborating regularly with Peers, PM and Quality and Ops, helping mentor others and developing specs. Is the go-to person who diagnoses and anticipate problems/bugs, drives toward industry standards/patterns, making sure project is delivered and deployed end to end. Enable all Five9 development teams with a Cloud Native developer workflow, conduct developer training and toolset to automate software delivery with a focus on Scale, HA. Design, and build secure, highly scalable, enterprise grade platform services. Document and communicate clearly of architecture and implementation solutions. Work closely with product managers, architects, testers, and development teams. Troubleshoot and support current Cloud platform in production. Expertise to Debug & Support Production issues. Skills, competencies and qualifications Required: 7+ years of professional DevOps / production operations experience. 5+ years of Cloud Native application delivery experience. 
Strong hands-on experience with CI/CD tools like GitLab, GitHub, Jenkins, etc. Intimate knowledge of public cloud infrastructures (GCP preferred; AWS, Azure). Hands-on experience working on core cloud services – Kubernetes, Compute, Storage, Network, Virtualization, Identity and Access Management (IAM). Expert level in the current technology stack: Helm, K8s, Istio, GCP, AWS, GKE, EKS, Terraform, SRE/DevOps practices, or equivalent. Strong proven experience in Infrastructure as Code (IaC), to be responsible for building robust platforms using automation. Experience building automation and deploying high-quality software with test frameworks and CI/CD, Progressive Delivery. Strong development experience in one or more programming languages - Python, Terraform, Golang, Java, etc. Advanced knowledge of Linux-based systems and runtimes. DevOps mindset and familiarity with the concept of Site Reliability Engineering – an inherent sense of ownership through the development and deployment lifecycle. You understand what it takes to run mission-critical software in production. Ability to prioritize tasks, work independently and work collaboratively in an agile environment. Other requirements: This position requires the ability to be on call. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
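As an illustration of the kind of self-service deployment tooling this posting describes, a minimal Python wrapper around the Helm CLI (the release, chart, and namespace names are hypothetical; it assumes helm is installed and authenticated against the target cluster) might be:

```python
# Minimal sketch: a deploy helper that wraps the Helm CLI, of the kind a platform
# team might expose to application developers. Names are hypothetical placeholders.
import subprocess

def helm_upgrade(release: str, chart: str, namespace: str, values_file: str) -> None:
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--values", values_file,
        "--wait", "--timeout", "5m",
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    helm_upgrade(
        release="orders-api",
        chart="oci://registry.example.com/charts/orders-api",
        namespace="orders",
        values_file="values-prod.yaml",
    )
```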

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values everyday results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. Team Five9 is a leading provider of cloud software for the enterprise contact center market, bringing the power of the cloud to thousands of customers and facilitating more than three billion customer interactions annually. Since 2001, Five9 has led the cloud revolution in contact centers, helping organizations transition from legacy premise-based solutions to the cloud. Five9 provides businesses with cloud contact center software that is reliable, secure, compliant, and scalable, which is designed to create exceptional customer experiences, increase agent productivity, and deliver tangible business results. The Platform Infrastructure Team at Five9 is responsible for building and maintaining the Cloud Infrastructure that supports the development, deployment of software hosted by Five9. The platform infrastructure team provides critical Cloud infrastructure, tools and resources that enable software developers to build and deploy software more efficiently and effectively. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States Role purpose As part of the Cloud Platform Engineering team, you will be building Five9’s Modern SaaS platform. An ideal candidate for us is an experienced engineer who is passionate about building high performance cloud platforms with automation first mindset and a brilliant problem solver and a creative self-starter. How You Contribute Be part of Cloud Platform Infrastructure Team, focused on building the next generation Modern SaaS using public multi-cloud and hybrid-cloud solutions. Build automation capabilities towards common abstractions, tools, automation for CI/CD and progressive delivery of Cloud Native applications. Delivering mostly self-selected user stories efficiently and with testability and scalability - including complex tasks that span across adjacent areas. Leading engineering effort and collaborating regularly with Peers, PM and Quality and Ops, helping mentor others and developing specs. Is the go-to person who diagnoses and anticipate problems/bugs, drives toward industry standards/patterns, making sure project is delivered and deployed end to end. Enable all Five9 development teams with a Cloud Native developer workflow, conduct developer training and toolset to automate software delivery with a focus on Scale, HA. Design, and build secure, highly scalable, enterprise grade platform services. Document and communicate clearly of architecture and implementation solutions. Work closely with product managers, architects, testers, and development teams. Troubleshoot and support current Cloud platform in production. Expertise to Debug & Support Production issues. Skills, Competencies And Qualifications Required: 7+ years of professional DevOps / production operations experience. 5+ years of Cloud Native application delivery experience. 
Strong hands-on experience with CI/CD tools like GitLab, GitHub, Jenkins, etc. Intimate knowledge of public cloud infrastructures (GCP preferred; AWS, Azure). Hands-on experience working on core cloud services – Kubernetes, Compute, Storage, Network, Virtualization, Identity and Access Management (IAM). Expert level in the current technology stack: Helm, K8s, Istio, GCP, AWS, GKE, EKS, Terraform, SRE/DevOps practices, or equivalent. Strong proven experience in Infrastructure as Code (IaC), to be responsible for building robust platforms using automation. Experience building automation and deploying high-quality software with test frameworks and CI/CD, Progressive Delivery. Strong development experience in one or more programming languages - Python, Terraform, Golang, Java, etc. Advanced knowledge of Linux-based systems and runtimes. DevOps mindset and familiarity with the concept of Site Reliability Engineering – an inherent sense of ownership through the development and deployment lifecycle. You understand what it takes to run mission-critical software in production. Ability to prioritize tasks, work independently and work collaboratively in an agile environment. Other requirements: This position requires the ability to be on call. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.

Posted 1 week ago

Apply

8.0 - 12.0 years

20 - 30 Lacs

Chennai, Bengaluru

Hybrid

Source: Naukri

We are looking for an engineer with hands-on expertise and experience in the following: Python development (expert level); Airflow 2.7+ (both as a user and with knowledge of Airflow internals, including customizations); Docker/Kubernetes application development and Kubernetes admin skills (incl. Helm); observability skills (monitoring and logging, preferably ELK and Prometheus/Grafana); CI/CD (Azure DevOps preferred).
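For context on the Airflow 2.7+ skills requested, a minimal TaskFlow-style DAG (the DAG id, schedule, and task bodies are hypothetical placeholders) looks roughly like this:

```python
# Minimal sketch of an Airflow 2.x TaskFlow DAG with two dependent tasks.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def nightly_ingest():
    @task
    def extract() -> list:
        return [1, 2, 3]  # stand-in for pulling records from a source system

    @task
    def load(records: list) -> None:
        print(f"loaded {len(records)} records")

    load(extract())

nightly_ingest()  # instantiating the DAG registers it with the scheduler
```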

Posted 1 week ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

Source: Naukri

We are hiring a DevOps / Site Reliability Engineer for a 6-month full-time onsite role in Pune (with possible extension). The ideal candidate will have 6-9 years of experience in DevOps/SRE roles with deep expertise in Kubernetes (preferably GKE), Terraform, Helm, and GitOps tools like ArgoCD or Flux. The role involves building and managing cloud-native infrastructure, CI/CD pipelines, and observability systems, while ensuring performance, scalability, and resilience. Experience in infrastructure coding, backend optimization (Node.js, Django, Java, Go), and cloud architecture (IAM, VPC, CloudSQL, Secrets) is essential. Strong communication and hands-on technical ability are musts. Immediate joiners only.
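As a small illustration of the Kubernetes operational work described above, a hedged sketch that checks Deployment health with the official Kubernetes Python client (the namespace is a hypothetical placeholder, and a valid kubeconfig, e.g. for a GKE cluster, is assumed) might be:

```python
# Minimal sketch: report whether every Deployment in a namespace has its
# desired replicas available, using the official kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()        # assumes a working kubeconfig
apps = client.AppsV1Api()

NAMESPACE = "payments"           # hypothetical namespace
for dep in apps.list_namespaced_deployment(NAMESPACE).items:
    desired = dep.spec.replicas or 0
    available = dep.status.available_replicas or 0
    state = "OK" if available >= desired else "DEGRADED"
    print(f"{state}: {dep.metadata.name} {available}/{desired} replicas available")
```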

Posted 1 week ago

Apply

7.0 years

0 Lacs

Andhra Pradesh

On-site

Source: Glassdoor

Bachelor's degree in computer science or equivalent experience, with strong communication skills. Over 7 years of IT industry experience, with substantial expertise as a DevOps or Cloud Engineer. In-depth experience with development, configuration, and maintenance of cloud services using AWS. Strong experience using AWS services including Compute, Storage, Network, RDS, Security, and Serverless technologies such as AWS Lambda, Step Functions, and EventBridge. Experience with automated deployment tools and the principles of CI/CD using tools such as Jenkins, GitHub, GitHub Actions/Runners, CloudFormation, CDK, Terraform, and Helm. Expertise in containerization and orchestration using Docker, Kubernetes, ECS, and AWS Batch. Solid understanding of cloud design, networking concepts, and security best practices. Experience with configuration management tools such as Ansible and AWS SSM. Proficient in using Git with a good understanding of branching, Git flows, and release management. Scripting experience in Python, Bash, or similar, including virtual environment packaging. Knowledge and experience of Enterprise Identity Management solutions such as SSO, SAML, and OpenID Connect. A solid understanding of application architectures, including cloud-native approaches to infrastructure. Experience with testing tools like SonarQube, Cucumber, and Pytest. Experience in creating and running various types of tests including unit tests, integration tests, system tests, and acceptance tests to ensure software and systems function correctly. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
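The posting emphasizes serverless development on AWS Lambda and testing with Pytest; a minimal, hypothetical sketch of a Lambda handler together with a unit test (the event shape is a simplified stand-in for an API Gateway event) could be:

```python
# Minimal sketch: a small AWS Lambda handler and a pytest unit test for it.
# The event shape is a hypothetical, simplified API Gateway-style payload.
import json

def handler(event: dict, context: object) -> dict:
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

def test_handler_returns_greeting():
    event = {"queryStringParameters": {"name": "virtusa"}}
    response = handler(event, context=None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"]) == {"message": "hello virtusa"}
```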

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Location(s): Remote - India. Line of Business: Insurance. Job Category: Engineering & Technology. Experience Level: Experienced Hire. At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Moody's is looking for a Senior Engineer (DevOps) to be part of a team responsible for designing and developing the tools and automation for the infrastructure of the Core Products suite in AWS. What You'll Be Doing: You will be responsible for leading efforts to implement stability and observability improvements to our Kubernetes container platform. You will be focused on SLI development, automation, toil elimination, incident response, root cause analysis and monitoring enhancements. You should have the aptitude and enthusiasm for building and servicing highly distributed, scalable, and mission-critical systems. You should have a passion for automation and creating self-service mechanisms for customers. Create software design documents, architecture, sequence, class and related artifacts. Translate design inputs into development work items. Assist in providing estimates for levels of effort required to accomplish expected deliverables. Research new technologies and techniques to support leading-edge development. Provide an active contribution to the team responsible for the design, development, and implementation of critical enterprise-scale applications. Required experience and skills: Expertise with deploying and managing AWS services (Networking, Storage, EKS, API Gateway, etc.). Expertise with Infrastructure-as-Code frameworks (Terraform, CloudFormation). Expertise in containerization and microservice architecture (Kubernetes, Docker, Helm charts). Expertise in designing tools and automation using any scripting language (Python, PowerShell). Expertise using an observability stack (Kibana, Prometheus, Grafana) with a focus on observability and alerting. Expertise with CI/CD pipelines and automation and how to apply them with services such as Jenkins, Azure DevOps, CircleCI. Experience with modern programming and scripting languages (Python, Go, PowerShell, C#). Desirable experience and skills: Familiarity with Linux and Windows administration. A developer background. Experience in performance measurement, bottleneck analysis, and resource usage monitoring. Master of Science in Computer Science or Bachelor of Science in Computer Science with 5 or more years’ experience. Experience with data access and computing in highly distributed cloud systems. Experience in agile development. Written and verbal communication skills. Technology: Kubernetes, CI/CD, Docker, AWS, Azure, Python, PowerShell, Jenkins, Helm charts, Ansible, Terraform. Moody’s is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet. Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee’s tenure with Moody’s.
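Since this role focuses on SLI development and observability with Prometheus and Grafana, a minimal sketch of exposing a latency SLI with the prometheus_client library (the metric name, buckets, and port are hypothetical placeholders) might look like this:

```python
# Minimal sketch: expose a request-latency SLI with prometheus_client, the kind of
# measurement an alerting rule could be built on. Names are hypothetical.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "api_request_latency_seconds",
    "Latency of API requests handled by the service",
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.5),
)

def handle_request() -> None:
    with REQUEST_LATENCY.time():               # records duration into the histogram
        time.sleep(random.uniform(0.01, 0.3))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)                    # metrics served at :8000/metrics
    while True:
        handle_request()
```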

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description We are seeking a highly skilled Application Consultant with expertise in Node.js and Python, and hands-on experience in developing and migrating applications across AWS and Azure. The ideal candidate will have a strong background in serverless computing, cloud-native development, and cloud migration, particularly from AWS to Azure. Key Responsibilities Design, develop, and deploy applications using Node.js and Python on Azure and AWS platforms. Lead and support AWS to Azure migration efforts, including application and infrastructure components. Analyze source architecture and code to identify AWS service dependencies and remediation needs. Refactor and update codebases to align with Azure services, including Azure Functions, AKS, and Blob Storage. Develop and maintain deployment scripts and CI/CD pipelines for Azure environments. Migrate serverless applications from AWS Lambda to Azure Functions using Node.js or Python. Support unit testing, application testing, and troubleshooting in Azure environments. Work with containerized applications, Kubernetes, Helm charts, and Azure PaaS services. Handle AWS to Azure SDK conversions and data migration tasks (e.g., S3 to Azure Blob). Required Skills 8+ years of experience in application development using Node.js and Python. Strong hands-on experience with Azure and AWS cloud platforms. Proficiency in Azure Functions, AKS, App Services, APIM, and Blob Storage. Experience with AWS Lambda to Azure Functions migration (Must Have). Solid understanding of Azure PaaS and serverless architecture. Experience with Kubernetes, Helm charts, and microservices. Strong troubleshooting and debugging skills in cloud environments. Experience with AWS to Azure SDK conversion (Must Have). Skills: Python, Node.js, Azure Cloud, AWS

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description: DevOps Engineer / CI-CD. Job Summary: Hands-on experience in CI/CD processes and tools like Git/Jenkins/Bamboo/Bitbucket. Hands-on scripting in Shell/PowerShell; Python scripting skills are good to have. Hands-on experience in integrating third-party APIs with CI/CD pipelines. Hands-on experience in AWS basic services like EC2/LBs/Security Groups/RDS/VPCs; AWS Patch Manager is good to have. Hands-on experience troubleshooting application infrastructure issues; should be able to explain 2-3 issues worked on in a previous role. Should have knowledge of ITIL and Agile processes and be willing to support weekend deployment and patching activities. Should be a quick learner, adapt to emerging technologies/processes, and have a sense of ownership/urgency to deliver. DevOps - Hands-on in AWS EKS & EFS. Strong knowledge of Kubernetes. Experience in Enterprise Terraform. Strong knowledge of Helm Charts & Helmfile. Knowledge of Linux OS. Knowledge of Git & GitOps. Experience in Bitbucket & Bamboo. Knowledge of systems & application security principles. Experience in automating and improving development and release processes; CI/CD pipelines. Work with developers to ensure the development process is followed across the board. Experience in setting up build & deployment pipelines for Web, iOS & Android. Knowledge & experience in the mobile application release process. Knowledge of Intune, MobileIron & Perfecto. Understanding of application test automation & reporting. Understanding of infrastructure & application health monitoring and reporting. Skills: DevOps Tools, DevOps, Cloud Computing

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

Job Title: MLOps Engineer Location: [Insert Location – e.g., Gurugram / Remote / On-site] Experience: 2–5 years Type: Full-Time Key Responsibilities: Design, develop, and maintain end-to-end MLOps pipelines for seamless deployment and monitoring of ML models. Implement and manage CI/CD workflows using modern tools (e.g., GitHub Actions, Azure DevOps, Jenkins). Orchestrate ML services using Kubernetes for scalable and reliable deployments. Develop and maintain FastAPI-based microservices to serve machine learning models via RESTful APIs. Collaborate with data scientists and ML engineers to productionize models in Azure and AWS cloud environments. Automate infrastructure provisioning and configuration using Infrastructure-as-Code (IaC) tools. Ensure observability, logging, monitoring, and model drift detection in deployed solutions. Required Skills: Strong proficiency in Kubernetes for container orchestration. Experience with CI/CD pipelines and tools like Jenkins, GitHub Actions, or Azure DevOps. Hands-on experience with FastAPI for developing ML-serving APIs. Proficient in deploying ML workflows on Azure and AWS. Knowledge of containerization (Docker optional, if used during local development). Familiarity with model versioning, reproducibility, and experiment tracking tools (e.g., MLflow, DVC). Strong scripting skills (Python, Bash). Preferred Qualifications: B.Tech/M.Tech in Computer Science, Data Engineering, or related fields. Experience with Terraform, Helm, or other IaC tools. Understanding of DevOps practices and security in ML workflows. Good communication skills and a collaborative mindset.
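The posting lists MLflow as an experiment-tracking option; a minimal, hypothetical sketch of logging a run (the tracking URI, experiment name, and model are placeholders, and scikit-learn is assumed to be available) could be:

```python
# Minimal sketch: log parameters, a metric, and a model artifact with MLflow.
# Tracking URI and experiment name are hypothetical placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://mlflow.internal:5000")
mlflow.set_experiment("churn-model")

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=200)

with mlflow.start_run():
    model.fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```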

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Type Full-time Description About CloudBees CloudBees is the leading software delivery platform enabling enterprises to scale software delivery while ensuring security, compliance, and operational efficiency. We empower developers with fast, self-serve workflows across hybrid and heterogeneous environments, offering unmatched flexibility for cloud transformation. As trusted partners in DevSecOps, CloudBees supports organizations using Jenkins on-premise, transitioning to the cloud, or accelerating their DevOps maturity to drive innovation and achieve their business goals. Role Overview We are looking for a Tooling Engineer to design, develop, and maintain software tools that enhance the efficiency and effectiveness of our support team and broader organization. In this role, you will work closely with support engineers and other teams to identify pain points, automate repetitive tasks, and improve workflows through custom-built tools. Your contributions will directly improve productivity, service quality, and operational performance. Collaboration & Tools Development Design, develop, and maintain internal software tools that improve team efficiency and automation. Collaborate with the Support Team to identify tool requirements. Optimize and automate existing support processes to enhance response times and service quality. Ensure software tools are user-friendly, well-documented, and scalable. Debug, troubleshoot, and maintain existing tooling to ensure reliability and performance. Continuous Learning Stay updated on the Spring and/or Quarkus stacks. Stay updated on secure coding best practices. Stay up to date with the latest technologies to continuously improve and refine tooling solutions. Requirements Must-Have 5 to 7 years of experience in Java web application development. 1+ year of experience with either Spring or Quarkus. Version Control & CI/CD: Strong experience with Git for version control, including branching strategies, pull requests, and merge conflict resolution. Familiarity with GitHub workflows. Build & Dependency Management: Hands-on experience with Maven for dependency management and build automation. Ability to configure, troubleshoot, and optimize Maven builds and plugins. Scripting & Automation: Proficiency in Bash scripting for automation and system administration tasks. Experience with Groovy scripting is a plus. Containerization & Orchestration: Experience with containers (Docker, Podman) for building, running, and managing applications. Understanding of container orchestration tools (e.g., Kubernetes, Docker Compose, OpenShift). Knowledge of developer tools such as Continuous Integration/Continuous Delivery systems, test tools, code quality tools, planning tools, IDEs and debugging tools. Knowledge of web application security and writing secure code. Excellent problem-solving skills and the ability to work independently. Strong communication skills, with fluency in English (written and verbal). Ability to work collaboratively with both technical and non-technical stakeholders. Nice-to-Have Jenkins plugin development experience. Cloud platform knowledge (AWS or GCP). Experience with Kubernetes and Helm. Open source contributions or Jenkins community involvement. JavaScript front-end development experience (Vue.js is a plus). Experience with native Java tooling (GraalVM). Familiarity with Zendesk. Join us and help shape the future of DevSecOps! Why Join CloudBees?
Generous PTO to recharge and spend time with loved ones
A culture of inclusivity, innovation, and global diversity
Opportunity to work with cutting-edge technologies and contribute to DevSecOps transformation
Collaborative environment with opportunities for growth and skill development

CloudBees Commitment to Diversity:
We believe diversity drives innovation and enables us to serve our global customers better. We are committed to fostering a workplace that reflects the diversity of the Jenkins community and the customers we support.

Note: Beware of recruitment scams. CloudBees does not request sensitive personal or financial information during the hiring process.

We’re invested in you! We offer generous paid time off to allow our employees time to rest, recharge and to be present with family and friends throughout the year.

At CloudBees, we truly believe that the more diverse we are, the better we serve our customers. A global community like Jenkins demands a global focus from CloudBees. Organizations with greater diversity—gender, racial, ethnic, and global—are stronger partners to their customers. Whether by creating more innovative products, or better understanding our worldwide customers, or establishing a stronger cross-section of cultural leadership skills, diversity strengthens all aspects of the CloudBees organization. In the technology industry, diversity creates a competitive advantage. CloudBees customers demand technologies from us that solve their software development, and therefore their business problems, so that they can better serve their own customers. CloudBees attributes much of its success to its worldwide work force and commitment to global diversity, which opens our proprietary software to innovative ideas from anywhere. Along the way, we have witnessed firsthand how employees, partners, and customers with diverse perspectives and experiences contribute to creative problem-solving and better solutions for our customers and their businesses.

Scam Notice
Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of CloudBees. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that CloudBees will never ask for any personal account information, such as cell phone, credit card details or bank account numbers, during the recruitment process. Additionally, CloudBees will never send you a check for any equipment prior to employment. All communication from our recruiters and hiring managers will come from official company email addresses (@cloudbees.com) or from Paylocity and will never ask for any payment, fee to be paid or purchases to be made by the job seeker. If you are contacted by anyone claiming to represent CloudBees and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at tahelp@cloudbees.com. We take these matters very seriously and will work to ensure that any fraudulent activity is reported and dealt with appropriately. If you feel like you have been scammed in the US, please report it to the Federal Trade Commission at: https://reportfraud.ftc.gov/#/. In Europe, please contact the European Anti-Fraud Office at: https://anti-fraud.ec.europa.eu/olaf-and-you/report-fraud_en

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote


Job Description
Are you excited by the prospect of working with innovative security products? Does solving some of the Internet's most difficult security challenges interest you? Join our cutting-edge Application & API Security Product team! We work with customers to understand their needs in API Security, implementing solutions for maximum impact. Customers depend on our platform, beginning with broad questions like, "How many APIs do we have?". You'll use your problem-solving, creativity, and skills to map APIs, assess risks, and mitigate exposure, impacting customers.

Partner with the best
You'll solve technical and business problems, assess alternatives, costs and consequences, and present to stakeholders. You'll learn new technologies and cloud stacks, and even develop integration tools yourself. You'll make many decisions independently, within a team that supports and challenges you as you develop.

As a Solutions Architect Senior, you will be responsible for:
Gathering detailed customer requirements
Understanding customer infrastructure (cloud and on-premises) deeply
Developing architecture diagrams and integration checklists
Deploying the Noname remote engine and/or on-premise platform across supported cloud and on-premises environments
Integrating the platform with both inbound customer data sources and outbound workflow integrations
Providing enablement and knowledge transfer to customer personnel

Do What You Love
To be successful in this role you will:
Have a Bachelor's degree in a technical domain (or equivalent certifications)
Have 6+ years of experience in a technical capacity as a vendor for large enterprises
Possess prior experience as a Solutions Architect, Technical Account Manager, Solutions Engineer, or similar customer-facing role
Have 2+ years of experience with AWS, Azure, GCP, as well as Kubernetes, Docker, Load Balancing, NGINX
Demonstrate a clear understanding of web-based and network protocols (REST over HTTP, gRPC, GraphQL, etc.)
Have prior experience with API development, technologies and infrastructure, container technologies, API Gateways and WAF
Have working knowledge of Infrastructure as Code (Helm, CloudFormation, Azure Resource Manager, Terraform)
Have mastered command line interfaces, scripting (Shell, Python), and deployment tools like Jenkins, GitHub Actions, and Ansible

Work in a way that works for you
FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent, virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply.

Learn what makes Akamai a great place to work
Connect with us on social and see what life at Akamai is like!

We power and protect life online, by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference.
Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here.

Working for you

Benefits
At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs for today and in the future. We provide benefits surrounding all aspects of your life:
Your health
Your finances
Your family
Your time at work
Your time pursuing other endeavors
Our benefit plan options are designed to meet your individual needs and budget, both today and in the future.

About Us
Akamai powers and protects life online. Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences helping billions of people live, work, and play every day. With the world's most distributed compute platform from cloud to edge we make it easy for customers to develop and run applications, while we keep experiences closer to users and threats farther away.

Join us
Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you!

Posted 1 week ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Gurugram, Bengaluru

Work from Office


We are looking for an experienced Senior Big Data Developer to join our team and help build and optimize high-performance, scalable, and resilient data processing systems. You will work in a fast-paced startup environment, handling highly loaded systems and developing data pipelines that process billions of records in real time. As a key member of the Big Data team, you will be responsible for architecting and optimizing distributed systems, leveraging modern cloud-native technologies, and ensuring high availability and fault tolerance in our data infrastructure.

Primary Responsibilities:
Design, develop, and maintain real-time and batch processing pipelines using Apache Spark, Kafka, and Kubernetes (see the sketch after this posting). Architect high-throughput distributed systems that handle large-scale data ingestion and processing. Work extensively with AWS services, including Kinesis, DynamoDB, ECS, S3, and Lambda. Manage and optimize containerized workloads using Kubernetes (EKS) and ECS. Implement Kafka-based event-driven architectures to support scalable, low-latency applications. Ensure high availability, fault tolerance, and resilience of data pipelines. Work with MySQL, Elasticsearch, Aerospike, Redis, and DynamoDB to store and retrieve massive datasets efficiently. Automate infrastructure provisioning and deployment using Terraform, Helm, or CloudFormation. Optimize system performance, monitor production issues, and ensure efficient resource utilization. Collaborate with data scientists, backend engineers, and DevOps teams to support advanced analytics and machine learning initiatives. Continuously improve and modernize the data architecture to support growing business needs.

Required Skills:
7-10+ years of experience in big data engineering or distributed systems development. Expert-level proficiency in Scala, Java, or Python. Deep understanding of Kafka, Spark, and Kubernetes in large-scale environments. Strong hands-on experience with AWS (Kinesis, DynamoDB, ECS, S3, etc.). Proven experience working with highly loaded, low-latency distributed systems. Experience with Kafka, Kinesis, Flink, or other streaming technologies for event-driven architectures. Expertise in SQL and database optimizations for MySQL, Elasticsearch, and NoSQL stores. Strong experience in automating infrastructure using Terraform, Helm, or CloudFormation. Experience managing production-grade Kubernetes clusters (EKS). Deep knowledge of performance tuning, caching strategies, and data consistency models. Experience working in a startup environment, adapting to rapid changes and building scalable solutions from scratch.

Nice to Have:
Experience with machine learning pipelines and AI-driven analytics. Knowledge of workflow orchestration tools such as Apache Airflow.
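As a hedged illustration of the real-time pipeline work described above, the following PySpark Structured Streaming sketch reads JSON events from Kafka and writes them to object storage; the broker address, topic, schema, and paths are assumptions, and the spark-sql-kafka connector package must be available on the cluster:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-events-example").getOrCreate()

# Assumed event schema, for illustration only
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("ts", LongType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "events")                      # assumed topic name
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/events/")                        # assumed sink
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()

The checkpoint location is what gives the stream exactly-once recovery after restarts, which is one way the fault-tolerance requirement above is typically met.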

Posted 1 week ago

Apply

8.0 - 10.0 years

3 - 6 Lacs

Kolkata, Mumbai, New Delhi

Work from Office


Design, implement, and maintain end-to-end MLOps pipelines for model training, validation, deployment, and monitoring. Build and manage LLMOps pipelines for fine-tuning, evaluating, and deploying large language models (e.g., OpenAI, HuggingFace Transformers, custom LLMs). Use Kubeflow and Kubernetes to orchestrate reproducible, scalable ML/LLM workflows. Implement CI/CD pipelines for ML projects using GitHub Actions, Argo Workflows, or Jenkins. Automate infrastructure provisioning using Terraform, Helm, or similar IaC tools. Integrate model registry and artifact management with tools like MLflow, Weights & Biases, or DVC (a brief sketch follows this posting). Manage containerization with Docker and container orchestration via Kubernetes. Set up monitoring, logging, and alerting for production models using tools like Prometheus, Grafana, and the ELK Stack. Collaborate closely with Data Scientists and DevOps engineers to ensure seamless integration of models into production systems. Ensure model governance, reproducibility, auditability, and compliance with enterprise and legal standards. Conduct performance profiling, load testing, and cost optimization for LLM inference endpoints.

Required Skills and Experience

Core MLOps/LLMOps Expertise
5+ years of hands-on experience in MLOps/DevOps for AI/ML. 2+ years working with LLMs in production (e.g., fine-tuning, inference optimization, safety evaluations). Strong experience with Kubeflow Pipelines, KServe, and MLflow. Deep knowledge of CI/CD pipelines with GitHub Actions, GitLab CI, or CircleCI. Expert in Kubernetes, Helm, and Terraform for container orchestration and infrastructure as code.

Programming & Frameworks
Proficient in Python, with experience in ML libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers. Familiarity with FastAPI, Flask, or gRPC for building ML model APIs.

Cloud & DevOps
Hands-on with AWS, Azure, or GCP (preferred: EKS, S3, SageMaker, Vertex AI, Azure ML). Knowledge of model serving using Triton Inference Server, TorchServe, or ONNX Runtime.

Monitoring & Logging
Tools: Prometheus, Grafana, ELK, OpenTelemetry, Sentry. Model drift detection and A/B testing in production environments.

Soft Skills
Strong problem-solving and debugging skills. Ability to mentor junior engineers and collaborate with cross-functional teams. Clear communication, documentation, and Agile/Scrum proficiency.

Preferred Qualifications
Experience with LLMOps platforms like Weights & Biases, TruEra, PromptLayer, LangSmith. Experience with multi-tenant LLM serving or agentic systems (LangChain, Semantic Kernel). Prior exposure to Responsible AI practices (bias detection, explainability, fairness).
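For the model registry integration mentioned above, a minimal MLflow sketch might look like the following; the tracking URI, experiment name, and registered model name are illustrative assumptions, not values from the posting:

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # assumed tracking server
mlflow.set_experiment("example-experiment")

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", clf.score(X, y))
    # Registering the model gives downstream pipelines a versioned artifact to promote
    mlflow.sklearn.log_model(clf, "model", registered_model_name="example-classifier")

A Kubeflow pipeline step or CI/CD job could then promote a specific registered version to a serving endpoint, which is the hand-off point between training and deployment in a setup like the one described.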

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do
Demonstrate a deep understanding of cloud-native, distributed microservice-based architectures
Deliver solutions for complex business problems through the standard software SDLC
Build strong relationships with both internal and external stakeholders including product, business and sales partners
Demonstrate excellent communication skills with the ability to both simplify complex problems and also dive deeper if needed
Build and manage strong technical teams that deliver complex software solutions that scale
Manage teams with cross-functional skills that include software, quality, reliability engineers, project managers and scrum masters
Provide deep troubleshooting skills with the ability to lead and solve production and customer issues under pressure
Leverage strong experience in full stack software development and public cloud platforms like GCP and AWS
Mentor, coach and develop junior and senior software, quality and reliability engineers
Lead with a data/metrics-driven mindset with a maniacal focus towards optimizing and creating efficient solutions
Ensure compliance with EFX secure software development guidelines and best practices, and be responsible for meeting and maintaining QE, DevSec, and FinOps KPIs
Define, maintain and report SLAs, SLOs, and SLIs meeting EFX engineering standards in partnership with the product, engineering and architecture teams
Collaborate with architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices
Drive up-to-date technical documentation including support, end-user documentation and runbooks
Lead Sprint planning, Sprint Retrospectives, and other team activities
Be responsible for implementation architecture decision making associated with Product features/stories, refactoring work, and EOSL decisions
Create and deliver technical presentations to internal and external technical and non-technical stakeholders, communicating with clarity and precision, and present complex information in a concise format that is audience appropriate

What Experience You Need
Bachelor's degree or equivalent experience
7+ years of software engineering experience
7+ years of experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS
7+ years of experience with Cloud technology: GCP, AWS, or Azure
7+ years of experience designing and developing cloud-native solutions
7+ years of experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
7+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm Charts, and Terraform constructs

What could set you apart
Self-starter that identifies/responds to priority shifts with minimal supervision.
Strong communication and presentation skills
Strong leadership qualities
Demonstrated problem-solving skills and the ability to resolve conflicts
Experience creating and maintaining product and software roadmaps
Experience overseeing yearly as well as product/project budgets
Working in a highly regulated environment
Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices
Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle
Agile environments (e.g. Scrum, XP)
Relational databases (e.g. SQL Server, MySQL)
Atlassian tooling (e.g. JIRA, Confluence, and Github)
Developing with modern JDK (v1.7+)
Automated Testing: JUnit, Selenium, LoadRunner, SoapUI

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks.

Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


🚨 We’re Hiring | DevOps Engineer | Chennai 🚨
We're hiring for a DevOps Engineer at Incedo, and we're looking for professionals with 5-8 years of experience.
📍 Location: Chennai
🧠 Skills Requirements:
• Primary Skills: DevOps, Kubernetes, AWS, Docker, Jenkins, Linux
• Secondary Skills: Helm, Shell / Python Script Automation, Monitoring Tools
• Certification in AWS SAA (Solution Architect Associate) or CKA (Certified Kubernetes Administrator). Certification Mandatory
💼 Experience: 5 to 8 Years
⏳ Notice Period: Immediate to June Joiners Preferred
🏢 Work Mode: 5 Days from the Office
🧪 Interview Process: 2 Rounds
• 1 Virtual Interview
• Final Round: In-Person (Face-to-Face)
📩 Share your resume at indhu.prakash@incedoinc.com or DM me directly.
Please like, share, or tag someone in your network who might be a great fit. Referrals are always appreciated!

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
About CyberArk: CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads and throughout the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit our CyberArk blogs or follow us on X, LinkedIn or Facebook.

Job Description
CyberArk DevOps Engineers are coders who enjoy a challenge and will be responsible for automating and streamlining our operations and processes, building and maintaining tools for deployment, monitoring, and operations, and troubleshooting and resolving issues in our dev, test, and production environments. As a DevOps Engineer, you will partner closely with software engineers, QA, and product teams to design and implement robust CI/CD pipelines, define infrastructure through code, and create tools that empower developers to ship high-quality features faster. You’ll actively contribute to cloud-native development practices, introduce automation wherever possible, and champion a culture of continuous improvement, observability, and developer experience (DX). Your day-to-day work will involve a mix of platform/DevOps engineering, build/release automation, Kubernetes orchestration, infrastructure provisioning, and monitoring/alerting strategy development. You will also help enforce secure coding and deployment standards, contribute to runbooks and incident response procedures, and help scale systems to support rapid product growth. This is a hands-on technical role that requires strong coding ability, cloud architecture experience, and a mindset that thrives on collaboration, ownership, and resilience engineering.

Qualifications
Collaborate with developers to ensure seamless CI/CD workflows using tools like GitHub Actions, Jenkins CI/CD, and GitOps
Write automation and deployment scripts in Groovy, Python, Go, Bash, PowerShell or similar
Implement and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation
Build and manage containerized applications using Docker and orchestrate using Kubernetes (EKS, AKS, GKE)
Manage and optimize cloud infrastructure on AWS
Implement automated security and compliance checks using the latest security scanning tools like Snyk, Checkmarx, and Codacy
Develop and maintain monitoring, alerting, and logging systems using Datadog, Prometheus, Grafana, ELK, or Loki (a brief sketch follows this posting)
Drive observability and SLO/SLA adoption across services
Support development teams in debugging, environment management, and rollout strategies (blue/green, canary deployments)
Contribute to code reviews and build automation libraries for internal tooling and shared platforms

Additional Information
Requirements:
3-5 years of experience focused on DevOps Engineering, Cloud administration, or platform engineering, and application development
Strong hands-on experience in:
Linux/Unix and Windows OS
Network architecture and security configurations
Hands-on experience with the following scripting technologies:
Automation/Configuration management using either Ansible, Puppet, Chef, or an equivalent
Python, Ruby, Bash, PowerShell
Hands-on experience with IaC (Infrastructure as Code) like Terraform, CloudFormation
Hands-on experience with Cloud infrastructure such as AWS, Azure, GCP
Excellent communication skills, and strong attention to detail
Strong hands-on technical abilities
Strong computer literacy and/or the comfort, ability, and desire to advance technically
Strong understanding of Information Security in various environments
Demonstrated ability to assume sole and independent responsibilities
Ability to keep track of numerous detail-intensive, interdependent tasks and ensure their accurate completion

Preferred Tools & Technologies:
Languages: Python, Go, Bash, YAML, PowerShell
Version Control & CI/CD: Git, GitHub Actions, GitLab CI, Jenkins, GitOps
IaC: Terraform, CloudFormation
Containers: Docker, Kubernetes, Helm
Monitoring & Logging: Datadog, Prometheus, Grafana, ELK/EFK Stack
Cloud Platforms: AWS (EC2, ECS, EKS, Lambda, S3, Networking/VPC, cost optimization)
Security: HashiCorp Vault, Trivy, Aqua, OPA/Gatekeeper
Databases & Caches: PostgreSQL, MySQL, Redis, MongoDB
Others: NGINX, Istio, Consul, Kafka, Redis
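To make the monitoring responsibility above concrete, here is a minimal, hedged sketch (not taken from the posting) of exposing custom metrics from a Python service for Prometheus to scrape; the metric names and port are illustrative assumptions:

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names below are hypothetical examples, not conventions from the posting
REQUESTS = Counter("example_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("example_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                        # record how long the handler takes
        time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at /metrics on port 8000 (assumed)
    while True:
        handle_request()

A Prometheus scrape job pointed at that port, plus Grafana dashboards and alert rules, would complete the observability loop the role describes.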

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for a skilled and passionate Senior Software Development Engineer (SD3) to join our cloud engineering team. You will be responsible for designing, building, and maintaining scalable, cloud-native applications and infrastructure. This is a hands-on role requiring strong development skills, cloud experience, and a deep understanding of modern DevOps practices.

Key Responsibilities:
Design and develop cloud-native applications using AWS services such as EC2, EKS, Aurora MySQL, S3, IAM, and Lambda. Manage container orchestration using Kubernetes and Helm for scalable deployments. Build and maintain CI/CD pipelines using Docker, Jenkins, and AWS CodeBuild to enable rapid delivery cycles. Write clean, efficient, and reusable code/scripts using Java or Python to support application logic, automation, and integrations. Automate infrastructure provisioning and management using Terraform. Set up and maintain effective alerting and monitoring systems using Prometheus and Grafana. Collaborate with cross-functional teams, including developers, architects, DevOps engineers, and QA to deliver high-quality solutions. Ensure security, scalability, and performance best practices across all deployments.

Skills & Qualifications:
Strong understanding of cloud computing concepts and cloud-native application architecture. Proficiency in AWS services and infrastructure. Hands-on experience with Kubernetes and Helm. Proficient in Java or Python. Solid experience with CI/CD tools and Docker. Expertise in Infrastructure as Code (Terraform). Working knowledge of monitoring and alerting tools (Prometheus, Grafana). Strong problem-solving skills and a collaborative mindset.

Apply now if you’re excited to build resilient systems and love solving real-world engineering problems at scale!

It has been brought to our attention that there have recently been instances of fraudulent job offers, purporting to be from Capillary Technologies. The individuals or organizations sending these false employment offers may pose as a Capillary Technologies recruiter or representative and request personal information, purchasing of equipment or funds to further the recruitment process or offer paid training. Be advised that Capillary Technologies does not extend unsolicited employment offers. Furthermore, Capillary Technologies does not charge prospective employees with fees or make requests for funding as a part of the recruitment process. We commit to an inclusive recruitment process and equality of opportunity for all our job applicants.

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Delhi, Delhi

On-site


Job Description
Job Title: DevOps Engineer
Role Type: Fixed Term Direct Contract with Talpro
Duration: 6 Months
Years of Experience: 7+ Yrs.
CTC Offered: INR 200K Per Month
Notice Period: Only Immediate Joiners
Work Mode: Hybrid (3 Days from Office Weekly)
Location: Delhi / NCR

Mandatory Skills:
CI/CD & Automation Tools: Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD
Scripting: Python, Bash, PowerShell, Go
Automation Tools: Ansible, Puppet, Chef, SaltStack
Infrastructure as Code (IaC): Terraform, Pulumi
Containerization & Orchestration: Docker, Kubernetes (EKS, AKS, GKE), Helm
Monitoring Tools: Prometheus, Grafana
Logging Tools: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog
Security & Compliance: IAM, RBAC, Firewall, TLS/SSL, VPN; ISO 27001, SOC 2, GDPR
Networking & Load Balancing: TCP/IP, DNS, HTTP/S, VPN; Nginx, HAProxy, ALB/ELB
Databases: MySQL, PostgreSQL, MongoDB, Redis
Storage Solutions: SAN, NAS

Good to Have Skills:
Experience with hybrid cloud and multi-cloud architectures
Familiarity with serverless frameworks
Knowledge of DevSecOps integrations
Cloud platform certifications (AWS, Azure, GCP)

Role Overview / Job Summary:
We are looking for a highly skilled DevOps Engineer to design, implement, and maintain robust CI/CD pipelines, automation workflows, and infrastructure solutions across cloud-native and containerized environments. The ideal candidate will have deep expertise in infrastructure as code, automation, security compliance, and cloud orchestration technologies. You will work closely with development, QA, and security teams to enable seamless software delivery and reliable operations.

Key Responsibilities / Job Responsibilities:
Design, implement, and manage robust CI/CD pipelines using industry-standard tools.
Automate provisioning, configuration, and deployment using tools like Ansible, Terraform, and Pulumi.
Manage containerization and orchestration with Docker and Kubernetes (EKS/AKS/GKE).
Implement monitoring and alerting systems using Prometheus, Grafana, and the ELK stack.
Enforce security best practices including IAM, firewall rules, and data encryption.
Ensure compliance with ISO 27001, SOC 2, and GDPR standards.
Troubleshoot system-level issues and optimize application performance.
Collaborate with cross-functional teams to support Agile and DevOps delivery practices.
Manage database configurations, backups, and storage integrations.

Job Types: Full-time, Contractual / Temporary
Contract length: 6 months
Pay: ₹150,000.00 - ₹200,000.00 per month
Benefits:
Commuter assistance
Health insurance
Provident Fund
Schedule:
Day shift
Morning shift
Weekend availability
Experience:
DevOps: 7 years (Required)
Work Location: In person
Speak with the employer: +91 9840916415
Application Deadline: 12/06/2025

Posted 1 week ago

Apply

Exploring Helm Jobs in India

Helm is a popular package manager for Kubernetes that simplifies the deployment and management of applications. In India, the demand for professionals with expertise in Helm is on the rise as more companies adopt Kubernetes for their container orchestration needs.
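As a hedged sketch of what day-to-day Helm automation can look like (for example inside a CI/CD job), the snippet below wraps the Helm CLI from Python; the release name, chart path, namespace, and values file are illustrative assumptions:

import subprocess

def deploy_release(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Run `helm upgrade --install` so the deployment is idempotent across pipeline runs."""
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--values", values_file,
        "--wait",  # block until the release's resources report ready
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    deploy_release(
        release="web-app",
        chart="./charts/web-app",        # assumed local chart path
        namespace="staging",
        values_file="values-staging.yaml",
    )

The same pattern extends to helm rollback or helm lint steps, topics that several of the interview questions further down touch on.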

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi NCR

Average Salary Range

The average salary range for Helm professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can command salaries upwards of INR 15 lakhs per annum.

Career Path

Typically, a career in Helm progresses as follows:

  1. Junior Helm Engineer
  2. Helm Engineer
  3. Senior Helm Engineer
  4. Helm Architect
  5. Helm Specialist
  6. Helm Consultant

Related Skills

In addition to proficiency in Helm, professionals in this field are often expected to have knowledge of:

  • Kubernetes
  • Docker
  • Containerization
  • DevOps practices
  • Infrastructure as Code (IaC)

Interview Questions

  • What is Helm and how does it simplify Kubernetes deployments? (basic)
  • Can you explain the difference between a Chart and a Release in Helm? (medium)
  • How would you handle secrets management in Helm charts? (medium)
  • What are the limitations of Helm and how would you work around them? (advanced)
  • How do you troubleshoot Helm deployment failures? (medium)
  • Explain the concept of Helm Hooks and when they are triggered during the deployment lifecycle. (medium)
  • How do you version and manage Helm charts in a production environment? (medium)
  • What are the best practices for Helm chart organization and structure? (basic)
  • Describe a scenario where you used Helm to deploy a complex application and the challenges you faced. (advanced)
  • How do you manage dependencies between Helm charts? (medium)
  • Explain the difference between Helm 2 and Helm 3. (basic)
  • How do you perform a rollback of a Helm release? (medium)
  • What security considerations should be taken into account when using Helm? (advanced)
  • How do you customize Helm charts for different environments (dev, staging, production)? (medium)
  • Can you automate the deployment of Helm charts using CI/CD pipelines? (medium)
  • What is Tiller in Helm and why was it removed in Helm 3? (advanced)
  • How do you manage upgrades of Helm releases without causing downtime? (medium)
  • Explain how you would handle configuration management in Helm charts. (medium)
  • What are the advantages of using Helm over manual Kubernetes manifests? (basic)
  • How do you ensure the idempotency of Helm deployments? (medium)
  • How do you perform linting and testing of Helm charts? (basic)
  • Can you explain the concept of Helm repositories and how they are used? (medium)
  • How would you handle versioning of Helm charts to ensure compatibility with different Kubernetes versions? (medium)
  • Describe a situation where you had to troubleshoot a Helm chart that was failing to deploy. (advanced)

Closing Remark

As the demand for Helm professionals continues to grow in India, it is important for job seekers to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a valuable asset to organizations looking to leverage Helm for their Kubernetes deployments. Good luck on your job search!
