
1205 Helm Jobs - Page 20

JobPe aggregates listings for easy access to applications; you apply directly on the original job portal.

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

Naukri logo

We are hiring a DevOps / Site Reliability Engineer for a 6-month full-time onsite role in Pune (with possible extension). The ideal candidate will have 6-9 years of experience in DevOps/SRE roles with deep expertise in Kubernetes (preferably GKE), Terraform, Helm, and GitOps tools like ArgoCD or Flux. The role involves building and managing cloud-native infrastructure, CI/CD pipelines, and observability systems, while ensuring performance, scalability, and resilience. Experience in infrastructure coding, backend optimization (Node.js, Django, Java, Go), and cloud architecture (IAM, VPC, CloudSQL, Secrets) is essential. Strong communication and hands-on technical ability are musts. Immediate joiners only.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Andhra Pradesh

On-site

GlassDoor logo

Bachelor's degree in computer science or equivalent experience, with strong communication skills. Over 7 years of IT industry experience, with substantial expertise as a DevOps or Cloud Engineer. In-depth experience with development, configuration, and maintenance of cloud services using AWS. Strong experience using AWS services including Compute, Storage, Network, RDS, Security, and Serverless technologies such as AWS Lambda, Step Functions, and EventBridge. Experience with automated deployment tools and the principles of CI/CD using tools such as Jenkins, GitHub, GitHub Actions/Runners, CloudFormation, CDK, Terraform, and Helm. Expertise in containerization and orchestration using Docker, Kubernetes, ECS, and AWS Batch. Solid understanding of cloud design, networking concepts, and security best practices. Experience with configuration management tools such as Ansible and AWS SSM. Proficient in using Git with a good understanding of branching, Git flows, and release management. Scripting experience in Python, Bash, or similar, including virtual environment packaging. Knowledge of and experience with Enterprise Identity Management solutions such as SSO, SAML, and OpenID Connect. A solid understanding of application architectures, including cloud-native approaches to infrastructure. Experience with testing tools like SonarQube, Cucumber, and Pytest. Experience in creating and running various types of tests including unit tests, integration tests, system tests, and acceptance tests to ensure software and systems function correctly. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

Location(s): Remote - India, Remote, Remote, IN Line Of Business: Insurance (INSURANCE) Job Category: Engineering & Technology Experience Level: Experienced Hire At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Moody's is looking for a Senior Engineer (DevOps) to be part of a team responsible for designing and developing the tools and automation for the infrastructure of the Core Products suite in AWS. What You'll Be Doing: You will be responsible for leading efforts to implement stability and observability improvements to our Kubernetes container platform. You will be focused on SLI development, automation, TOIL elimination, incident response, root cause analysis, and monitoring enhancements. You should have the aptitude and enthusiasm for building and servicing highly distributed, scalable, and mission-critical systems. You should have a passion for automation and creating self-service mechanisms for customers. Create software design documents, including architecture, sequence, and class diagrams and related artifacts. Translate design inputs into development work items. Assist in providing estimates for levels of effort required to accomplish expected deliverables. Research new technologies and techniques to support leading-edge development. Provide an active contribution to the team responsible for the design, development, and implementation of critical enterprise-scale applications.
Required experience and skills: Expertise with deploying and managing AWS services (Networking, Storage, EKS, API Gateway, etc.) Expertise with Infrastructure-as-Code frameworks (Terraform, CloudFormation) Expertise in containerization and microservice architecture (Kubernetes, Docker, Helm Charts) Expertise in designing tools and automation using a scripting language (Python, PowerShell) Expertise using an observability stack (Kibana, Prometheus, Grafana, with a focus on observability and alerting) Expertise with CI/CD pipelines and automation and how to apply them with services such as Jenkins, Azure DevOps, CircleCI. Experience with modern programming and scripting languages (Python, Go, PowerShell, C#) Desirable experience and skills: Familiarity with Linux and Windows administration. A developer background. Experience in performance measurement, bottleneck analysis, and resource usage monitoring. Master of Science in Computer Science, or Bachelor of Science in Computer Science with 5 or more years’ experience. Experience with data access and computing in highly distributed cloud systems. Experience in agile development. Written and verbal communication skills. Technology: Kubernetes, CI/CD, Docker, AWS, Azure, Python, PowerShell, Jenkins, Helm Charts, Ansible, Terraform. Moody’s is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.
For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet. Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee’s tenure with Moody’s.
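The SLI and error-budget work this role describes reduces to simple arithmetic over the SLO window. A minimal sketch in Python (an illustration only; the 99.9% target and the numbers below are hypothetical, not from the posting):

```python
# Error-budget arithmetic behind SLO-driven alerting: given a target
# availability and a rolling window, compute the allowed and remaining
# downtime. All names and numbers here are hypothetical examples.

def error_budget_minutes(slo_target: float, window_minutes: int) -> float:
    """Total downtime the SLO permits over the window."""
    return (1.0 - slo_target) * window_minutes

def budget_remaining(slo_target: float, window_minutes: int,
                     observed_bad_minutes: float) -> float:
    """Fraction of the error budget still unspent (can go negative)."""
    budget = error_budget_minutes(slo_target, window_minutes)
    return (budget - observed_bad_minutes) / budget

# A 99.9% SLO over a 30-day window allows about 43.2 minutes of downtime.
WINDOW = 30 * 24 * 60  # 30 days expressed in minutes
print(round(error_budget_minutes(0.999, WINDOW), 1))    # 43.2
print(round(budget_remaining(0.999, WINDOW, 10.0), 3))  # 0.769
```

Burn-rate alerts in tools like Prometheus are typically just this calculation evaluated continuously over short and long windows.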

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Role Description We are seeking a highly skilled Application Consultant with expertise in Node.js and Python, and hands-on experience in developing and migrating applications across AWS and Azure. The ideal candidate will have a strong background in serverless computing, cloud-native development, and cloud migration, particularly from AWS to Azure. Key Responsibilities: Design, develop, and deploy applications using Node.js and Python on Azure and AWS platforms. Lead and support AWS to Azure migration efforts, including application and infrastructure components. Analyze source architecture and code to identify AWS service dependencies and remediation needs. Refactor and update codebases to align with Azure services, including Azure Functions, AKS, and Blob Storage. Develop and maintain deployment scripts and CI/CD pipelines for Azure environments. Migrate serverless applications from AWS Lambda to Azure Functions using Node.js or Python. Support unit testing, application testing, and troubleshooting in Azure environments. Work with containerized applications, Kubernetes, Helm charts, and Azure PaaS services. Handle AWS to Azure SDK conversions and data migration tasks (e.g., S3 to Azure Blob). Required Skills: 8+ years of experience in application development using Node.js and Python. Strong hands-on experience with Azure and AWS cloud platforms. Proficiency in Azure Functions, AKS, App Services, APIM, and Blob Storage. Experience with AWS Lambda to Azure Functions migration (Must Have). Solid understanding of Azure PaaS and serverless architecture. Experience with Kubernetes, Helm charts, and microservices. Strong troubleshooting and debugging skills in cloud environments. Experience with AWS to Azure SDK conversion (Must Have). Skills: Python, Node.js, Azure Cloud, AWS

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Role Description: DevOps Engineer / CI/CD. Job Summary: Hands-on experience with CI/CD processes and tools like Git, Jenkins, Bamboo, and Bitbucket. Hands-on scripting in Shell/PowerShell; Python scripting skills are good to have. Hands-on experience integrating third-party APIs with CI/CD pipelines. Hands-on experience with basic AWS services like EC2, load balancers, security groups, RDS, and VPCs; AWS Patch Manager is good to have. Hands-on experience troubleshooting application and infrastructure issues; should be able to explain 2-3 issues worked on in a previous role. Should have knowledge of ITIL and Agile processes and be willing to support weekend deployment and patching activities. Should be a quick learner, adapt to emerging technologies and processes, and have a sense of ownership and urgency to deliver. DevOps: Hands-on in AWS EKS & EFS. Strong knowledge of Kubernetes. Experience with Enterprise Terraform. Strong knowledge of Helm Charts & Helmfile. Knowledge of Linux OS. Knowledge of Git & GitOps. Experience with Bitbucket & Bamboo. Knowledge of systems and application security principles. Experience automating and improving development and release processes and CI/CD pipelines. Work with developers to ensure the development process is followed across the board. Experience setting up build and deployment pipelines for Web, iOS, and Android. Knowledge of and experience with the mobile application release process. Knowledge of Intune, MobileIron, and Perfecto. Understanding of application test automation and reporting. Understanding of infrastructure and application health monitoring and reporting. Skills: DevOps Tools, DevOps, Cloud Computing

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

Job Title: MLOps Engineer Location: [Insert Location – e.g., Gurugram / Remote / On-site] Experience: 2–5 years Type: Full-Time Key Responsibilities: Design, develop, and maintain end-to-end MLOps pipelines for seamless deployment and monitoring of ML models. Implement and manage CI/CD workflows using modern tools (e.g., GitHub Actions, Azure DevOps, Jenkins). Orchestrate ML services using Kubernetes for scalable and reliable deployments. Develop and maintain FastAPI-based microservices to serve machine learning models via RESTful APIs. Collaborate with data scientists and ML engineers to productionize models in Azure and AWS cloud environments. Automate infrastructure provisioning and configuration using Infrastructure-as-Code (IaC) tools. Ensure observability, logging, monitoring, and model drift detection in deployed solutions. Required Skills: Strong proficiency in Kubernetes for container orchestration. Experience with CI/CD pipelines and tools like Jenkins, GitHub Actions, or Azure DevOps. Hands-on experience with FastAPI for developing ML-serving APIs. Proficient in deploying ML workflows on Azure and AWS. Knowledge of containerization (e.g., Docker for local development). Familiarity with model versioning, reproducibility, and experiment tracking tools (e.g., MLflow, DVC). Strong scripting skills (Python, Bash). Preferred Qualifications: B.Tech/M.Tech in Computer Science, Data Engineering, or related fields. Experience with Terraform, Helm, or other IaC tools. Understanding of DevOps practices and security in ML workflows. Good communication skills and a collaborative mindset.
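The model-drift detection responsibility mentioned above can be illustrated with a toy baseline-vs-live comparison. This is a hedged sketch of the idea, not a method from the posting; production pipelines usually rely on library implementations such as PSI or Kolmogorov-Smirnov tests. The feature values and threshold are made up:

```python
import math

# Toy drift check: flag drift when the live mean of a feature shifts
# more than `threshold` baseline standard errors from the training mean.
# All data below is hypothetical, for illustration only.

def mean_shift_zscore(baseline: list, live: list) -> float:
    n = len(baseline)
    mu = sum(baseline) / n
    var = sum((x - mu) ** 2 for x in baseline) / (n - 1)  # sample variance
    se = math.sqrt(var / len(live))                       # std error of live mean
    live_mu = sum(live) / len(live)
    return (live_mu - mu) / se

def has_drifted(baseline, live, threshold=3.0) -> bool:
    return abs(mean_shift_zscore(baseline, live)) > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5, 10.0, 10.2, 9.8]
stable   = [10.1, 9.9, 10.0, 10.3]   # close to the training distribution
shifted  = [14.0, 15.2, 14.8, 15.5]  # clearly displaced

print(has_drifted(baseline, stable))   # False
print(has_drifted(baseline, shifted))  # True
```

A monitoring job would run a check like this per feature on a schedule and raise an alert (e.g., via Prometheus) when it fires.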

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

Job Type Full-time Description About CloudBees CloudBees is the leading software delivery platform enabling enterprises to scale software delivery while ensuring security, compliance, and operational efficiency. We empower developers with fast, self-serve workflows across hybrid and heterogeneous environments, offering unmatched flexibility for cloud transformation. As trusted partners in DevSecOps, CloudBees supports organizations using Jenkins on-premise, transitioning to the cloud, or accelerating their DevOps maturity to drive innovation and achieve their business goals. Role Overview We are looking for a Tooling Engineer to design, develop, and maintain software tools that enhance the efficiency and effectiveness of our support team and broader organization. In this role, you will work closely with support engineers and other teams to identify pain points, automate repetitive tasks, and improve workflows through custom-built tools. Your contributions will directly improve productivity, service quality, and operational performance. Collaboration & Tools Development Design, develop, and maintain internal software tools that improve team efficiency and automation. Collaborate with the Support Team to identify tool requirements. Optimize and automate existing support processes to enhance response times and service quality. Ensure software tools are user-friendly, well-documented, and scalable. Debug, troubleshoot, and maintain existing tooling to ensure reliability and performance. Continuous Learning Stay updated on the Spring and/or Quarkus stacks. Stay updated on secure coding best practices. Stay up to date with the latest technologies to continuously improve and refine tooling solutions.
Requirements Must-Have 5 to 7 years of experience in Java web application development 1+ year of experience with either Spring or Quarkus Version Control & CI/CD: Strong experience with Git for version control, including branching strategies, pull requests, and merge conflict resolution. Familiarity with GitHub workflows. Build & Dependency Management: Hands-on experience with Maven for dependency management and build automation. Ability to configure, troubleshoot, and optimize Maven builds and plugins. Scripting & Automation: Proficiency in Bash scripting for automation and system administration tasks. Experience with Groovy scripting is a plus. Containerization & Orchestration: Experience with containers (Docker, Podman) for building, running, and managing applications. Understanding of container orchestration tools (e.g., Kubernetes, Docker Compose, OpenShift). Knowledge of developer tools such as Continuous Integration/Continuous Delivery systems, test tools, code quality tools, planning tools, IDEs, and debugging tools Knowledge of web application security and writing secure code Excellent problem-solving skills and the ability to work independently Strong communication skills, with fluency in English (written and verbal) Ability to work collaboratively with both technical and non-technical stakeholders. Nice-to-Have Jenkins plugin development experience Cloud platform knowledge (AWS or GCP) Experience with Kubernetes and Helm Open source contributions or Jenkins community involvement JavaScript front-end development experience (Vue.js is a plus) Experience with native Java tooling (GraalVM) Familiarity with Zendesk Join us and help shape the future of DevSecOps! Why Join CloudBees?
Generous PTO to recharge and spend time with loved ones A culture of inclusivity, innovation, and global diversity Opportunity to work with cutting-edge technologies and contribute to DevSecOps transformation Collaborative environment with opportunities for growth and skill development CloudBees Commitment to Diversity: We believe diversity drives innovation and enables us to serve our global customers better. We are committed to fostering a workplace that reflects the diversity of the Jenkins community and the customers we support. Note: Beware of recruitment scams. CloudBees does not request sensitive personal or financial information during the hiring process. We’re invested in you! We offer generous paid time off to allow our employees time to rest, recharge and to be present with family and friends throughout the year. At CloudBees, we truly believe that the more diverse we are, the better we serve our customers. A global community like Jenkins demands a global focus from CloudBees. Organizations with greater diversity—gender, racial, ethnic, and global—are stronger partners to their customers. Whether by creating more innovative products, or better understanding our worldwide customers, or establishing a stronger cross-section of cultural leadership skills, diversity strengthens all aspects of the CloudBees organization. In the technology industry, diversity creates a competitive advantage. CloudBees customers demand technologies from us that solve their software development, and therefore their business problems, so that they can better serve their own customers. CloudBees attributes much of its success to its worldwide work force and commitment to global diversity, which opens our proprietary software to innovative ideas from anywhere. Along the way, we have witnessed firsthand how employees, partners, and customers with diverse perspectives and experiences contribute to creative problem-solving and better solutions for our customers and their businesses. 
Scam Notice Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of CloudBees. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that CloudBees will never ask for any personal account information, such as cell phone, credit card details or bank account numbers, during the recruitment process. Additionally, CloudBees will never send you a check for any equipment prior to employment. All communication from our recruiters and hiring managers will come from official company email addresses (@cloudbees.com) or from Paylocity and will never ask for any payment, fee to be paid or purchases to be made by the job seeker. If you are contacted by anyone claiming to represent CloudBees and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at tahelp@cloudbees.com. We take these matters very seriously and will work to ensure that any fraudulent activity is reported and dealt with appropriately. If you feel like you have been scammed in the US, please report it to the Federal Trade Commission at: https://reportfraud.ftc.gov/#/. In Europe, please contact the European Anti-Fraud Office at: https://anti-fraud.ec.europa.eu/olaf-and-you/report-fraud_en

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Linkedin logo

Job Description Are you excited by the prospect of working with innovative security products? Does solving some of the Internet's most difficult security challenges interest you? Join our cutting-edge Application & API Security Product team! We work with customers to understand their needs in API Security, implementing solutions for maximum impact. Customers depend on our platform, beginning with broad questions like, "How many APIs do we have?". You'll use your problem-solving skills, creativity, and expertise to map APIs, assess risks, and mitigate exposure, making an impact for customers. Partner with the best You'll solve technical and business problems, assess alternatives, costs and consequences, and present to stakeholders. You'll learn new technologies and cloud stacks, and even develop integration tools yourself. You'll make many decisions independently, within a team that supports and challenges you as you develop. As a Solutions Architect Senior, you will be responsible for: Gathering detailed customer requirements Understanding customer infrastructure (cloud and on-premises) in depth Developing architecture diagrams and integration checklists Deploying the Noname remote engine and/or on-premise platform across supported cloud and on-premises environments Integrating the platform with both inbound customer data sources and outbound workflow integrations Providing enablement and knowledge transfer to customer personnel Do What You Love To be successful in this role you will: Have a Bachelor's degree in a technical domain (or equivalent certifications) Have 6+ years experience in a technical capacity as a vendor for large enterprises Possess prior experience as a Solutions Architect, Technical Account Manager, Solutions Engineer, or similar customer-facing role Have 2+ years experience with AWS, Azure, GCP, as well as Kubernetes, Docker, Load Balancing, NGINX Demonstrate clear understanding of web-based and network protocols (REST over HTTP, gRPC, GraphQL, etc.)
Have prior experience with API development, technologies and infrastructure, container technologies, API Gateways and WAF Have working knowledge of Infrastructure as Code (Helm, CloudFormation, Azure Resource Manager, Terraform) Have mastery of command-line interfaces, scripting (Shell, Python), and deployment tools like Jenkins, GitHub Actions, and Ansible. Work in a way that works for you FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent, virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply. Learn what makes Akamai a great place to work Connect with us on social and see what life at Akamai is like! We power and protect life online, by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference. Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here. Working for you Benefits At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs for today and in the future.
We provide benefits surrounding all aspects of your life: Your health Your finances Your family Your time at work Your time pursuing other endeavors Our benefit plan options are designed to meet your individual needs and budget, both today and in the future. About Us Akamai powers and protects life online. Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences, helping billions of people live, work, and play every day. With the world's most distributed compute platform, from cloud to edge, we make it easy for customers to develop and run applications, while we keep experiences closer to users and threats farther away. Join us Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you!

Posted 1 week ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Gurugram, Bengaluru

Work from Office

Naukri logo

We are looking for an experienced Senior Big Data Developer to join our team and help build and optimize high-performance, scalable, and resilient data processing systems. You will work in a fast-paced startup environment, handling highly loaded systems and developing data pipelines that process billions of records in real time. As a key member of the Big Data team, you will be responsible for architecting and optimizing distributed systems, leveraging modern cloud-native technologies, and ensuring high availability and fault tolerance in our data infrastructure. Primary Responsibilities: Design, develop, and maintain real-time and batch processing pipelines using Apache Spark, Kafka, and Kubernetes. Architect high-throughput distributed systems that handle large-scale data ingestion and processing. Work extensively with AWS services, including Kinesis, DynamoDB, ECS, S3, and Lambda. Manage and optimize containerized workloads using Kubernetes (EKS) and ECS. Implement Kafka-based event-driven architectures to support scalable, low-latency applications. Ensure high availability, fault tolerance, and resilience of data pipelines. Work with MySQL, Elasticsearch, Aerospike, Redis, and DynamoDB to store and retrieve massive datasets efficiently. Automate infrastructure provisioning and deployment using Terraform, Helm, or CloudFormation. Optimize system performance, monitor production issues, and ensure efficient resource utilization. Collaborate with data scientists, backend engineers, and DevOps teams to support advanced analytics and machine learning initiatives. Continuously improve and modernize the data architecture to support growing business needs. Required Skills: 7-10+ years of experience in big data engineering or distributed systems development. Expert-level proficiency in Scala, Java, or Python. Deep understanding of Kafka, Spark, and Kubernetes in large-scale environments. Strong hands-on experience with AWS (Kinesis, DynamoDB, ECS, S3, etc.). 
Proven experience working with highly loaded, low-latency distributed systems. Experience with Kafka, Kinesis, Flink, or other streaming technologies for event-driven architectures. Expertise in SQL and database optimizations for MySQL, Elasticsearch, and NoSQL stores. Strong experience in automating infrastructure using Terraform, Helm, or CloudFormation. Experience managing production-grade Kubernetes clusters (EKS). Deep knowledge of performance tuning, caching strategies, and data consistency models. Experience working in a startup environment, adapting to rapid changes and building scalable solutions from scratch. Nice to Have: Experience with machine learning pipelines and AI-driven analytics. Knowledge of workflow orchestration tools such as Apache Airflow.
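Streaming frameworks like Spark and Flink, named in this posting, organize computation around event-time windows. The core idea can be sketched in plain Python (a conceptual illustration only, not any framework's actual API; the event data is made up):

```python
from collections import defaultdict

# Tumbling-window count: bucket events by event time into fixed,
# non-overlapping windows of `window_sec` seconds each.
# Events are (timestamp_sec, key) tuples; the data is hypothetical.

def tumbling_window_counts(events, window_sec):
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_sec) * window_sec  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (5, "click"), (12, "view"), (14, "click"), (21, "view")]
print(tumbling_window_counts(events, 10))
# {(0, 'click'): 2, (10, 'view'): 1, (10, 'click'): 1, (20, 'view'): 1}
```

Real engines add the hard parts on top of this: out-of-order arrival, watermarks, and state that survives process restarts, which is why Kafka offsets and checkpointing matter in production pipelines.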

Posted 1 week ago

Apply

8.0 - 10.0 years

3 - 6 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Naukri logo

Design, implement, and maintain end-to-end MLOps pipelines for model training, validation, deployment, and monitoring. Build and manage LLMOps pipelines for fine-tuning, evaluating, and deploying large language models (e.g., OpenAI, HuggingFace Transformers, custom LLMs). Use Kubeflow and Kubernetes to orchestrate reproducible, scalable ML/LLM workflows. Implement CI/CD pipelines for ML projects using GitHub Actions, Argo Workflows, or Jenkins. Automate infrastructure provisioning using Terraform, Helm, or similar IaC tools. Integrate model registry and artifact management with tools like MLflow, Weights & Biases, or DVC. Manage containerization with Docker and container orchestration via Kubernetes. Set up monitoring, logging, and alerting for production models using tools like Prometheus, Grafana, and the ELK Stack. Collaborate closely with Data Scientists and DevOps engineers to ensure seamless integration of models into production systems. Ensure model governance, reproducibility, auditability, and compliance with enterprise and legal standards. Conduct performance profiling, load testing, and cost optimization for LLM inference endpoints. Required Skills and Experience: Core MLOps/LLMOps Expertise: 5+ years of hands-on experience in MLOps/DevOps for AI/ML. 2+ years working with LLMs in production (e.g., fine-tuning, inference optimization, safety evaluations). Strong experience with Kubeflow Pipelines, KServe, and MLflow. Deep knowledge of CI/CD pipelines with GitHub Actions, GitLab CI, or CircleCI. Expert in Kubernetes, Helm, and Terraform for container orchestration and infrastructure as code. Programming & Frameworks: Proficient in Python, with experience in ML libraries such as scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers. Familiarity with FastAPI, Flask, or gRPC for building ML model APIs. Cloud & DevOps: Hands-on with AWS, Azure, or GCP (preferred: EKS, S3, SageMaker, Vertex AI, Azure ML).
Knowledge of model serving using Triton Inference Server, TorchServe, or ONNX Runtime. Monitoring & Logging Tools: Prometheus, Grafana, ELK, OpenTelemetry, Sentry. Model drift detection and A/B testing in production environments. Soft Skills: Strong problem-solving and debugging skills. Ability to mentor junior engineers and collaborate with cross-functional teams. Clear communication, documentation, and Agile/Scrum proficiency. Preferred Qualifications: Experience with LLMOps platforms like Weights & Biases, TruEra, PromptLayer, and LangSmith. Experience with multi-tenant LLM serving or agentic systems (LangChain, Semantic Kernel). Prior exposure to Responsible AI practices (bias detection, explainability, fairness).
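The production A/B testing mentioned above commonly reduces to a two-proportion z-test on success (or error) rates between the control and candidate models. A self-contained sketch with made-up counts (illustrative only, not from the posting):

```python
import math

# Two-proportion z-test: is variant B's success rate significantly
# different from variant A's? All counts below are hypothetical.

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)  # pooled success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 20% conversion for model A vs 26% for model B, 1000 requests each.
z = two_proportion_z(200, 1000, 260, 1000)
print(round(z, 2))    # 3.19
print(abs(z) > 1.96)  # True -> significant at the 5% level
```

In a drift/canary setup the same test can gate automatic rollback when the candidate's error rate is significantly worse than the baseline's.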

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do: Demonstrate a deep understanding of cloud-native, distributed microservice-based architectures. Deliver solutions for complex business problems through a standard software SDLC. Build strong relationships with both internal and external stakeholders including product, business and sales partners. Demonstrate excellent communication skills with the ability to both simplify complex problems and also dive deeper if needed. Build and manage strong technical teams that deliver complex software solutions that scale. Manage teams with cross-functional skills that include software, quality, reliability engineers, project managers and scrum masters. Provide deep troubleshooting skills with the ability to lead and solve production and customer issues under pressure. Leverage strong experience in full-stack software development and public cloud like GCP and AWS. Mentor, coach and develop junior and senior software, quality and reliability engineers. Lead with a data/metrics driven mindset with a maniacal focus towards optimizing and creating efficient solutions. Ensure compliance with EFX secure software development guidelines and best practices, and be responsible for meeting and maintaining QE, DevSec, and FinOps KPIs. Define, maintain, and report SLAs, SLOs, and SLIs meeting EFX engineering standards in partnership with the product, engineering
and architecture teams Collaborate with architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices Drive up-to-date technical documentation including support, end user documentation and run books Lead Sprint planning, Sprint Retrospectives, and other team activity Responsible for implementation architecture decision making associated with Product features/stories, refactoring work, and EOSL decisions Create and deliver technical presentations to internal and external technical and non-technical stakeholders communicating with clarity and precision, and present complex information in a concise format that is audience appropriate What Experience You Need Bachelor's degree or equivalent experience 7+ years of software engineering experience 7+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 7+ years experience with Cloud technology: GCP, AWS, or Azure 7+ years experience designing and developing cloud-native solutions 7+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 7+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Strong communication and presentation skills Strong leadership qualities Demonstrated problem solving skills and the ability to resolve conflicts Experience creating and maintaining product and software roadmaps Experience overseeing yearly as well as product/project budgets Working in a highly regulated environment Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. 
HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and Github) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference! Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
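The posting above calls for defining and reporting SLAs, SLOs, and SLIs. As a hedged illustration of what that reporting rests on (function name and all numbers are invented for the sketch, not Equifax's tooling), an error-budget calculation for an availability SLO can be sketched in Python:

```python
# Sketch: error-budget accounting for an availability SLO.
# All names and numbers are illustrative, not from the posting.

def error_budget_remaining(slo_target: float,
                           total_requests: int,
                           failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent.

    slo_target: e.g. 0.999 means 99.9% of requests should succeed,
    so 0.1% of total requests is the full error budget.
    """
    if total_requests == 0:
        return 1.0  # no traffic observed, nothing spent
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget at all
    spent = failed_requests / allowed_failures
    return max(0.0, 1.0 - spent)

# Example: 1,000,000 requests against a 99.9% SLO allows ~1,000 failures;
# 250 failures spend a quarter of the budget, leaving about 75%.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
```

In practice the counts would come from an SLI query against a metrics backend rather than hard-coded values.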

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


🚨 We’re Hiring | DevOps Engineer | Chennai 🚨 We're hiring for a DevOps Engineer at Incedo, and we're looking for professionals with 5-8 years of experience. 📍 Location: Chennai 🧠 Skill Requirements: • Primary Skills: DevOps, Kubernetes, AWS, Docker, Jenkins, Linux • Secondary Skills: Helm, Shell/Python Script Automation, Monitoring Tools • Certification in AWS SAA (Solutions Architect Associate) or CKA (Certified Kubernetes Administrator) is mandatory 💼 Experience: 5 to 8 Years ⏳ Notice Period: Immediate to June Joiners Preferred 🏢 Work Mode: 5 Days from the Office 🧪 Interview Process: 2 Rounds • 1 Virtual Interview • Final Round: In-Person (Face-to-Face) 📩 Share your resume at indhu.prakash@incedoinc.com or DM me directly. Please like, share, or tag someone in your network who might be a great fit. Referrals are always appreciated!

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
About CyberArk: CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads and throughout the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit our CyberArk blogs or follow us on X, LinkedIn or Facebook.

Job Description
CyberArk DevOps Engineers are coders who enjoy a challenge and will be responsible for automating and streamlining our operations and processes, building and maintaining tools for deployment, monitoring, and operations, and troubleshooting and resolving issues in our dev, test, and production environments. As a DevOps Engineer, you will partner closely with software engineers, QA, and product teams to design and implement robust CI/CD pipelines, define infrastructure through code, and create tools that empower developers to ship high-quality features faster. You’ll actively contribute to cloud-native development practices, introduce automation wherever possible, and champion a culture of continuous improvement, observability, and developer experience (DX). Your day-to-day work will involve a mix of platform/DevOps engineering, build/release automation, Kubernetes orchestration, infrastructure provisioning, and monitoring/alerting strategy development. You will also help enforce secure coding and deployment standards, contribute to runbooks and incident response procedures, and help scale systems to support rapid product growth. This is a hands-on technical role that requires strong coding ability, cloud architecture experience, and a mindset that thrives on collaboration, ownership, and resilience engineering.
Qualifications
- Collaborate with developers to ensure seamless CI/CD workflows using tools like GitHub Actions, Jenkins CI/CD, and GitOps
- Write automation and deployment scripts in Groovy, Python, Go, Bash, PowerShell, or similar
- Implement and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation
- Build and manage containerized applications using Docker and orchestrate them using Kubernetes (EKS, AKS, GKE)
- Manage and optimize cloud infrastructure on AWS
- Implement automated security and compliance checks using the latest security scanning tools like Snyk, Checkmarx, and Codacy
- Develop and maintain monitoring, alerting, and logging systems using Datadog, Prometheus, Grafana, ELK, or Loki
- Drive observability and SLO/SLA adoption across services
- Support development teams in debugging, environment management, and rollout strategies (blue/green, canary deployments)
- Contribute to code reviews and build automation libraries for internal tooling and shared platforms

Additional Information
Requirements:
- 3-5 years of experience focused on DevOps engineering, cloud administration, platform engineering, or application development
- Strong hands-on experience with Linux/Unix and Windows OS
- Network architecture and security configurations
- Hands-on experience with automation/configuration management using Ansible, Puppet, Chef, or an equivalent
- Scripting in Python, Ruby, Bash, or PowerShell
- Hands-on experience with IaC (Infrastructure as Code) tools like Terraform and CloudFormation
- Hands-on experience with cloud infrastructure such as AWS, Azure, or GCP
- Excellent communication skills and strong attention to detail
- Strong hands-on technical abilities
- Strong computer literacy and/or the comfort, ability, and desire to advance technically
- Strong understanding of information security in various environments
- Demonstrated ability to assume sole and independent responsibilities
- Ability to keep track of numerous detail-intensive, interdependent tasks and ensure their accurate completion

Preferred Tools & Technologies:
- Languages: Python, Go, Bash, YAML, PowerShell
- Version Control & CI/CD: Git, GitHub Actions, GitLab CI, Jenkins, GitOps
- IaC: Terraform, CloudFormation
- Containers: Docker, Kubernetes, Helm
- Monitoring & Logging: Datadog, Prometheus, Grafana, ELK/EFK Stack
- Cloud Platforms: AWS (EC2, ECS, EKS, Lambda, S3, Networking/VPC, cost optimization)
- Security: HashiCorp Vault, Trivy, Aqua, OPA/Gatekeeper
- Databases & Caches: PostgreSQL, MySQL, Redis, MongoDB
- Others: NGINX, Istio, Consul, Kafka, Redis
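The rollout strategies mentioned in this posting (blue/green, canary) usually hinge on an automated promotion gate that checks the canary's health before shifting more traffic. A minimal, hypothetical sketch in Python (the thresholds and metric names are assumptions for illustration, not CyberArk's actual tooling):

```python
# Sketch: decide whether a canary deployment may be promoted, based on
# observed error-rate and latency samples. Thresholds are illustrative.

from statistics import mean

def promote_canary(canary_errors: int,
                   canary_requests: int,
                   latencies_ms: list[float],
                   max_error_rate: float = 0.01,
                   max_mean_latency_ms: float = 300.0) -> bool:
    """Promote only if the canary stays under both thresholds."""
    if canary_requests == 0:
        return False  # no traffic observed: never promote blindly
    error_rate = canary_errors / canary_requests
    if error_rate > max_error_rate:
        return False
    if latencies_ms and mean(latencies_ms) > max_mean_latency_ms:
        return False
    return True

# Healthy canary: 0.2% error rate and fast responses pass the gate.
ok = promote_canary(2, 1000, [120.0, 180.0, 95.0])
# Unhealthy canary: a 5% error rate fails the gate.
bad = promote_canary(50, 1000, [120.0])
```

Real pipelines would feed this from a metrics backend and run it repeatedly as traffic is shifted in stages.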

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for a skilled and passionate Senior Software Development Engineer (SD3) to join our cloud engineering team. You will be responsible for designing, building, and maintaining scalable, cloud-native applications and infrastructure. This is a hands-on role requiring strong development skills, cloud experience, and a deep understanding of modern DevOps practices.

Key Responsibilities:
- Design and develop cloud-native applications using AWS services such as EC2, EKS, Aurora MySQL, S3, IAM, and Lambda.
- Manage container orchestration using Kubernetes and Helm for scalable deployments.
- Build and maintain CI/CD pipelines using Docker, Jenkins, and AWS CodeBuild to enable rapid delivery cycles.
- Write clean, efficient, and reusable code/scripts using Java or Python to support application logic, automation, and integrations.
- Automate infrastructure provisioning and management using Terraform.
- Set up and maintain effective alerting and monitoring systems using Prometheus and Grafana.
- Collaborate with cross-functional teams, including developers, architects, DevOps engineers, and QA, to deliver high-quality solutions.
- Ensure security, scalability, and performance best practices across all deployments.

Skills & Qualifications:
- Strong understanding of cloud computing concepts and cloud-native application architecture.
- Proficiency in AWS services and infrastructure.
- Hands-on experience with Kubernetes and Helm.
- Proficient in Java or Python.
- Solid experience with CI/CD tools and Docker.
- Expertise in Infrastructure as Code (Terraform).
- Working knowledge of monitoring and alerting tools (Prometheus, Grafana).
- Strong problem-solving skills and a collaborative mindset.

Apply now if you’re excited to build resilient systems and love solving real-world engineering problems at scale!

It has been brought to our attention that there have recently been instances of fraudulent job offers, purporting to be from Capillary Technologies.
The individuals or organizations sending these false employment offers may pose as a Capillary Technologies recruiter or representative and request personal information, purchasing of equipment, or funds to further the recruitment process, or offer paid training. Be advised that Capillary Technologies does not extend unsolicited employment offers. Furthermore, Capillary Technologies does not charge prospective employees fees or make requests for funding as part of the recruitment process. We commit to an inclusive recruitment process and equality of opportunity for all our job applicants.
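Alerting with Prometheus and Grafana, as this posting describes, is typically built on per-second rates computed from monotonically increasing counters. A simplified, dependency-free sketch of that computation in Python (the sample data is invented; real Prometheus `rate()` additionally extrapolates to the window boundaries):

```python
# Sketch: average per-second increase over (timestamp, counter_value)
# samples, in the spirit of a Prometheus-style rate(). Much simplified:
# the real function also extrapolates to the edges of the range.

def counter_rate(samples: list[tuple[float, float]]) -> float:
    """Average per-second increase of a counter, tolerating resets."""
    if len(samples) < 2:
        return 0.0
    increase = 0.0
    for (_, prev), (_, curr) in zip(samples, samples[1:]):
        if curr >= prev:
            increase += curr - prev
        else:
            increase += curr  # counter reset: count up from zero again
    elapsed = samples[-1][0] - samples[0][0]
    return increase / elapsed if elapsed > 0 else 0.0

# A counter climbing 100 -> 160 over 60 seconds: one event per second.
rate = counter_rate([(0.0, 100.0), (30.0, 130.0), (60.0, 160.0)])
```

An alert rule would then compare such a rate against a threshold (for example, an error rate sustained above some limit).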

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Delhi, Delhi

On-site


Job Description
Job Title: DevOps Engineer
Role Type: Fixed-Term Direct Contract with Talpro
Duration: 6 Months
Years of Experience: 7+ Yrs.
CTC Offered: INR 200K Per Month
Notice Period: Only Immediate Joiners
Work Mode: Hybrid (3 Days from Office Weekly)
Location: Delhi / NCR

Mandatory Skills:
- CI/CD & Automation Tools: Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD
- Scripting: Python, Bash, PowerShell, Go
- Automation Tools: Ansible, Puppet, Chef, SaltStack
- Infrastructure as Code (IaC): Terraform, Pulumi
- Containerization & Orchestration: Docker, Kubernetes (EKS, AKS, GKE), Helm
- Monitoring Tools: Prometheus, Grafana
- Logging Tools: ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog
- Security & Compliance: IAM, RBAC, Firewall, TLS/SSL, VPN; ISO 27001, SOC 2, GDPR
- Networking & Load Balancing: TCP/IP, DNS, HTTP/S, VPN; Nginx, HAProxy, ALB/ELB
- Databases: MySQL, PostgreSQL, MongoDB, Redis
- Storage Solutions: SAN, NAS

Good to Have Skills:
- Experience with hybrid cloud and multi-cloud architectures
- Familiarity with serverless frameworks
- Knowledge of DevSecOps integrations
- Cloud platform certifications (AWS, Azure, GCP)

Role Overview / Job Summary: We are looking for a highly skilled DevOps Engineer to design, implement, and maintain robust CI/CD pipelines, automation workflows, and infrastructure solutions across cloud-native and containerized environments. The ideal candidate will have deep expertise in infrastructure as code, automation, security compliance, and cloud orchestration technologies. You will work closely with development, QA, and security teams to enable seamless software delivery and reliable operations.

Key Responsibilities / Job Responsibilities:
- Design, implement, and manage robust CI/CD pipelines using industry-standard tools.
- Automate provisioning, configuration, and deployment using tools like Ansible, Terraform, and Pulumi.
- Manage containerization and orchestration with Docker and Kubernetes (EKS/AKS/GKE).
- Implement monitoring and alerting systems using Prometheus, Grafana, and the ELK stack.
- Enforce security best practices including IAM, firewall rules, and data encryption.
- Ensure compliance with ISO 27001, SOC 2, and GDPR standards.
- Troubleshoot system-level issues and optimize application performance.
- Collaborate with cross-functional teams to support Agile and DevOps delivery practices.
- Manage database configurations, backups, and storage integrations.

Job Types: Full-time, Contractual / Temporary
Contract length: 6 months
Pay: ₹150,000.00 - ₹200,000.00 per month
Benefits: Commuter assistance, Health insurance, Provident Fund
Schedule: Day shift, Morning shift, Weekend availability
Experience: DevOps: 7 years (Required)
Work Location: In person
Speak with the employer: +91 9840916415
Application Deadline: 12/06/2025
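A CI/CD pipeline like the ones this role maintains is, at its core, an ordered list of stages with fail-fast semantics: if a stage fails, nothing downstream runs. A toy illustration in Python (stage names are invented; a real pipeline would shell out to build and deploy tools):

```python
# Sketch: fail-fast execution of ordered pipeline stages.
# Stage names and step logic are illustrative only.

from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run stages in order; stop at the first failure.

    Returns the names of the stages that completed successfully.
    """
    completed = []
    for name, step in stages:
        if not step():
            break  # fail fast: later stages never run
        completed.append(name)
    return completed

# Example: the 'test' stage fails, so 'deploy' is skipped.
result = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),
    ("deploy", lambda: True),
])
# result == ["build"]
```

Tools such as Jenkins, GitLab CI, and GitHub Actions implement this same ordering and fail-fast behaviour declaratively in their pipeline definitions.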

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra

On-site


Pune, Maharashtra, India +5 more Job ID 761486

Join our Team

About this opportunity: We are seeking a driven and dynamic Integration Engineer to join our team at Ericsson. This role provides an exceptional opportunity for a motivated individual to contribute meaningfully to data-driven integration services. The chosen candidate will need to analyze, develop, and execute test cases, thereby ensuring adequate configuration and seamless integration of our network systems. In addition, the role demands meticulous adherence to our group directives and legal and financial requirements. In this position, you will be working within Ericsson's global and local Environmental and Occupational Health and Safety (E+OHS) standards, while ensuring the utmost data security and privacy. This role offers a stimulating and challenging setting that promotes continuous learning and growth.

What you will do:
1. Build quality data solutions and refine existing diverse datasets into simplified models encouraging self-service.
2. Build data pipelines that optimize for data quality and are resilient to poor-quality data sources.
3. Low-level systems debugging, performance measurement, and optimization on production clusters.
4. Collaborate with other data engineers and full-stack developers to integrate data solutions into production systems, ensuring scalability and optimal performance.

The skills you bring:
- Minimum 5+ years of professional experience as a software developer with end-to-end project delivery, including at least 3+ years of experience as a Data Engineer.
- Advanced proficiency in Python programming, with hands-on experience implementing RESTful services, particularly using FastAPI (self-assessment rating of 8/10 or higher).
- Solid experience in application development using Apache Spark, with a self-assessment rating of 7/10 or higher.
- Strong expertise in cloud-native microservices, with experience working with Kubernetes, Docker, and Helm charts.
- Proficient in version control systems (e.g., GitLab, GitHub) and continuous integration tools such as Jenkins.
- Proven experience working with one or more major cloud platforms (AWS, GCP, or Azure).

Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
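Building pipelines that are "resilient to poor quality data sources", as this posting asks, commonly means validating records at ingestion and routing failures to a quarantine instead of failing the whole run. A minimal sketch in Python (the schema and field names are invented for illustration; a Spark job would express the same split over DataFrames):

```python
# Sketch: a pipeline step resilient to poor-quality input, routing
# invalid records to a quarantine for later inspection rather than
# aborting the run. The schema here is an invented example.

def split_valid(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate records into (valid, quarantined) by simple checks."""
    valid, quarantined = [], []
    for rec in records:
        # illustrative rules: user_id must be an int, event must be non-empty
        if isinstance(rec.get("user_id"), int) and rec.get("event"):
            valid.append(rec)
        else:
            quarantined.append(rec)  # kept, not dropped, for debugging
    return valid, quarantined

rows = [
    {"user_id": 1, "event": "login"},
    {"user_id": "oops", "event": "login"},  # wrong type -> quarantined
    {"user_id": 2, "event": ""},            # missing event -> quarantined
]
good, bad = split_valid(rows)
```

Keeping the quarantine (rather than silently discarding bad rows) is what makes data-quality regressions in upstream sources observable.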

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

Remote


Category: Infrastructure/Cloud Main location: India, Karnataka, Bangalore Position ID: J0425-1242 Employment Type: Full Time Position Description: Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.
Job Title: Google Cloud Engineer (DevOps + GKE) - SSE
Position: Senior Systems Engineer
Experience: 5+ Years
Category: GCP + GKE
Main location: Bangalore/Chennai/Hyderabad/Pune/Mumbai
Position ID: J0425-1242
Employment Type: Full Time

Job Description: We are seeking a skilled and proactive Google Cloud Engineer with strong DevOps experience and hands-on expertise in Google Kubernetes Engine (GKE) to design, implement, and manage cloud-native infrastructure. You will play a key role in automating deployments, maintaining scalable systems, and ensuring the availability and performance of our cloud services on Google Cloud Platform (GCP).

Key Responsibilities and Required Skills:
- 5+ years of experience in DevOps / Cloud Engineering roles.
- Design and manage cloud infrastructure using Google Cloud services such as Compute Engine, Cloud Storage, VPC, IAM, Cloud SQL, GKE, and more.
- Proficient in writing Infrastructure as Code using Terraform, Deployment Manager, or similar tools.
- Automate CI/CD pipelines using tools like Cloud Build, Jenkins, GitHub Actions, etc.
- Manage and optimize Kubernetes clusters for high availability, performance, and security.
- Collaborate with developers to containerize applications and streamline their deployment.
- Monitor cloud environments and troubleshoot performance, availability, or security issues.
- Implement best practices for cloud governance, security, cost management, and compliance.
- Participate in cloud migration and modernization projects.
- Ensure system reliability and high availability through redundancy, backup strategies, and proactive monitoring.
- Contribute to cost optimization and cloud governance practices.
- Strong hands-on experience with core GCP services including Compute, Networking, IAM, Storage, and optionally Kubernetes (GKE).
- Proven expertise in Kubernetes (GKE): managing clusters, deployments, services, autoscaling, etc.
- Experience configuring Kubernetes resources (Deployments, Services, Ingress, Helm charts, etc.) to support application lifecycles.
- Solid scripting knowledge (e.g., Python, Bash, Go).
- Familiarity with GitOps and deployment tools like ArgoCD and Helm.
- Experience with CI/CD tools and setting up automated deployment pipelines.
- Should hold Google Cloud certifications (e.g., Professional Cloud DevOps Engineer, Cloud Architect, or Cloud Engineer).

Behavioural Competencies:
- Proven experience of delivering process efficiencies and improvements
- Clear and fluent English (both verbal and written)
- Ability to build and maintain efficient working relationships with remote teams
- Demonstrated ability to take ownership of and accountability for relevant products and services
- Ability to plan, prioritise and complete your own work, whilst remaining a team player
- Willingness to engage with and work in other technologies

Note: This job description is a general outline of the responsibilities and qualifications typically associated with the role. Actual duties and qualifications may vary based on the specific needs of the organization.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.

Your future duties and responsibilities

Required qualifications to be successful in this role

Skills: DevOps, Google Cloud Platform, Kubernetes, Terraform, Helm

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees.
We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
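Configuring Kubernetes resources (Deployments, Services, Ingress, Helm charts), as the CGI posting above describes, ultimately means producing manifests. A hedged sketch in Python that renders a minimal `apps/v1` Deployment as a dict (the app name, image, and replica count are invented placeholders; Helm templates or YAML would be the usual vehicle):

```python
# Sketch: render a minimal Kubernetes Deployment manifest as a dict.
# App name, image, and replica count are illustrative placeholders.

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # the selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # resource requests inform cluster autoscaling
                        # and HPA sizing decisions on GKE
                        "resources": {"requests": {"cpu": "100m",
                                                   "memory": "128Mi"}},
                    }],
                },
            },
        },
    }

manifest = deployment_manifest("web", "gcr.io/example/web:1.0", replicas=3)
```

Serialized to YAML, the same structure is what `kubectl apply` or a Helm chart would submit to the cluster.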

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Join our Team

About this opportunity: Ericsson invites applications for the role of Senior GUI Developer. In this challenging and fulfilling position, you will be tasked with constructing customers' solutions during the building phase of the Software Development Life Cycle (SDLC). As a Software Developer, you will be responsible for performing the detailed design of application and technical architecture components and classes according to the specification provided by the System Architect. The role also involves coding software components and contributing to the early testing phases, as well as extending your support towards system testing.

What you will do:
- Front-end design and development based on JavaScript, NodeJS, and TypeScript, using frameworks such as Angular, React, or Vue.
- Consume REST APIs efficiently from frontend frameworks.
- Implement robust Object-Oriented Programming (OOP) principles.
- Build and containerize applications using Docker and deploy them to Kubernetes clusters with Helm.
- Collaborate using version control systems like GitLab and contribute to CI/CD pipelines.

The skills you bring:
- JavaScript, NodeJS, and TypeScript frameworks such as Angular, React, and Vue.
- Proficiency with containerization and orchestration tools (Docker, Kubernetes, Helm).
- Familiarity with software development lifecycle tools and processes, especially in Agile environments and GitLab CI pipelines.
- Experience in product development.
- Knowledge of microservices architecture and REST API design.
- IDE: VSCode (Visual Studio Code).
- Experience: 5 to 8 years of relevant work experience. Applications below 5 years will not be considered.

Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766681

Posted 1 week ago

Apply

8.0 years

1 - 10 Lacs

Bengaluru

On-site


Hiring Manager: Rekha Bhaskaran
Talent Acquisition Advisor: Saurya Ratna
Job Code Level: DSP4
Refer Your Friends!

Your Impact: We are part of the OpenText Cybersecurity Enterprise division, specializing in the security domain. Our product helps security operations teams efficiently and effectively preempt and respond to threats that matter with proactive threat hunting, real-time threat detection, and response automation using AI/ML.

What the role offers:
- Develops product architectures and methodologies for software applications design and development across multiple platforms and organizations within the Global Business Unit.
- Identifies and evaluates new technologies, innovations, and outsourced development partner relationships for alignment with the technology roadmap and business value.
- Reviews and evaluates designs and project activities for compliance with development guidelines and standards; provides tangible feedback to improve product quality and mitigate failure risk.
- Leverages recognized domain expertise, business acumen, and experience to influence decisions of executive business leadership, outsourced development partners, and industry standards groups.
- Provides guidance and mentoring to less-experienced staff members to set an example of software applications design and development innovation and excellence.
What you need to succeed:
- 8+ years of technical experience with complex technology projects within large, distributed organizations
- Hands-on programming expertise with C++
- Strong grasp of object-oriented programming (OOP) and design
- Understanding of memory management and design patterns
- System-level knowledge of inter-process communication (IPC): pipes, sockets, shared memory, cache optimization, memory access patterns
- Familiarity with profiling and tuning tools like perf, gprof, etc.
- Drive automation and CI/CD adoption in the dev pipeline
- Knowledge and understanding of Docker, Kubernetes, Helm, and microservices
- Knowledge and understanding of GitLab
- Experience in overall architecture of software applications (multi-platform) for products and solutions
- Excellent analytical and problem-solving skills
- Experience in solving scalability and performance problems of an enterprise application
- Knowledge and experience of Agile development practices
- Excellent written and verbal communication skills
- Ability to effectively communicate product architectures and design proposals, and negotiate options at business unit and executive levels

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
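The IPC primitives named in this posting (pipes, sockets, shared memory) all follow the same request/reply message-passing shape. The role itself calls for C++; purely as a compact illustration of the socket flavour, here is a sketch in Python using a connected socket pair with the two endpoints driven by threads (the payload is invented):

```python
# Sketch: message passing over a connected socket pair, illustrating
# the socket flavour of IPC named in the posting. For brevity both
# endpoints live in one process (two threads); between real processes
# the same pattern applies with AF_UNIX or TCP sockets.

import socket
import threading

parent, child = socket.socketpair()

def worker(conn: socket.socket) -> None:
    data = conn.recv(1024)        # block until the peer sends
    conn.sendall(data.upper())    # reply with a transformed payload
    conn.close()

t = threading.Thread(target=worker, args=(child,))
t.start()
parent.sendall(b"ping")
reply = parent.recv(1024)
t.join()
parent.close()
# reply == b"PING"
```

The C++ equivalent would use `socketpair(2)` plus `read`/`write` on the descriptors; shared memory trades this copy-based transfer for direct access at the cost of explicit synchronization.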

Posted 1 week ago

Apply

3.0 - 5.0 years

2 - 10 Lacs

Bengaluru

On-site


OPENTEXT
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

Your Impact: We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLM), Java, Python, Kubernetes, Helm, and cloud technologies like AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment.

What the role offers:
- Design, develop, troubleshoot, and debug software programs for software enhancements and new products.
- Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience.
- Develop and maintain transformer-based models.
- Develop RESTful APIs and ensure seamless integration across services.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Implement best practices for cloud-native development using AWS services like EC2, Lambda, SageMaker, S3, etc.
- Deploy, manage, and scale containerized applications using Kubernetes (K8s) and Helm.
- Design enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools.
- Analyse designs and determine the coding, programming, and integration activities required, based on general objectives and knowledge of the overall architecture of the product or solution.
- Collaborate and communicate with management and internal and outsourced development partners regarding software systems design status, project progress, and issue resolution.
Represents the software systems engineering team for all phases of larger and more-complex development projects. Ensure system reliability, security, and performance through effective monitoring and troubleshooting. Write clean, efficient, and maintainable code following industry standards. Participate in code reviews, mentorship, and knowledge-sharing within the team. What you need to succeed: Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Typically, 3-5 years of experience Strong understanding of Large Language Models (LLM) and experience applying them in real-world applications. Expertise in Elastic Search or similar search and indexing technologies. Expertise in designing and implementing microservices architecture. Solid experience with AWS services like EC2, VPC, ECR, EKS, SageMaker etc. for cloud deployment and management. Proficiency in container orchestration tools such as Kubernetes (K8S) and packaging/deployment tools like Helm. Strong problem-solving skills and the ability to troubleshoot complex issues. Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE. Should have good hands-on experience in designing and writing modular object-oriented code. Good knowledge of REST APIs, Spring, Spring boot, Hibernate. Excellent analytical, troubleshooting and problem-solving skills. Ability to demonstrate effective teamwork both within the immediate team and across teams. Experience in working with version control and build tools like GIT, GitLab, Maven and Jenkins, GitLab CI. Excellent communication and collaboration skills. Familiarity with Python for LLM-related tasks. Working knowledge in RAG Experience working with NLP frameworks such as Hugging Face, OpenAI, or similar. Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB. Experience with observability tools like Prometheus, Grafana, or ELK Stack. 
Experience in working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ). Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation). Familiar with Agile framework/SCRUM development methodologies One last thing: OpenText is more than just a corporation, it's a global community where trust is foundational, the bar is raised, and outcomes are owned.Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us athr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


hackajob is collaborating with Wipro to connect them with exceptional tech professionals for this role.

Title: Azure DevOps Engineer
Requisition ID: 11556
City: Bengaluru
Country/Region: IN

Job Description

Technical skills
  • Ability to build custom Docker images using a Dockerfile, debug running Docker containers, and integrate Docker with CI/CD pipelines.
  • Ability to set up and manage a Kubernetes cluster, deploy applications on it, and apply best practices; work with Ingress controllers, service discovery, and load balancers; automate deployments using Helm charts.
  • Ability to write and debug automation code/scripts and integrate third-party libraries. Tools (any 2): Python / Shell / Linux / Unix / Groovy / Java / PowerShell / Golang.
  • Ability to write and debug complex infrastructure configurations and integrate them with CI/CD pipelines. Tools (any 1): Ansible / Terraform / Puppet / GitHub / OpenShift / OpenTofu.
  • Experience with large-scale deployments using infrastructure-as-code, with knowledge of best practices.
  • Design and implement Azure App Service apps and Azure Functions.
  • Implement and manage Azure Load Balancer, Traffic Manager, VPN, and ExpressRoute.
  • Ability to implement and debug CI/CD on the Azure platform using Azure Pipelines (or integrating with other CI/CD tools), and automate testing using Azure Test Plans.
  • Manage complex projects using Azure Boards and use Azure Repos for advanced code management.
  • Experience configuring and managing AKS, including deploying applications on an AKS cluster.
  • Ability to scale Azure DevOps for large teams and projects, with knowledge of Azure DevOps best practices.

Non-technical skills
  • Working experience in Agile methodologies (e.g., Scrum, Kanban, SAFe).
  • Ability to rapidly troubleshoot issues and perform Root Cause Analysis.
  • Team player with a positive attitude.
  • Good communication skills.
  • Tools (any 1): JIRA, ADO.

Role Purpose

The purpose of this role is to work with application teams and developers to facilitate better coordination among operations, development, and testing functions by automating and streamlining the integration and deployment processes.

Do
  • Align and focus on continuous integration (CI) and continuous deployment (CD) of technology in applications.
  • Plan and execute the DevOps pipeline that supports the application life cycle across the DevOps toolchain: planning, coding, building, testing, staging, release, configuration, and monitoring.
  • Manage the IT infrastructure as per the requirements of the supported software code.
  • On-board an application on the DevOps tool and configure it as per the client's need.
  • Create user access workflows and provide user access as per the defined process.
  • Build and engineer the DevOps tool as per the customization suggested by the client.
  • Collaborate with development staff to tackle the coding and scripting needed to connect elements of the code required to run the software release with operating systems and production infrastructure.
  • Leverage tools to automate testing and deployment in a DevOps environment.
  • Provide customer support/service on the DevOps tools: timely support for internal and external customers on multiple platforms; resolution of tickets raised on these tools within a specified TAT; ensure adequate resolution with customer satisfaction; follow the escalation matrix/process as soon as a resolution gets complicated or isn't resolved; troubleshoot and perform root cause analysis of critical/repeatable issues.

Competencies required to perform this role effectively:

Functional competencies/skills
  • Leveraging Technology: knowledge of current and upcoming technology (automation, tools, and systems) to build efficiencies and effectiveness in own function/client organization (Competent).
  • Process Excellence: ability to follow standards and norms to produce consistent results, provide effective control, and reduce risk (Expert).
  • Technical Knowledge: knowledge of various DevOps tools, their customization, and their implementation at the client site (Expert).

Competency levels
  • Foundation: knowledgeable about the competency requirements; demonstrates (in parts) frequently with minimal support and guidance.
  • Competent: consistently demonstrates the full range of the competency without guidance; extends the competency to difficult and unknown situations as well.
  • Expert: applies the competency in all situations and serves as a guide to others.
  • Master: coaches others and builds organizational capability in the competency area; serves as a key resource for that competency and is recognised across the entire organization.

Behavioral competencies
  • Execution Excellence
  • Passion for Results
  • Confidence
  • Client Centricity
  • Formulation & Prioritization

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

Remote


At Armor, we are committed to making a meaningful difference in securing cyberspace. Our vision is to be the trusted protector and de facto standard that cloud-centric customers entrust with their risk. We strive to continuously evolve to be the best partner of choice, breaking norms and tirelessly innovating to stay ahead of evolving cyber threats and reshaping how we deliver customer outcomes. We are passionate about making a positive impact in the world, and we're looking for highly skilled and experienced talent to join our dynamic team. Armor has unique offerings to the market, so customers can a) understand their risk, b) leverage Armor to co-manage their risk, or c) completely outsource their risk to Armor. Learn more at: https://www.armor.com

Summary

We are seeking a highly skilled Cloud Engineer with expertise in Oracle Cloud Infrastructure (OCI), Microsoft Azure, and VMware, along with strong Linux experience and hands-on proficiency in Kubernetes and microservices architecture. The ideal candidate will design, deploy, and manage cloud environments, ensuring high availability, security, and scalability for mission-critical applications.

ESSENTIAL DUTIES AND RESPONSIBILITIES (Additional duties may be assigned as required)
  • Design, implement, and manage cloud solutions on Oracle Cloud Infrastructure (OCI) and Azure.
  • Configure and optimize VMware-based virtualized environments for cloud and on-premises deployments.
  • Architect scalable, high-performance, and cost-effective cloud-based solutions.
  • Deploy, configure, and maintain Kubernetes clusters for containerized applications.
  • Design, build, and support microservices architectures using containers and orchestration tools.
  • Implement service mesh technologies (e.g., Istio, Linkerd) for microservices networking and security.
  • Manage and optimize Linux-based systems, ensuring reliability and security.
  • Automate infrastructure provisioning and configuration using Terraform, Ansible, or other Infrastructure-as-Code (IaC) tools.
  • Implement CI/CD pipelines for automated application deployments and infrastructure updates.
  • Ensure cloud security best practices, including identity and access management (IAM), encryption, and compliance standards.
  • Monitor and enhance network security policies across cloud platforms.
  • Proactively monitor cloud infrastructure performance, ensuring optimal uptime and responsiveness.
  • Troubleshoot complex cloud, networking, and containerization issues.
  • Work closely with DevOps, security, and development teams to optimize cloud-based deployments.
  • Document architectures, processes, and troubleshooting guides for cloud environments.

Required Skills
  • 7+ years of experience in cloud engineering or a related field.
  • Hands-on experience with Oracle Cloud Infrastructure (OCI) and Microsoft Azure.
  • Expertise in VMware virtualization technologies.
  • Strong Linux system administration skills.
  • Proficiency in Kubernetes (EKS, AKS, OKE, or self-managed clusters).
  • Experience with microservices architecture and containerization (Docker, Helm, Istio).
  • Knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible.
  • Strong scripting skills (Bash, Python, or PowerShell).
  • Familiarity with networking concepts (VPC, subnets, firewalls, DNS, VPN).

Preferred Qualifications
  • Certifications in OCI, Azure, Kubernetes (CKA/CKAD), or VMware.
  • Experience with serverless computing and event-driven architectures.
  • Knowledge of logging and monitoring tools (Prometheus, Grafana, ELK, Azure Monitor).

WHY ARMOR

Join Armor if you want to be part of a company that is redefining cybersecurity. Here, you will have the opportunity to shape the future, disrupt the status quo, and be part of a team that celebrates energy, passion, and fresh thinking. We are not looking for someone who simply fills a role; we want talent who will help us write the next chapter of our growth story.

Armor Core Values
  • Commitment to Growth: a growth mindset that encourages continuous learning and improvement, with adaptability in the face of challenges.
  • Integrity Always: sustain trust through transparency and honesty in all actions and interactions, regardless of circumstances.
  • Empathy in Action: active understanding, compassion, and support for the needs of others through genuine connection.
  • Immediate Impact: taking initiative with swift, informed actions to deliver positive outcomes.
  • Follow-Through: dedication to delivering finished results with attention to quality and detail to achieve the desired outcomes.

WORK ENVIRONMENT

The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. The noise level in the work environment is usually low to moderate. The work environment can be either an office setting or remote from anywhere.

Equal opportunity employer: it is the policy of the company to comply with all employment laws and to afford equal employment opportunity to individuals in all aspects of employment, including in selection for job opportunities, without regard to race, color, religion, sex, national origin, age, disability, genetic information, veteran status, or any other consideration protected by federal, state, or local laws.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Meet Our Team

Pegasystems develops strategic applications for sales, marketing, service, and operations for our Global 500 clients, which include the world's largest and most sophisticated enterprises. Our Pega Cloud team focuses on delivering services that are essential to Pega as a Service success. As a member of one of the most innovative and fastest-growing groups at Pega, you will work closely with talented engineers and product owners across the globe (US/EU/India) to build a highly scalable Software as a Service (SaaS) offering.

Picture Yourself At Pega

The Infrastructure Services Team owns the underlying technologies that support the Pega Cloud Software-as-a-Service (SaaS) product. Pega Cloud is a growing $300M+ business with 35% market growth. As a key member of the Infrastructure Engineering team, you will work with multiple subject matter experts responsible for deploying an end-to-end solution. You will be part of the Infrastructure Tribe, working on cloud deployments running on AWS and GCP as well as Kubernetes, Istio, and observability products, contributing to a variety of technologies, vendors, architectures, implementations, and support.

What You'll Do At Pega
  • Design, build, and operate highly available, scalable cloud-native systems using Java or Golang.
  • Work with Product Owners, other stakeholders, and team members to design and document new Pega Cloud features.
  • Design and execute unit, integration, and API tests.
  • Leverage DevOps tools and CI/CD pipelines to enable automated operations of our services.
  • Assist our operations team with sophisticated operations along the entire service lifecycle.

Who You Are
  • Bachelor's/Master's degree in computer science (or related experience/field) and 7 to 9 years of relevant professional experience.
  • Strong knowledge of Golang, Java, or another OO language, ideally in a microservices context.
  • Able to design a microservices-based architecture leveraging cloud-native design patterns and serverless frameworks, with an emphasis on scalability and high availability.
  • Able to collaborate with stakeholders across regions to discuss ideas, brainstorm designs or solutions, and influence decisions.
  • Very good understanding of and experience in cloud technologies (knowledge at the level of AWS Associate Architect, or the Google equivalent).
  • Good knowledge of Infrastructure as Code (Helm/Terraform/CloudFormation).
  • Very good knowledge of Docker and Kubernetes (CKAD certification is a plus).
  • Knowledge of networking is a plus.
  • Good communication skills.

What You've Accomplished

You are a passionate developer with sufficient experience to own and drive chunks of work to completion on your own. You are a seasoned professional with experience in designing, developing, deploying, and maintaining highly resilient, performant, and scalable systems in the cloud. You research technologies relevant to the project and suggest ideas to improve the service. You understand the basics of DevOps culture, and you want to grow into a technically versatile person. You want to have an impact and help us with our Pega Cloud "as-a-Service" evolution.

Pega Offers You
  • Gartner Analyst acclaimed technology leadership across our categories of products.
  • Continuous learning and development opportunities.
  • An innovative, inclusive, agile, flexible, and fun work environment.
  • Competitive global benefits program, inclusive of pay + bonus incentive and employee equity in the company.

Job ID: 21920

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About the role

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will lead our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

You will be responsible for
  • Creating and maintaining optimal data pipeline architecture.
  • Assembling large, complex data sets that meet functional and non-functional business requirements.
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
  • Building analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Working with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keeping our data separated and secure.
  • Creating data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Working with data and analytics experts to strive for greater functionality in our data systems.

You will need
  • Mandatory skills: Hadoop, Hive, Spark, any stream processing, Scala/Java, Kafka, and containerization/Kubernetes.
  • Good-to-have skills: functional programming, Kafka Connect, Spark Streaming, Helm charts, and hands-on experience with Kubernetes.

What's in it for you?

At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of three pillars: Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable.
* Your fixed pay is the guaranteed pay as per your contract of employment.
* Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
* In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
* Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents, including parents or in-laws.
* We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
* Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
* Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
* Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us

Tesco Bengaluru: We are a multi-disciplinary team creating a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility, providing cutting-edge technological solutions, and empowering our colleagues to do ever more for our customers. With cross-functional expertise in Global Business Services and Retail Technology & Engineering, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 4,40,000 colleagues.

At Tesco Business Solutions, we have a mission to simplify, scale, and partner to serve our customers, colleagues, and suppliers through a best-in-class intelligent business services model. We do this by building a world-class business services model, executing the services model framework right at the heart of everything we do for our worldwide customers. The key objective is to implement and execute the service model across all our functions and markets consistently. The ethos of business services is to free up our colleagues from regular manual operational work. We use cognitive technology to augment our key decision-making. We have also built a Continuous Improvement (CI) culture across functions to drive bottom-up business efficiencies by optimising processes. Business services colleagues act as business partners with our group stakeholders, building a collaborative partnership that drives continuous improvement across markets and functions to deliver the best customer experience by serving our shoppers a little better every day.

At Tesco, inclusion means that Everyone's Welcome. Everyone is treated fairly and with respect; by valuing individuality and uniqueness we create a sense of belonging. Diversity and inclusion have always been at the heart of Tesco. It is embedded in our values: we treat people how they want to be treated. We always want our colleagues to feel they can be themselves at work, and we are committed to helping them be at their best. Across the Tesco group we are building an inclusive workplace, a place to actively celebrate the cultures, personalities, and preferences of our colleagues, who in turn help to build the success of our business and reflect the diversity of the communities we serve.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Backend Developer
Experience: 8-10 years
Location: Bangalore / Chennai
Job Type: Contract

Job Overview: We are seeking a highly skilled Software Engineer (Backend) to join our team. The candidate will have 8+ years of experience in backend development, with expertise in building cross-functional APIs and multi-platform applications, and a strong background in modern cloud technologies, containerization, and DevOps practices.

Technical Skills & Experience:
  • 8+ years of experience in backend software development.
  • Strong hands-on experience in building and maintaining REST APIs, GraphQL, and serverless applications.
  • Proficiency in Java, Quarkus, Python, JavaScript, and Kotlin.
  • Expertise in microservices architecture, including design, development, and deployment of microservices-based solutions.
  • Experience with cloud platforms (AWS EC2, Azure AKS) and containerization technologies (Docker, Kubernetes).
  • Familiarity with DevOps processes and tools such as Jenkins, Git, and Ansible for continuous integration and deployment.
  • Knowledge of Knative, Hasura, JWT, and Helm to support modern cloud-native application development.
  • Strong understanding of API security practices and best practices in scalable backend architecture.

Posted 1 week ago

Apply

Exploring Helm Jobs in India

Helm is a popular package manager for Kubernetes that simplifies the deployment and management of applications. In India, the demand for professionals with expertise in Helm is on the rise as more companies adopt Kubernetes for their container orchestration needs.
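To make the concept concrete: a Helm chart is just a versioned bundle of templated Kubernetes manifests plus a default configuration. A minimal sketch of a chart's layout is shown below; the chart name, image, and values here are purely illustrative, not taken from any listing on this page.

```yaml
# Chart.yaml - chart metadata; 'version' is the chart's own version,
# while 'appVersion' records the version of the application it deploys
apiVersion: v2
name: my-web
version: 0.1.0
appVersion: "1.25"

# values.yaml - default configuration, overridable at install/upgrade time
replicaCount: 1
image:
  repository: nginx
  tag: "1.25"

# templates/deployment.yaml (excerpt) - defaults are injected via Go templating:
#   spec:
#     replicas: {{ .Values.replicaCount }}
#     ...
#       image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installing such a chart with `helm install my-web ./my-web --set replicaCount=3` would render the templates with the override applied, then create the resulting Kubernetes resources as a tracked release.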

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi NCR

Average Salary Range

The average salary range for Helm professionals in India varies with experience level. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can command salaries upwards of INR 15 lakhs per annum.

Career Path

Typically, a career in Helm progresses as follows:

  • Junior Helm Engineer
  • Helm Engineer
  • Senior Helm Engineer
  • Helm Architect
  • Helm Specialist
  • Helm Consultant

Related Skills

In addition to proficiency in Helm, professionals in this field are often expected to have knowledge of:

  • Kubernetes
  • Docker
  • Containerization
  • DevOps practices
  • Infrastructure as Code (IaC)

Interview Questions

  • What is Helm and how does it simplify Kubernetes deployments? (basic)
  • Can you explain the difference between a Chart and a Release in Helm? (medium)
  • How would you handle secrets management in Helm charts? (medium)
  • What are the limitations of Helm and how would you work around them? (advanced)
  • How do you troubleshoot Helm deployment failures? (medium)
  • Explain the concept of Helm Hooks and when they are triggered during the deployment lifecycle. (medium)
  • How do you version and manage Helm charts in a production environment? (medium)
  • What are the best practices for Helm chart organization and structure? (basic)
  • Describe a scenario where you used Helm to deploy a complex application and the challenges you faced. (advanced)
  • How do you manage dependencies between Helm charts? (medium)
  • Explain the difference between Helm 2 and Helm 3. (basic)
  • How do you perform a rollback of a Helm release? (medium)
  • What security considerations should be taken into account when using Helm? (advanced)
  • How do you customize Helm charts for different environments (dev, staging, production)? (medium)
  • Can you automate the deployment of Helm charts using CI/CD pipelines? (medium)
  • What is Tiller in Helm and why was it removed in Helm 3? (advanced)
  • How do you manage upgrades of Helm releases without causing downtime? (medium)
  • Explain how you would handle configuration management in Helm charts. (medium)
  • What are the advantages of using Helm over manual Kubernetes manifests? (basic)
  • How do you ensure the idempotency of Helm deployments? (medium)
  • How do you perform linting and testing of Helm charts? (basic)
  • Can you explain the concept of Helm repositories and how they are used? (medium)
  • How would you handle versioning of Helm charts to ensure compatibility with different Kubernetes versions? (medium)
  • Describe a situation where you had to troubleshoot a Helm chart that was failing to deploy. (advanced)
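Several of the questions above (per-environment customization, linting, rollbacks) come down to values-file layering and release history. A minimal sketch of that approach follows; the file names, resource figures, and release name are hypothetical examples, with the typical commands noted as comments:

```yaml
# values-dev.yaml - small footprint for development environments
replicaCount: 1
resources:
  limits:
    memory: 256Mi

# values-prod.yaml - production overrides layered on top of values.yaml
replicaCount: 4
resources:
  limits:
    memory: 1Gi

# Typical commands (chart path and release name are illustrative):
#   helm lint ./my-web                                  # static checks on the chart
#   helm upgrade --install my-web ./my-web -f values-prod.yaml
#   helm history my-web                                 # list release revisions
#   helm rollback my-web 2                              # revert to revision 2
```

Because later `-f` files and `--set` flags override earlier values, a single chart can serve dev, staging, and production without duplicating templates, which is usually the answer interviewers are probing for.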

Closing Remark

As the demand for Helm professionals continues to grow in India, it is important for job seekers to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a valuable asset to organizations looking to leverage Helm for their Kubernetes deployments. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies