
1202 Helm Jobs - Page 23

JobPe aggregates listings for easy access; you apply directly on the original job portal.

6.0 - 9.0 years

18 - 20 Lacs

Pune

Work from Office

Naukri logo

Timings: Full Time (as per company timings)
Notice Period: Immediate joiners only
Duration: 6 months (possible extension)
Shift Timing: 11:30 AM to 9:30 PM IST

About the Role
We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.

Key Responsibilities:
Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps).
Configure and maintain observability tools (logs, metrics, traces, alerts).
Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
Lead or participate in incident response and root-cause analysis using logs, traces, and dashboards.
Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.

Required Skills & Experience:
5-10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
Strong coding/scripting skills and the ability to review production-grade backend code.
Hands-on experience with Kubernetes in production, preferably on GKE.
Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, secret management).
Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies.
Strong communication and documentation skills, capable of driving improvements through PRs and design reviews.
Tech Stack & Tools
Cloud & Orchestration: GKE, Kubernetes
IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
Monitoring & Alerting: Datadog, PagerDuty
Databases & Networking: CloudSQL, Cloudflare
Security & Access Control: Secret Management, IAM

Driving Results: A strong individual contributor and a good team player. Flexible attitude toward work, as per needs. Proactively identifies and communicates issues and risks.

Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Comfortable with ambiguity. Takes a collaborative and analytical approach. Self-confident yet humble. Open to continuous learning. An intelligent, rigorous thinker who can operate successfully amongst bright people.
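The responsibilities above mention tuning GKE workloads and HPA configs. As a hedged illustration of that kind of work (not from the posting itself), a minimal HorizontalPodAutoscaler manifest might look like this; the Deployment name and utilization target are invented:

```yaml
# Hypothetical HPA for a backend Deployment named "api".
# The target of 70% average CPU utilization is illustrative only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Tuning in this role would typically mean adjusting the replica bounds and utilization targets against observed load, alongside node pool and ingress settings.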

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Calling all Data Architects!
12-month contract | Multiple locations: Bangalore, Hyderabad | Hybrid (3 days onsite) | Immediate starters needed!

About Us: Join a globally recognised experience management company driving innovation for leading financial services clients. We're looking for a Lead Data Architect to take the helm in designing and implementing scalable data architecture solutions in a cloud-first environment.

Key Responsibilities:
Design and implement scalable data architectures.
Develop and maintain logical and physical data models.
Build and optimize core data platforms.
Integrate data from RDS and Azure Data Services (ADS).
Ensure data governance, compliance, and best practices across projects.

Requirements:
Minimum 5 years of experience working with financial clients in a data architecture role.
Experience working on Finacle software.
Proven experience in data modelling specific to the banking/financial domain.
Solid understanding of financial data structures and reporting requirements.
Experience with RDS integration with Azure Data Services (ADS).
Ability to lead architectural discussions and influence stakeholders.

Ready to Make an Impact? Join us in shaping innovative solutions! Apply now and be part of a dynamic team driving technological excellence. Share your CV at d.narula@ioassociates.co.uk for an exciting career opportunity.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Title: Quality Engineer II

About The Team
Magnum is an automated underwriting solution built by Swiss Re that helps 70+ insurers across the globe automate the risk assessment in their Life and Health insurance transactions. Magnum is a market-leading software for automated underwriting worldwide, recognized by The Forrester Wave™ as a leader in Automated Life Insurance Underwriting Engines. Magnum caters to a fast-growing base of installed clients with dedicated teams across the world, from the US to Europe and Asia. Our ambition is to best serve our clients and achieve balanced growth of Magnum products.

About the Role
To maintain ground-breaking propositions for Magnum and support the next wave of innovations, we are looking for a Quality Engineer II – Product Engineering who is enthusiastic about test automation and cloud infrastructure test validation. As a Sr QA Engineer you will be responsible for designing test scenarios, maintaining the automation test suite, and continuously improving the automation process.

Key Responsibilities
Create, update, and execute automation tests.
Understand and debug deployment issues on the cloud.
Understand the application functionality delivered within a sprint and create the corresponding test scenarios and automation.
Create test scripts from use cases/requirements.
Analyse defects, communicate them, and follow them to closure.
Maintain the test suites across product releases.
Research and continuously improve the automation frameworks.
Follow automation best practices.

Your Qualifications
5-8 years of experience in Test Automation and DevOps.
Bachelor's degree or equivalent in computer science or a related field.
Strong programming/QA experience and proficiency in core Java.
Experience in testing applications deployed in containerized environments using Docker & Kubernetes.
Hands-on experience with YAML/Helm charts for configuring Kubernetes deployments and managing test environments.
Understanding of Azure API Management policies.
Experience with failover & failback testing in cloud-based applications.
Knowledge of API gateways, load balancers, DNS, and networking concepts in cloud setups.
Familiarity with authorization servers such as Okta, Azure AD, or other OAuth2-capable servers for authentication and access control.
Experience testing applications integrated with OAuth 2.0, OIDC, and JWT-based authentication flows.
Good knowledge of designing test automation frameworks based on the Java programming language.
Working experience in a DevOps environment building CI/CD pipelines.
Experience in performance testing (JMeter).
Agile methodology, Git, and JIRA.
Strong knowledge of any SQL or NoSQL databases and queries.
Good knowledge of Azure DevOps (Repos, Pipelines, CI/CD).

About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability.
If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 134167
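The qualifications above include testing JWT-based authentication flows. As a small, hedged sketch (standard library only, not Swiss Re's actual tooling), a QA helper might base64url-decode a JWT segment to assert on its claims without verifying the signature; the token and claim names here are made up for illustration:

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one JWT segment (header or payload) without
    verifying the signature -- useful for quick test assertions only."""
    padded = part + "=" * (-len(part) % 4)  # restore stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Hand-built sample token (header.payload.signature); claims are invented.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "magnum-test", "scope": "read"}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}.sig"

claims = decode_jwt_part(token.split(".")[1])
print(claims["sub"])  # magnum-test
```

A real test would additionally verify the signature against the issuer's JWKS; decoding alone only checks claim shape.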

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Role Summary: Pazago is seeking a highly skilled and motivated DevOps Engineer with over 3 years of experience to join our dynamic team. The ideal candidate will have a strong background in scripting, infrastructure automation, containerization, and continuous integration/deployment practices. You will work closely with development, operations, and product teams to streamline and automate the entire software delivery lifecycle.

Responsibilities:
Design, build, and maintain scalable infrastructure using Terraform, Kubernetes, and Docker.
Develop and maintain deployment scripts and tools using Node.js and Python.
Build and optimize CI/CD pipelines to enable faster, safer deployments (e.g., GitHub Actions, CodePipeline).
Collaborate with development teams to design and implement solutions that improve the scalability, performance, and reliability of our SaaS products hosted on AWS.
Monitor and troubleshoot production systems using tools like CloudWatch, Grafana, or similar.
Collaborate with engineers to build fault-tolerant systems and ensure high availability.
Enforce DevOps best practices including infrastructure testing, secrets management, and rollback strategies.

Qualifications:
3+ years of hands-on experience in DevOps, SRE, or cloud infrastructure roles.
Strong proficiency in Node.js and Python scripting.
Solid experience with AWS services, container orchestration (Kubernetes), and IaC (Terraform).
Proven experience developing and deploying SaaS products on public cloud infrastructure.
Solid understanding of software development principles, version control systems (e.g., Git), and Agile methodologies.
Strong problem-solving skills and a proactive mindset.
Excellent communication and team collaboration skills.

Nice to have:
Experience with GitHub Actions, Argo CD, or Helm.
Exposure to security automation and infrastructure testing.
Prior experience in a fast-paced startup environment.
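The CI/CD responsibility above (GitHub Actions pipelines for faster, safer deployments) could look roughly like the following minimal workflow sketch. The repository layout, job names, image tag, and Terraform directory are assumptions for illustration, not Pazago's actual pipeline:

```yaml
# Hypothetical GitHub Actions workflow: test, build an image, apply infra.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm test
      - name: Build container image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Apply Terraform (assumes infra/ holds the IaC)
        run: terraform -chdir=infra init && terraform -chdir=infra apply -auto-approve
```

In practice the image push, credentials, and a plan/apply approval gate would be added before anything touches production.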

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Requirements
Python, MySQL, Kubernetes, and Docker Developers in Hyderabad for the PAC engagement team.

Position Overview: We are seeking a highly skilled and motivated Full-Stack Developer with expertise in Python (FastAPI), MySQL, Kubernetes, and Docker. The ideal candidate will play a key role in designing, developing, deploying, and maintaining robust applications that meet the needs of our fast-paced environment.

Responsibilities
Backend Development: Build and maintain RESTful APIs using FastAPI. Optimize application performance, security, and scalability. Ensure adherence to coding standards and best practices.
Database Management: Design, develop, and maintain complex relational databases using MySQL. Optimize SQL queries and database schemas for performance and scalability. Perform database migrations and manage backup/restore processes.
Containerization and Orchestration: Build and maintain Docker containers for application deployment. Set up and manage container orchestration systems using Kubernetes. Develop and maintain CI/CD pipelines for automated deployment.
Application Deployment and Maintenance: Deploy and monitor applications in cloud-based or on-premise Kubernetes clusters. Troubleshoot and resolve application and deployment issues. Monitor system performance and ensure system reliability and scalability.
Collaboration and Documentation: Collaborate with cross-functional teams to gather requirements and deliver solutions. Document system architecture, APIs, and processes for future reference. Participate in code reviews to ensure code quality and shared knowledge.

Required Skills And Qualifications
Proficient in Python with experience building APIs using FastAPI.
Solid understanding of relational databases, particularly MySQL, including database design and optimization.
Hands-on experience with Docker for containerization and Kubernetes for orchestration.
Familiarity with building CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI/CD.
Strong understanding of microservices architecture and REST API principles.
Knowledge of security best practices in API development and deployment.
Familiarity with version control systems, particularly Git.

Preferred Skills
Knowledge of Helm for Kubernetes application deployment.
Familiarity with monitoring tools like Prometheus, Grafana, or the ELK stack.
Basic knowledge of DevOps practices and principles.
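The database-management duty above includes optimizing SQL queries and schemas. A hedged sketch of what that looks like in miniature, using Python's built-in sqlite3 as a stand-in for MySQL (the table, columns, and index name are invented): the same lookup goes from a full table scan to an index search once an index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

def plan(sql: str) -> str:
    # Concatenate the 'detail' column of EXPLAIN QUERY PLAN output.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE email = 'user500@example.com'"
before = plan(query)                                        # full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)                                         # index lookup

print(before)
print(after)
```

MySQL's `EXPLAIN` plays the same role there; the workflow of inspecting the plan before and after adding an index carries over directly.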

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About the Client:
Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Position: AWS DevOps Engineer
Experience Required: 6-8 years
Notice: Immediate
Work Location: Pune
Mode of Work: Hybrid
Type of Hiring: Contract

Primary Skills: AWS DevOps, Helm, Terraform, EKS, Jenkins

Job Summary:
We are looking for a Senior AWS DevOps Engineer to lead the infrastructure and DevOps implementation for a mission-critical orchestration platform layer built on AWS, integrating Camunda 8 for business process automation. This role is ideal for someone who can drive DevOps excellence in a secure, scalable, and Agile delivery model.
Key Responsibilities:
• Lead the design and implementation of Infrastructure-as-Code (IaC) using Terraform for scalable, secure AWS environments
• Architect and manage Amazon EKS clusters and Kubernetes workloads, ensuring high availability and resilience
• Develop and optimize Helm charts for microservice deployments and Camunda-related components
• Design and maintain robust CI/CD pipelines using Jenkins, with integration to security scanning, quality gates, and change management
• Collaborate with Camunda architects, backend/frontend teams, and stakeholders to ensure seamless orchestration platform delivery
• Guide the adoption of DevSecOps practices aligned with the bank's security and compliance standards
• Define and implement observability standards (monitoring, logging, alerting) using AWS-native and open-source tools
• Mentor junior DevOps engineers and promote best practices in Agile/DevOps delivery
• Support governance reviews, cloud forums, and technical design walkthroughs as required by standards

Required Skills & Experience:
• 5-8 years of DevOps experience, with strong expertise in AWS cloud architecture and automation
• Proven hands-on experience with Terraform, EKS, Kubernetes, Helm, and Jenkins
• Deep understanding of containerized application delivery, GitOps workflows, and service mesh (e.g., Istio or AWS App Mesh is a plus)
• Experience integrating infrastructure and CI/CD pipelines in regulated environments (e.g., financial services)
• Proficient with AWS core services (IAM, VPC, EC2, S3, CloudWatch, etc.)
• Experience with observability tools (e.g., Prometheus, Grafana, ELK stack)
• Familiar with Agile delivery models and Scrum ceremonies
• Strong troubleshooting, documentation, and communication skills
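As a rough illustration of the Helm-chart work this role describes, a values.yaml fragment for one orchestration-platform microservice might look like the following; every key and value here is hypothetical, not taken from the client's charts:

```yaml
# Hypothetical Helm values fragment for a Camunda-adjacent microservice.
replicaCount: 3
image:
  repository: registry.example.com/orchestrator
  tag: "1.4.2"
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
ingress:
  enabled: true
  className: alb
```

Optimizing such charts usually means right-sizing the resource requests/limits from observed usage and parameterizing anything that differs between environments.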

Posted 1 week ago

Apply

10.0 years

0 Lacs

Mysore, Karnataka, India

On-site

Linkedin logo

Company Description
Wiser Solutions is a suite of in-store and eCommerce intelligence and execution tools. We're on a mission to enable brands, retailers, and retail channel partners to gather intelligence and automate actions to optimize in-store and online pricing, marketing, and operations initiatives. Our Commerce Execution Suite is available globally.

Job Description
When looking to buy a product, whether in a brick-and-mortar store or online, it can be hard enough to find one that not only has the characteristics you are looking for but is also at a price you are willing to pay. It can be especially frustrating when you finally find one, but it is out of stock. Likewise, brands and retailers can have a difficult time getting the visibility they need to ensure you have as seamless an experience as possible in selecting their product. We at Wiser believe that shoppers should have this seamless experience, and we want to do that by providing brands and retailers the visibility they need to make that belief a reality. Our goal is to solve a messy problem elegantly and cost effectively. Our job is to collect, categorize, and analyze lots of structured and semi-structured data from lots of different places every day (whether it's 20 million+ products from 500+ websites or data collected from over 300,000 brick-and-mortar stores across the country). We help our customers be more competitive by discovering interesting patterns in this data that they can use to their advantage, while being uniquely positioned to do this across both online and in-store. We are looking for a lead-level software engineer to lead the charge on a team of like-minded individuals responsible for developing the data architecture that powers our data collection process and analytics platform. If you have a passion for optimization, scaling, and integration challenges, this may be the role for you.
What You Will Do
Think like our customers – you will work with product and engineering leaders to define data solutions that support customers' business practices.
Design/develop/extend our data pipeline services and architecture to implement your solutions – you will be collaborating on some of the most important and complex parts of our system that form the foundation for the business value our organization provides.
Foster team growth – provide mentorship to junior team members and evangelize expertise to those on other teams.
Improve the quality of our solutions – help build enduring trust within our organization and amongst our customers by ensuring high quality standards for the data we manage.
Own your work – you will take responsibility to shepherd your projects from idea through delivery into production.
Bring new ideas to the table – some of our best innovations originate within the team.

Technologies We Use
Languages: SQL, Python
Infrastructure: AWS, Docker, Kubernetes, Apache Airflow, Apache Spark, Apache Kafka, Terraform
Databases: Snowflake, Trino/Starburst, Redshift, MongoDB, Postgres, MySQL
Others: Tableau (as a business intelligence solution)

Qualifications
Bachelor's/Master's degree in Computer Science or a relevant technical degree
10+ years of professional software engineering experience
Strong proficiency with data languages such as Python and SQL
Strong proficiency with data processing technologies such as Spark, Flink, and Airflow
Strong proficiency with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Snowflake, etc.)
Solid understanding of streaming solutions such as Kafka, Pulsar, Kinesis/Firehose, etc.
Hands-on experience with Docker, Kubernetes, infrastructure as code using Terraform, and Kubernetes package management with Helm charts
Solid understanding of ETL/ELT and OLTP/OLAP concepts
Solid understanding of columnar/row-oriented data structures (e.g. Parquet, ORC, Avro, etc.)
Solid understanding of Apache Iceberg or other open table formats
Proven ability to transform raw unstructured/semi-structured data into structured data in accordance with business requirements
Solid understanding of AWS, Linux, and infrastructure concepts
Proven ability to diagnose and address data abnormalities in systems
Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs
Experience building data warehouses using conformed dimensional models
Experience building data lakes and/or leveraging data lake solutions (e.g. Trino, Dremio, Druid, etc.)
Experience working with business intelligence solutions (e.g. Tableau)
Experience working with ML/agentic AI pipelines (e.g. LangChain, LlamaIndex, etc.)
Understands Domain-Driven Design concepts and the accompanying microservice architecture
Passion for data, analytics, or machine learning
Focus on value: shipping software that matters to the company and the customer

Bonus Points
Experience working with vector databases
Experience working within a retail or ecommerce environment
Proficiency in other programming languages such as Scala, Java, Golang, etc.
Experience working with Apache Arrow and/or other in-memory columnar data technologies

Supervisory Responsibility
Provide mentorship to team members on adopted patterns and best practices.
Organize and lead agile ceremonies such as daily stand-ups, planning, etc.

Additional Information
EEO STATEMENT
Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc.
are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
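The qualifications for this role include transforming raw semi-structured data into structured rows per business requirements. A minimal, hedged sketch of that idea using only the standard library (the field names and defaulting rules are invented for illustration, not Wiser's schema):

```python
import json

# Semi-structured product records: nested, with optional fields.
raw = [
    '{"sku": "A1", "price": {"amount": 9.99, "currency": "USD"}}',
    '{"sku": "B2", "price": {"amount": 4.5}, "in_stock": false}',
]

def to_row(line: str) -> dict:
    """Flatten one JSON record into a fixed-width row, applying
    illustrative defaults for missing fields."""
    rec = json.loads(line)
    price = rec.get("price", {})
    return {
        "sku": rec["sku"],
        "price_amount": price.get("amount"),
        "price_currency": price.get("currency", "USD"),
        "in_stock": rec.get("in_stock", True),
    }

rows = [to_row(line) for line in raw]
print(rows[1]["price_currency"])  # USD (defaulted, since B2 omitted it)
```

At the scale the posting describes, the same flatten-and-default logic would live in Spark or Airflow tasks writing to columnar formats like Parquet, but the shape of the transformation is identical.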

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do
Lead a team of Software Engineers to design, develop, and operate high-scale applications across the full engineering stack.
Design, develop, test, deploy, maintain, and improve software.
Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
Participate in a tight-knit, globally distributed engineering team.
Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
Manage individual project priorities, deadlines, and deliverables.
Research, create, and develop software applications to extend and improve on Equifax solutions.
Collaborate on scalability issues involving access to data and information.
Actively participate in Sprint planning, Sprint retrospectives, and other team activities.
What Experience You Need
Bachelor's degree or equivalent experience
10+ years of software engineering experience
5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
5+ years of experience with cloud technology: GCP, AWS, or Azure
5+ years of experience designing and developing cloud-native solutions
5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What Could Set You Apart
A self-starter who identifies and responds to priority shifts with minimal supervision
Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others
UI development (e.g. HTML, JavaScript, Angular, and Bootstrap)
Experience with backend technologies such as Java/J2EE, Spring Boot, SOA, and microservices
Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle
Agile environments (e.g. Scrum, XP)
Relational databases (e.g. SQL Server, MySQL)
Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
Developing with a modern JDK (v1.7+)
Automated testing: JUnit, Selenium, LoadRunner, SoapUI

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax? At Equifax, we believe knowledge drives progress.
As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
We are seeking a highly skilled Kubernetes Subject Matter Expert (SME) to join our team. The ideal candidate will have 7+ years of industry experience, including a minimum of 3 years of expertise in Kubernetes and DevSecOps. The role requires hands-on experience with multi-cloud environments, preferably Azure and AWS. The candidate must hold a Certified Kubernetes Administrator (CKA), Certified Kubernetes Security Specialist (CKS), or Certified Kubernetes Application Developer (CKAD) certification and have a strong track record of implementing Kubernetes at scale for large production environments.

Responsibilities
Design, deploy, and optimize Kubernetes-based platforms for large-scale production workloads.
Implement DevSecOps best practices to enhance the security and reliability of Kubernetes clusters.
Manage Kubernetes environments across multi-cloud platforms (Azure, AWS) with a focus on resilience and high availability.
Provide technical leadership in architecting, scaling, and troubleshooting Kubernetes ecosystems.
Develop automation strategies using Infrastructure-as-Code (IaC) tools such as Terraform, Helm, and Ansible.
Work with security teams to ensure compliance with industry security standards and best practices.
Define and implement observability and monitoring using tools like Prometheus, Grafana, and the ELK Stack.
Lead incident response and root cause analysis for Kubernetes-related production issues.
Guide and mentor engineering teams on Kubernetes, service mesh, and container security.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is a plus.
7+ years of industry experience in cloud infrastructure, container orchestration, and DevSecOps.
3+ years of hands-on experience with Kubernetes in production environments.
Strong knowledge of Kubernetes security, RBAC, Network Policies, and admission controllers.
Experience in multi-cloud environments (Azure, AWS preferred).
Hands-on experience with Istio or other service meshes. Expertise in containerization technologies like Docker. Proficiency with CI/CD pipelines (GitOps, ArgoCD, Jenkins, Azure DevOps, or similar). Experience with Kubernetes storage and networking in enterprise ecosystems. Deep understanding of Kubernetes upgrade strategies, scaling, and optimization. Must have CKA or CKS or CKAD certification.
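The admission-controller knowledge this role asks for can be illustrated with a toy validating-admission check. This is a hedged sketch: the policy (every container must declare CPU and memory limits) and all function names are hypothetical, not a real Kubernetes admission API.

```python
# Hypothetical validating-admission logic: reject pod specs whose
# containers omit CPU/memory limits. Pure-Python sketch, no real K8s API.
def validate_pod(pod: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a pod spec given as a plain dict."""
    for container in pod.get("spec", {}).get("containers", []):
        limits = container.get("resources", {}).get("limits", {})
        missing = [r for r in ("cpu", "memory") if r not in limits]
        if missing:
            return False, f"container {container['name']} missing limits: {missing}"
    return True, "ok"

good = {"spec": {"containers": [
    {"name": "app", "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}}}]}}
bad = {"spec": {"containers": [{"name": "app", "resources": {}}]}}

print(validate_pod(good))  # (True, 'ok')
print(validate_pod(bad))   # rejected with a reason
```

In a real cluster this check would run as a validating admission webhook (or a policy engine such as Gatekeeper or Kyverno); the sketch only shows the decision logic.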

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
PepsiCo eCommerce has an opportunity for a Cloud Infrastructure Security / DevSecOps engineer focused on our applications running in Azure and AWS. You will be part of the DevOps and cloud infrastructure team that is responsible for cloud security, infrastructure provisioning, and maintaining existing platforms, and that provides our partner teams with guidance for building, maintaining, and optimizing integration and deployment pipelines as code for deploying our applications to AWS and Azure. This role offers many exciting challenges and opportunities; some of the major duties are:
- Work with engineering teams to develop and improve our CI/CD pipelines, enforcing proper versioning and branching practices using technologies like GitHub, GitHub Actions, ArgoCD, Kubernetes, Docker, and Terraform.
- Create, deploy, and maintain Kubernetes-based platforms for a variety of workloads in AWS and Azure.

Responsibilities
- Deploy infrastructure in Azure and AWS using Terraform and infrastructure-as-code best practices.
- Participate in the development of CI/CD workflows to take applications from build to deployment using modern DevOps tools like Kubernetes, ArgoCD/Flux, Terraform, and Helm.
- Ensure the highest possible uptime for our Kubernetes-based developer productivity platforms.
- Partner with development teams to recommend best practices for application uptime and cloud-native infrastructure architecture.
- Collaborate in infrastructure and application architecture discussions and decision making as part of continually improving and expanding these platforms.
- Automate everything. Focus on creating tools that make your life easy and benefit the entire org and business.
- Evaluate and support onboarding of third-party SaaS applications, or work with teams to integrate new tools and services into existing apps.
- Create documentation, runbooks, disaster recovery plans, and processes.
- Collaborate with application development teams to perform RCA.
- Implement and manage threat detection protocols, processes, and systems.
- Conduct regular vulnerability assessments and ensure timely remediation of flagged incidents.
- Ensure compliance with internal security policies and external regulations like PCI.
- Lead the integration of security tools such as Wiz, Snyk, Datadog, and others within the PepsiCo infrastructure.
- Coordinate with PepsiCo's broader security teams to align Digital Commerce security practices with corporate standards.
- Provide security expertise and support to various teams within the organization.
- Advocate and enforce security best practices, such as RBAC and the principle of least privilege.
- Continuously review, improve, and document security policies and procedures.
- Participate in on-call rotation to support our NOC and incident management teams.

Qualifications
- 8+ years of IT experience.
- 5+ years of Kubernetes, ideally running workloads in a production environment on AKS or EKS.
- 4+ years of creating CI/CD pipelines in any templatized format in GitHub, GitLab, or Azure ADO.
- 3+ years of Python, Bash, or any other OOP language. (Please be prepared for a coding assessment in your language of choice.)
- 5+ years of experience deploying infrastructure to Azure platforms.
- 3+ years of experience using Terraform or writing Terraform modules.
- 3+ years of experience with Git, GitLab, or GitHub.
- 2+ years of experience as an SRE or supporting microservices in a containerized environment like Nomad, Docker Swarm, or K8s.
- Kubernetes certifications like KCNA, KCSA, CKA, CKAD, or CKS preferred.
- Good understanding of the software development lifecycle.
- Familiarity with: Site Reliability Engineering; AWS, Azure, or similar cloud platforms; automated build processes and tools; service meshes like Istio and Linkerd; monitoring tools like Datadog, Splunk, etc.
- Able to administer and run basic SQL queries in Postgres, MySQL, or any relational database.
- Current skills in the following technologies: Kubernetes, Terraform, AWS or Azure (Azure preferred), GitHub Actions or GitLab workflows.
- Familiar with Agile processes and tools such as Jira; good to have experience being part of Agile teams, continuous integration, automated testing, and test-driven development.
- BSc/MSc in computer science, software engineering, or a related field is a plus; alternatively, completion of a DevOps or infrastructure training course or bootcamp is acceptable.
- Self-starter; bias for action and quick iteration on ideas/concepts; strong interest in proving out ideas and technologies with rapid prototyping.
- Ability to interact well across various engineering teams.
- Team player; excellent listening skills; welcoming of ideas and new ways of looking at things; able to efficiently take part in brainstorming sessions.
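The "timely remediation of flagged incidents" duty above is typically tracked against severity-based SLAs. A hedged sketch, where the SLA day counts and finding records are illustrative assumptions, not PepsiCo policy:

```python
# Illustrative vulnerability-triage sketch: flag findings that have been
# open longer than an assumed per-severity remediation SLA (in days).
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}  # assumed policy

def overdue(findings, today):
    """Return IDs of findings older than their severity's SLA."""
    out = []
    for f in findings:
        limit = SLA_DAYS.get(f["severity"])
        if limit is not None and (today - f["opened"]) > timedelta(days=limit):
            out.append(f["id"])
    return out

findings = [
    {"id": "CVE-A", "severity": "critical", "opened": date(2024, 1, 1)},
    {"id": "CVE-B", "severity": "medium", "opened": date(2024, 1, 20)},
]
print(overdue(findings, date(2024, 2, 1)))  # ['CVE-A']
```

Real scanners (Wiz, Snyk) expose this data via their own APIs; the sketch only shows the SLA arithmetic.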

Posted 2 weeks ago

Apply

8.0 - 12.0 years

11 - 16 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
Responsible for infrastructure engineering and DevOps tasks for PepsiCo eCommerce. This person will also lead the four-member capability team in India.

Responsibilities
- Deploy infrastructure in Azure and AWS using Terraform and infrastructure-as-code best practices.
- Participate in the development of CI/CD workflows to take applications from build to deployment using modern DevOps tools like Kubernetes, ArgoCD/Flux, Terraform, and Helm.
- Ensure the highest possible uptime for our Kubernetes-based developer productivity platforms.
- Partner with development teams to recommend best practices for application uptime and cloud-native infrastructure architecture.
- Collaborate in infrastructure and application architecture discussions and decision making as part of continually improving and expanding these platforms.
- Automate everything. Focus on creating tools that make your life easy and benefit the entire org and business.
- Evaluate and support onboarding of third-party SaaS applications, or work with teams to integrate new tools and services into existing apps.
- Create documentation, runbooks, disaster recovery plans, and processes.
- Collaborate with application development teams to perform RCA.
- Implement and manage threat detection protocols, processes, and systems.
- Conduct regular vulnerability assessments and ensure timely remediation of flagged incidents.
- Ensure compliance with internal security policies and external regulations like PCI.
- Lead the integration of security tools such as Wiz, Snyk, Datadog, and others within the PepsiCo infrastructure.
- Coordinate with PepsiCo's broader security teams to align Digital Commerce security practices with corporate standards.
- Provide security expertise and support to various teams within the organization.
- Advocate and enforce security best practices, such as RBAC and the principle of least privilege.
- Continuously review, improve, and document security policies and procedures.
- Participate in on-call rotation to support our NOC and incident management teams.

Qualifications
- BSc/MSc in computer science, software engineering, or a related field is a plus; alternatively, completion of a DevOps or infrastructure training course or bootcamp is acceptable.
- 8+ years of Kubernetes, ideally running workloads in a production environment on AKS or EKS.
- 4+ years of creating CI/CD pipelines in any templatized format in GitHub, GitLab, or Azure ADO.
- 3+ years of Python, Bash, or any other OOP language. (Please be prepared for a coding assessment in your language of choice.)
- 5+ years of experience deploying infrastructure to Azure platforms.
- 3+ years of experience using Terraform or writing Terraform modules.
- 3+ years of experience with Git, GitLab, or GitHub.
- 2+ years of experience as an SRE or supporting microservices in a containerized environment like Nomad, Docker Swarm, or K8s.
- Kubernetes certifications like KCNA, KCSA, CKA, CKAD, or CKS preferred.
- Good understanding of the software development lifecycle.
- Familiarity with: Site Reliability Engineering; AWS, Azure, or similar cloud platforms; automated build processes and tools; service meshes like Istio and Linkerd; monitoring tools like Datadog, Splunk, etc.
- Able to administer and run basic SQL queries in Postgres, MySQL, or any relational database.
- Current skills in the following technologies: Kubernetes, Terraform, AWS or Azure (Azure preferred), GitHub Actions or GitLab workflows.
- Familiar with Agile processes and tools such as Jira; good to have experience being part of Agile teams, continuous integration, automated testing, and test-driven development.
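The "principle of least privilege" called out above can be made concrete with a small audit check. This is a toy sketch: the read-only verb baseline and rule shape are assumptions for illustration, not a Kubernetes API.

```python
# Minimal least-privilege audit sketch: flag RBAC-style rules that grant
# verbs beyond an assumed read-only baseline.
ALLOWED_VERBS = {"get", "list", "watch"}  # assumed read-only baseline

def excess_verbs(rules):
    """Return the verbs granted by `rules` that exceed the baseline."""
    granted = {v for rule in rules for v in rule.get("verbs", [])}
    return sorted(granted - ALLOWED_VERBS)

rules = [{"verbs": ["get", "list"]}, {"verbs": ["delete", "get"]}]
print(excess_verbs(rules))  # ['delete']
```

A real audit would read Role/ClusterRole objects from the cluster; the sketch only shows the set comparison at the heart of such a check.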

Posted 2 weeks ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Naukri logo

Location: Hyderabad, IND | Type: Full time | Posted: 3 days ago | Requisition ID: R1146264

TITLE: Senior DevOps Engineer
LOCATION: Hyderabad, India
GRADE: 11

About NCR Atleos:
NCR Atleos (NYSE: NATL) is a global technology company creating exceptional self-service banking experiences. We offer all the services, software, and hardware solutions needed for a comprehensive self-service channel. NCR Atleos is headquartered in Atlanta, Georgia.

POSITION SUMMARY & KEY AREAS OF RESPONSIBILITY:
- Uses NCR/PS systems and tools with minimal direction
- Understands professional services implementation methodology and/or custom development methodology and applies aspects to own role
- Works effectively with the NCR team (direct and indirect)
- Actively engages with the client counterpart to produce deliverables and communicate associated status as necessary
- Engages in higher-level communication internally and externally; leads meetings such as weekly status meetings, issue tracking, or informal design sessions
- Is able to identify issues with minimal input from team leader/manager
- Works effectively with the team, proactively providing guidance or support to more junior members
- Exposure to Agile development methodologies
- Is driven to deliver quality results
- Proactively identifies areas to expand work product
- May act as a team lead on a specific set of deliverables
- Mentors more junior team members
- Performs more complex development and testing activities with less supervision
- Develops expertise in a specific product, technology, or methodology
- Participates in activities to expand knowledge in area of focus, e.g. reading materials, attending conferences
- Developing: Requires knowledge and experience in own discipline while still acquiring higher-level knowledge and skills; builds knowledge of the organization, processes, and customers; solves a range of straightforward problems, analyzing possible solutions using standard procedures; receives a moderate level of guidance and direction.

BASIC QUALIFICATIONS:
- Azure: Knowledge of Azure cloud engineering; understanding how to leverage services for computing, storage, and database management
- CI/CD Pipelines: Experience with CI/CD pipelines, GitOps, containerization (Docker, Kubernetes), and monitoring tools
- Infrastructure as Code (IaC): Proficiency with tools like Terraform and Azure Resource Manager to automate the provisioning and management of cloud resources
- Automation & Config Management: Helm/ARM/Terraform/ASO familiarity to maintain consistent and repeatable environments
- Resiliency: Proficiency in Azure resiliency is essential for ensuring high availability, disaster recovery, and fault tolerance in cloud-based applications
- Repository: Proficiency with version control systems like GitHub, including managing GitOps and branching models (Gitflow/trunk-based)
- Networking: Understanding of virtual networks, subnets, load balancers, firewalls, DNS, and VPNs in a cloud environment
- Security: Knowledge of cloud security best practices, including identity and access management, encryption, key vaults, and compliance

Hybrid #LI-PS1

EEO Statement
NCR Atleos is an equal-opportunity employer. It is NCR Atleos policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law. Offers of employment are conditional upon passage of screening criteria applicable to the job.
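Resiliency planning for disaster recovery, as required above, is usually expressed through RTO/RPO targets. A toy check of a recovery point objective (RPO) under periodic backups, with all figures illustrative:

```python
# Hedged RPO sketch: with periodic backups, the worst-case data loss is
# one full backup interval, so the interval must not exceed the RPO target.
def meets_rpo(backup_interval_min: int, rpo_min: int) -> bool:
    """True if the backup cadence satisfies the RPO (both in minutes)."""
    return backup_interval_min <= rpo_min

print(meets_rpo(backup_interval_min=15, rpo_min=60))   # True
print(meets_rpo(backup_interval_min=240, rpo_min=60))  # False
```

RTO (recovery time objective) is the companion metric, bounding how long restoration may take rather than how much data may be lost.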

Posted 2 weeks ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Naukri logo

Location: Hyderabad, IND | Type: Full time | Posted: 3 days ago | Requisition ID: R1146263

TITLE: Senior DevOps Engineer
LOCATION: Hyderabad, India
GRADE: 11

About NCR Atleos:
NCR Atleos (NYSE: NATL) is a global technology company creating exceptional self-service banking experiences. We offer all the services, software, and hardware solutions needed for a comprehensive self-service channel. NCR Atleos is headquartered in Atlanta, Georgia.

POSITION SUMMARY & KEY AREAS OF RESPONSIBILITY:
- Uses NCR/PS systems and tools with minimal direction
- Understands professional services implementation methodology and/or custom development methodology and applies aspects to own role
- Works effectively with the NCR team (direct and indirect)
- Actively engages with the client counterpart to produce deliverables and communicate associated status as necessary
- Engages in higher-level communication internally and externally; leads meetings such as weekly status meetings, issue tracking, or informal design sessions
- Is able to identify issues with minimal input from team leader/manager
- Works effectively with the team, proactively providing guidance or support to more junior members
- Exposure to Agile development methodologies
- Is driven to deliver quality results
- Proactively identifies areas to expand work product
- May act as a team lead on a specific set of deliverables
- Mentors more junior team members
- Performs more complex development and testing activities with less supervision
- Develops expertise in a specific product, technology, or methodology
- Participates in activities to expand knowledge in area of focus, e.g. reading materials, attending conferences
- Developing: Requires knowledge and experience in own discipline while still acquiring higher-level knowledge and skills; builds knowledge of the organization, processes, and customers; solves a range of straightforward problems, analyzing possible solutions using standard procedures; receives a moderate level of guidance and direction.

BASIC QUALIFICATIONS:
- Bachelor's degree in a related discipline
- 4-8 years of related experience focused on the self-service financial industry and CEN XFS based solutions
- 3-5 years of self-service financial industry software design and development experience
- Ability to work under tight timelines in a demanding environment
- Related operational experience
- Excellent verbal and written communication skills
- Strong presentation skills
- Ability to take a leadership role in challenging standard approaches

TECHNICAL SKILLS:
DevOps:
- Azure: Knowledge of Azure cloud engineering; understanding how to leverage services for computing, storage, and database management
- CI/CD Pipelines: Experience with CI/CD pipelines, GitOps, containerization (Docker, Kubernetes), and monitoring tools
- Infrastructure as Code (IaC): Proficiency with tools like Terraform and Azure Resource Manager to automate the provisioning and management of cloud resources
- Automation & Config Management: Helm/ARM/Terraform/ASO familiarity to maintain consistent and repeatable environments
- Resiliency: Proficiency in Azure resiliency is essential for ensuring high availability, disaster recovery, and fault tolerance in cloud-based applications
- Repository: Proficiency with version control systems like GitHub, including managing GitOps and branching models (Gitflow/trunk-based)
- Networking: Understanding of virtual networks, subnets, load balancers, firewalls, DNS, and VPNs in a cloud environment
- Security: Knowledge of cloud security best practices, including identity and access management, encryption, key vaults, and compliance

Agile Practices:
- Provide input and technical content for technical documentation, user help materials, and customer training
- Experience with Agile practices like Kanban/Scrum
- Thorough understanding of root cause analysis
- Experience working in continuous test and build environments

PREFERRED QUALIFICATIONS:
- Able to evaluate risk independently and propose contingency plans
- Good business acumen and ability to negotiate up-sell and margin improvement
- Able to provide Business Impact Analysis at an enterprise level

Hybrid #LI-PS1

EEO Statement
NCR Atleos is an equal-opportunity employer. It is NCR Atleos policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law. Offers of employment are conditional upon passage of screening criteria applicable to the job.
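The branching models named above (Gitflow, trunk-based) are often enforced with a branch-name check in CI. A hedged sketch, using one common Gitflow naming convention; the accepted prefixes are an assumption, not a universal standard:

```python
# Illustrative Gitflow branch-name validator. Accepted shapes (assumed
# convention): main, develop, feature/*, release/*, hotfix/*.
import re

GITFLOW = re.compile(r"^(main|develop|(feature|release|hotfix)/[\w.-]+)$")

def valid_branch(name: str) -> bool:
    return GITFLOW.match(name) is not None

print(valid_branch("feature/login-form"))  # True
print(valid_branch("my-random-branch"))    # False
```

Trunk-based development would instead allow only short-lived branches off `main`, so the pattern would be correspondingly simpler.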

Posted 2 weeks ago

Apply

4.0 - 5.0 years

7 - 12 Lacs

Pune

Work from Office

Naukri logo

Job ID: 198521
Required Travel: Minimal
Managerial: Yes
Location: India - Pune (Amdocs Site)

Who are we
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com

In one sentence

What will your job look like
- Engineering: Design, develop, modify, and integrate complex infrastructure automation and deployment systems. Ensure code and configuration are maintainable (IaC), scalable, and supportable. Use extensive knowledge and expertise of the Amdocs product/solution and technologies to support the direction of the infrastructure solution. Follow Amdocs software engineering standards, applicable software development methodology (such as design-led thinking, DevOps), and release processes. Promote system reliability and operability. Serve as an expert on specific technology.
- Investigation: Investigate infrastructure issues by reviewing/debugging CI/CD pipelines and runtime environments, providing fixes (analyze and fix bugs) and workarounds. Review changes for operability to maintain existing software solutions. Highlight risks and help mitigate them from technical aspects.
- Technical Expertise: Serve as a highly specialized technology/product expert, acting with high autonomy to deliver agreed technical objectives. Provide technical expertise on infrastructure architecture usage for functional and non-functional aspects. Maintain a strong understanding of the technology context while making technical decisions and solving technical issues. Able to resolve significant and unique problems that have a high level of complexity.
- Professional Leadership: Provide guidance to DevOps engineers for the end-to-end software deployment, maintenance, and lifecycle. Enforce and guide on technical standards (e.g. tools and platforms); ensure that documentation related to the application lifecycle architecture is correct and up to date. Enforce quality processes, such as performing technical root cause analysis and outlining corrective action for given problems; promote developer autonomy.
- Innovation & Continuous Improvement: Promote continuous improvements/efficiencies to the product lifecycle and business processes by utilizing common tools and different innovation techniques and guiding the reuse of existing ones. Stay up to date with new technologies in practice in the industry.
- Teamwork and Collaboration: Collaborate and add value through participation in peer reviews, provide comments and suggestions, and work with teams to achieve goals. Serve as the technical focal point with other teams to resolve issues.
- Quality and SLAs: Contribute to meeting various SLAs and KPIs as applicable for the organization, for example Responsiveness, Resolution, and Software Quality SLAs. Ensure that assigned tasks are completed on time and that delivery timelines are met in accordance with the organization's quality targets.
- Onboarding and Knowledge Sharing: Onboard new hires and train them on processes, share knowledge with team members, take an active role in technical mentoring within the team, and elevate the team's knowledge.

All you need is...
- Proven experience in CI/CD and MS areas, including configuration management and continuous integration tools (Jenkins, Bitbucket, Nexus); designing complex solutions based on automation tools and deployment on common cloud computing platforms (OpenShift, AWS, Azure, and/or GCP); and working with several third-party tools (e.g. Postgres, Cassandra, Helm, Groovy, Nagios, Liquibase, Couchbase)
- 4-5 years of experience with infrastructure-as-code and configuration-as-code, including one or more tools: CloudFormation, Ansible, Chef, Puppet, SaltStack, Terragrunt, Terraform
- 4-5 years of script development experience, plus IT experience in one of the common languages (Python, Groovy, Bash)
- Deep understanding of and experience in the DevOps ecosystem and IT operations systems
- Hands-on experience in build, release, deployment, and monitoring of cloud-based, scalable distributed systems
- Rich experience in infrastructure/foundations/IT for at least 5-6 years, working in an agile development environment
- Experience designing and implementing TLS, load balancing, and high availability for cluster/hybrid production-like environments

Why you will love this job:
Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.
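The load-balancing and high-availability experience asked for above rests on a simple core idea: route traffic only to healthy backends. A toy round-robin sketch; backend names and health flags are made up for illustration:

```python
# Toy round-robin load balancer that skips unhealthy backends,
# illustrating the HA idea (no real networking involved).
class RoundRobin:
    def __init__(self, backends):
        self.backends = backends  # dict: backend name -> healthy flag
        self._i = 0

    def pick(self):
        healthy = [b for b, ok in self.backends.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy backends")
        choice = healthy[self._i % len(healthy)]
        self._i += 1
        return choice

lb = RoundRobin({"node-a": True, "node-b": False, "node-c": True})
print([lb.pick() for _ in range(4)])  # ['node-a', 'node-c', 'node-a', 'node-c']
```

Production balancers add active health checks, connection draining, and TLS termination on top of this routing loop.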

Posted 2 weeks ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Pune

Work from Office

Naukri logo

Job ID: 199131
Required Travel: Minimal
Managerial: No
Location: India - Pune (Amdocs Site)

Who are we
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com

In one sentence
We are looking for a dynamic Consultant Lead to drive large-scale cloud infrastructure and DevOps automation initiatives. This role demands not only deep technical expertise in Infrastructure as Code (IaC), automation, and cloud architecture, but also leadership capabilities to guide teams, set strategic direction, and ensure delivery excellence.

What will your job look like
- Architect and lead the implementation of secure, scalable, and highly available infrastructure on Azure using Terraform, Azure DevOps, and CloudFormation.
- Define and drive Infrastructure as Code (IaC) best practices across global teams.
- Lead complex automation initiatives using Ansible and other tools like Helm, Chef, and Puppet.
- Provide technical leadership and mentor cross-functional teams on scripting (Bash, PowerShell, Python, Groovy) and DevOps best practices.
- Collaborate with cloud architects and security teams to design robust cloud-native solutions across Azure and hybrid environments.
- Oversee CI/CD pipelines using tools such as Jenkins, GitLab, and Bitbucket for efficient release and deployment cycles.
- Contribute to strategic planning, capacity building, and performance tuning of infrastructure platforms.

All you need is...
- Infrastructure as Code (IaC): Terraform, Azure DevOps, CloudFormation
- Automation & Configuration Management (5+ years): Ansible (mandatory), Helm (mandatory), Chef, Puppet, GitLab
- Scripting Languages (5+ years): Bash (mandatory), PowerShell (mandatory), Python, Groovy
- Cloud Expertise: Azure (must-have) with deep hands-on experience across services such as Azure Kubernetes Service (AKS), DNS, VNets, Private Link, Application Gateway, Azure Monitor, Entra ID, Azure SQL/MySQL DB, Azure Functions, and more
- Experience designing and implementing cloud solution architectures
- Leadership: Proven experience leading DevOps/infrastructure teams or cross-functional cloud projects; strong stakeholder management and decision-making skills

Why you will love this job:
You will be challenged to strategize, direct, and build a high-performance organization; have the opportunity to use your forward thinking and critical thinking to carry out regional or global responsibility based on your role; and coordinate driving penetration into existing and new opportunities and accounts to grow the overall business. Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.
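The Infrastructure-as-Code practice at the center of this role boils down to reconciling declared desired state against what actually exists. A minimal sketch in the spirit of `terraform plan`; the resource names and attributes are hypothetical:

```python
# IaC reconciliation sketch: diff desired state against actual state and
# emit a create/delete/update plan (pure dict comparison, no cloud calls).
def plan(desired: dict, actual: dict) -> dict:
    return {
        "create": sorted(desired.keys() - actual.keys()),
        "delete": sorted(actual.keys() - desired.keys()),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

desired = {"vnet-main": {"cidr": "10.0.0.0/16"}, "aks-prod": {"nodes": 3}}
actual = {"vnet-main": {"cidr": "10.0.0.0/16"}, "vm-legacy": {"size": "B2s"}}
print(plan(desired, actual))
# {'create': ['aks-prod'], 'delete': ['vm-legacy'], 'update': []}
```

Real tools like Terraform and ARM add dependency ordering, provider APIs, and state storage on top of this diff, but the three-way plan is the same idea.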

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Position Overview
Job Title: DevOps Engineer
Location: Pune, India

Role Description
We develop and manage CI/CD, monitoring, and various automation solutions as a service for Security Services Technologies under the Corporate Bank division of Deutsche Bank. Our environment currently relies on a Linux-based stack and open-source tools such as Jenkins, Helm, Ansible, and Docker, as well as other popular tools like OpenShift and Terraform. As a DevOps/Platform Engineer, you will be responsible for designing, implementing, and supporting reusable engineering solutions, as well as building and promoting a strong engineering culture.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for 35 yrs. and above

Your Key Responsibilities
- Develop, maintain, and continuously improve the shared CI/CD, automation, and monitoring components, keeping focus on quality and user experience.
- Contribute to introducing modern industry practices into the team's work and promote them among the development teams.
- Assist the development teams with their ongoing activities and issues, and with adopting our solutions.

Your Skills And Experience
- Understanding of common development tasks and problems. A background in development, quality assurance, or SRE is a plus.
- Knowledge of the SDLC and experience with the tools that we use:
  - Application development: Spring Boot, Kotlin/Java
  - VCS: Git, Bitbucket. Knowledge of GitHub is a plus
  - CI/CD: Jenkins. Knowledge of TeamCity and GitHub Actions is a plus
  - Build tools: Jib, Maven, Gradle, NPM
  - DevSecOps: SonarQube, JFrog Xray, Veracode
  - Deployments, configuration, and infrastructure management: Docker, Helm, Ansible, Terraform, Liquibase
  - Monitoring & SRE: Prometheus, Grafana. Knowledge of New Relic and Splunk is a plus
  - Scripting: Groovy, Python, Shell
- Hands-on experience with container-based environments (Docker Compose, Minikube, Kubernetes).
- Knowledge of OpenShift and GCP is a plus.
- Strong communication and troubleshooting skills; readiness to take ownership of your tasks.
- Proactive mindset, attention to detail, and a constant wish to improve.

How We'll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.ht
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
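The Prometheus-based monitoring stack mentioned above scrapes metrics in a plain-text exposition format. A hedged sketch of parsing one simple line shape (`name{labels} value`); the full format also covers comments, timestamps, and escaping, which this deliberately skips:

```python
# Simplified parser for Prometheus-style exposition lines of the form
# name{labels} value. Not a full implementation of the format.
import re

LINE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+([\d.eE+-]+)$')

def parse_metric(line: str):
    """Return (name, label_string, value) or None if the line doesn't match."""
    m = LINE.match(line)
    if not m:
        return None
    name, labels, value = m.groups()
    return name, labels or "", float(value)

print(parse_metric('http_requests_total{method="GET"} 1027'))
# ('http_requests_total', 'method="GET"', 1027.0)
```

In practice the Prometheus client libraries produce and parse this format; the sketch just shows why a metric line is easy to generate from shell or scripts.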

Posted 2 weeks ago

Apply

2.0 - 7.0 years

10 - 15 Lacs

Noida

Work from Office

Naukri logo

Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
Design and Architecture: Participate in design reviews with peers and stakeholders.
Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices like coding guidelines.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Debugging and Troubleshooting: Triage defects or customer-reported issues; debug and resolve them in a timely and efficient manner.
Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
DevOps Model: Understanding of working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production.
Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.
Automation & Infrastructure: Design, build, and support deployment automation for Java-based microservices using cloud-native solutions, scripting (Shell, Python, Groovy), and Google Cloud APIs for efficiency and cost-effectiveness.
CI/CD & Source Control: Ensure seamless code delivery through well-structured CI/CD pipelines using Bitbucket, GitHub, GitHub Actions, and Jenkins.
Infrastructure as Code (IaC): Automate and maintain scalable infrastructure with Terraform and Ansible for consistency across environments.
Issue Resolution & Troubleshooting: Quickly analyze, identify, and resolve deployment and infrastructure issues to minimize downtime.
Tooling & Development Support: Build and maintain tools for automation, monitoring, and streamlined operations, leveraging Kubernetes on GCP.
Technology Evaluation: Stay updated on emerging DevOps technologies and assess their adoption potential.

Basic Qualifications
- Bachelor's degree in computer science, engineering, or a related technical field, or equivalent practical experience.
- 2+ years of professional software development experience.
- Proficiency in one or more programming languages such as C, C++, C#, .NET, Python, Java, or JavaScript.
- Experience with software development practices and design patterns.
- Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA.
- Basic understanding of cloud technologies and DevOps principles.
- Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.

Preferred Qualifications
- Experience with cloud platforms like Azure, AWS, or GCP.
- Experience with test automation frameworks and tools.
- Knowledge of agile development methodologies.
- Ability to code in multiple languages (Shell scripting/Python/Groovy/Ruby/Java).
- Development tools: Git (Stash), Jira, Confluence, Artifactory, Gradle.
- Hands-on experience with public cloud providers (AWS, GCP).
- Well versed in containerization techniques such as Docker/Kubernetes, with knowledge of developing Terraform/Ansible/Helm scripts.
- Commitment to continuous learning and professional development.
- Good communication and interpersonal skills, with the ability to work effectively in a collaborative team.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

18 - 20 Lacs

Pune

Work from Office

Naukri logo

Notice Period: Immediate joiners only. Duration: 6 months (possible extension). Shift Timing: 11:30 AM to 9:30 PM IST.

About the Role: We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.

Key Responsibilities: Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps). Configure and maintain observability tools (logs, metrics, traces, alerts). Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java). Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools. Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets). Lead or participate in incident response and root cause analysis using logs, traces, and dashboards. Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments. Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.

Required Skills & Experience: 5-10 years in DevOps, SRE, Platform, or Backend Infrastructure roles. Strong coding/scripting skills and the ability to review production-grade backend code. Hands-on experience with Kubernetes in production, preferably on GKE. Proficiency in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux). Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management). Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies. Strong communication and documentation skills, capable of driving improvements through PRs and design reviews.
Tech Stack & Tools: Cloud & Orchestration: GKE, Kubernetes. IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux. Monitoring & Alerting: Datadog, PagerDuty. Databases & Networking: CloudSQL, Cloudflare. Security & Access Control: Secret Management, IAM.

Driving Results: A good individual contributor and a good team player. Flexible attitude towards work, as per the needs. Proactively identifies and communicates issues and risks.

Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. Maintains a collaborative and analytical approach. Self-confident and humble. Open to continuous learning. An intelligent, rigorous thinker who can operate successfully amongst bright people.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

2 - 5 Lacs

Pune

Work from Office

Naukri logo

Job Information
Job Opening ID: ZR_2211_JOB
Date Opened: 18/04/2024
Industry: Technology
Job Type:
Work Experience: 5-8 years
Job Title: Kubernetes Engineer
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411014
Number of Positions: 4

This position is for developing and supporting a one-of-its-kind Kubernetes-based platform that is used for hosting a number of security analytics applications. These applications ingest and process gigabytes to terabytes of data and pose interesting scale and performance challenges for the platform. Proficiency in Golang, and heavy use of Kubernetes and Helm. Experience with big data platforms along with expertise in k8s ecosystem operators. GCP knowledge preferred.

I'm interested

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

The Indian School Psychology Association (InSPA) is dedicated to promoting school psychology services and supporting the holistic development of children across India. Recognizing the diverse educational standards in India, InSPA emphasizes the necessity of psychological guidance for all children, irrespective of their school's economic status. School Psychologists are crucial in ensuring data-driven decision-making and creating conducive environments for learning. We're seeking a passionate and highly experienced Lead Graphic Designer Volunteer to spearhead and manage our graphic design efforts. This pivotal role involves leading a team of talented volunteers, setting the creative vision, and ensuring all visual communications powerfully convey InSPA's mission. Primary Responsibilities As the Lead Graphic Designer Volunteer, you will: Provide strategic creative direction and oversee all graphic design projects from concept through completion. Lead, mentor, and empower a team of graphic design volunteers, fostering a collaborative and productive environment. This includes task delegation, providing constructive feedback, and facilitating skill development. Develop and enforce brand guidelines to ensure consistency and a cohesive visual identity across all InSPA materials. Collaborate closely with the content development team to translate complex ideas into impactful visual narratives for various platforms (posters, videos, reels, social media, website). Manage project timelines and workflow for the design team, ensuring timely delivery of high-quality assets. Act as the primary point of contact for design-related inquiries and coordination with other InSPA teams. Research and recommend new design tools, techniques, and trends to keep InSPA's visual content fresh and engaging. Why Volunteer with InSPA? Drive Creative Vision: Take the helm of a design team and directly shape the visual communication of a meaningful cause. 
Elevate Your Leadership Profile: Gain invaluable experience in team management, project oversight, and strategic planning within a non-profit setting. Gain Significant Visibility: Your leadership and your team's collective work will be prominently featured on InSPA's website, social media, and at our conferences. Expand Your Network: Connect with industry leaders, fellow professionals, and potential collaborators within the InSPA community. Develop Valuable Skills: Enhance your design, visual communication, digital media, and critically, your leadership, team management, and strategic thinking skills. Gain Recognition: Your contributions and leadership will be acknowledged and appreciated, fostering a sense of accomplishment and belonging. Flexible Schedule: Contribute remotely and on your own schedule, ensuring you are available for pre-scheduled team meetings and check-ins. Personal Fulfillment: Experience the satisfaction of contributing your expertise to a cause that truly makes a difference in promoting child well-being. Receive a Letter of Recommendation or Volunteer Experience Certificate, relevant to your contributions, after the minimum commitment period.

Eligibility
Passion for InSPA's Mission: A genuine interest in promoting mental health awareness, school psychology, and the well-being of children and adolescents. Minimum 3+ Years of Graphic Design Experience: Proven professional experience with a strong portfolio showcasing diverse design projects. Demonstrated Leadership Experience: Prior experience in leading a design team, managing projects, or mentoring junior designers is essential. Strong Communication Skills: Excellent written and verbal communication abilities, crucial for team leadership, stakeholder communication, and presenting concepts. Reliability and Commitment: Ability to dedicate at least 2 hours per day or compensate within a week, with consistent availability for team meetings and leadership responsibilities.
Exceptional Team Player and Leader: Proven ability to collaborate effectively, inspire, and guide a team of volunteers. Expert Proficiency in Design Software: Master-level proficiency in design software (e.g., Adobe Creative Suite - Photoshop, Illustrator, InDesign; Canva). Deep Understanding of Visual Design Principles: Deep knowledge of typography, color theory, layout, visual hierarchy, and user experience principles. Ability to Create Engaging and Visually Appealing Content: A strong portfolio demonstrating innovative and effective visual communication. Professionalism: Adherence to ethical standards and a highly professional demeanor. (Optional) Relevant Background: Experience in psychology, education, social work, or related fields is a plus.

Duration: Flexible timings, at least 2 hours x 5 days a week, with a minimum commitment of 2 months.

Compensation: This is an unpaid volunteer opportunity. Letter of Recommendation or Volunteer Experience Certificate, relevant to your contributions, after the minimum commitment period.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Linkedin logo

Position: Senior Engineer/Technical Lead (DevOps Engineer - Azure)

Job Description

Key Responsibilities:
Azure Cloud Management: Design, deploy, and manage Azure cloud environments. Ensure optimal performance, scalability, and security of cloud resources using services like Azure Virtual Machines, Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure SQL Database.
Automation & Configuration Management: Use Ansible for configuration management and automation of infrastructure tasks. Implement Infrastructure as Code (IaC) using Azure Resource Manager (ARM) templates or Terraform.
Containerization: Implement and manage Docker containers. Develop and maintain Dockerfiles and container orchestration strategies with Azure Kubernetes Service (AKS) or Azure Container Instances.
Server Administration: Administer and manage Linux servers. Perform routine maintenance, updates, and troubleshooting.
Scripting: Develop and maintain Shell scripts to automate routine tasks and processes.
Helm Charts: Create and manage Helm charts for deploying and managing applications on Kubernetes clusters.
Monitoring & Alerting: Implement and configure Prometheus and Grafana for monitoring and visualization of metrics. Use Azure Monitor and Azure Application Insights for comprehensive monitoring, logging, and diagnostics.
Networking: Configure and manage Azure networking components such as Virtual Networks, Network Security Groups (NSGs), Azure Load Balancer, and Azure Application Gateway.
Security & Compliance: Implement and manage Azure Security Center and Azure Policy to ensure compliance and security best practices.

Required Skills and Qualifications:
Experience: 5+ years of experience in cloud operations, with a focus on Azure.
Azure Expertise: In-depth knowledge of Azure services, including Azure Virtual Machines, Azure Kubernetes Service, Azure App Services, Azure Functions, Azure Storage, Azure SQL Database, Azure Monitor, Azure Application Insights, and Azure Security Center.
Automation Tools: Proficiency in Ansible for configuration management and automation. Experience with Infrastructure as Code (IaC) tools like ARM templates or Terraform.
Containerization: Hands-on experience with Docker for containerization and container management.
Linux Administration: Solid experience in Linux server administration, including installation, configuration, and troubleshooting.
Scripting: Strong Shell scripting skills for automation and task management.
Helm Charts: Experience with Helm charts for Kubernetes deployments.
Monitoring Tools: Familiarity with Prometheus and Grafana for metrics collection and visualization.
Networking: Experience with Azure networking components and configurations.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues.
Communication: Excellent communication skills, both written and verbal, with the ability to work effectively in a team environment.

Preferred Qualifications:
Certifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect) are a plus.
Additional Tools: Experience with other cloud platforms (AWS, GCP) or tools (Kubernetes, Terraform) is beneficial.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips). Time Type: Full time. Job Category: Engineering Services.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Key Responsibilities: Develop and maintain automation scripts using Bash, Python, or Shell for provisioning, deployment, and monitoring tasks. Manage cloud infrastructure on AWS, Azure, or GCP, ensuring scalability, security, and performance. Implement and maintain orchestration tools like Kubernetes, Docker Swarm, or Ansible. Build and optimize CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or equivalent. Monitor system performance and availability; troubleshoot infrastructure issues on Linux/UNIX servers. Work closely with Development and QA teams to streamline application deployment. Manage and respond to ticketing systems such as JIRA, ServiceNow, or Zendesk. Ensure system reliability, uptime, and recovery by implementing robust automation and backup strategies. Apply best practices in infrastructure security, secrets management, and access control.

Qualifications & Experience: 1-3 years of hands-on experience in a DevOps role. Proficiency in scripting languages: Bash, Python, or Shell. Strong experience with Linux/UNIX system administration. Hands-on experience with at least one major cloud provider: AWS, Azure, or GCP. Proficiency in containerization and orchestration tools: Docker, Kubernetes, Helm. Experience building and maintaining CI/CD pipelines. Familiarity with configuration management tools such as Ansible, Puppet, or Chef. Exposure to ticketing platforms like JIRA or ServiceNow. Experience with infrastructure monitoring tools (e.g., Prometheus, Grafana, ELK stack, Datadog).

If this sounds like you, please share your resume along with CTC details and notice period at pawan.shukla@dotpe.in to discuss further.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

RabbitMQ Administrator - Prog Leasing1 Job Title: RabbitMQ Cluster Migration Engineer Job Summary We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new AWS-based high-availability cluster environment. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime. Key Responsibilities Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup. Evaluate the current messaging architecture, performance bottlenecks, and limitations. Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed). Ensure high availability, fault tolerance, and disaster recovery configurations. Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans. Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes). Monitor message queues during migration to ensure message durability and delivery guarantees. Document all aspects of the architecture, configurations, and migration process. Required Qualifications Strong experience with RabbitMQ, especially in clustered and high-availability environments. Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues. Experience with RabbitMQ management plugins, monitoring, and performance tuning. Proficiency with scripting languages (e.g., Bash, Python) for automation. Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm). Familiarity with containerization and orchestration (e.g., Docker, Kubernetes). Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.). Experience with zero-downtime migration and rollback strategies. Preferred Qualifications Experience migrating RabbitMQ clusters in production environments. 
Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services. Understanding of security in messaging systems (TLS, authentication, access control). Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Your Team Responsibilities
The Data Technology group in MSCI is responsible for building and maintaining a state-of-the-art data management platform that delivers Reference, Market, and other critical datapoints to various products of the firm. The platform, hosted in the firm's data centers and on Azure & GCP public cloud, processes 100 TB+ of data and is expected to run 24x7. With an increased focus on automation around systems development and operations, Data Science based quality control, and cloud migration, several tech stack modernization initiatives are currently in progress. To accomplish these initiatives, we are seeking a highly motivated and innovative individual to join the automation team for the purpose of supporting our next generation of developer tools and infrastructure. The team is the hub around which the Engineering and Operations teams revolve for automation and is committed to providing self-serve tools to our internal customers. The position is based in the Mumbai, India office.

Your Key Responsibilities
Deploy and manage Airflow infrastructure. Monitor Airflow DAG execution and performance. Build and maintain Azure, GCP & on-prem datacenter infrastructure. Build and maintain Azure DevOps CI/CD pipelines. Automate Terraform deployment using Azure DevOps. Implement actionable monitoring solutions that can assist in building highly observable systems. Provide support and continuous feedback to developers. Work closely with the development team to implement new build processes and strategies to meet new product requirements. Independently engage with stakeholders, finalize requirements, and bring them to closure in a timely and effective manner. Address vulnerabilities in both private and public cloud infrastructure, with a focus on application vulnerabilities (Mend) and infrastructure (Wiz). Collaborate with the central DevOps teams to design and develop CI/CD pipelines and processes in accordance with MSCI policies and standards.

Your Skills And Experience That Will Help You Excel
Self-motivated, collaborative individual with a passion for excellence. B.E. in Computer Science or equivalent with 5+ years of total experience, including at least 2 years working with Azure DevOps tools and technologies. Good working knowledge of source control applications like Git, with prior experience building deployment workflows using this tool. Good working knowledge of YAML, Python, Bash, and Terraform. Debugging Maven builds. Setting up YAML-based CD/CI pipelines on Azure. Containerizing applications using Docker. Kubernetes administration, architecture, and deployments. Linux administration and scripting. Helm charts. Experience with the creation, maintenance, and deployment of the following resources: Kubernetes (Rancher, Azure Kubernetes Services, Google Kubernetes Engine); Azure & GCP public cloud infrastructure. Experience working with Airflow is a big plus. Experience with PostgreSQL is an added advantage.

About MSCI
What we offer you: Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients. A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups.
All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. 
Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com

Posted 2 weeks ago

Apply

Exploring Helm Jobs in India

Helm is a popular package manager for Kubernetes that simplifies the deployment and management of applications. In India, the demand for professionals with expertise in Helm is on the rise as more companies adopt Kubernetes for their container orchestration needs.
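To make the packaging idea concrete, the sketch below lays out by hand the standard directory structure Helm 3 expects for a chart (Chart.yaml, values.yaml, templates/). The chart name `hello-web` and all file contents are illustrative assumptions; in practice `helm create hello-web` generates a fuller skeleton for you.

```shell
# Sketch of a minimal Helm 3 chart layout (names/values are hypothetical).
mkdir -p hello-web/templates

cat > hello-web/Chart.yaml <<'EOF'
apiVersion: v2          # Helm 3 chart API version
name: hello-web
version: 0.1.0          # chart version, bumped on every change
appVersion: "1.0.0"     # version of the application the chart deploys
EOF

cat > hello-web/values.yaml <<'EOF'
replicaCount: 2
image:
  repository: nginx
  tag: "1.27"
EOF

# Templates reference values via Go templating:
cat > hello-web/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
EOF

# With a Kubernetes cluster available you would then run (not executed here):
#   helm install demo ./hello-web
ls -R hello-web
```

Once a chart like this exists, the whole application is installed, upgraded, or rolled back as a single versioned unit, which is the core simplification Helm brings over raw manifests.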

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi NCR

Average Salary Range

The average salary range for Helm professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can command salaries upwards of INR 15 lakhs per annum.

Career Path

Typically, a career in Helm progresses as follows:

  1. Junior Helm Engineer
  2. Helm Engineer
  3. Senior Helm Engineer
  4. Helm Architect
  5. Helm Specialist
  6. Helm Consultant

Related Skills

In addition to proficiency in Helm, professionals in this field are often expected to have knowledge of:

  • Kubernetes
  • Docker
  • Containerization
  • DevOps practices
  • Infrastructure as Code (IaC)

Interview Questions

  • What is Helm and how does it simplify Kubernetes deployments? (basic)
  • Can you explain the difference between a Chart and a Release in Helm? (medium)
  • How would you handle secrets management in Helm charts? (medium)
  • What are the limitations of Helm and how would you work around them? (advanced)
  • How do you troubleshoot Helm deployment failures? (medium)
  • Explain the concept of Helm Hooks and when they are triggered during the deployment lifecycle. (medium)
  • How do you version and manage Helm charts in a production environment? (medium)
  • What are the best practices for Helm chart organization and structure? (basic)
  • Describe a scenario where you used Helm to deploy a complex application and the challenges you faced. (advanced)
  • How do you manage dependencies between Helm charts? (medium)
  • Explain the difference between Helm 2 and Helm 3. (basic)
  • How do you perform a rollback of a Helm release? (medium)
  • What security considerations should be taken into account when using Helm? (advanced)
  • How do you customize Helm charts for different environments (dev, staging, production)? (medium)
  • Can you automate the deployment of Helm charts using CI/CD pipelines? (medium)
  • What is Tiller in Helm and why was it removed in Helm 3? (advanced)
  • How do you manage upgrades of Helm releases without causing downtime? (medium)
  • Explain how you would handle configuration management in Helm charts. (medium)
  • What are the advantages of using Helm over manual Kubernetes manifests? (basic)
  • How do you ensure the idempotency of Helm deployments? (medium)
  • How do you perform linting and testing of Helm charts? (basic)
  • Can you explain the concept of Helm repositories and how they are used? (medium)
  • How would you handle versioning of Helm charts to ensure compatibility with different Kubernetes versions? (medium)
  • Describe a situation where you had to troubleshoot a Helm chart that was failing to deploy. (advanced)
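Several of the questions above (customizing charts per environment, performing rollbacks) revolve around Helm's values-override mechanism. A common pattern, sketched here with hypothetical file names and values, is one values file per environment layered over the chart's defaults at deploy time:

```shell
# Sketch: per-environment override files layered over a chart's values.yaml.
# File names and contents are illustrative only.
cat > values-dev.yaml <<'EOF'
replicaCount: 1
resources:
  requests:
    cpu: 100m
EOF

cat > values-prod.yaml <<'EOF'
replicaCount: 4
resources:
  requests:
    cpu: 500m
EOF

# Against a real cluster you would pick the layer at deploy time;
# later -f flags take precedence over earlier ones and over values.yaml:
#   helm upgrade --install myapp ./mychart -f values-prod.yaml
# and revert to an earlier release revision if the rollout misbehaves:
#   helm rollback myapp 1
grep replicaCount values-dev.yaml values-prod.yaml
```

Being able to explain this layering, and when to prefer it over `--set` flags or separate charts per environment, covers a large share of the medium-difficulty questions listed above.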

Closing Remark

As the demand for Helm professionals continues to grow in India, it is important for job seekers to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a valuable asset to organizations looking to leverage Helm for their Kubernetes deployments. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies