Jobs
Interviews

2805 Helm Jobs - Page 18

Set up a job alert
JobPe aggregates listings for easy access, but you actually apply directly on the original job portal.

5.0 years

25 Lacs

Bengaluru

On-site

Job Information
Date Opened: 21/07/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 25 LPA
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description

About the Role: We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
- Take full ownership of the migration process to AWS, including planning and execution.
- Work closely with architects to define the best approach for migrating applications into Amazon EKS.
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
- Create and manage Helm charts for Kubernetes deployments across multiple environments.
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.
Required Skills & Expertise:
- CI/CD & DevOps Tools: Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory; shell scripting; Python (with YAML/JSON handling); Git and version-control best practices
- Containers & Kubernetes: Docker (multi-stage builds, non-root containers, troubleshooting); Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)
- Cloud Infrastructure (AWS): EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers; experience with IAM roles, workload identities, and secure AWS access patterns; network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing
- Databases: PostgreSQL (pg_dump/pg_restore, user management, RDS troubleshooting)
- Web & Security Concepts: NGINX, web servers, reverse proxies, path-based/host-based routing; session handling; load balancing (stateful vs stateless); security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations: The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations.
- Demonstrate complex scripting solutions and data-processing tasks in Bash and Python.
- Describe container lifecycle, troubleshooting steps, and security-hardening practices.
- Detail Kubernetes architecture, Helm chart design, and access-control configurations.
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certifications.
- Experience in FinTech, SaaS, or other regulated industries.
- Knowledge of cost-optimization strategies in cloud environments.
- Familiarity with Agile/Scrum methodologies.
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
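The posting's emphasis on IAM roles and least-privilege principles can be made concrete with a small sketch. The bucket name and policy contents below are hypothetical placeholders, not taken from the job; the snippet only shows the shape of a minimal read-only S3 policy document:

```python
import json

def s3_read_only_policy(bucket: str) -> str:
    """Build a least-privilege IAM policy allowing read-only access
    to a single S3 bucket (the bucket name is a placeholder)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Only the two actions a reader needs, nothing broader.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # the bucket itself (ListBucket)
                    f"arn:aws:s3:::{bucket}/*",    # its objects (GetObject)
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(s3_read_only_policy("example-migration-bucket"))
```

In practice such a document would be attached to an IAM role assumed by a workload identity rather than baked into credentials; the sketch covers only the policy shape.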

Posted 2 weeks ago

Apply

10.0 years

10 Lacs

Bengaluru

On-site

Job Title: Senior Product Definition Analyst (Product Definition Analyst 4)

Outpayce: What do we do? The Outpayce business line sits under the Travel unit and serves every part of the global travel ecosystem, processing payments for airlines, hotels, travel sellers and corporations across the globe. We aim to be the global leader in travel payments by delivering smooth and connected experiences, helping our customers and their travelers benefit from new advances and trends in payments. Outpayce processes millions of payments every day through credit card networks, acquiring banks or alternative payment method providers, and provides a complete ecosystem of payment solutions. Our processing reaches 100 billion euros per year, with more than 400 acquiring banks and more than 300 local payment methods.

Outpayce R&D is growing. Outpayce's R&D division has the mission to provide payment solutions deeply integrated into the reservation/ticketing flow, and PCI DSS products, for all Amadeus customers and across all channels. Amadeus has been investing in travel payments for over 10 years. Today, the Outpayce R&D division has teams in Nice, Madrid, London, Erding, Dallas, Bangalore and soon Istanbul. The division develops and operates products and solutions, supported by transversal functions:
- Products and solutions: Gateway, focusing on Merchant and Hotel IT; Acquiring and Back Office (Acquiring, Reconciliation, Chargeback and Risk); Payer (B2B Wallet, Corporate Travel and Hotel Distribution); Orion (financial services); Airlines Programs and Services; Competency Centers for Airlines
- Transversal functions: Security, DevOps and Customer Care; Platform; Cloud
To further maximize the growing opportunities in the travel payments space, Outpayce R&D is recruiting!

What job do we have to offer? With the public cloud migration and the ambition to develop an open payment platform, we are moving towards cloud-native architecture and technology based on a microservices approach, embracing the latest Amadeus recommendations, such as:
- Private and public cloud with OpenShift, Quarkus, Kubernetes and Docker (S-Kube, the Amadeus framework for writing cloud-native applications based on microservices)
- Mixed Event-Driven and Service-Oriented Architecture
- Kafka streaming technology
- Open API with REST JSON over HTTPS
- A full CI/CD stack (based on Jenkins, Helm, etc.) to ensure seamless code deployment and validation from the developer machine to the actual end-to-end platforms
As a Product Definition Analyst (PDA), you will join a newly created Agile Release Train, the "Orion Train", whose goal is to build an issuing platform for virtual credit cards used by our travel agency customers to pay their providers. This involves designing, developing, deploying and maintaining cloud-native applications composed of microservices, deployed on an open platform. You'll be part of a Scrum team located in Bangalore (BLR). As you've read, you'll have the chance to be at the START of a big project!

Main accountabilities: By joining our team, you will have a unique opportunity to participate in the definition of a strategic evolution of the Outpayce offer. Your deliverables and support will be key to the success of the Outpayce open platform.
- Be involved in the global open-platform definition, and more particularly in some key components, developing new features and technical enhancements with a high degree of quality.
- Assess requirements: Build, maintain and share functional knowledge of our processes, services and usage of the payment open platform. Analyse business requirements submitted by Product Management in order to define the best functional solution. Size specification and validation work.
- Carry out functional design: Write Feasibility Studies, Solution Overview Documents, Interface Control Documents, Product Specifications and Sequence Diagrams, and present functional walk-throughs to all concerned stakeholders. Interface with relevant divisions and departments to identify interactions with other Amadeus applications and ensure functional compatibility.
- Manage relations with key stakeholders: Interface and communicate with Product Management, Architects, Project Management and Development teams (composed of PDAs, DEVs and QAEs).

We are looking for...
- A post-secondary degree in Computer Science or a related technical field, or equivalent experience
- Experience in the travel IT and/or payment industry (retail payments, card payments, banking) is a plus
- Retail payments (cards, mobile banking, etc.): processors, ISO 8583 messages, 3DS
- Banking experience (CBS: onboarding, account management, payments, etc.)
- Familiarity with PCI DSS compliance
- API design involvement
- Computing: the ability to synthesize thoughts and flows (e.g. UML modelling); in-depth REST/JSON API design knowledge; SOAP/XML knowledge; detailed functional specification writing (error management, interface mapping, flows)
- Languages: fluent English
- Soft skills: team spirit, a multicultural approach, good communication skills; experience in an Agile (Scrum) environment, SAFe is a plus; analytical and conceptual thinking is a must, and customer focus too; microservices knowledge; a T-shaped mindset is a plus; experience in scripting (Python) is a bonus

Final word? Outpayce R&D is at the crossroads between Amadeus and travel players, with promising opportunities for growth and exciting technology challenges ahead of us. Combined with a fun and collaborative working environment, it's the place to be!
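Since the role calls for in-depth REST/JSON API design and detailed error-management specifications, a tiny illustrative model may help. All field names and the error code here are invented for illustration and are not Amadeus conventions:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ApiError:
    """Hypothetical REST error body: HTTP status, machine-readable
    code, human-readable message, and the offending request field."""
    status: int
    code: str
    message: str
    field: Optional[str] = None  # omitted when the error is not field-specific

def to_json(err: ApiError) -> str:
    """Serialise the error for a JSON-over-HTTPS response."""
    return json.dumps(asdict(err))

payload = to_json(ApiError(400, "INVALID_CARD", "Card number failed validation", "cardNumber"))
print(payload)
```

Pinning down one canonical error shape like this, in the functional specification, is what lets every microservice on the platform report failures consistently.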
Diversity & Inclusion: Amadeus aspires to be a leader in Diversity, Equity and Inclusion in the tech industry, enabling every employee to reach their full potential by fostering a culture of belonging and fair treatment, attracting the best talent from all backgrounds, and acting as a role model for an inclusive employee experience. Amadeus is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to gender, race, ethnicity, sexual orientation, age, beliefs, disability or any other characteristics protected by law.

Posted 2 weeks ago

Apply

2.0 years

5 - 9 Lacs

Bengaluru

On-site

AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do: powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

Your Impact: We are looking for a strong senior QA candidate who will be responsible for product quality. He/she will be involved in understanding the business requirements and the software design, and will work with team members to formulate and develop effective test strategies and test suites and automate them. He/she will own and lead functional and non-functional testing, proactively evaluate quality at various stages, perform root-cause analysis for customer bugs, and take measures to improve product quality. He/she should be well versed in industry trends, cater to customer needs, ensure that deliverables are of high quality, and interact with Documentation and Technical Support to take the product to customers.

What the role offers:
- Use-case identification: Understand the requirements and formulate customer use cases that can serve as a reference during implementation and testing of features.
- Test strategy development: Analyze the test requirements and build strategies to ensure proper test coverage and quality at various stages of the product.
- Test case design: Design test cases based on the use cases, considering various quality aspects.
- Test automation: Using an appropriate automation framework, write code to automate all test cases, both functional and non-functional. Embrace DevOps and CI/CD.
- Test execution: Lead and perform in-depth, thorough testing of the owned features/areas. Maintain the test cases and test results in the test management system, and the defects in the defect management system. Plan and realize the test environments, platforms, test cycles and regression cycles based on the strategy and quality evaluation at different stages of the product.
- Quality evaluation: Monitor the quality of the product at various stages and re-plan execution strategies to ensure release quality.
- Customer support: Support the reproduction of customer issues, the qualification of fixes, and responses to customer queries. Analyze customer defects and implement improvements to the test environment, testing strategy and test cases.
- Technical expertise: Develop knowledge of the overall product, deployment scenarios, and specific areas of the domain technology. Keep up to date on the latest testing and domain technologies, test-coverage methodologies and quality metrics. Develop skills in automation and testing tools.

What you need to succeed:
- BE/BTech in Electronics/Communication/Computer Engineering with 2+ years of experience in software testing activities.
- Strong knowledge of Microsoft Windows OS internals and Microsoft technologies: Entra ID, Active Directory, Windows Hello for Business, etc.
- Experience in automating test cases using Playwright, Cypress, Selenium or equivalent.
- Very good experience in testing standalone and web-based applications and security/authentication applications.
- Good GUI and user-oriented testing skills.
- Exposure to load and performance testing.
- Knowledge of cloud technologies such as Kubernetes, Helm and Terraform is an added advantage.
- Excellent communication and problem-solving skills, and the ability to provide technical guidance to junior members.
- Dynamic and confident, with a hands-on, "can-do" approach.
- Willingness to own and be accountable for subjects within the scope of the role; energetic, passionate about being successful, and open to new ideas and technologies.
- Personable: able to get on with many different types of people and organizations, with the ability to build excellent, meaningful relationships based on trust and respect.
- High integrity: makes and keeps commitments.
- Excellent time management and organizational skills; high attention to detail; self-motivated, creative and flexible.
- A good problem solver, able to identify key issues and barriers to success and resolve them.

One last thing: OpenText is a leading Cloud and AI company that provides organizations around the world with a comprehensive suite of Business AI, Business Clouds, and Business Technology. We help organizations grow, innovate, become more efficient and effective, and do so in a trusted and secure way, through Information Management. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
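Table-driven test-case design, as described under test case design above, can be sketched in a few lines. The function under test and its cases are invented purely for illustration:

```python
def is_strong_password(pw: str) -> bool:
    """Toy function under test: length, digit and mixed-case checks."""
    return (len(pw) >= 8
            and any(c.isdigit() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.islower() for c in pw))

# Table-driven cases derived from hypothetical use cases: (input, expected).
CASES = [
    ("Secur3pass", True),     # meets all rules
    ("short1A", False),       # too short
    ("nouppercase1", False),  # missing an uppercase letter
    ("NODIGITSHERE", False),  # missing a digit and a lowercase letter
]

def run_cases() -> bool:
    """Execute every case and report whether all of them passed."""
    return all(is_strong_password(pw) == expected for pw, expected in CASES)

print(run_cases())  # → True
```

The same table shape plugs directly into pytest's parametrize decorator, which is one common way such suites are automated in CI.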

Posted 2 weeks ago

Apply

0 years

3 - 4 Lacs

Bengaluru

On-site

Join our Team

Ericsson Overview: Ericsson is a world-leading provider of telecommunications equipment and services to mobile and fixed network operators. Over 1,000 networks in more than 180 countries use Ericsson equipment, and more than 40 percent of the world's mobile traffic passes through Ericsson networks. Using innovation to empower people, business and society, we are working towards the Networked Society, in which everything that can benefit from a connection will have one. At Ericsson, we apply our innovation to market-based solutions that empower people and society to help shape a more sustainable world. We are truly a global company, working across borders in 175 countries, offering a diverse, performance-driven culture and an innovative and engaging environment where employees enhance their potential every day. Our employees live our vision, core values and guiding principles. They share a passion to win and a high responsiveness to customer needs, which in turn makes us a desirable partner to our clients. To ensure professional growth, Ericsson offers a stimulating work experience and continuous learning and growth opportunities that allow you to acquire the knowledge and skills necessary to reach your career goals.

Job Summary: We are looking for a skilled OpenShift Engineer to design, implement, and manage enterprise container platforms using Red Hat OpenShift. The ideal candidate will have expertise in Kubernetes, DevOps practices, and cloud-native technologies to ensure scalable, secure, and high-performance deployments.

Key Responsibilities:
✅ OpenShift Platform Management: Deploy, configure, and manage OpenShift clusters (on-premises and cloud). Maintain cluster health, performance, and security. Troubleshoot and resolve issues related to OpenShift and Kubernetes. Integrate OpenShift with DevSecOps tools for security and compliance.
✅ Containerization & Orchestration: Develop and maintain containerized applications using Docker and Kubernetes. Implement best practices for Pods, Deployments, Services, ConfigMaps, and Secrets. Optimize resource utilization and auto-scaling strategies.
✅ Cloud & Hybrid Deployments: Deploy OpenShift clusters on AWS, Azure, or Google Cloud. Configure networking, ingress, and load balancing in OpenShift environments. Manage multi-cluster and hybrid cloud environments.
✅ Security & Compliance: Implement RBAC, network policies, and pod security best practices. Monitor and secure container images using Red Hat Quay, Clair, or Aqua Security. Enforce OpenShift policies for compliance with enterprise standards.
✅ Monitoring & Logging: Set up monitoring tools like Prometheus, Grafana, and OpenShift Monitoring. Configure centralized logging using ELK (Elasticsearch, Logstash, Kibana) or Loki. Analyze performance metrics and optimize OpenShift workloads.

Required Skills & Qualifications:
✅ Technical Expertise: Strong hands-on experience with Red Hat OpenShift (OCP 4.x+). Proficiency in Kubernetes, Docker, and Helm charts. Experience with cloud platforms (AWS, Azure, GCP) for OpenShift deployments. Strong scripting skills in Bash and Python. Understanding of GitOps tools like ArgoCD or FluxCD.
✅ Certifications (preferred but not mandatory): Red Hat Certified Specialist in OpenShift Administration (EX280); Certified Kubernetes Administrator (CKA); AWS/Azure/GCP certifications related to Kubernetes/OpenShift
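The resource-utilization best practice listed above (explicit requests and limits on containers) can be sketched offline. The image, names, and sizing below are hypothetical placeholders; Kubernetes accepts JSON manifests interchangeably with YAML:

```python
import json

def deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Minimal Deployment manifest with explicit resource requests
    and limits, illustrating the right-sizing practice described."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "resources": {
                            # Requests drive scheduling; limits cap usage.
                            "requests": {"cpu": "100m", "memory": "128Mi"},
                            "limits": {"cpu": "500m", "memory": "256Mi"},
                        },
                    }]
                },
            },
        },
    }

print(json.dumps(deployment("demo-api", "registry.example.com/demo-api:1.0"), indent=2))
```

On OpenShift the same shape would typically be templated through a Helm chart rather than generated inline; the sketch just makes the requests/limits fields concrete.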

Posted 2 weeks ago

Apply

8.0 years

18 Lacs

India

On-site

We are currently seeking a technically proficient and hands-on DevOps Manager/Team Lead to spearhead our DevOps initiatives across a diverse portfolio of applications, encompassing both modern and legacy systems. This includes platforms such as Odoo (Python), Magento (PHP), Node.js, and other web-based applications. The ideal candidate will bring significant expertise in continuous integration/continuous delivery (CI/CD), automation, containerization, and cloud infrastructure (AWS, Azure, GCP). We highly value candidates holding relevant professional certifications.

In this role, your key responsibilities will include:
* Leading, mentoring, and fostering the growth of our DevOps engineers.
* Overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), Node.js (JavaScript/TypeScript), and other LAMP/LEMP stack applications.
* Designing and managing CI/CD pipelines tailored to each application, utilizing tools like Jenkins, GitHub Actions, and GitLab CI.
* Managing environment-specific configurations for staging, production, and QA environments.
* Implementing containerization for both legacy and modern applications using Docker, and orchestrating deployments with Kubernetes (EKS/AKS/GKE) or Docker Swarm.
* Establishing and maintaining Infrastructure as Code practices using Terraform, Ansible, or CloudFormation.
* Implementing and maintaining application and infrastructure monitoring solutions using tools like Prometheus, Grafana, ELK, or Datadog.
* Ensuring the security, resilience, and compliance of our systems with industry standards.
* Optimizing cloud costs and infrastructure performance.
* Collaborating effectively with development, QA, and IT support teams to ensure seamless delivery processes.
* Troubleshooting performance, deployment, and scaling challenges across various technology stacks.

We are looking for someone with the following essential skills:
* Over 8 years of hands-on experience in DevOps, Cloud, or Systems Engineering roles.
* At least 2 years of experience managing or leading DevOps teams.
* Proven experience supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL; Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB; and Node.js with PM2/Nginx or containerized setups.
* Solid experience with AWS, Azure, or GCP infrastructure in production environments.
* Strong scripting abilities in Bash, Python, PHP CLI, or Node CLI.
* A deep understanding of Linux system administration and networking fundamentals.
* Experience with Git, SSH, reverse proxies (Nginx), and load balancers.
* Excellent communication skills, including experience managing client interactions.

While not mandatory, the following certifications are highly valued:
* AWS Certified DevOps Engineer – Professional
* Azure DevOps Engineer Expert
* Google Cloud Professional DevOps Engineer
* Bonus: Magento Cloud DevOps or Odoo deployment experience

Additionally, the following skills would be a valuable asset:
* Experience with multi-region failover, high-availability (HA) clusters, or Recovery Point Objective (RPO)/Recovery Time Objective (RTO)-based design.
* Familiarity with MySQL/PostgreSQL optimization and message brokers like Redis, RabbitMQ, or Celery.
* Previous experience with GitOps practices and tools like ArgoCD, Helm, or Ansible Tower.
* Knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.

Thank you for your time and consideration.
Job Type: Full-time
Pay: Up to ₹1,800,000.00 per year
Work Location: In person
Speak with the employer: +91 8861265053
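Managing environment-specific configurations for staging, production, and QA, as the responsibilities above describe, often amounts to layering per-environment overrides on a base config. The keys and values in this sketch are hypothetical:

```python
import copy

# Hypothetical base configuration shared by every environment.
BASE = {"replicas": 1, "log_level": "info", "db": {"pool": 5}}

# Per-environment overrides; anything not listed falls back to BASE.
OVERRIDES = {
    "staging": {"log_level": "debug"},
    "production": {"replicas": 3, "db": {"pool": 20}},
}

def config_for(env: str) -> dict:
    """Deep-merge an environment's overrides onto the base config,
    leaving BASE itself untouched."""
    def merge(base: dict, over: dict) -> dict:
        out = copy.deepcopy(base)
        for key, value in over.items():
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], value)  # recurse into nested sections
            else:
                out[key] = value
        return out
    return merge(BASE, OVERRIDES.get(env, {}))

print(config_for("production"))  # → {'replicas': 3, 'log_level': 'info', 'db': {'pool': 20}}
```

This is the same layering idea behind Helm's values files and Kustomize overlays: one base, thin environment-specific deltas on top.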

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

About the Role: We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities
- Take full ownership of the migration process to AWS, including planning and execution.
- Work closely with architects to define the best approach for migrating applications into Amazon EKS.
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
- Create and manage Helm charts for Kubernetes deployments across multiple environments.
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.

Required Skills & Expertise
- CI/CD & DevOps Tools: Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory; shell scripting; Python (with YAML/JSON handling); Git and version-control best practices
- Containers & Kubernetes: Docker (multi-stage builds, non-root containers, troubleshooting); Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)
- Cloud Infrastructure (AWS): EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers; experience with IAM roles, workload identities, and secure AWS access patterns; network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing
- Databases: PostgreSQL (pg_dump/pg_restore, user management, RDS troubleshooting)
- Web & Security Concepts: NGINX, web servers, reverse proxies, path-based/host-based routing; session handling; load balancing (stateful vs stateless); security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations
The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations.
- Demonstrate complex scripting solutions and data-processing tasks in Bash and Python.
- Describe container lifecycle, troubleshooting steps, and security-hardening practices.
- Detail Kubernetes architecture, Helm chart design, and access-control configurations.
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications
- AWS Certified DevOps Engineer or equivalent certifications.
- Experience in FinTech, SaaS, or other regulated industries.
- Knowledge of cost-optimization strategies in cloud environments.
- Familiarity with Agile/Scrum methodologies.
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.

Posted 2 weeks ago

Apply

0 years

3 - 4 Lacs

Noida

On-site

Join our Team

Ericsson Overview: Ericsson is a world-leading provider of telecommunications equipment and services to mobile and fixed network operators. Over 1,000 networks in more than 180 countries use Ericsson equipment, and more than 40 percent of the world's mobile traffic passes through Ericsson networks. Using innovation to empower people, business and society, we are working towards the Networked Society, in which everything that can benefit from a connection will have one. At Ericsson, we apply our innovation to market-based solutions that empower people and society to help shape a more sustainable world. We are truly a global company, working across borders in 175 countries, offering a diverse, performance-driven culture and an innovative and engaging environment where employees enhance their potential every day. Our employees live our vision, core values and guiding principles. They share a passion to win and a high responsiveness to customer needs, which in turn makes us a desirable partner to our clients. To ensure professional growth, Ericsson offers a stimulating work experience and continuous learning and growth opportunities that allow you to acquire the knowledge and skills necessary to reach your career goals.

Job Summary: We are looking for a skilled OpenShift Engineer to design, implement, and manage enterprise container platforms using Red Hat OpenShift. The ideal candidate will have expertise in Kubernetes, DevOps practices, and cloud-native technologies to ensure scalable, secure, and high-performance deployments.

Key Responsibilities:
✅ OpenShift Platform Management: Deploy, configure, and manage OpenShift clusters (on-premises and cloud). Maintain cluster health, performance, and security. Troubleshoot and resolve issues related to OpenShift and Kubernetes. Integrate OpenShift with DevSecOps tools for security and compliance.
✅ Containerization & Orchestration: Develop and maintain containerized applications using Docker and Kubernetes. Implement best practices for Pods, Deployments, Services, ConfigMaps, and Secrets. Optimize resource utilization and auto-scaling strategies.
✅ Cloud & Hybrid Deployments: Deploy OpenShift clusters on AWS, Azure, or Google Cloud. Configure networking, ingress, and load balancing in OpenShift environments. Manage multi-cluster and hybrid cloud environments.
✅ Security & Compliance: Implement RBAC, network policies, and pod security best practices. Monitor and secure container images using Red Hat Quay, Clair, or Aqua Security. Enforce OpenShift policies for compliance with enterprise standards.
✅ Monitoring & Logging: Set up monitoring tools like Prometheus, Grafana, and OpenShift Monitoring. Configure centralized logging using ELK (Elasticsearch, Logstash, Kibana) or Loki. Analyze performance metrics and optimize OpenShift workloads.

Required Skills & Qualifications:
✅ Technical Expertise: Strong hands-on experience with Red Hat OpenShift (OCP 4.x+). Proficiency in Kubernetes, Docker, and Helm charts. Experience with cloud platforms (AWS, Azure, GCP) for OpenShift deployments. Strong scripting skills in Bash and Python. Understanding of GitOps tools like ArgoCD or FluxCD.
✅ Certifications (preferred but not mandatory): Red Hat Certified Specialist in OpenShift Administration (EX280); Certified Kubernetes Administrator (CKA); AWS/Azure/GCP certifications related to Kubernetes/OpenShift

Posted 2 weeks ago

Apply

2.0 - 3.0 years

2 - 3 Lacs

India

On-site

** Company Name: Klizo Solutions Pvt. Ltd.
** Experience Required: 2 - 3 years
** Location: Astra Tower, Newtown, Akanksha More (near City Centre 2)
** No. of vacancies: 1
** Job Type: In office, full-time
** Shift timing: Needs to be flexible (drop service is available for shifts ending after 11 PM)
** Working Days: Monday to Friday (5 days)
** Week Off: Saturday & Sunday (fixed off)
** Salary: Open to discussion (based on current salary, experience, and interview performance)

Love developing, designing, implementing, and managing engineering infrastructure using DevOps methodologies? Have a knack for setting up tools and the required infrastructure, planning activities, team structure, and project management activities? Then apply with us!

Job Description: We are looking for FULL-TIME ON-SITE DevOps engineers who can combine an understanding of both coding and engineering to oversee code releases, close the gap between the actions required for quickly changing applications, reduce complexity, and perform the tasks needed to maintain reliability. As a DevOps engineer, you will be responsible for introducing processes, methodologies, and tools to balance needs throughout the software development life cycle, and for taking care of coding, deployment, updates, and maintenance. Do you think that, as a DevOps engineer, you can boost workplace productivity by overseeing code releases, creating and implementing system software, analyzing data, and being instrumental in application management, maintenance, and code integration? Then Klizo Solutions can be a wonderful place to take your career in the right direction!

** Expected Responsibilities:
- Manage and maintain cloud (AWS) and on-premise infrastructure
- Develop concepts and tools for managing infrastructure, performing continuous delivery, and optimizing monitoring
- Assist in analyzing requirements and in designing and building deployment architecture for microservice-based and IoT applications
- Design and implement concepts to monitor our applications to ensure performance, security, availability, and scalability
- Support the planning and implementation of continuous improvement of existing systems
- Manage customer communication in case of failures, change requests, or new releases
- Work with the following technologies: Docker, Kubernetes, Helm, Elasticsearch, MongoDB Atlas
- Work on different aspects of DevOps: automation, Infrastructure as Code, configuration management, CI/CD, monitoring and logging, containers and container orchestration platforms, operating systems, source code management, and securing the platform
- Build platforms in public cloud and private data center environments

** Required Skills and Qualifications:
- Knowledge of one or more programming languages
- Ability to write code to solve problems
- Strong analysis, testing, planning, and execution skills
- A knack for problem-solving and a passion for building things
- The mindset of a self-starter and the ability to work on different technology stacks
- Ability to work on tasks with little or no oversight
- Willingness to learn new technology stacks

** Must-Haves:
- High proficiency with Git (source control management)
- 2 - 3 years of experience

** Nice to Have:
- Knowledge of the Python programming language

Interested candidates are requested to send their updated CV through indeed.com or email jobs@klizos.com to schedule an interview.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹25,000.00 per month
Benefits: Paid time off; Provident Fund
Ability to commute/relocate: New Town, Kolkata, West Bengal: Reliably commute or plan to relocate before starting work (Preferred)
Application Question(s): How many years of experience do you have in DevOps engineering? How many days of notice period do you have? What is your current salary? What is your expected salary?
Education: Bachelor's (Preferred)
Experience: AWS: 2 years (Required); CI/CD: 2 years (Required); Docker: 2 years (Required); MongoDB: 2 years (Preferred)
Shift availability: Day Shift (Required), Night Shift (Required)
Work Location: In person

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Comfort level in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.) Familiarity with the use of GitHub (clone, fetch, pull/push, raising issues and PRs, etc.) High familiarity with the use of DL theory/practices in NLP applications Comfort level coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy and Pandas Comfort level using two or more open-source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others Knowledge of fundamental text data processing (like use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.) Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI Good working knowledge of other open-source packages to benchmark and derive summaries Experience in using GPU/CPU on cloud and on-prem infrastructures Skill set to leverage the cloud platform for Data Engineering, Big Data and ML needs Use of Docker (experience with experimental Docker features, docker-compose, etc.) Familiarity with orchestration tools such as Airflow, Kubeflow Experience in CI/CD and infrastructure-as-code tools like Terraform, etc. Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc. Ability to develop APIs with compliant, ethical, secure and safe AI tooling Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc. A deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus.
Responsibilities Design NLP/LLM/GenAI applications/products by following robust coding practices Explore SoTA models/techniques so that they can be applied to automotive industry use cases Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.) Converge multiple bots into super apps using LLMs with multimodality Develop agentic workflows using AutoGen, Agent Builder, LangGraph Build modular AI/ML products that can be consumed at scale Data Engineering: skill sets to perform distributed computing (specifically parallelism and scalability in data processing, modeling and inferencing through Spark, Dask, RAPIDS AI or RAPIDS cuDF) Ability to build Python-based APIs (e.g., use of FastAPI/Flask/Django for APIs) Experience in Elasticsearch and Apache Solr is a plus, as are vector databases. Qualifications Education: Bachelor's or Master's Degree in Computer Science, Engineering, Maths or Science Having completed any modern NLP/LLM courses or open competitions is also welcomed.
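The fundamental text data processing this listing describes (regex cleanup plus token/word analysis) can be sketched in a few lines of plain Python; the function names below are illustrative, not from any particular library:

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, strip URLs and punctuation noise, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # drop punctuation/noise
    return re.sub(r"\s+", " ", text).strip()

def token_counts(text: str) -> dict:
    """Simple word-frequency analysis over the cleaned text."""
    counts = {}
    for tok in clean_text(text).split():
        counts[tok] = counts.get(tok, 0) + 1
    return counts
```

In practice the same idea is usually handed to SpaCy or a tokenizer, but a sketch like this is what "token/word analysis" boils down to.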

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

PLEASE READ THE REQUIREMENTS IN DETAIL TO APPLY: If you think you are a go-getter, can demonstrate all the requirements to be the backbone of a growth-led company in media, are fast-paced, can align multiple tasks within short deadlines alongside attention to detail, and can be the Donna to Harvey (if you know), then please apply; we would want to welcome you to our ever-growing team! About the ROLE: Act as the primary contact and backbone for the CEO, managing correspondence and schedule. Coordinate schedules, meetings, and appointments, while organising any requirements for the same. Handle travel arrangements, expense reports, and crucial deadlines. Prepare reports, presentations, and documentation aligned to the business. Implement administrative procedures. Ordering, storing and distributing office supplies. Maintaining & repairing office equipment. Helm HR responsibilities on a need basis – conducting telephonic interviews, scheduling further rounds and operating the HR portal for salary pay-outs. Attending meetings, generating reports from them, circulating these to the HODs and sending timely reminders in pursuit of the accomplishment of jobs. Assisting in maintaining deliverables from the various teams' HODs for swift implementation of day-to-day activities. Keeping regular follow-ups on day-to-day and other important tasks. Presentation skills, as well as an interest in technology or cloud. Excellent verbal and written communication skills, including the ability to effectively communicate with internal and external customers.
Responsibilities Calendar management for executive Maintain decorum with the office team Aid executive in preparing for meetings Responding to emails and document requests on behalf of executives Draft slides, meeting notes and documents for executives Job Type: Full-time Salary: ₹35,000.00 to ₹55,000.00 / month Timing: 11:00am to 08:00pm - Days: Mon to Fri (1 Saturday a month working half day) Qualifications Bachelor's degree or equivalent experience Proficient in Microsoft Office suite Experience in managing multiple priorities, administrative coordination, and logistics Well-organized, detail-oriented, ability to multi-task with great follow-up skills Strong written and verbal communication skills

Posted 2 weeks ago

Apply

3.0 - 5.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Job Summary As an Interop Test Engineer, you will work as part of a team responsible for interop testing of NetApp software solutions' interoperability with other hardware and software solutions. You will test storage and data management services deployed as containers in Kubernetes or on native virtual machines, and work with various public and private cloud providers, with a high focus on customer requirements and product quality while meeting project timelines. As part of your work, you will collaborate with partner teams, engineers and developers to understand test requirements, execute the required testing, and automate the test cases, as well as debug and troubleshoot issues seen during setup and testing. Job Requirements • Should have relevant experience working on storage or host OS interoperability testing • Should be familiar with server hardware and operating systems • Should have skills for deploying and troubleshooting OS on servers (OS install, driver and firmware upgrades, etc.) • Should have skills in deploying and troubleshooting Kubernetes environments (vanilla Kubernetes / OpenShift / Anthos / Tanzu / Rancher) • Should have skills in managing, administering and troubleshooting cloud environments (AWS, GCP and Azure) • Experience working with Python for test automation or development • Strong oral and written communication skills • Ability to work collaboratively within a team to meet aggressive goals and high quality standards • Strong aptitude for learning new technologies Nice to have skills: • Experience with REST APIs, Ansible, Terraform, Helm, Golang • OS administration skills (Linux, Windows, ESXi) • Familiarity with AWS, Azure or Google Cloud compute.
• Working experience configuring and troubleshooting NAS/SAN environments • Relevant experience working on storage or host OS interoperability testing • Familiarity with server hardware and operating systems • Experience with ONTAP, CVS, or other NetApp products, or cloud-native storage platforms, would be a plus. Education • Bachelor's or Master's in Computer Science Engineering with 2-4 years of relevant work experience
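Python test automation for interop setups like the ones above frequently has to poll hardware or cluster state until it settles. A hedged sketch of such a polling helper (the name and signature are ours, not from any NetApp tooling):

```python
import time

def wait_until(check, timeout=5.0, interval=0.1):
    """Poll `check` until it returns a truthy value or the timeout elapses.

    Useful in setup/teardown of interop tests, e.g. waiting for a driver
    to load or a Kubernetes pod to report Ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

A real suite would wrap this around `kubectl`/REST status calls; the retry-with-deadline shape stays the same.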

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Acts as liaison between development teams and platform (PaaS) teams to translate requirements into technical tasks or support requests Uses coding languages or scripting methodologies to solve problems with custom workflows Documents problems, articulates solutions or workarounds Acts as a key contributor on given projects to brainstorm about the best way to tackle a complex technological infrastructure, security or development problem Learns methodologies to perform incremental testing actions on code using a test-driven approach where possible (TDD) Oral and written communication skills with a keen sense of customer service Problem-solving and troubleshooting skills Process-oriented with great documentation skills Knowledge of best practices in a micro-service architecture in an always-up, always-available service Experience with or knowledge of Agile software development methodologies Knowledge of security best practices in a containerized or cloud-based architecture Familiarity with event-driven architecture and related concepts Experience Familiarity with container orchestration services, preferably Kubernetes Competency with container runtimes like Docker, CRI-O, Mesos, rkt (CoreOS) Working knowledge of Kubernetes templating tools such as Helm or Kustomize Proficiency in infrastructure scripting/templating solutions such as Bash, Go, Python Demonstrated experience with infrastructure-as-code tools such as Terraform, CloudFormation, Chef, Puppet, SaltStack, Ansible or equivalent Competency administering and deploying development lifecycle tooling such as Git, Jira, GitLab, CircleCI or Jenkins Knowledge of logging and monitoring tools such as Splunk, Logz.io, Prometheus, Grafana or full suites of tools like Datadog or New Relic Significant experience with multiple Linux operating systems on both virtual and containerized platforms.
Experience with infrastructure-as-code principles utilizing GitOps Experience with secrets management tools such as Vault, AWS Secrets Manager, Azure Key Vault or equivalent
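As a rough illustration of the idea behind the Kubernetes templating tools named above (Helm, Kustomize), parameter substitution into a manifest can be sketched with Python's string.Template; the manifest and names here are hypothetical:

```python
from string import Template

# Hypothetical manifest template; Helm would use Go templates instead.
MANIFEST = Template("""\
apiVersion: v1
kind: ConfigMap
metadata:
  name: $app-config
  namespace: $namespace
""")

def render(app: str, namespace: str) -> str:
    """Render the manifest; substitute() raises if a placeholder is unfilled,
    which is the fail-fast behavior you want before applying to a cluster."""
    return MANIFEST.substitute(app=app, namespace=namespace)
```

Helm adds values files, chart packaging, and release tracking on top, but the core operation is this per-environment substitution.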

Posted 2 weeks ago

Apply

5.0 - 8.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Your Impact: We are now seeking a talented and motivated individual to contribute to our cloud computing and As-A-Service story in the OpenText Identity Governance space. The successful candidate will have experience as a Quality Assurance system integrator working closely with software development and information technology groups in the life cycle of cloud services. The individual will work with minimal supervision and oversight in a geographically distributed team. The ability to clearly comprehend customer needs in a cloud environment, excellent troubleshooting skills, and the ability to focus on problem resolution until completion are requirements. What the role offers: Experience in Python CI/CD pipeline knowledge API testing experience, with a thorough understanding of RESTful and SOAP APIs Selenium/Playwright Cloud & Microservices Testing: Robot Framework automation Knowledge of Kubernetes, Helm, microservices Ability to design and build automation frameworks from scratch Basic understanding of common web application vulnerabilities (OWASP Top 10). What you need to succeed: Years of experience / Type of experience: 5 to 8 years Should have very good knowledge and strong experience in Software Testing & Quality Assurance. Proficient in the Software Development Life Cycle and Methodologies. Experienced with different levels of testing - functional, regression, integration and system testing. Experienced with test automation, preferably with the Selenium tool. Experienced in testing web-based applications. Knowledge of Agile/Scrum development processes
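API testing of RESTful services, as asked for above, often starts with shape checks on response bodies. A minimal, framework-agnostic Python sketch (the helper name is ours, not from Robot Framework or any other tool):

```python
def assert_json_shape(payload: dict, required: dict) -> None:
    """Check that an API response body has the required keys with the
    expected Python types; raise AssertionError on the first mismatch."""
    for key, typ in required.items():
        if key not in payload:
            raise AssertionError(f"missing key: {key}")
        if not isinstance(payload[key], typ):
            raise AssertionError(
                f"{key}: expected {typ.__name__}, got {type(payload[key]).__name__}"
            )
```

In a real suite this would run against the parsed body of an HTTP response; richer schema validation would use a library such as jsonschema.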

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Greater Bengaluru Area

On-site

About Us: Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. Over 60% of our employees are in R&D, we are reshaping telecom networks, one innovation at a time. Why Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. 
At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry's brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we're constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you! About Team: This team is responsible for platform and software validation for the entire product portfolio. It will develop an automation framework for the entire product portfolio. The team will develop and deliver customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks. Roles & Responsibilities: Design and implement system solutions, propose process alternatives, and enhance business viewpoints to adopt standard solutions. Specify and design end-to-end solutions with high- and low-level architecture design to meet customer needs. Apply solution architecture standards, processes, and principles to maintain solution integrity, ensuring compliance with client requirements. Develop full-scope solutions, working across organizations to achieve operational success. Research, design, plan, develop, and evaluate effective solutions in specialized domains to meet customer requirements and outcomes.
Solve complex technical challenges and develop innovative solutions that impact business performance. Mandatory skills: Around 3 to 6 years Strong expertise in Cloud-Native, Microservices, and Virtualization technologies such as Docker, Kubernetes, OpenShift, and VMware. Experience in Istio or NGINX Ingress, load balancers, OVS, SR-IOV, DPDK, etc. Hands-on experience in creating Kubernetes clusters, virtual machines, virtual networks & bridges on bare-metal servers. Expertise in server virtualization techniques such as VMware, Red Hat OpenStack, KVM. Solid understanding of cloud concepts, including virtualization, hypervisors, networking, and storage. Knowledge of software development methodologies, build tools, and product lifecycle management. Experience in creating and updating Helm charts for carrier-grade deployments. Deep understanding of IP networking in both physical and virtual environments. Implementation of high availability, scalability, and disaster recovery measures. Proficiency in Python/Shell scripting (preferred). Experience in automation scripting using Ansible and Python for tasks such as provisioning, monitoring, and configuration management. Desired skills: Ability to debug applications and infrastructure to ensure low latency and high availability. Collaboration with cross-functional teams to resolve escalated incidents and ensure seamless operations on deployed cloud platforms. Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Certifications in Kubernetes (CKA/CKS) or OpenShift are a plus. Experience working in 5G Core networks or telecom industry solutions is advantageous. Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees.
We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Company Description CodeChavo is a global digital transformation solutions provider, working closely with top technology companies to make a significant impact through transformation. Driven by technology, inspired by people, and led by purpose, CodeChavo partners with clients from design to operation. With a focus on embedding innovation and agility, CodeChavo brings deep domain expertise and a future-proof philosophy to its clients' organizations. We help companies outsource their digital projects and build quality tech teams. Join our Product & Engineering team as a DevOps Engineer and drive scalable infrastructure, automation, and observability in a high-growth environment. We're seeking a DevOps expert with 4–6 years of experience in cloud-native environments who can build and maintain robust deployment pipelines, Kubernetes clusters, and infrastructure as code. Work Mode - Work from office, alternate Saturdays off. Key Responsibilities: Design and manage scalable infrastructure on AWS. Build and maintain Kubernetes clusters (EKS/self-managed). Create modular and secure Terraform IaC. Containerise apps with Docker, optimise images. Set up and maintain CI/CD pipelines (Jenkins, GitHub Actions, etc.). Implement SRE practices including SLIs/SLOs, alerts, and postmortems. Manage observability using Prometheus, Grafana, ELK, Datadog. Automate ops tasks using Python, Bash, etc. Optimise infra for cost, performance, and resilience. Work with developers to enable secure DevOps workflows.
Must-Have Technical Skills: Kubernetes (EKS, Helm, Operators) Docker & Docker Compose Terraform (modular setups, state management) AWS (EC2, VPC, IAM, S3, RDS, ECS/EKS) Linux system admin CI/CD: Jenkins, GitHub Actions, GitLab CI Monitoring & Logging: ELK, Prometheus, Grafana SRE principles and High Availability setups Good-to-Have: Exposure to GCP or Azure Service Mesh (Istio/Linkerd) Secrets Management (Vault, AWS Secrets Manager) Infra/Container Security Chaos Engineering basics Networking (DNS, VPNs, Firewalls) What We're Looking For: Calm under pressure with strong debugging skills Great documentation and communication Passion for automation and DevSecOps Ownership mindset, team collaborator
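The SRE practices this role lists (SLIs/SLOs, alerts, postmortems) rest on error-budget arithmetic. A minimal sketch for a request-based SLO, with assumed inputs rather than any particular monitoring stack:

```python
def error_budget_remaining(slo: float, total: int, errors: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLO.

    slo:    availability target, e.g. 0.999
    total:  total requests in the window
    errors: failed requests in the window
    """
    allowed = (1.0 - slo) * total          # requests the SLO lets us fail
    if allowed == 0:
        return 0.0 if errors else 1.0
    return max(0.0, 1.0 - errors / allowed)
```

For a 99.9% SLO over one million requests, 500 failures spend half the budget; alerts are then typically set on the burn rate of this quantity.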

Posted 2 weeks ago

Apply

16.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. At TechBlocks, we believe technology is only as powerful as the people behind it. We foster a culture of collaboration, creativity, and continuous learning, where big ideas turn into real impact. Whether you're building seamless digital experiences, optimizing enterprise platforms, or tackling complex integrations, you'll be part of a dynamic, fast-moving team that values innovation and ownership. The Role As DevOps Manager, you will be responsible for leading the DevOps function—hands-on in technology while managing people, process improvements, and automation strategy. You will set the vision for DevOps practices at Techblocks India and drive cross-team efficiency. Roles And Responsibilities Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices. Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations. Champion DevSecOps practices by embedding security controls throughout the pipeline—ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement. Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies. Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices.
Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes. Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards. Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits. Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships. Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions. Foster a culture of continuous learning, innovation, and ownership—driving internal tech talks, hackathons, and community engagement. Ideal Profile Mandatory Technical Knowledge and Skills: Cloud: GCP (complete stack from IAM to GKE) CI/CD: End-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD) IaC: Terraform, Helm Containers: Docker, Kubernetes DevSecOps: Vault, Trivy, OWASP Experience Required: 12+ years total experience, with 3–5 years in DevOps leadership roles. What's on Offer? Excellent career development opportunities
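One of the DevOps metrics named above, lead time, is computed from commit/deploy timestamp pairs. A small illustrative sketch (the data shape is an assumption, not TechBlocks tooling):

```python
from datetime import datetime

def mean_lead_time_hours(pairs) -> float:
    """Average commit-to-deploy lead time in hours, DORA-style.

    `pairs` is an iterable of (commit_time, deploy_time) datetime tuples.
    """
    pairs = list(pairs)
    if not pairs:
        raise ValueError("no deployments in window")
    total = sum((dep - com).total_seconds() for com, dep in pairs)
    return total / len(pairs) / 3600.0
```

In a dashboard this would be fed from the CI system's API and trended per week alongside deployment frequency and change failure rate.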

Posted 2 weeks ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

New Town, Kolkata, West Bengal

On-site

** Company Name: Klizo Solutions Pvt. Ltd. ** Experience Required: 2-3 years ** Location: Astra Tower, Newtown, Akanksha More (Near City Centre 2). ** No. of vacancies: 1 ** Job Type: IN OFFICE, Full-time ** Shift timing: Needs to be flexible (Drop service is available for shifts ending after 11 PM) ** Working Days: Monday to Friday (5 Days) ** Week Off: Saturday & Sunday (Fixed off) ** Salary: Open to discuss (based on current salary, experience, and interview performance) Love developing, designing, implementing, and managing engineering infrastructure, using DevOps methodologies? Have a knack for setting up tools and required infrastructure, planning activities, team structure, and project management activities? Then apply with us! Job Description: We are looking for FULL-TIME ON-SITE DevOps engineers who can combine an understanding of both coding and engineering to oversee code releases, close the gap between actions required for quickly changing applications, reduce complexity and perform the tasks needed to maintain their reliability. As a DevOps engineer, you will be responsible for introducing processes, methodologies, and tools to balance needs throughout the life-cycle of software development and take care of its coding, deployment, updates, and maintenance. Do you think, as a DevOps engineer, you can boost workplace productivity by overseeing code releases, creating and implementing system software, analyzing data, and being instrumental in application management, maintenance, and combining code? Then Klizo Solutions can be a wonderful place to take your career in the right direction!
**Expected Responsibilities: Manage and maintain Cloud (AWS) and on-premise infrastructure Develop concepts and tools for managing infrastructure, performing continuous delivery, and optimizing monitoring Assist in analyzing requirements, designing, and building deployment architecture for micro-service-based and IoT applications Design and implement concepts to monitor our applications to ensure performance, security, availability, and scalability Support the planning and implementation of continuous improvement of existing systems Manage customer communication in case of failures, change requests, or new releases Work with the following technologies: Docker, Kubernetes, Helm, Elasticsearch, MongoDB Atlas Work on different aspects of DevOps – Automation, Infrastructure as Code, Configuration Management, CI/CD, Monitoring and Logging, Containers and Container Orchestration Platforms, Operating Systems, Source Code Management, Securing the platform Build platforms in public cloud and private data center environments
**Required Skills And Qualifications: Knowledge of one or more programming languages Ability to write code to solve problems Strong analysis, testing, planning, and execution skills A knack for problem-solving and a passion for building things The mindset of a self-starter and the ability to work on different technology stacks Ability to work on tasks with little or no oversight Willingness to learn new technology stacks **Must-Haves: High proficiency with Git (source control management) 2-3 years of experience **Nice To Have: Knowledge of the Python programming language. Interested candidates are requested to send us their updated CV through indeed.com or email us at jobs@klizos.com to schedule an interview with us. Job Types: Full-time, Permanent Pay: ₹20,000.00 - ₹25,000.00 per month Benefits: Paid time off Provident Fund Ability to commute/relocate: New Town, Kolkata, West Bengal: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): How many years of experience do you have in DevOps engineering? How many days of notice period do you have? What is your current salary? What is your expected salary? Education: Bachelor's (Preferred) Experience: AWS: 2 years (Required) CI/CD: 2 years (Required) Docker: 2 years (Required) MongoDB: 2 years (Preferred) Shift availability: Day Shift (Required) Night Shift (Required) Work Location: In person

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Equifax is seeking creative, high-energy and driven Mainframe software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do Design, develop, and operate high scale applications across the full engineering stack Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage sole project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions Collaborate on scalability issues involving access to data and information. 
Actively participate in Sprint planning, Sprint Retrospectives, and other team activities What Experience You Need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years of experience writing, debugging, and troubleshooting code in mainstream PL/I, CICS, VSAM, MVS/TSO, COBOL, JCL 1+ years of experience with cloud technology: GCP, AWS, or Azure 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience with ServiceNow, TWS/OPC, IBM File Manager, IBM Debugger, IBM Fault Analyser, SCLM, use of FTP (GlobalScape) Agile environments (e.g. Scrum, XP)

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Delhi, India

On-site

Job Title: OCP Platform Architect & Automation Lead (Platform SME) Role Focus: Architecture, Lifecycle Management, Platform Governance Experience: 6+ years Location: Delhi-NCR (Onsite) Key Responsibilities: Own lifecycle management: upgrades, patching, cluster DR, backup strategy. Automate platform operations via GitOps, Ansible, Terraform. Lead SEV1 issue resolution, post-mortems, and RCA reviews. Define compliance standards: RBAC, SCCs, Network Segmentation, CIS hardening. Integrate OCP with IDPs (ArgoCD, Vault, Harbor, GitLab). Drive platform observability and performance tuning initiatives. Mentor L1/L2 team members and lead operational best practices. Core Tools & Technology Stack: Container Platform: OpenShift, Kubernetes CLI Tools: oc, kubectl, Helm, Kustomize Monitoring: Prometheus, Grafana, Thanos Logging: Fluentd, EFK Stack, Loki CI/CD: Jenkins, GitLab CI, ArgoCD, Tekton Automation: Ansible, Terraform Security: Vault, SCCs, RBAC, NetworkPolicies
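Compliance standards such as least-privilege RBAC, listed above, are ultimately expressed as Kubernetes manifests. A hedged Python sketch that builds a minimal namespaced Role as a dict (the helper name is ours; real platform automation would template this via Helm/Kustomize or apply it with `oc`):

```python
def make_role(namespace: str, name: str, resources: list, verbs: list) -> dict:
    """Minimal RBAC Role manifest granting only the listed verbs
    on the listed resources in one namespace (least privilege)."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"namespace": namespace, "name": name},
        "rules": [{"apiGroups": [""], "resources": resources, "verbs": verbs}],
    }
```

Paired with a RoleBinding per team, this is the building block behind the RBAC and SCC governance the role describes.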

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site

👩‍💻 The role:
- Kubernetes hands-on: help with working on Kubernetes clusters in the cloud and optimize workloads on the clusters; exposure to Helm charts, ArgoCD, etc.
- Automate workflows: help improve CI/CD workflows with GitHub Actions (or similar) that run tests, build Docker images, publish artifacts, handle canary releases, etc.
- Instrument services: add basic metrics, logs, and health checks to microservices; build Grafana panels that surface p95 latency and error rates
- Learn application performance tuning: pair with senior engineers to build and deploy tooling that boosts API/page performance and database efficiency, and work on fixes
- Infrastructure-as-code: contribute small Terraform modules (e.g., S3 buckets, IAM roles) under guidance, and learn the code-review process
- Documentation & demos: create quick-start guides and lightning-talk demos

🤩 What makes this role special?
- Accelerated exposure: work on problem statements spanning DevOps, application performance, observability, and more, with full-stack exposure
- Real impact: the dashboards, pipelines, and docs you ship will be used by every engineer, even after your internship ends
- Flexible: flexibility to work on different stacks, tools & platforms
- Modern toolchain: Kubernetes, Terraform, GitOps, Prometheus, etc. — the same technologies top cloud-native companies use
- Mentorship culture: you'll have a dedicated buddy, weekly 1-on-1s, and structured feedback to level up fast

💝 What skills & experience do you need?
Must-haves:
- Strong computer networks and computer science fundamentals
- Coursework or personal projects in Linux fundamentals and at least one programming language (Go, Python, Java, or TypeScript)
- Basic familiarity with Git workflows and CI systems (GitHub Actions, GitLab CI, or similar)
- Comfort running and debugging simple Docker containers locally
- Curiosity about cloud infrastructure, performance tuning, and security best practices
- Clear communication and a growth mindset: willing to ask questions, learn fast, document findings, and incorporate feedback
- High agency to fix issues proactively

Nice-to-haves:
- Experience with any cloud platform: AWS, GCP, Azure
- Exposure to Kubernetes, Terraform, and other IaC tools
- Side projects with full-stack exposure
- Participation in hackathons & open-source contributions

➕ Bonus:
- Personally interested in travel, local experiences, and hospitality
- Interested in being in a rapidly growing startup
- Anything out-of-the-box that can surprise us
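As a taste of the "instrument services" work above, here is a minimal Python sketch of the aggregation behind a p95-latency and error-rate panel. In production, Prometheus histograms and PromQL do this; the nearest-rank method below is one common percentile definition, and the sample numbers are made up:

```python
import math

def p95(latencies_ms):
    """Nearest-rank 95th percentile of a non-empty list of latencies (ms)."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank method, 1-indexed
    return ordered[rank - 1]

def error_rate(status_codes):
    """Fraction of responses with a 5xx status code."""
    errors = sum(1 for s in status_codes if s >= 500)
    return errors / len(status_codes)

# Ten request samples; the two slow outliers dominate the tail
samples = [12, 15, 11, 240, 14, 13, 16, 12, 18, 900]
print(p95(samples))                       # → 900
print(error_rate([200, 200, 500, 200]))   # → 0.25
```

Note how the p95 (900 ms) tells a very different story from the mean of the same samples, which is why tail-latency panels are preferred over averages.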

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information
Date Opened: 21/07/2025
Job Type: Permanent
Work Experience: 5+ years
Industry: IT Services
Salary: 25 LPA
City: Bangalore North
Province: Karnataka
Country: India
Postal Code: 560002

Job Description

About the Role:
We are seeking a DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
- Take full ownership of the migration process to AWS, including planning and execution
- Work closely with architects to define the best approach for migrating applications into Amazon EKS
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity)
- Create and manage Helm charts for Kubernetes deployments across multiple environments
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters

Required Skills & Expertise:
CI/CD & DevOps Tools:
- Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory
- Shell scripting, Python (with YAML/JSON handling)
- Git and version control best practices
Containers & Kubernetes:
- Docker (multi-stage builds, non-root containers, troubleshooting)
- Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm)
Cloud Infrastructure (AWS):
- AWS services: EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers
- Experience with IAM roles, workload identities, and secure AWS access patterns
- Network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing
Databases:
- PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting
Web & Security Concepts:
- NGINX, web servers, reverse proxies, path-based/host-based routing
- Session handling, load balancing (stateful vs. stateless)
- Security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies

Candidate Expectations:
The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations
- Demonstrate complex scripting solutions and data-processing tasks in Bash and Python
- Describe container lifecycle, troubleshooting steps, and security hardening practices
- Detail Kubernetes architecture, Helm chart design, and access control configurations
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certifications
- Experience in FinTech, SaaS, or other regulated industries
- Knowledge of cost-optimization strategies in cloud environments
- Familiarity with Agile/Scrum methodologies
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous
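Since the role pairs Python scripting with pg_dump/pg_restore migrations, here is a hedged sketch of how a migration step might assemble its commands. Hostnames, database names, and the custom-format choice are illustrative assumptions; a real script would also handle credentials (e.g. a `.pgpass` file or IAM authentication) and post-restore verification:

```python
def dump_command(host: str, dbname: str, outfile: str) -> list:
    # -Fc produces a custom-format archive that pg_restore can filter and parallelize
    return ["pg_dump", "-h", host, "-d", dbname, "-Fc", "-f", outfile]

def restore_command(host: str, dbname: str, infile: str, jobs: int = 4) -> list:
    # --no-owner avoids ownership errors when source and target roles differ,
    # which is common when restoring into RDS; -j parallelizes the restore
    return ["pg_restore", "-h", host, "-d", dbname, "--no-owner", "-j", str(jobs), infile]

cmd = dump_command("old-db.internal", "appdb", "appdb.dump")
print(" ".join(cmd))
# subprocess.run(cmd, check=True) would execute the dump step
```

Building the argument list instead of a shell string keeps the commands safe from quoting bugs and easy to unit-test before touching a live database.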

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Our software engineers at Fiserv bring an open and creative mindset to a global team developing mobile applications, user interfaces and much more to deliver industry-leading financial services technologies to our clients. Our talented technology team members solve challenging problems quickly and with quality. We're seeking individuals who can create frameworks, leverage developer tools, and mentor and guide other members of the team. Collaboration is key, and whether you are an expert in a legacy software system or are fluent in a variety of coding languages, you're sure to find an opportunity as a software engineer that will challenge you to perform exceptionally and deliver excellence for our clients.

Full-time | Entry, Mid, Senior | Travel: Yes (occasional), Minimal (if any)

Responsibilities
Requisition ID: R-10358264
Date posted: 07/21/2025
End Date: 08/04/2025
City: Bengaluru
State/Region: Karnataka
Country: India
Location Type: Onsite

Calling all innovators: find your future at Fiserv. We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day, quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Software Development Engineering

What does a Lead DevOps Engineer do at Fiserv? As an experienced member of our core banking Base Development Group, you will be responsible for the effective working of DevOps and CI/CD throughout the development life cycle and the delivery of our NextGen transformation project/program.

What You Will Do:
- Create infrastructure as needed, either on premises or on AWS, for the application using Terraform or similar tools
- CI: create Maven, .NET, Angular, and container image builds using CI pipelines such as GitHub Actions or Jenkins
- CD: deploy Spring Boot and .NET containerized images using Helm charts via CD tools
- Set up the end-to-end application environment
- Create/manage OpenShift clusters on premises
- Linux machine administration
- Address cybersecurity vulnerabilities in the infrastructure
- Create/maintain the monitoring system for the application
- Lead the DevOps team members in the above items
- Work with the FTS team wherever needed for infrastructure-related issues
- Identify the deployment strategy for higher resilience of the application
- Manage load balancers for high availability

What You Will Need to Have:
- Computer science or IT engineering graduate
- Linux operating system experience; ideally RedHat Certified Engineer (RHCE)
- Good knowledge of AWS infrastructure creation using Terraform or CloudFormation
- Good knowledge and hands-on experience with tools like Git, GitHub Actions/Jenkins, Maven/Ant, Nexus, Jira, Confluence, Ansible, Terraform
- Good at Maven and .NET build processes
- Kubernetes cluster creation, maintenance, and administration, especially OpenShift and EKS
- Good knowledge and hands-on experience with scripting languages like Batch and Bash; hands-on experience with Python would be a plus
- Experience in automated package management with Helm for deployment to OpenShift/Kubernetes
- Expertise in load balancers: L7 and L4
- Generation and maintenance of CA and self-signed certificates
- Knowledge of integration and unit testing and Behavior-Driven Development
- Good problem-solving skills
- Good communication skills, both written and verbal

What Would Be Great to Have:
- Experience with tools like GitHub Actions, Harness
- Good understanding of configuration management
- Financial industry and core banking integration experience

Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
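The Helm-based CD step described above can be sketched as a small Python helper that a pipeline stage might use to assemble the `helm upgrade --install` invocation. The release name, chart path, and per-environment values-file convention are assumptions for illustration:

```python
def helm_upgrade_command(release: str, chart: str, env: str, image_tag: str) -> list:
    """Build the argument list for an idempotent Helm deploy of one release."""
    return [
        "helm", "upgrade", "--install", release, chart,
        "-f", "values-{}.yaml".format(env),    # per-environment overrides
        "--set", "image.tag={}".format(image_tag),  # pin the CI-built image
        "--namespace", env,
        "--wait",                              # block until the rollout is healthy
    ]

cmd = helm_upgrade_command("payments", "./charts/payments", "staging", "1.4.2")
print(" ".join(cmd))
```

`upgrade --install` makes the step idempotent (first run installs, later runs upgrade), and `--wait` lets the pipeline fail fast if the new pods never become ready.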

Posted 2 weeks ago

Apply

11.0 years

0 Lacs

Pune, Maharashtra, India

On-site

VP – Engineering, Co-Builder Role (Equity + Profit Sharing) | Full-time | Hybrid | Pune

Company: SpeedTech.ai, Pune, Maharashtra, India (Hybrid)

Employment Type: Equity-Based / Strategic Leadership Role
Please note: This is a founder-track opportunity offering equity + profit sharing. Ideal for entrepreneurial leaders eager to co-build a global GenAI company alongside a seasoned team.

Industry:
1. Artificial Intelligence
2. SaaS
3. Information Technology & Services
4. Startups

Seniority Level: Executive / Leadership

Job Function:
1. Engineering
2. Technology Leadership
3. Product Development

About SpeedTech.ai
SpeedTech.ai is an 11-year-old GenAI product company based in India, developing globally scalable AI platforms that solve real-world business challenges. Our flagship products are gaining strong traction across the USA, Canada, UAE and India:
1. RAIYA Telephony – multilingual AI voice assistant for sales and support
2. RAIYA Concierge – enterprise productivity AI assistant
3. RAIYA IDP – Intelligent Document Processing platform
We are on a path to expand into 100+ countries and cross ₹100 Cr in revenue, with IPO plans within the next 5 years.

Role Overview
We're seeking a visionary VP – Engineering to join us as a strategic co-builder and lead the full tech stack across our GenAI platforms. You'll be at the helm of all engineering efforts, from strategy to execution, while shaping the future of AI products built for global scale. This is a Build Your Own Business (BYOB) model, where you join with equity and profit-sharing, working alongside a passionate leadership team building the next big thing in GenAI.

Key Responsibilities:
1. Define and lead the overall engineering vision, strategy, and roadmap
2. Build and scale a world-class technology team
3. Architect secure, scalable, and performance-driven AI platforms
4. Oversee end-to-end product delivery, releases, and innovation
5. Collaborate closely with CEO, product, and sales teams to align tech and business goals
6. Represent SpeedTech in client and investor conversations as required

What We're Looking For:
1. 10+ years of experience in software engineering
2. Proven leadership in AI, ML, or SaaS product environments
3. Strong foundation in cloud infrastructure, system architecture, APIs, and AI tools
4. An entrepreneurial mindset and long-term commitment to building something big
5. Passion for innovation, execution, and building from the ground up

What You'll Get:
1. Equity and profit sharing
2. Founder-track ownership through a strategic co-builder role
3. Long-term wealth creation opportunity in a high-growth GenAI company
4. A leadership seat in a company preparing for IPO within 5 years
5. Flexibility through a hybrid work model based in Pune

Location: Hybrid – Pune, India. While we offer flexibility, we value regular in-person collaboration for leadership roles.

How to Express Interest: Connect with us on LinkedIn or message directly to explore this opportunity.

Equal Opportunity: SpeedTech.ai celebrates diversity and is committed to building an inclusive, collaborative team environment.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary
As an Interop Test Engineer, you will work as part of a team responsible for testing the interoperability of NetApp software solutions with other hardware and software solutions. You will test storage and data management services to be deployed as containers in Kubernetes or in native virtual machines, and work with various public and private cloud providers, with a high focus on customer requirements and product quality while meeting project timelines. As part of your work, you will collaborate with partner teams, engineers, and developers to understand test requirements, execute the required testing, and automate the test cases, as well as debug and troubleshoot issues seen during setup and testing.

Job Requirements
- Relevant experience working on storage or host OS interoperability testing
- Familiarity with server hardware and operating systems
- Skills in deploying and troubleshooting OS on servers (OS installs, driver and firmware upgrades, etc.)
- Skills in deploying and troubleshooting Kubernetes environments (vanilla Kubernetes / OpenShift / Anthos / Tanzu / Rancher)
- Skills in managing, administering, and troubleshooting cloud platforms (AWS, GCP, and Azure)
- Experience working with Python for test automation or development
- Strong oral and written communication skills
- Ability to work collaboratively within a team to meet aggressive goals and high quality standards
- Strong aptitude for learning new technologies

Nice-to-Have Skills
- Experience with REST APIs, Ansible, Terraform, Helm, Golang
- OS administration skills (Linux, Windows, ESXi)
- Familiarity with AWS, Azure, or Google Cloud compute
- Working experience configuring and troubleshooting NAS/SAN environments
- Experience with ONTAP, CVS, other NetApp products, or cloud-native storage platforms

Education
Bachelor's or Master's in Computer Science Engineering with 2–4 years of relevant work experience

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.

Equal Opportunity Employer
NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.

Why NetApp?
We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better, but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches.
We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life.
If you want to help us build knowledge and solve big problems, let's talk.

Submitting an Application
To ensure a streamlined and fair hiring process for all candidates, our team only reviews applications submitted through our company website. This practice allows us to track, assess, and respond to applicants efficiently. Emailing our employees, recruiters, or Human Resources personnel directly will not influence your application.

Apply
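Interoperability testing of the kind described above usually starts from a qualification matrix: only qualified (host OS, Kubernetes) combinations proceed to the full suite. A minimal Python sketch of that gate; the matrix entries are made-up examples, not NetApp's actual interoperability data:

```python
# Illustrative support matrix: qualified (host OS, Kubernetes version) pairs
SUPPORT_MATRIX = {
    ("RHEL 9", "1.28"), ("RHEL 9", "1.29"),
    ("Ubuntu 22.04", "1.28"),
}

def is_qualified(host_os: str, k8s_version: str) -> bool:
    """True when this combination has been qualified for interop testing."""
    return (host_os, k8s_version) in SUPPORT_MATRIX

print(is_qualified("RHEL 9", "1.29"))        # → True
print(is_qualified("Ubuntu 22.04", "1.29"))  # → False
```

In a real harness the matrix would be loaded from a shared data source rather than hard-coded, so partner teams and test automation agree on one definition of "qualified".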

Posted 2 weeks ago

Apply

3.0 - 8.0 years

6 - 12 Lacs

Gurugram

Work from Office

Location: NCR
Team Type: Platform Operations
Shift Model: 24x7 Rotational Coverage / On-call Support (L2/L3)

Team Overview
The OpenShift Container Platform (OCP) Operations Team is responsible for the continuous availability, health, and performance of OpenShift clusters that support mission-critical workloads. The team operates under a tiered structure (L2, L3) to manage day-to-day operations, incident management, automation, and lifecycle management of the container platform. This team is central to supporting stakeholders by ensuring the container orchestration layer is secure, resilient, scalable, and optimized.

L2 – OCP Support & Platform Engineering (Platform Analyst)
Role Focus: Advanced Troubleshooting, Change Management, Automation
Experience: 3–6 years
Resources: 5
Key Responsibilities:
- Analyze and resolve platform issues related to workloads, PVCs, ingress, services, and image registries
- Implement configuration changes via YAML/Helm/Kustomize
- Maintain Operators, upgrade OpenShift clusters, and validate post-patching health
- Work with CI/CD pipelines and DevOps teams on build & deploy troubleshooting
- Manage and automate namespace provisioning, RBAC, NetworkPolicies
- Maintain logging, monitoring, and alerting tools (Prometheus, EFK, Grafana)
- Participate in CR and patch planning cycles

L3 – OCP Platform Architect & Automation Lead (Platform SME)
Role Focus: Architecture, Lifecycle Management, Platform Governance
Experience: 6+ years
Resources: 2
Key Responsibilities:
- Own lifecycle management: upgrades, patching, cluster DR, backup strategy
- Automate platform operations via GitOps, Ansible, Terraform
- Lead SEV1 issue resolution, post-mortems, and RCA reviews
- Define compliance standards: RBAC, SCCs, Network Segmentation, CIS hardening
- Integrate OCP with IDPs (ArgoCD, Vault, Harbor, GitLab)
- Drive platform observability and performance tuning initiatives
- Mentor L1/L2 team members and lead operational best practices

Core Tools & Technology Stack
- Container Platform: OpenShift, Kubernetes
- CLI Tools: oc, kubectl, Helm, Kustomize
- Monitoring: Prometheus, Grafana, Thanos
- Logging: Fluentd, EFK Stack, Loki
- CI/CD: Jenkins, GitLab CI, ArgoCD, Tekton
- Automation: Ansible, Terraform
- Security: Vault, SCCs, RBAC, NetworkPolicies
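The L2 duty of automating namespace provisioning with RBAC can be sketched in Python as manifest generation: emit Namespace and RoleBinding objects as plain dicts, ready to serialize and pipe into `oc apply -f -`. The group-naming convention and the use of the built-in `edit` ClusterRole are assumptions for illustration:

```python
import json

def namespace_manifest(team: str) -> dict:
    """A Namespace labeled with its owning team."""
    return {"apiVersion": "v1", "kind": "Namespace",
            "metadata": {"name": team, "labels": {"owner": team}}}

def edit_rolebinding(team: str) -> dict:
    """Grant the team's dev group the built-in 'edit' role in its namespace."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "{}-edit".format(team), "namespace": team},
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "ClusterRole", "name": "edit"},
        "subjects": [{"kind": "Group", "name": "{}-devs".format(team),
                      "apiGroup": "rbac.authorization.k8s.io"}],
    }

print(json.dumps(edit_rolebinding("payments"), indent=2))
```

Generating manifests in code keeps provisioning consistent across namespaces and easy to review in Git, which is the point of the GitOps workflow named above.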

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies