Jobs
Interviews

2805 Helm Jobs - Page 19

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

10 - 20 Lacs

Pune

Remote

What you’ll do
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Manage your own project priorities, deadlines, and deliverables.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What experience you need
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
- 5+ years of experience with cloud technology: GCP, AWS, or Azure
- 5+ years of experience designing and developing cloud-native solutions
- 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

Job Types: Full-time, Permanent
Pay: ₹1,009,525.09 - ₹2,027,756.51 per year
Benefits: Flexible schedule, Health insurance, Leave encashment, Provident Fund, Work from home
Supplemental Pay: Joining bonus, Overtime pay, Performance bonus, Shift allowance
Work Location: In person
Application Deadline: 01/08/2025

Posted 2 weeks ago

Apply

6.0 years

4 - 5 Lacs

Bengaluru

Remote

Overview: The DevOps Engineer strengthens the collaboration between symplr's Engineering, Development, and Operations teams by streamlining and automating the software development lifecycle (SDLC) to achieve faster delivery, improved quality, and higher reliability. This role leverages scripting and automation expertise to bridge the gap between development and operations, fostering a culture of continuous integration and continuous delivery (CI/CD).

Duties & Responsibilities:
- Promote collaboration and shared responsibility throughout the SDLC.
- Design, develop, and implement automated build, test, and deployment pipelines using scripting languages and CI/CD tools (GitHub Actions).
- Design, build, and maintain scalable and reliable infrastructure to support our applications and services.
- Automate infrastructure provisioning and configuration management using tools such as Terraform, Ansible, or similar.
- Work closely with developers, testers, security professionals, and operations teams to break down silos and ensure seamless software delivery.
- Ensure adherence to security best practices and compliance regulations throughout the development and deployment process.
- Troubleshoot complex technical issues related to deployments, infrastructure, and automation.
- Mentor and guide junior team members, promoting a culture of continuous improvement and collaboration.
- Document processes, tools, and solutions to promote knowledge transfer within the team.

Have HEART. To work here, you must be:
- Humble – self-aware and respectful
- Effective – measurably move the needle & immeasurably add
- Adaptable – innately curious and constantly
- Remarkable – stand out in some
- Transparent – openly and honestly sharing

Skills Required: 6+ years of Engineering experience in the following areas:
- Solid understanding of system administration principles with experience managing both Windows Server (Active Directory, GPO, DFS, RBAC) and Linux servers (Ubuntu/Debian, RHEL/CentOS).
- Scripting proficiency in PowerShell, Bash, or Python to automate infrastructure and application deployment tasks.
- Experience with Infrastructure as Code (IaC), particularly Terraform, to define and manage infrastructure in a repeatable and scalable manner.
- In-depth knowledge of development infrastructure tools such as Terraform, Ansible, Helm, and container orchestration platforms (e.g., Kubernetes) for building and deploying applications efficiently.
- Hands-on experience with CI/CD tools (e.g., GitHub, Jenkins, Octopus Deploy, ArgoCD/Drone) to automate the software development lifecycle, enabling continuous integration and continuous delivery (CI/CD).
- Familiarity with monitoring tools such as DataDog and NewRelic.
- Solid understanding of networking and other supporting components like Load Balancer, Application Gateway, Web Application Firewall.
- Experience with application hosting platforms like IIS, Apache, and Tomcat to deploy and manage applications in various environments.
- Bachelor's degree with a Computer Science background.
- Experience and/or knowledge in the implementation and support of Cloud Services, such as Azure, AWS, Private Cloud, or Hybrid Cloud.

About symplr: As a leader in healthcare operations solutions, we empower healthcare organizations to navigate the complexities of integrating critical business operations. Our customers are at the heart of everything we do, and they rely on our mission-critical systems to drive better operations and better outcomes.
We are a remote-first company with employees working across the United States, India, and the Netherlands. Guided by values, we focus on teamwork, championing our customers, being rooted in action and outcomes, overcoming challenges, and leading through equality and integrity. Read more about symplr's culture and values at symplr.com/careers.
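Roles like this one expect scripting in PowerShell, Bash, or Python that glues deployment steps together. Purely as a generic, hedged sketch (not symplr's actual tooling), the snippet below shows the kind of post-deploy smoke check a GitHub Actions job might run; the endpoint URL, attempt count, and delay are invented placeholders.

# Illustrative post-deployment smoke test for a CI/CD pipeline step.
# Uses only the Python standard library; the endpoint and retry budget
# are placeholders, not values taken from the job posting.
import sys
import time
import urllib.request

HEALTH_URL = "https://staging.example.com/healthz"   # hypothetical endpoint
ATTEMPTS = 10
DELAY_SECONDS = 6

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    for attempt in range(1, ATTEMPTS + 1):
        if healthy(HEALTH_URL):
            print(f"Service healthy after {attempt} attempt(s).")
            sys.exit(0)
        time.sleep(DELAY_SECONDS)
    print("Service failed its post-deploy health check.")
    sys.exit(1)   # non-zero exit fails the pipeline stage

A script like this would typically run as the last step of a deployment job, so a failed health check blocks promotion to the next environment.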

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity
“As part of FICO’s highly modern and innovative analytics and decision platform, the Cyber-Security Engineer will help shape the next-generation security for FICO’s Platform. You will address cutting-edge security challenges in highly automated, complex, cloud- and microservices-driven environments, including design challenges and continuous delivery of security functionality and features to the FICO platform, as well as the AI/ML capabilities used on top of the FICO platform.” – VP of Engineering

What You’ll Contribute
- Secure the design of the next-generation FICO Platform, its capabilities, and services.
- Support full-stack security architecture design from cloud infrastructure to application features for FICO customers.
- Work closely with product managers, architects, and developers on implementing the security controls within products.
- Develop and maintain Kyverno policies for enforcing security controls in Kubernetes environments.
- Collaborate with platform, DevOps, and application teams to define and implement policy-as-code best practices.
- Contribute to automation efforts for policy deployment, validation, and reporting.
- Stay current with emerging threats, Kubernetes security features, and cloud-native security tools.
- Implement required controls and capabilities for the protection of FICO products and environments.
- Build and validate declarative threat models in a continuous and automated manner.
- Prepare the product for compliance attestations and ensure adherence to best security practices.
- Provide expertise as a subject matter expert regarding edge services for public/private cloud information system controls related to infrastructure, policy, and decision-making processes.
- Provide timely resolutions for security configuration or solutions in support of service availability.
- Work on problems of diverse scope where analysis requires evaluation and troubleshooting, including network packet analysis, Linux or Windows DNS, certificate lifecycle, log file analysis, and related tasks.

What We’re Seeking
- Strong knowledge and hands-on experience with Kyverno and OPA/Gatekeeper (optional but a plus).
- Experience in threat modeling, code reviews, security testing, vulnerability detection, attacker exploit techniques, and methods for their remediation.
- Hands-on experience with programming languages such as Java, Python, etc.
- Experience deploying services and securing cloud environments, preferably AWS.
- Experience deploying and securing containers, container orchestration, and mesh technologies (such as EKS, K8s, Istio).
- Experience with Crossplane to manage cloud infrastructure declaratively via Kubernetes.
- Certifications in Kubernetes or cloud security (e.g., CKA, CKAD, CISSP) are desirable.
- Ability to articulate complex architectural challenges with business leadership and product management teams.
- Ability to independently drive transformational security projects across teams and organizations.
- Experience with securing event streaming platforms like Kafka or Pulsar.
- Experience with ML/AI model security and adversarial techniques within the analytics domains.
- Hands-on experience with IaC (such as Terraform, CloudFormation, Helm) and with CI/CD pipelines (such as GitHub, Jenkins, JFrog).
- Resourceful problem-solver skilled at navigating ambiguity and change.
- Customer-focused individual with strong analytical problem-solving skills and solid communication abilities.

Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO?
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today – Big Data analytics. You’ll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
- Credit Scoring — FICO® Scores are used by 90 of the top 100 US lenders.
- Fraud Detection and Security — 4 billion payment cards globally are protected by FICO fraud systems.
- Lending — 3/4 of US mortgages are approved using the FICO Score.

Global trends toward digital transformation have created tremendous demand for FICO’s solutions, placing us among the world’s top 100 software companies by revenue. We help many of the world’s largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people – just like you – who thrive on the collaboration and innovation that’s nurtured by a diverse and inclusive environment. We’ll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfill your potential at www.fico.com/Careers

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we’re proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don’t meet all stated qualifications. While our qualifications are clearly related to role success, each candidate’s profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy policy at https://www.fico.com/en/privacy-policy
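The Kyverno policy work mentioned in this listing is written as declarative Kubernetes resources. As a rough, hedged sketch only (not FICO's actual policies), the Python snippet below assembles a minimal Kyverno ClusterPolicy that requires the pod-level securityContext to set runAsNonRoot: true, and prints it as YAML; it assumes PyYAML is installed and follows Kyverno's public kyverno.io/v1 schema.

# Illustrative only: generate a minimal Kyverno ClusterPolicy as YAML.
# Assumes PyYAML (`pip install pyyaml`); field names follow Kyverno's
# documented kyverno.io/v1 schema for validation rules.
import yaml

policy = {
    "apiVersion": "kyverno.io/v1",
    "kind": "ClusterPolicy",
    "metadata": {"name": "require-run-as-non-root"},
    "spec": {
        "validationFailureAction": "Enforce",    # reject non-compliant Pods
        "background": True,                      # also scan existing resources
        "rules": [
            {
                "name": "check-run-as-non-root",
                "match": {"any": [{"resources": {"kinds": ["Pod"]}}]},
                "validate": {
                    "message": "Pods must set securityContext.runAsNonRoot: true.",
                    "pattern": {
                        "spec": {"securityContext": {"runAsNonRoot": True}}
                    },
                },
            }
        ],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))
# The emitted manifest could then be applied with kubectl or delivered
# through a GitOps pipeline alongside other policy-as-code.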

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Harness is a high-growth company that is disrupting the software delivery market. Our mission is to enable the 30 million software developers in the world to deliver code to their users reliably, efficiently, securely and quickly, increasing customers’ pace of innovation while improving the developer experience. We offer solutions for every step of the software delivery lifecycle to build, test, secure, deploy and manage reliability, feature flags and cloud costs. The Harness Software Delivery Platform includes modules for CI, CD, Cloud Cost Management, Feature Flags, Service Reliability Management, Security Testing Orchestration, Chaos Engineering, and Software Engineering Insights, and continues to expand at an incredibly fast pace.

Harness is led by technologist and entrepreneur Jyoti Bansal, who founded AppDynamics and sold it to Cisco for $3.7B. We’re backed with $425M in venture financing from top-tier VC and strategic firms, including J.P. Morgan, Capital One Ventures, Citi Ventures, ServiceNow, Splunk Ventures, Norwest Venture Partners, Adage Capital Partners, Balyasny Asset Management, Gaingels, Harmonic Growth Partners, Menlo Ventures, IVP, Unusual Ventures, GV (formerly Google Ventures), Alkeon Capital, Battery Ventures, Sorenson Capital, Thomvest Ventures and Silicon Valley Bank.

Position Summary
In this role, you will work with internal and external stakeholders to architect, design and implement DevSecOps, FinOps and Engineering Excellence solutions for enterprise customers. You will have an opportunity to work with Harness Engineering and various customer functions, such as DevOps, SRE, Cloud, Finance and Engineering Analytics teams. You will develop best practices and automations to streamline Harness platform deployments in the most efficient, scalable, repeatable and reliable manner possible. We're a high-growth company on a once-in-a-lifetime journey to revolutionize engineering deployment tools & continuous delivery.

What You Will Do
- Engage with our customers' technical teams to analyze and understand current DevSecOps/CI/CD/Policy & Template Governance tools and processes.
- Architect and implement an optimized Harness setup for integration, scale, and repeatability.
- Interface with the customer's executive and leadership teams to understand the technical goals and business objectives related to their CI/CD process, design their Harness implementation to best fit those requirements, and correlate the technical success criteria to the business requirements.
- Provide positive anecdotes from each engagement, craft best practices around customer implementations, convert them into automation and create reference patterns.
- Document and implement processes and solutions that are employed for onboarding success, for the purpose of internal enablement.
- Contribute to the product design, assist in the Harness Community, and help build out an advanced technical knowledge base.
- Consult on DevSecOps/CI/CD best practices, processes, solutions, etc.
- Interact with customers on a professional, meaningful and technically deep level.
- Work closely with Pre-sales and Post-sales teams to ensure that Harness customers are successful and experience a high level of customer satisfaction with the Harness solution.

About You
- BA/BS degree in CS or a Computer Engineering-related field with 3+ years of relevant experience.
- 5+ years of experience with DevOps, including some of the following solutions (preferred): Kubernetes, Jenkins, GitHub, GitLab, Bamboo, TeamCity, TravisCI, Bitbucket, Jira, ServiceNow, Helm, Kustomize, PCF, OpenShift, AWS, GCP, Azure, Terraform, CloudFormation, Linux, Python, Bash, PowerShell, AppDynamics, New Relic, Dynatrace, Instana, Prometheus, ELK, Splunk, Sumo Logic, etc.
- Experience delivering custom solutions to customers of all sizes, whether internal or external (external customer-facing experience a plus).
- You are a perpetual learner, thrive in a team setting, enjoy sharing your experience and solutions, consistently pursue excellence and success in all your tasks, and are detail-oriented and analytical, with excellent written and verbal communication skills.
- Results-driven individual with a hunger for accomplishment in fast-paced environments and a knack for optimizing processes.
- Willingness to travel up to 25%.

Location
This role will be based out of our Bengaluru, Karnataka, India office.

What You Will Have At Harness
- Experience building a transformative product
- End-to-end ownership of your projects
- Competitive salary
- Comprehensive healthcare benefits
- Flexible work schedule
- Quarterly Harness TGIF-Off / 4 days
- Paid Time Off and Parental Leave
- Monthly, quarterly, and annual social and team building events
- Monthly internet reimbursement

Harness In The News
- Harness Grabs a $150m Line of Credit
- Welcome Split!
- SF Business Times - 2024 - 100 Fastest-Growing Private Companies in the Bay Area
- Forbes - 2024 America's Best Startup Employers
- SF Business Times - 2024 Fastest Growing Private Companies Awards
- Fast Co - 2024 100 Best Workplaces for Innovators

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex or national origin.

Note on Fraudulent Recruiting/Offers
We have become aware that there may be fraudulent recruiting attempts being made by people posing as representatives of Harness. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note, we do not ask for sensitive or financial information via chat, text, or social media, and any email communications will come from the domain @harness.io. Additionally, Harness will never ask for any payment, fee to be paid, or purchases to be made by a job applicant. All applicants are encouraged to apply directly to our open jobs via our website. Interviews are generally conducted via Zoom video conference unless the candidate requests other accommodations. If you believe that you have been the target of an interview/offer scam by someone posing as a representative of Harness, please do not provide any personal or financial information and contact us immediately at security@harness.io. You can also find additional information about this type of scam and report any fraudulent employment offers via the Federal Trade Commission’s website (https://consumer.ftc.gov/articles/job-scams), or you can contact your local law enforcement agency.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As an AI Ops Expert, you will be responsible for the delivery of projects with defined quality standards within set timelines and budget constraints. Your role will involve managing the AI model lifecycle, versioning, and monitoring in production environments. You will be tasked with building resilient MLOps pipelines and ensuring adherence to governance standards. Additionally, you will design, implement, and oversee AIOps solutions to automate and optimize AI/ML workflows. Collaboration with data scientists, engineers, and stakeholders will be essential to ensure seamless integration of AI/ML models into production systems. Monitoring and maintaining the health and performance of AI/ML systems, as well as developing and maintaining CI/CD pipelines for AI/ML models, will also be part of your responsibilities. Troubleshooting and resolving issues related to AI/ML infrastructure and workflows will require your expertise, along with staying updated on the latest AIOps, MLOps, and Kubernetes tools and technologies.

To be successful in this role, you must possess a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of relevant experience. Your proven experience in AIOps, MLOps, or related fields will be crucial. Proficiency in Python and hands-on experience with FastAPI are required, as well as strong expertise in Docker and Kubernetes (or AKS). Familiarity with MS Azure and its AI/ML services, including Azure ML Flow, is essential. Additionally, you should be proficient in using DevContainer for development and have knowledge of CI/CD tools like Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps. Experience with containerization and orchestration tools, Infrastructure as Code (Terraform or equivalent), strong problem-solving skills, and excellent communication and collaboration abilities are also necessary.

Preferred skills for this role include experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn, as well as familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar. Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack, along with an understanding of data versioning tools like DVC or MLflow, would be advantageous. Proficiency in Azure-specific tools and services like Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, Azure Monitor, and Application Insights is also preferred.

Joining our team at Société Générale will provide you with the opportunity to be part of a dynamic environment where your contributions can make a positive impact on the future. You will have the chance to innovate, collaborate, and grow in a supportive and stimulating setting. Our commitment to diversity and inclusion, as well as our focus on ESG principles and responsible practices, ensures that you will have the opportunity to contribute meaningfully to various initiatives and projects aimed at creating a better future for all. If you are looking to be directly involved, develop your expertise, and be part of a team that values collaboration and innovation, you will find a welcoming and fulfilling environment with us at Société Générale.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

As a core member of Periscope's technology team at McKinsey, you will play a vital role in developing and deploying enterprise products, ensuring the firm stays at the forefront of technology. Your responsibilities will include software development projects, focusing on building and improving deployment pipelines, automation, and toolsets for cloud-based solutions on AWS, GCP, and Azure platforms. You will also delve into database management, Kubernetes cluster setup, performance tuning, and continuous delivery domains.

Your role will involve hands-on software development, spending approximately 80% of your time on tasks related to deployment pipelines and cloud-based solutions. You will continually expand your expertise by experimenting with new technologies, frameworks, and approaches independently. Additionally, your strong understanding of agile engineering practices will enable you to guide teams on enhancing their engineering processes. In this position, you will not only contribute to software development projects but also provide coaching and mentoring to other DevOps engineers to enhance the organizational capability. You will be based in either the Bangalore or Gurugram office as a member of Periscope's technology team within McKinsey.

With Periscope being McKinsey's asset-based arm in the Marketing & Sales practice, your role signifies the firm's commitment to innovation and delivering exceptional client solutions. By combining consulting approaches with solutions, Periscope aims to provide actionable insights that drive revenue growth and optimize various aspects of commercial decision-making. The Periscope platform offers a unique blend of intellectual property, prescriptive analytics, and cloud-based tools to deliver over 25 solutions focused on insights and marketing.

Your qualifications should include a Bachelor's or Master's degree in computer science or a related field, along with a minimum of 6 years of experience in technology solutions, particularly in microservices architectures and SaaS delivery models. You should demonstrate expertise in working with leading cloud providers such as AWS, GCP, and Azure, as well as proficiency in Linux systems and DevOps tools like Ansible, Terraform, Helm, Docker, and Kubernetes. Furthermore, your experience should cover areas such as load balancing, network security, API management, and supporting web front-end and data-intensive applications. Your problem-solving skills, systems design expertise in Python, and strong monitoring and troubleshooting abilities will be essential in contributing to development tasks and ensuring uninterrupted system operation. Your familiarity with automation practices and modern monitoring tools like Elastic, Sentry, and Grafana will also be valuable in upholding system quality attributes.

Strong communication skills and the ability to collaborate effectively in a team environment are crucial for conveying technical decisions, fostering alignment, and thriving in high-pressure situations. Your role at Periscope by McKinsey will not only involve technical contributions but also leadership in enhancing the team's capabilities and driving continuous improvement in technology solutions.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site

ABOUT THE ROLE
Klizo Solutions is expanding its platform team and needs a hands-on DevOps Engineer to own our build-and-release pipeline, cloud/on-prem infrastructure, and system observability. You’ll automate everything from Infrastructure-as-Code to CI/CD, driving reliability and delivery speed across multiple client and internal projects.

RESPONSIBILITIES
• Design, provision, and maintain AWS and on-prem environments with Terraform
• Build resilient container platforms using Docker, Kubernetes, and Helm
• Create/maintain Git-based CI/CD pipelines for automated test, security scan, and zero-downtime deploys
• Implement and tune monitoring, logging, and alerting with Prometheus/Grafana (ELK optional)
• Lead incident response, root-cause analysis, and capacity planning
• Partner with developers to containerise micro-services and review IaC changes
• Communicate with customers during outages, upgrades, and releases
• Champion continuous improvement and document run-books / SOPs

REQUIRED SKILLS
• 2+ years in a DevOps / SRE role
• High proficiency with Git and automated build/test/deploy workflows
• Strong Terraform experience for AWS provisioning
• Solid scripting or coding ability (Python, Bash, Go, etc.)
• Linux administration, networking basics, and security hardening
• Self-starter able to own tasks with minimal oversight

NICE TO HAVE
• Kubernetes certification (CKA/CKAD) or real-world Helm chart authoring
• Observability stacks such as Prometheus, Grafana, Loki/Tempo, Datadog
• Exposure to ELK / OpenSearch, PostgreSQL, or other data stores
• Configuration-management tools (Ansible, Chef)
• DevSecOps / secure SDLC tooling

SHIFT & LOCATION
• Evening shift (2 PM – 11 PM) IST or late shift (5:30 PM – 2:30 AM) IST
• Company-provided drop for late shift

COMPENSATION & PERKS
• Starting at ₹20,000 to ₹50,000 per month (based on assessment and experience)
• Housing assistance for relocators (walking distance to the office)
• Up to 28 days paid leave (post-confirmation): Casual, Sick, Paid Holidays
• Performance-linked bonuses
• Continuous learning budget and advanced cloud/DevOps training

ABOUT KLIZO SOLUTIONS
With 90+ engineers across India, Vietnam, and the US, Klizo Solutions delivers AI-driven, data-heavy products for health-tech, e-commerce, and fintech clients. Our tech stack spans Python, Node.js, React, Java, Machine Learning, and large-scale data aggregation—offering DevOps engineers exposure to diverse, complex workloads.

HOW TO APPLY
Email your résumé and a brief note describing a challenging infrastructure issue you solved to jobs@klizos.com with the subject line “DevOps Engineer – ”. Short-listed candidates will complete a practical Terraform/Git task and a technical interview. Join us to build platforms that never sleep and pipelines that never break. You can also go directly into an AI Interview to qualify for the job: https://aicaller.interviewscreener.com/interview/8l8yv09vnia
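Terraform-driven AWS provisioning wired into Git-based CI/CD is the core of this role. As a generic sketch under stated assumptions (the terraform CLI on PATH, AWS credentials injected by the CI runner, and a made-up stack directory), a pipeline step might wrap the standard Terraform commands from Python like this:

# Illustrative pipeline step: run Terraform non-interactively from Python.
# Assumes the `terraform` CLI is installed and AWS credentials are provided
# via the environment (e.g., by the CI runner). Paths are placeholders.
import subprocess
from pathlib import Path

STACK_DIR = Path("infra/aws/vpc")   # hypothetical Terraform configuration

def run(*args: str) -> None:
    """Run a Terraform command in the stack directory, failing loudly."""
    subprocess.run(["terraform", *args], cwd=STACK_DIR, check=True)

if __name__ == "__main__":
    run("init", "-input=false")
    run("plan", "-input=false", "-out=tfplan")   # plan saved as a pipeline artifact
    run("apply", "-input=false", "tfplan")       # applying a saved plan needs no prompt

Splitting plan and apply this way lets the pipeline publish the plan for review before the apply stage runs.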

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are hiring a Senior DevOps Engineer with 4-6 years of experience to join our growing engineering team. The ideal candidate is proficient in AWS and Azure, has a solid development background (Python preferred), and demonstrates strong experience in infrastructure design, automation, and DevOps; exposure to GCP is a plus. You will be responsible for building, managing, and optimizing robust, secure, and scalable infrastructure solutions from scratch.

Key Responsibilities
- Design and implement cloud infrastructure using AWS, Azure, and optionally GCP.
- Build and manage Infrastructure-as-Code using Terraform.
- Develop and maintain CI/CD pipelines using tools such as GitHub Actions, Jenkins, or GitLab CI.
- Deploy and manage containerized applications using Docker and Kubernetes (EKS/AKS).
- Set up and manage Kafka for distributed streaming and event processing.
- Build monitoring, logging, and alerting solutions using tools like Prometheus, Grafana, ELK, CloudWatch, and Azure Monitor.
- Ensure cost optimization and security best practices across all cloud environments.
- Collaborate with developers to debug application issues and improve system performance.
- Lead infrastructure architecture discussions and implement scalable, resilient solutions.
- Automate operational processes and drive DevOps culture and best practices across teams.

Required Skills
- 4-6 years of hands-on experience in DevOps/Site Reliability Engineering.
- Strong experience in multi-cloud environments (AWS + Azure); GCP exposure is a bonus.
- Proficient in Terraform for IaC; experience with ARM Templates or CloudFormation is a plus.
- Solid experience with Kubernetes (EKS & AKS) and container orchestration.
- Proficient in Docker and container lifecycle management.
- Hands-on experience with Kafka (setup, scaling, and monitoring).
- Experience implementing monitoring, logging, and alerting solutions.
- Expertise in cloud security, IAM, RBAC, and cost optimization.
- Development experience in Python or any backend language.
- Excellent problem-solving and troubleshooting skills.

Nice To Have
- Certifications: AWS DevOps Engineer, Azure DevOps Engineer, CKA/CKAD.
- Experience with GitOps, Helm, and service mesh.
- Familiarity with serverless architecture and event-driven systems.

Education
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. (ref:hirist.tech)
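Monitoring and alerting requirements like those above are commonly met by exposing application metrics for Prometheus to scrape and then dashboarding or alerting on them in Grafana. The sketch below is a generic illustration using the prometheus_client Python library; the metric names and port are invented for the example, not taken from the posting.

# Illustrative only: expose custom metrics for Prometheus to scrape.
# Assumes `pip install prometheus-client`; metric names and the port (8000)
# are placeholders chosen for the example.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

DEPLOYS_TOTAL = Counter("deploys_total", "Number of deployments processed")
QUEUE_DEPTH = Gauge("work_queue_depth", "Items currently waiting in the queue")

if __name__ == "__main__":
    start_http_server(8000)          # metrics served at http://localhost:8000/metrics
    while True:
        DEPLOYS_TOTAL.inc()          # pretend a deployment just finished
        QUEUE_DEPTH.set(random.randint(0, 10))
        time.sleep(5)                # Prometheus scrapes on its own schedule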

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Post: Team Lead DevOps
Experience: 6+ years
Location: Ahmedabad (Work from office)

Role Summary

Key Responsibilities:
- Manage, mentor, and grow a team of DevOps engineers.
- Oversee the deployment and maintenance of applications like: Odoo (Python/PostgreSQL), Magento (PHP/MySQL), Node.js.
- Design and manage CI/CD pipelines for each application using tools like Jenkins, GitHub Actions, GitLab CI.
- Handle environment-specific configurations (staging, production, QA).
- Containerize legacy and modern applications using Docker and deploy via Kubernetes (EKS/AKS/GKE) or Docker Swarm.
- Implement and maintain Infrastructure as Code using Terraform, Ansible, or CloudFormation.
- Monitor application health and infrastructure using Prometheus, Grafana, ELK, Datadog, or equivalent tools.
- Ensure systems are secure, resilient, and compliant with industry standards.
- Optimize cloud cost and infrastructure performance.
- Collaborate with development, QA, and IT support teams for seamless delivery.
- Troubleshoot performance, deployment, or scaling issues across tech stacks.

Must-Have Skills
- 6+ years in DevOps/Cloud/System Engineering roles with real hands-on experience.
- 2+ years managing or leading DevOps teams.
- Experience supporting and deploying: Odoo on Ubuntu/Linux with PostgreSQL; Magento with Apache/Nginx, PHP-FPM, MySQL/MariaDB; Node.js with PM2/Nginx or containerized setups.
- Experience with AWS / Azure / GCP infrastructure in production.
- Strong scripting skills: Bash, Python, PHP CLI, or Node CLI.
- Deep understanding of Linux system administration and networking fundamentals.
- Experience with Git, SSH, reverse proxies (Nginx), and load balancers.
- Good communication skills and exposure to managing clients.

Preferred Certifications (Highly Valued)
- AWS Certified DevOps Engineer Professional.
- Azure DevOps Engineer Expert.
- Google Cloud Professional DevOps Engineer.
- Bonus: Magento Cloud DevOps or Odoo Deployment Experience.

Bonus Skills (Nice To Have)
- Experience with multi-region failover, HA clusters, or RPO/RTO-based design.
- Familiarity with MySQL/PostgreSQL optimization and Redis, RabbitMQ, or Celery.
- Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower.
- Knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Automation Engineer at Amdocs, you will play a vital role in the design, development, modification, debugging, and maintenance of our client's software systems. You will immerse yourself in engaging with specific modules, applications, or technologies, handling sophisticated assignments throughout the software development process.

Your responsibilities will include contributing to and supporting the automation of various parts of infrastructure and deployment systems. You will work towards improving and streamlining processes to enable engineering and operations teams to work more efficiently. Collaborating with Product/Account Development teams, you will drive automation of Configuration Management, Build, Release, Deployment, and Monitoring processes, while providing instructions on the developed tools to the team. You will work closely with IT and Development to ensure seamless integration and continuity of the code, showcasing a holistic view of the working environments. Supporting engineering and operations teams in meeting infrastructure requirements will be a key aspect of your role. Additionally, you will provide professional support for developed automations, respond to incidents proactively to prevent system outages, and ensure environment availability to meet SLAs. Staying abreast of industry best practices, you will contribute ideas for improvements in DevOps practices and innovate through automation to enable standard deployable units of infrastructure across multiple environments until production. You will also be responsible for adopting and implementing new technology in the account/product life cycle.

To excel in this role, you should have experience in CI/CD and MS domains, familiarity with configuration management and automation tools, continuous-integration tools, and continuous deployment knowledge on popular cloud computing platforms. Script development experience, IT experience in common languages, hands-on experience in build, release, deployment, and monitoring of cloud-based scalable distributed systems, and proficiency in RedHat or other Linux distributions are essential. Additionally, experience working in an agile development environment and troubleshooting key life cycle management tools are required.

In this position, you will be at the forefront of integrating a major product infrastructure system with the Amdocs infrastructure system, driving automation to enhance team productivity. You will work alongside an international, highly skilled team with ample opportunities for personal and professional growth. Join a dynamic, multi-cultural organization that fosters innovation and empowers its employees to thrive in a diverse, inclusive workplace. Enjoy a wide range of benefits, including health, dental, vision, and life insurance, paid time off, sick time, and parental leave. Amdocs is proud to be an equal opportunity employer, welcoming applicants from all backgrounds and committed to building a diverse and inclusive workforce.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You should have at least 4+ years of experience in DevOps technologies like Azure DevOps, CI/CD, infrastructure automation, Infrastructure as Code, containerization, and orchestration. You must possess strong hands-on experience with production systems in AWS/Azure and CI/CD process engineering for a minimum of 2 years. Additionally, you should have a minimum of 1 year of experience managing infrastructure as code using tools like Terraform, Ansible, and Git.

Your responsibilities will include working with container platforms and cloud services, handling Identity, Storage, Compute, Automation, and Disaster Recovery using Docker, Kubernetes, Helm, and Istio. You will be expected to work on IaaS tools like AWS CloudFormation and Azure ARM, as well as maintain strong hands-on experience with Linux. Moreover, you should be familiar with DevOps release and build management, orchestration, and automation scripts using tools like Jenkins, Azure DevOps, Bamboo, and SonarQube. Proficiency in scripting with Bash and Python is necessary. Strong communication and collaboration skills are essential for this role.

Furthermore, you will need practical experience working with web servers such as Apache, Nginx, and Tomcat, along with a good understanding of Java and PHP application implementation. Experience with databases like MySQL, PostgreSQL, and MongoDB is also expected.

If you meet these qualifications and are interested in this position, please email your resume to career@tridhyain.com.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Senior DevOps Engineer at TechBlocks, you will be responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments. With 5-8 years of experience in DevOps engineering roles, your expertise in CI/CD, infrastructure automation, and Kubernetes will be crucial for the success of our projects. In this role, you will own the CI/CD strategy and configuration, implement DevSecOps practices, and drive an automation-first culture within the team.

Your key responsibilities will include designing and implementing end-to-end CI/CD pipelines using tools like Jenkins, GitHub Actions, and Argo CD for production-grade deployments. You will also define branching strategies and workflow templates for development teams, automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests, and manage secrets lifecycle using Vault for secure deployments. Collaborating with engineering leads, you will review deployment readiness, ensure quality gates are met, and integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows. Monitoring infrastructure health and capacity planning using tools like Prometheus, Grafana, and Datadog, you will implement alerting rules, auto-scaling, self-healing, and resilience strategies in Kubernetes. Additionally, you will drive process documentation, review peer automation scripts, and provide mentoring to junior DevOps engineers. Your role will be pivotal in ensuring the reliability, scalability, and security of our systems while fostering a culture of innovation and continuous learning within the team.

TechBlocks is a global digital product engineering company with 16+ years of experience, helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. We believe in the power of technology and the impact it can have when coupled with a talented team. Join us at TechBlocks and be part of a dynamic, fast-moving environment where big ideas turn into real impact, shaping the future of digital transformation.
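The Argo CD plus Helm combination called for here is usually driven by a declarative Application manifest kept in Git. As a hedged sketch only (the repository URL, chart path, release name, and namespaces are invented placeholders, not TechBlocks configuration), the snippet below emits such a manifest with PyYAML, following Argo CD's public argoproj.io/v1alpha1 schema.

# Illustrative only: emit a minimal Argo CD Application so a Helm chart
# tracked in Git is deployed and kept in sync automatically.
import yaml

app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "payments-api", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://example.com/org/deploy-configs.git",
            "targetRevision": "main",
            "path": "charts/payments-api",              # Helm chart in the repo
            "helm": {"valueFiles": ["values-prod.yaml"]},
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "payments",
        },
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.safe_dump(app, sort_keys=False))
# With automated sync, prune, and selfHeal, Argo CD converges the cluster
# back to whatever the Git repository declares.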

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Punjab

On-site

As a Senior DevOps Engineer at our company based in Mohali, you will play a crucial role in building and managing CI/CD pipelines, cloud infrastructure, and security automation. We are looking for a highly skilled and motivated individual with expertise in AWS and GCP environments, Terraform, and container orchestration technologies.

Your responsibilities will include designing, implementing, and managing cloud infrastructure across AWS and GCP using Terraform. You will deploy and maintain Kubernetes clusters, manage workloads using Helm/ArgoCD, and build secure, scalable CI/CD pipelines. Integrating quality gates in CI/CD processes, ensuring container security standards, and collaborating with cross-functional teams to streamline a DevSecOps culture will be key aspects of your role.

To be successful in this position, you must have at least 5 years of experience in DevOps/SRE roles, proficiency in AWS and GCP, and hands-on expertise in Terraform and Infrastructure as Code. Strong experience with Kubernetes, Docker, Helm, and CI/CD tools like Jenkins, GitLab CI, and GitHub Actions, plus knowledge of scripting languages such as Python, Go, or Bash, are essential. Excellent communication skills, both written and verbal, are also required.

Industry certifications, an understanding of compliance frameworks, experience with monitoring tools and logging, and soft skills like strong problem-solving abilities and mentoring capabilities are considered a plus.

If you are passionate about DevOps, cloud infrastructure, and automation, and possess the required skills and experience, we encourage you to share your CV with us at careers@harmonydi.com or hr@harmonydi.com. Join our team and be a part of our exciting journey in Mohali, Punjab.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

You are a skilled DevOps Engineer with proven experience in a DevOps engineering role, possessing a strong background in software development and system administration. Your primary responsibility will be to implement and manage CI/CD pipelines, container orchestration, and cloud services to enhance the software development lifecycle. Your impact will be seen in how you collaborate with development and operations teams to streamline processes and improve deployment efficiency.

You will implement and manage CI/CD tools such as GitLab CI, Jenkins, or CircleCI. Utilizing Docker and Kubernetes for containerization and orchestration of applications is essential. Additionally, you will write and maintain scripts in at least one scripting language (e.g., Python, Bash) to automate tasks. Managing and deploying applications using cloud services (e.g., AWS, Azure, GCP) and their respective management tools will be part of your responsibilities. You should have a good understanding and application of network protocols, IP networking, load balancing, and firewalling concepts. Implementing infrastructure as code (IaC) practices to automate infrastructure provisioning and management is crucial. Utilizing logging and monitoring tools (e.g., ELK stack, OpenSearch, Prometheus, Grafana) will ensure system reliability and performance. You should be familiar with GitOps practices using tools like Flux or ArgoCD for continuous delivery, as well as working with Helm and Flyte for managing Kubernetes applications and workflows.

Your qualifications should include a Bachelor's or Master's degree in computer science or a related field, along with proven experience in a DevOps engineering role. You should have a strong background in software development and system administration, experience with CI/CD tools and practices, proficiency in Docker and Kubernetes, and familiarity with cloud services and their management tools. An understanding of networking concepts and protocols, experience with infrastructure as code (IaC) practices, familiarity with logging and monitoring tools, knowledge of GitOps practices and tools, and experience with Helm and Flyte are also required.

Preferred qualifications include experience with cloud-native architectures and microservices, knowledge of security best practices in DevOps and cloud environments, understanding of database management and optimization (e.g., SQL, NoSQL), familiarity with Agile methodologies and practices, experience with performance tuning and optimization of applications, knowledge of backup and disaster recovery strategies, and familiarity with emerging DevOps tools and technologies.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

About Zscaler
Zscaler, a leading company serving numerous enterprise customers globally, including 45% of Fortune 500 companies, was established in 2007 with a core mission to ensure the cloud is a secure environment for conducting business and to enhance the experience for enterprise users. Operating the world's largest security cloud, Zscaler plays a pivotal role in accelerating digital transformation, enabling enterprises to become more agile, efficient, resilient, and secure. The innovative Zscaler Zero Trust Exchange platform, powered by AI, forms the foundation of our SASE and SSE offerings, safeguarding numerous enterprise customers from cyber threats and data breaches by securely connecting users, devices, and applications across any location. Recognized as a Best Workplace in Technology by esteemed platforms like Fortune, Zscaler nurtures an inclusive and supportive culture that attracts some of the most talented professionals in the industry. If you thrive in a fast-paced, collaborative environment and are dedicated to creating and innovating for the greater good, Zscaler welcomes you to take your next career step with us.

Our Engineering team is responsible for developing and enhancing the world's largest cloud security platform, built entirely from scratch. With a repertoire of over 100 patents and ambitious plans to improve services and expand our global presence, our team has established Zscaler as the leader in cloud security today, serving over 15 million users across 185 countries. Join our team of cloud architects, software engineers, security experts, and more to contribute your vision and passion towards empowering organizations worldwide to embrace speed and agility through a cloud-first approach.

We are currently seeking an experienced Principal Information Security Engineer to join our Cyber and Data Security team. Reporting to the Sr. Manager of Cyber Risk & Governance, your responsibilities will include:
- Designing and managing runtime container security strategies, encompassing monitoring, threat detection, incident response, and vulnerability management for Docker and Kubernetes environments.
- Implementing and optimizing security tools for runtime container protection, including image scanning, network micro-segmentation, and anomaly detection solutions.
- Collaborating with engineering and DevOps teams to construct secure container architectures and enforce security best practices within CI/CD pipelines.
- Investigating and resolving container-related security incidents, conducting root cause analysis, and implementing preventive measures.

Minimum Qualifications:
- 8+ years of cybersecurity experience, with a specific focus on container security, along with at least 5 years of hands-on experience with Docker and Kubernetes.
- Demonstrated expertise in Kubernetes security concepts, such as pod security policies, RBAC, and network policies.
- Proficiency in container image scanning tools (e.g., Wiz, Trivy, Aqua Security, Crowdstrike Cloud Security, Prisma Cloud) and runtime protection solutions.
- Strong understanding of container orchestration systems and securing CI/CD pipelines for containerized applications.
- Proven ability to lead technical initiatives and collaborate effectively with cross-functional teams, including DevOps and SRE.

Preferred Qualifications:
- Certifications such as CKA (Certified Kubernetes Administrator) or CKS (Certified Kubernetes Security Specialist).
- Experience in automating container security tasks using infrastructure-as-code tools like Terraform, Helm, or Ansible.

Join Zscaler in our mission to create a diverse and inclusive team that mirrors the communities we serve and the customers we collaborate with. We value all backgrounds and perspectives, emphasizing collaboration and a sense of belonging. Be part of our journey to streamline and secure business operations.

Zscaler offers a comprehensive Benefits program designed to support our employees at every stage of their lives and careers, which includes various health plans, time off for vacation and sick leave, parental leave options, retirement plans, education reimbursement, in-office perks, and more!

By applying for a role at Zscaler, you agree to adhere to relevant laws, regulations, and company policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing reasonable support and accommodations during our recruiting processes for candidates with disabilities, long-term conditions, mental health conditions, sincere religious beliefs, neurodivergent individuals, or those requiring pregnancy-related support.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description: As a Sr. DevOps Engineer, you will be engaged in the optimization of engineering processes and tooling affecting the Analytics engineering department product. You will be involved in the development and support of solutions which enhance the efficiency of engineering, as well as the configuration of tools. These solutions involve diverse development platforms, software, hardware, technologies and tools. You will participate in the design, development and implementation of complex infrastructure. You will be involved in supporting our strategic move of products into the public/private/hybrid cloud, performing DevOps activities, CI/CD, and working with various technologies such as Docker, Kubernetes, containerization, Maven, Artifactory, Jenkins, Ansible, virtualization, and more as required to automate the process of building, packaging, provisioning and deployment of on-premises and SaaS applications.

Responsibilities:
- Design, plan, build and execute software infrastructure for complete CI/CD for the build, deploy and testing of large-scale software products.
- Manage and monitor the engineering labs, including provisioning of new hardware and virtual machines, debugging problems impacting engineering, and provisioning access and accounts.
- Develop custom tools, automation and integration with existing tools to increase efficiency.
- Development of DevOps and Cloud solutions.

Requirements:
- At least 5 years of experience developing and maintaining Continuous Integration/Continuous Delivery solutions using Jenkins/Harness.
- At least 2 years of experience with Continuous Integration (CI) / Continuous Deployment (CD) and configuration management tools such as Docker, Maven, Artifactory, Jenkins, Ansible, Kubernetes.
- Proficient in CI/CD, DevOps, virtualization, Linux, OS hardening, security, VMware and related troubleshooting.
- Well versed in scripting and automation.
- At least 2 years of scripting and/or programming experience in one of the following: Ansible, Shell/Bash, YAML, Groovy, Python, PowerShell - Mandatory. C# and/or Java - Strong advantage.
- Experience in Helm, Git, GitHub, Harness, Coverity, Sonar, Datadog, Kafka, Jira & Confluence.
- Working experience in DevOps using the Harness platform is preferred.
- Experience in Terraform and ARM is an advantage.
- Experience working in a large global organization.
- Experience working with public cloud platforms such as AWS, Azure or GCP.
- Strong interpersonal and communication skills, teamwork and self-learning ability.
- Ability to work independently and effectively collaborate with distributed teams.
- Thorough knowledge of software development best practices, Agile practices, coding standards, code reviews, source control management and build processes.
- Bachelor's/Master's degree in Computer Science / Software Engineering or a related engineering field.
- Relevant experience in coding: 3-4+ years.
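Helm releases in a Jenkins or Harness pipeline are typically deployed idempotently with helm upgrade --install. The following is a minimal, generic sketch (release, chart, and namespace names are invented; it assumes the helm CLI and a kubeconfig are already available to the pipeline agent), not this employer's actual deployment tooling.

# Illustrative CI/CD helper that deploys a Helm release idempotently.
# Assumes the `helm` CLI is installed and kubeconfig is set up by the
# pipeline; chart name, release, and namespace are placeholders.
import subprocess

def helm_deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Install the release if absent, otherwise upgrade it in place."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "-f", values_file,
            "--wait",            # block until resources report ready
            "--atomic",          # roll back automatically on failure
        ],
        check=True,
    )

if __name__ == "__main__":
    helm_deploy("analytics-api", "./charts/analytics-api",
                "analytics", "values-staging.yaml")

Using --wait and --atomic keeps a failed rollout from leaving the release half-applied, which is the usual expectation for an automated pipeline stage.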

Posted 2 weeks ago

Apply

7.0 - 11.0 years

40 - 45 Lacs

Noida, Ahmedabad, Chennai

Work from Office

Dear Candidate,

We are looking for a skilled Cloud DevOps Engineer to automate, deploy, and manage cloud infrastructure. If you have expertise in AWS, CI/CD pipelines, and Infrastructure as Code (IaC), we'd love to hear from you!

Key Responsibilities:
- Design and implement cloud-based DevOps solutions.
- Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation.
- Manage CI/CD pipelines for application deployment.
- Monitor system performance, security, and reliability.
- Optimize cloud resources for cost efficiency and scalability.
- Troubleshoot and resolve production issues in cloud environments.

Required Skills & Qualifications:
- Hands-on experience with AWS, Azure, or Google Cloud.
- Expertise in CI/CD tools (Jenkins, GitLab CI, ArgoCD).
- Strong scripting skills (Python, Bash, PowerShell).
- Knowledge of Kubernetes, Docker, and container orchestration.
- Familiarity with logging and monitoring tools (Prometheus, Grafana, ELK).

Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Position: Azure Cloud Architect

Job Description
Arrow Electronics is seeking a Microsoft Azure Architect to join our Cloud Engineering team at our Bangalore, India facility. The Cloud Engineer will provide technical expertise in the IaaS/PaaS/SaaS public cloud domain, in addition to facilitating project management cycles while working directly with Arrow's internal application services group. In this role, the Engineer will maintain a high level of technical aptitude in cloud and hybrid cloud solutions while also displaying a proven ability to communicate effectively and offer excellent customer service.

What You Will Be Doing
Azure Networking & Troubleshooting: Expertise in Private DNS Zones, Private Endpoints, ExpressRoute, and BGP routing, with strong troubleshooting skills.
Azure Best Practices: Design and implement scalable, secure, and cost-effective solutions following Azure best practices.
Landing Zones: Architect and implement Azure Landing Zones for streamlined governance and secure cloud adoption.
Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform and Azure DevOps (ADO) pipelines.
Azure Kubernetes Service (AKS): Design, manage, and secure AKS clusters, including Helm and GitOps workflows (e.g., ArgoCD, Flux).
API Management: Manage Azure API Management (APIM) for secure API exposure, OAuth, Managed Identities, and API security best practices.
CI/CD Pipelines: Develop and manage CI/CD pipelines through Azure DevOps for infrastructure and API automation.
Security & Identity: Implement RBAC, Azure Entra ID, B2C, and External ID, and enforce Zero Trust security models.
Data & AI Services (Preferred): Work with Databricks, Machine Learning via AKS, and Azure OpenAI Services.
Security Automation: Implement automated security remediations, Wiz integrations, and governance policies.
WAF/CDN Solutions: Work on solutions like Akamai and Azure Front Door for enhanced security and performance.
Leadership & Mentorship: Mentor teams, drive cloud automation initiatives, and encourage certifications in Azure and AWS.
RESTful APIs: Strong understanding of RESTful APIs and API Management operational mechanisms.
Cost Optimization: Evaluate workload placements based on cost, performance, and security.
Troubleshooting: Skilled in resolving issues, retrieving logs, and utilizing Application Insights for performance optimization.
Soft Skills: Strategic thinker with excellent problem-solving, leadership, and communication skills.

What We Are Looking For
Lead the effort to plan, engineer, and design the infrastructure, as well as the effort to evaluate, recommend, integrate, and coordinate enhancements to it.
Work with IT Architects to ensure that modified infrastructure interacts appropriately, data conversion impacts are considered, and other areas of impact are addressed and meet the performance requirements of the project.
Lead the effort to develop and configure the infrastructure from conceptualization through stabilization using various computer platforms.
Lead the effort to implement the infrastructure by analyzing the current system environment and infrastructure, using technical tools and utilities, performing complex product customization, and developing implementation and verification procedures to ensure successful installation of systems hardware/software.
Lead routine infrastructure analysis and evaluation of the resource requirements necessary to maintain and/or expand service levels.
Provide cost estimates for new projects and collaborate on the creation of work/design documents with team assistance.
Perform "light" project management for onboarding requests, collaborating with teams on scheduling actions and following change control processes.
Assist in show-back/charge-back documentation and reports.
Prior experience supporting Microsoft and/or Linux operating systems in a corporate environment, on or off premises, is required.
Prior experience designing/supporting IaaS infrastructure in a public cloud is strongly preferred.
Prior knowledge of Azure Active Directory, Conditional Access, and Power BI is a plus.
Prior experience with Azure IaaS, PaaS, networking, and security is a plus.
Innovative self-starter willing to learn, test, and implement new technologies on existing and new public cloud providers.

Experience / Education
Typically requires 12+ years of related experience with a 4-year degree; or 3 years and an advanced degree; or equivalent work experience.

Arrow Electronics, Inc. (NYSE: ARW) is an award-winning Fortune 133 company and one of Fortune Magazine's Most Admired Companies. Arrow guides innovation forward for over 220,000 leading technology manufacturers and service providers. With 2024 sales of USD $28.1 billion, Arrow develops technology solutions that improve business and daily life. Our broad portfolio, which spans the entire technology landscape, helps customers create, make, and manage forward-thinking products that make the benefits of technology accessible to as many people as possible. Learn more at www.arrow.com. Our strategic direction of guiding innovation forward is expressed as Five Years Out, a way of thinking about the tangible future to bridge the gap between what's possible and the practical technologies to make it happen. Learn more at https://www.fiveyearsout.com/.

Location: IN-KA-Bangalore, India (SKAV Seethalakshmi) GESC
Time Type: Full time
Job Category: Information Technology
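Purely as an illustration of the kind of AKS/Helm deployment automation this role describes (not part of the posting), here is a minimal Python sketch that wraps the Helm CLI the way an Azure DevOps pipeline step might. The release name, chart path, namespace, and values file are hypothetical placeholders.

```python
"""Illustrative only: a minimal wrapper around the Helm CLI of the kind an
Azure DevOps pipeline step might call when rolling a chart out to AKS.
Release name, chart path, namespace, and values file are hypothetical."""
import subprocess
import sys


def helm_upgrade(release: str, chart: str, namespace: str, values_file: str) -> None:
    # --install creates the release if it does not exist yet;
    # --atomic rolls back automatically if the upgrade fails;
    # --wait blocks until the workloads report ready.
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--values", values_file,
        "--atomic", "--wait", "--timeout", "5m",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface Helm's stderr so the pipeline log shows the real failure.
        print(result.stderr, file=sys.stderr)
        raise SystemExit(result.returncode)
    print(result.stdout)


if __name__ == "__main__":
    helm_upgrade("demo-api", "./charts/demo-api", "demo", "values-prod.yaml")
```

The --atomic and --wait flags are what make a step like this safe to run unattended from CI, since a failed rollout is undone automatically.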

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Team/Role
At WEX, we simplify the business of running a business. Our WEX Health & Benefits solutions reduce complexity and help manage costs of benefits administration for our clients and partners. We are looking for passionate technologists, collaborators, and problem solvers to join our Health & Benefits Technology team as we build the next generation of employer benefits solutions and services. As a Software Engineering Manager on the WEX Health & Benefits Technology team, you will lead a team that partners closely with Product Managers and customers to learn about the challenges employers face while navigating the competitive employee benefits landscape. You will become a domain expert, designing solutions that solve problems in ways our customers love and that work for our business. You will lead teams that build the highest-quality software using the latest technologies and test-driven development practices.

How you'll make an impact
Lead, mentor, and manage your team through the successful delivery of valuable customer software.
Stay current with emerging technologies and industry trends to drive innovation and strengthen TDD and BDD processes.
Collaborate closely with Product Management by providing technical guidance on software design.
Guide your team on best practices, coding standards, and design principles.
Conduct performance reviews, set goals, and support professional development for team members.
Measure, inspect, and drive decisions using data.
Design, test, code, and instrument new solutions.
Support live applications; promote proactive monitoring, rapid incident response and troubleshooting, and continuous improvement.
Analyze existing systems and processes to identify bottlenecks and opportunities for improvement.
Understand how your domain fits into and contributes to the overall company.
Influence priority, expectations, and timelines within your domain.
Set short-term (~monthly) goals for your team to deliver on priorities.
Focus on instrumentation and on team efficiency and performance measurables.
Contribute to the long-term vision and the strategy to achieve that vision for the technology organization.
Interact and communicate effectively with peer groups, non-technical organizations, and middle management.
Drive collaboration across technology teams to foster innovation and follow guidelines around reusability of frameworks and governance of architecture patterns.

Experience you'll bring
Bachelor's degree in Computer Science, Software Engineering, or related field; OR demonstrable equivalent experience.
At least 7 years of experience in software engineering.
At least 5 years of management or supervisory experience.
Excellent leadership ability to motivate teams and drive results.
Strategic thinking that aligns with business objectives and drives innovation.
Strong problem-solving skills, excellent communication and collaboration skills.
Passionate about keeping up with modern technologies and design.

Technology Must-Haves
C#, Python (if applicable)
Docker
Modern RDBMS (e.g., MS SQL, Postgres, MySQL)
ASP.NET
RESTful API design
Kafka / event-driven design
Modern Web UI frameworks and libraries (e.g., Angular, React)
Kubernetes
NoSQL databases
Designing and developing cloud-native applications and services
Strong understanding of software security principles and OWASP guidelines

Technology Nice-To-Haves or Dedicate to Learning Quickly
Helm/ArgoCD
Terraform
GitHub Actions
GraphQL
Azure
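Purely as an illustration of the event-driven pattern named in the must-have list above (not part of the posting), a minimal sketch of publishing a domain event with the kafka-python client; the broker address, topic name, and payload are hypothetical.

```python
"""Illustrative only: publishing a domain event with the kafka-python client,
roughly the event-driven pattern referred to above. Broker address, topic
name, and payload are hypothetical placeholders."""
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                        # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # serialize dicts to JSON bytes
)

# Each state change is emitted as an immutable event other services can consume.
producer.send("benefits.enrollment.events", {"type": "enrollment_created", "plan_id": "hdhp-2025"})
producer.flush()  # block until the broker has acknowledged the message
```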

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Senior DevOps Engineer will play a significant role in development, automation, and security across our diverse company portfolio. We presently use a number of systems such as Chef, GitLab, GitLab-CI, Terraform, and Kubernetes to automate the testing and delivery of both our Windows- and Linux-based hosted applications, with plans to consolidate on a standard set of tools leveraging GitLab-CI and Kubernetes where possible. The Senior DevOps Engineer will have an excellent understanding of full-stack application development, testing, and continuous integration, a passion for security, and extensive experience working in an agile software development environment. #India

What your impact will look like here:
Support the software development process by identifying inefficiencies, recommending solutions, automating delivery, and implementing testing
Mentor other Engineers
Technical leadership for medium to large projects
Escalation support for the Tier 2 SRE team in addition to internal DevOps needs
Provisioning and maintenance of systems, load balancer configs, firewall rules, databases, and other automation-driven infrastructure tasks
Automate CI/CD pipelines to build and deploy our software

You will love this job if you have:
5+ years of extensive experience with Elasticsearch and ELK stack implementations & observability (certification is an added advantage).
Good exposure to AI/ML.
Proficiency in CI/CD tools, infrastructure as code, cloud services, Docker, Kubernetes, Helm, artifact management, security scanning, and secrets management.
Strong understanding of CI/CD pipelines and infrastructure automation.
Demonstrated ability to enforce security compliance and best practices.
Excellent problem-solving skills and a collaborative mindset.
Proven track record of mentoring team members and driving continuous improvement initiatives.

The Team
We are a globally distributed workforce across the United States, Canada, United Kingdom, India, Armenia, Australia, and New Zealand.

The Culture
At Granicus, we are building a transparent, inclusive, and safe space for everyone who wants to be a part of our journey. A few culture highlights include –
- Employee Resource Groups to encourage diverse voices
- Coffee with Mark sessions – Our employees get to interact with our CEO on very important and sometimes difficult issues ranging from mental health to work-life balance and current affairs.
- Embracing diversity & fostering a culture of ideation, collaboration & meritocracy
- We bring in special guests from time to time to discuss issues that impact our employee population

The Company
Serving the People Who Serve the People
Granicus is driven by the excitement of building, implementing, and maintaining technology that is transforming the Govtech industry by bringing governments and their constituents together. We are on a mission to support our customers with meeting the needs of their communities and implementing our technology in ways that are equitable and inclusive. Granicus has consistently appeared on the GovTech 100 list over the past 5 years and has been recognized as one of the best companies to work for on BuiltIn. Over the last 25 years, we have served 5,500 federal, state, and local government agencies, and more than 300 million citizen subscribers power an unmatched Subscriber Network that uses our digital solutions to make the world a better place.
With comprehensive cloud-based solutions for communications, government website design, meeting and agenda management software, records management, and digital services, Granicus empowers stronger relationships between government and residents across the U.S., U.K., Australia, New Zealand, and Canada. By simplifying interactions with residents while disseminating critical information, Granicus brings governments closer to the people they serve—driving meaningful change for communities around the globe. Want to know more? See more of what we do here.

The Impact
We are proud to serve dynamic organizations around the globe that use our digital solutions to make the world a better place — quite literally. We have so many powerful success stories that illustrate how our solutions are impacting the world. See more of our impact here.

The Process
- Assessment – Take a quick assessment.
- Phone screen – Speak to one of our talented recruiters to ensure this could be a fit.
- Hiring Manager/Panel interview – Talk to the hiring manager so they can learn more about you and you about Granicus. Meet more members of the team! Learn more and share more.
- Reference checks – Provide 2 references so we can hear about your awesomeness.
- Verbal offer – Let's talk numbers, benefits, culture and answer any questions.
- Written offer – Sign a formal letter and get excited because we sure are!

Benefits at Granicus India
Along with the challenges of the job, Granicus offers employees an attractive benefits package which includes –
- Hospitalization Insurance Policy covering employees and their family members including parents
- All employees are covered under Personal Accident Insurance & Term Life Insurance policy
- All employees can avail of an annual health check facility
- Eligible for reimbursement of telephone and internet expenses
- Wellness Allowance to avail health club memberships and/or access to physical fitness centres
- Wellbeing Wednesdays, which include 1x global Unplug Day and 2x No Meeting Days every quarter
- Memberships for meditation and mindfulness apps, including on-demand mental health support 24/7
- Access to learning management system Say., LinkedIn Learning Premium account membership & many more
- Access to Rewards & recognition portal and quarterly recognition program

Security and Privacy Requirements
- Responsible for Granicus information security by appropriately preserving the Confidentiality, Integrity, and Availability (CIA) of Granicus information assets in accordance with the company's information security program.
- Responsible for ensuring the data privacy of our employees and customers and their data, as well as taking all required privacy training in a timely manner, in accordance with company policies.

Granicus is committed to providing equal employment opportunities. All qualified applicants and employees will be considered for employment and advancement without regard to race, color, religion, creed, national origin, ancestry, sex, gender, gender identity, gender expression, physical or mental disability, age, genetic information, sexual or affectional orientation, marital status, status regarding public assistance, familial status, military or veteran status or any other status protected by applicable law.
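As a purely illustrative sketch of the Elasticsearch/observability work this role calls for (not part of the posting), a tiny Python probe against the standard _cluster/health API; the endpoint URL and exit-code convention are hypothetical choices.

```python
"""Illustrative only: a tiny health probe against Elasticsearch's standard
_cluster/health API, the sort of check a DevOps/SRE automation might run.
The endpoint URL and exit-code convention are hypothetical."""
import requests

ES_URL = "http://localhost:9200"  # hypothetical cluster endpoint


def cluster_is_healthy() -> bool:
    resp = requests.get(f"{ES_URL}/_cluster/health", timeout=5)
    resp.raise_for_status()
    health = resp.json()
    # "green" = all shards allocated; "yellow" = replicas missing; "red" = primaries missing.
    print(f"status={health['status']} nodes={health['number_of_nodes']} "
          f"unassigned_shards={health['unassigned_shards']}")
    return health["status"] in ("green", "yellow")


if __name__ == "__main__":
    raise SystemExit(0 if cluster_is_healthy() else 1)
```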

Posted 2 weeks ago

Apply

3.0 - 5.0 years

3 - 9 Lacs

Cochin

Remote

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and mentor junior engineers.

Your key responsibilities
Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
Mentor junior team members and contribute to continuous process improvements.

Skills and attributes for success
Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
Familiarity with scripting languages such as Bash and Python.
Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
Container orchestration and management using Kubernetes, Helm, and Docker.
Experience with configuration management and automation tools such as Ansible.
Strong understanding of cloud security best practices, IAM policies, and compliance standards.
Experience with ITSM tools like ServiceNow for incident and change management.
Strong documentation and communication skills.

To qualify for the role, you must have
3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
Hands-on expertise in AWS and Azure environments.
Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
Experience in a 24x7 rotational support model.
Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must-haves
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: relevant AWS and Azure certifications

Good to have
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What we look for
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
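Purely as an illustration of the CloudWatch log-triage duty listed above (not part of the posting), a minimal boto3 sketch that pulls recent error events from a log group; the region, log group name, and filter pattern are hypothetical placeholders.

```python
"""Illustrative only: pulling recent error events from a CloudWatch log group
with boto3, the kind of quick triage step this role describes. The region,
log group name, and filter pattern are hypothetical placeholders."""
import time

import boto3

logs = boto3.client("logs", region_name="ap-south-1")


def recent_errors(log_group: str, minutes: int = 15):
    start = int((time.time() - minutes * 60) * 1000)  # CloudWatch expects epoch milliseconds
    paginator = logs.get_paginator("filter_log_events")
    for page in paginator.paginate(
        logGroupName=log_group,
        startTime=start,
        filterPattern="ERROR",  # simple text filter; structured JSON filters are also supported
    ):
        for event in page["events"]:
            yield event["timestamp"], event["message"]


if __name__ == "__main__":
    for ts, msg in recent_errors("/eks/payments-api/application"):  # hypothetical log group
        print(ts, msg.strip())
```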

Posted 2 weeks ago

Apply

4.0 years

8 - 10 Lacs

Hyderābād

On-site

About Providence
Providence, one of the US's largest not-for-profit healthcare systems, is committed to high quality, compassionate healthcare for all. Driven by the belief that health is a human right and the vision, 'Health for a better world', Providence and its 121,000 caregivers strive to provide everyone access to affordable quality care and services. Providence has a network of 51 hospitals, 1,000+ care clinics, senior services, supportive housing, and other health and educational services in the US.
Providence India is bringing to fruition the transformational shift of the healthcare ecosystem to Health 2.0. The India center will have focused efforts around healthcare technology and innovation, and play a vital role in driving digital transformation of health systems for improved patient outcomes and experiences, caregiver efficiency, and running the business of Providence at scale.

Why Us?
Best In-class Benefits
Inclusive Leadership
Reimagining Healthcare
Competitive Pay
Supportive Reporting Relation

Cybersecurity at Providence is responsible for appropriately protecting all information relating to its caregivers and affiliates, as well as protecting its confidential business information (including information relating to its caregivers, affiliates, and patients).

What will you be responsible for?
Drive automation with Providence Enterprise security tools and services to bring process efficiency and improvements to cyber security teams.
Drive security automation workflows and build automation that brings impact to everyday workflows in threat management, security incident response, and security operations teams.
Identify scope for automation that improves security best practices and implement process workflows that strengthen the overall security posture.
Participate in all security operations and engineering meetings, including design and implementation, and identify scope for automation wherever needed in the overall workflow.
Troubleshoot, debug, and optimize existing and new automation code/scripts, and stay ahead of cyber threats in healthcare and of the overall threat landscape and attack methods in the cybersecurity industry.

What would your work week look like?
Collaborate with cross-functional teams and engage in building process and tool automation opportunities in threat and cyber incident response.
Constantly look for healthcare-oriented threats and risks and build automation workflows using enterprise tools for alerting and response.
Work in the XSOAR automation tool to create new, or review/optimize existing, automation workflows.
Identify and implement SOAR automation use cases that align with industry-standard frameworks such as NIST and CIS, and with Providence information security policies.
Set up regular meetings with stakeholders to show the progress of SOAR automation use cases and the use cases implemented, with applicable metrics.
Clearly communicate the security automation roadmap, backlog, and team updates across the organization.

Who are we looking for?
Bachelor's degree in a related field, including computer science or cyber security, or an equivalent combination of education and experience.
4-8 years of relevant post-qualification experience, with at least 3 years of proven experience in building automation workflows using SOAR for security engineering and security operations functions.
Solid understanding of building or writing automation scripts using Python, PowerShell, or any other scripting language.
Hands-on experience in any vendor SOAR automation tool (Palo Alto XSOAR preferred).
Solid understanding of building secure API integrations with industry-standard EDR, SIEM, firewall, and vulnerability management tools.
Good understanding of implementing automation best practices and workflows, e.g., secure key management and rotation, efficient resource handling, etc.
Understanding of AI and Large Language Models (LLMs) and the ability to leverage them to build security automation workflows.
Familiarity with cloud-native solutions, application containerization and container orchestration (Docker, Kubernetes), Infrastructure as Code (IaC), Helm charts, and YAML template configuration.
Scripting or programming understanding of shell scripting, PowerShell, and the KQL and CQL query languages is desirable.

Providence's vision to create 'Health for a Better World' aids us to provide a fair and equitable workplace for all in our employment, whether temporary, part-time or full time, and to promote individuality and diversity of thought and background, and acknowledge its role in the organization's success. This makes us committed towards equal employment opportunities, regardless of race, religion or belief, color, ancestry, disability, marital status, gender, sexual orientation, age, nationality, ethnic origin, pregnancy, or related needs, mental or sensory disability, HIV status, or any other category protected by applicable law. In furtherance of our mission of building a more inclusive and equitable environment, we shall, from time to time, undertake programs to assist, uplift and empower underrepresented groups including but not limited to Women, PWD (Persons with Disabilities), LGBTQ+ (Lesbian, Gay, Transgender, Bisexual or Queer), Veterans and others. We strive to address all forms of discrimination or harassment and provide a safe and confidential process to report any misconduct. Contact our Integrity hotline; also, read our Code of Conduct.
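As a purely illustrative sketch of the secure API integrations this role describes (not part of the posting, and not any specific vendor's API), a generic indicator-enrichment step such as a SOAR playbook task might run; the base URL, endpoint path, and token environment variable are hypothetical.

```python
"""Illustrative only: a generic indicator-enrichment step of the kind a SOAR
playbook task might run. The base URL, endpoint path, and token variable are
hypothetical and do not describe any particular vendor's API."""
import os

import requests

API_BASE = "https://siem.example.internal/api/v1"   # hypothetical SIEM endpoint
TOKEN = os.environ["SIEM_API_TOKEN"]                # keep secrets out of code


def enrich_ip(ip: str) -> dict:
    # Pull recent sightings of an IP so an analyst (or the next playbook step)
    # can decide whether to escalate or auto-close the alert.
    resp = requests.get(
        f"{API_BASE}/indicators/ip/{ip}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    verdict = enrich_ip("203.0.113.10")  # TEST-NET address used as a placeholder
    print(verdict)
```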

Posted 2 weeks ago

Apply

15.0 years

5 - 8 Lacs

Hyderābād

On-site

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
Employee Experience Technology designs and delivers modern technology solutions for all teammates globally to interact, perform in their roles, and service critical staff support organizations including Chief Administrative Office, Global Strategy & Enterprise Platforms, Global Human Resources, Corporate Audit & Credit Review, and Legal. Legal Technology enables modern practice of law through technology transformation and is responsible for delivering strategic technology solutions to the Legal Department and Office of the Corporate.

Job Description*
We are seeking a Senior DevOps Engineer with strong technical expertise in defining CI/CD strategy and implementation for large-scale applications. The ideal candidate has proven experience designing, building, and maintaining scalable CI/CD pipelines across multiple environments, and is adept at working across teams to standardize CI/CD best practices and drive adoption.

Responsibilities*
Lead SDLC pipeline design, orchestration, and automation of build, test, and deployment workflows across the portfolio.
Collaborate with developers, QA, and operations to streamline delivery processes and reduce deployment friction.
Understand the automated CI/CD process and help application teams onboard to enterprise tools.
Lead complex CI/CD integrations to ensure on-time delivery and adherence to release processes and risk management.
Perform root cause analysis for incidents and drive long-term resolutions through automation and process improvements.
Mentor junior engineers and lead CI/CD across multiple initiatives and teams.

Requirements*

Education*
Graduation / Post Graduation: BE/B.Tech/MCA
Certifications If Any: NA

Experience Range*
15+ Years

Foundational Skills*
8+ years of experience with any programming language (.NET, Java, or Python) and strong scripting skills in Groovy, Bash, and PowerShell.
Proficient in configuring Bitbucket, Artifactory, Jenkins, and CD configuration management and release management tools.
Sound knowledge of DevSecOps practices, secret management (HashiCorp Vault), and integration of code scanning tools including Checkmarx and Sonar.
Experience with change management, release strategies, and zero-downtime deployments.
Good understanding of DNS, load balancing, and basic networking principles.
Excellent communication skills with strong stakeholder engagement, and the ability to collaborate with other teams.
Proven ability in leading and mentoring a team of DevOps engineers in a dynamic environment.

Desired Skills*
Experience in Cloud DevOps.
Experience with tools such as Terraform, Docker, Kubernetes, and Azure Key Vault.
Knowledge of Helm charts and managing deployments in Kubernetes.
Familiarity with observability tools including but not limited to Prometheus and Grafana.

Work Timings*
11:30 AM to 8:30 PM IST

Job Location*
Hyderabad
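Purely as an illustration of the Jenkins-centred release orchestration described above (not part of the posting), a minimal Python sketch that polls a job's last build over the standard Jenkins /api/json endpoint; the controller URL, credentials, and job name are hypothetical.

```python
"""Illustrative only: polling a Jenkins job's last build over the standard
/api/json endpoint, as a release-orchestration script might do before
promoting an artifact. URL, job name, and credentials are hypothetical."""
import time

import requests

JENKINS = "https://jenkins.example.internal"   # hypothetical controller URL
AUTH = ("svc-release", "api-token")            # hypothetical user / API token


def wait_for_build(job: str, poll_seconds: int = 15) -> str:
    url = f"{JENKINS}/job/{job}/lastBuild/api/json"
    while True:
        build = requests.get(url, auth=AUTH, timeout=10).json()
        if not build.get("building"):
            return build.get("result", "UNKNOWN")   # e.g. SUCCESS, FAILURE, ABORTED
        time.sleep(poll_seconds)


if __name__ == "__main__":
    result = wait_for_build("legal-portal-deploy")  # hypothetical job name
    print(f"last build finished with result: {result}")
    raise SystemExit(0 if result == "SUCCESS" else 1)
```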

Posted 2 weeks ago

Apply

7.0 years

3 - 6 Lacs

Hyderābād

On-site

Requirements:
Experience: 7+ years
Security Tools: Black Duck, Prisma Cloud, Qualys, Snyk, Coverity, SonarQube, Burp Suite (any one of these)
DevOps Stack: Jenkins, Kubernetes, Helm, Docker
Programming: Python, Shell, YAML, JSON (good to have)
Cloud Platforms: AWS, GCP (understanding of cloud basics)

Vulnerability Management:
Own the end-to-end vulnerability lifecycle for a given Business Unit consisting of multiple enterprise-level products (SaaS & on-prem).
Triage, track, correlate, and remediate vulnerabilities from tools like Black Duck, Prisma Cloud, Qualys, JFrog Xray, etc.

Security Automation:
Integrate security scanning tools into common tooling.
Develop dashboards and reports for compliance and leadership visibility.
Write high-level designs to automate manual work.

Collaboration & Governance:
Work cross-functionally with product teams and stakeholders.
Contribute to security policies, standards, and best practices.

Qualification: Bachelor's degree in Computer Science, Engineering, or a related field
Job Category: IT Support
Job Location: Hyderabad
Job Country: India
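Purely as an illustration of the correlation step in the vulnerability-management duties above (not part of the posting), a small Python sketch that groups findings from two scanner exports by CVE ID; the file names and record fields are hypothetical and do not reflect any vendor's real export format.

```python
"""Illustrative only: correlating findings from two scanner exports by CVE ID,
the kind of triage step described above. File names and record fields are
hypothetical and not any vendor's real export format."""
import json
from collections import defaultdict


def load_findings(path: str, source: str):
    with open(path) as fh:
        for record in json.load(fh):
            yield record["cve"], {"source": source,
                                  "asset": record["asset"],
                                  "severity": record["severity"]}


def correlate(paths: dict[str, str]) -> dict[str, list[dict]]:
    # Group every reported instance under its CVE so one ticket can cover the
    # same vulnerability seen by different tools on different assets.
    grouped: dict[str, list[dict]] = defaultdict(list)
    for source, path in paths.items():
        for cve, finding in load_findings(path, source):
            grouped[cve].append(finding)
    return grouped


if __name__ == "__main__":
    merged = correlate({"sca": "blackduck_export.json",   # hypothetical export files
                        "cloud": "prisma_export.json"})
    for cve, findings in sorted(merged.items()):
        tools = {f["source"] for f in findings}
        print(cve, f"{len(findings)} finding(s) across {len(tools)} tool(s)")
```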

Posted 2 weeks ago

Apply

0 years

3 - 6 Lacs

Panchkula

On-site

Responsibilities:
Design, implement, and manage automated CI/CD pipelines to facilitate efficient software delivery.
Collaborate with development, operations, and quality assurance teams to optimize software development and release processes.
Implement and maintain infrastructure as code (IaC) using tools such as Terraform, CloudFormation, or Ansible.
Deploy, configure, and maintain cloud-based infrastructure and services on platforms such as AWS, Azure, or GCP.
Monitor system performance, reliability, and security, and implement improvements as needed.
Troubleshoot production issues and coordinate with cross-functional teams to ensure timely resolution.
Implement and manage containerization and orchestration technologies such as Docker and Kubernetes.
Develop and maintain documentation for infrastructure, processes, and best practices.
Stay current with industry trends, best practices, and emerging technologies in DevOps and cloud computing.
Participate in on-call rotation and provide support for production systems as needed.

Qualifications and skills
Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience).
Proven experience as a DevOps Engineer or in a similar role.
Hands-on experience with CI/CD tools such as Jenkins, GitLab CI/CD, or CircleCI.
Hands-on experience with AWS EKS, Kubernetes (k8s), Terraform, Argo/GitOps, and Helm charts.
Proficiency in scripting and automation using languages such as Python, Bash, or PowerShell.
Experience with version control systems such as Git.
Strong understanding of cloud computing concepts and experience with at least one major cloud provider (AWS, Azure, or GCP).
Familiarity with containerization and orchestration technologies such as Docker and Kubernetes.
Experience with infrastructure as code (IaC) tools such as Terraform, CloudFormation, or Ansible.
Knowledge of networking concepts and protocols.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.

Preferred Qualifications:
Certification in relevant technologies such as AWS Certified DevOps Engineer, Certified Kubernetes Administrator (CKA), or similar.
Experience with monitoring and logging tools such as Prometheus, Grafana, ELK stack, or Splunk.
Familiarity with security best practices and tools for securing cloud environments.
Experience with serverless computing and microservices architecture.
Knowledge of Agile methodologies and DevOps practices.
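Purely as an illustration of the Kubernetes/EKS deployment verification this role involves (not part of the posting), a minimal sketch using the official kubernetes Python client to check whether a Deployment has fully rolled out; the workload name and namespace are hypothetical.

```python
"""Illustrative only: checking whether a Deployment has fully rolled out with
the official kubernetes Python client, the kind of post-deploy verification a
CI/CD pipeline step might run. Workload name and namespace are hypothetical."""
from kubernetes import client, config


def rollout_complete(name: str, namespace: str) -> bool:
    config.load_kube_config()           # or config.load_incluster_config() when run inside a pod
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name, namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    print(f"{namespace}/{name}: {ready}/{desired} replicas ready")
    return desired > 0 and ready == desired


if __name__ == "__main__":
    ok = rollout_complete("web-frontend", "staging")   # hypothetical workload
    raise SystemExit(0 if ok else 1)
```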

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
