
944 GitOps Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 15.0 years

5 - 15 Lacs

Bengaluru, Karnataka, India

On-site

As an Engineering Manager, you will: Manage the professional development of a team of up to 12 software engineers. Hire, grow, and mentor high-performing teams. Provide leadership in technical design and architecture; determine which services we should use and how they should be implemented. Mentor engineers inside and outside the team. Voice your opinion on technical decisions and build consensus. Model participation in a Scrum team and embrace the agile work model. Promote software engineering best practices. Be a go-to person for software engineers on your team and other teams, being willing to tackle the hard problems. Be a hands-on developer, implementing POCs and solutions. Help unblock technical issues and provide general direction to the team. Participate in all aspects of the development lifecycle: Ideation, Design, Build, Test and Operate. We embrace a DevOps culture (you build it, you run it); while we have dedicated 24x7 level-1 support engineers, you will be called on to assist with level-2 support. Work primarily with Java, NodeJS, React, .NET, AWS and Azure public cloud, identity platforms (Auth0, Ping Identity, Microsoft Azure AD B2C) and identity standards (OAuth, OIDC, SAML, SCIM, etc.). Collaborate with engineering managers, architects, scrum masters, software engineers, DevOps engineers, product managers and project managers to deliver phenomenal software. Demonstrate and model transparency and collaboration with teams across the company. Keep up to date with emerging cloud technology trends, especially in Identity & Access Management.

About You: You're a fit for the role of Engineering Manager if you have: 10+ years of experience in Software Engineering with Java, .NET, or similar languages. Experience developing REST APIs. Experience developing cloud-native applications and services on Azure, AWS or GCP. Excellent problem-solving skills, with the ability to identify and resolve complex technical issues. Strong written and verbal communication skills, with the ability to communicate technical concepts to non-technical stakeholders. Bachelor's degree in Computer Science, Software Engineering or a related field. 3-4+ years of experience leading a team of engineers and giving them direction on implementation and best practices. Experience with a major Identity Provider such as ForgeRock, Ping, Okta, or Auth0 for Workforce or Customer Identity and Access Management, and related experience with the OAuth2, OIDC, SAML and SCIM standards. Experience with automation and CI/CD tools using CloudFormation, Terraform or GitOps.
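For context on the OIDC/JWT standards this role lists, here is a minimal Python sketch of validating an OIDC access token with the PyJWT library; the issuer and audience values are placeholders, not this employer's actual configuration:

    import jwt  # PyJWT
    from jwt import PyJWKClient

    # Hypothetical issuer and audience; real values come from your IdP tenant.
    ISSUER = "https://example-tenant.auth0.com/"
    AUDIENCE = "https://api.example.com"
    jwks_client = PyJWKClient(ISSUER + ".well-known/jwks.json")

    def validate_token(token: str) -> dict:
        # Fetch the signing key matching the token's "kid" header from the JWKS endpoint.
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        # Verify signature, expiry, issuer, and audience in one call.
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )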

Posted 3 days ago

Apply

2.0 - 6.0 years

5 - 15 Lacs

Bengaluru, Karnataka, India

On-site

In this opportunity as Associate Identity Architect, you will work with: AWS and Azure public cloud; identity platforms (Auth0, Ping Identity, Microsoft Azure AD B2C, etc.) and identity standards (OAuth, OIDC, SAML, SCIM, etc.); Java and/or .NET Core. Create and oversee the architectural design & principles for CIAM. When and where appropriate, be hands-on in terms of implementing POCs and solutions. Mentor other engineers inside and outside the team. Establish best practices; provide tooling that makes compliance frictionless. Communicate and convey the CIAM architecture to outside architects. Demonstrate and model transparency and collaboration with teams across the company. Provide technical leadership in CIAM across TR. Keep up to date with emerging cloud technology trends, especially in Identity & Access Management.

About You: You're a fit for the role of Associate Architect if you have these basic qualifications: 8+ years of experience in Software Engineering with Java, .NET, or similar languages. Experience developing REST APIs. Experience developing cloud-native applications and services on Azure, AWS or GCP. Excellent problem-solving skills, with the ability to identify and resolve complex technical issues. Strong written and verbal communication skills, with the ability to communicate technical concepts to non-technical stakeholders. Bachelor's degree in Computer Science, Software Engineering or a related field. Experience with a major Identity Provider such as ForgeRock, Ping, Okta, or Auth0 for Workforce or Customer Identity and Access Management, and related experience with the OAuth2, OIDC, SAML and SCIM standards. Experience with automation and CI/CD tools using CloudFormation, Terraform or GitOps.
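As a quick illustration of the OAuth2 machine-to-machine pattern common in CIAM work, a minimal client-credentials token request in Python; the token endpoint and credentials here are hypothetical:

    import requests

    # Hypothetical token endpoint; real values come from the IdP client registration.
    TOKEN_URL = "https://idp.example.com/oauth2/token"

    def get_machine_token(client_id: str, client_secret: str, scope: str) -> str:
        # Standard OAuth2 client-credentials grant (RFC 6749, section 4.4).
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials", "scope": scope},
            auth=(client_id, client_secret),  # HTTP Basic client authentication
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]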

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

At Franklin Templeton, the primary focus is on delivering better client outcomes through close collaboration with clients, understanding their strategic needs, and providing innovative solutions. With over 9,500 employees in 34 countries, we are dedicated to servicing investment solutions for clients in more than 160 countries. Our success over the past 70 years is attributed to the talent, skills, and dedication of our people. We are currently seeking qualified candidates to join our team.

The Cloud Engineering team in the FTT organization is responsible for offering Public Cloud platforms (AWS and Azure) to enable Franklin Templeton developers to accelerate time-to-market, drive innovation, and simplify integrations. We prioritize a high-quality engineering culture to design platforms with a customer-centric approach, scalability for large enterprises, and resilience to support market innovation. As a Senior Engineer at Franklin Templeton Technology, you will play a crucial role in developing and implementing the multi-cloud strategy. We are looking for individuals who are passionate about cloud technologies and leveraging them to address complex business challenges for our customers and Application Development teams. If you thrive in a collaborative environment, possess strong technical skills, and enjoy tackling significant challenges, we believe you will find our team rewarding to work with.

Key Responsibilities of a Senior Engineer include:
- Designing cloud-based solutions aligned with business requirements, collaborating with application teams for cloud service deployment
- Providing expert guidance on Cloud design decisions, standards, and operational practices
- Engaging in the Cloud Center of Excellence to establish and enforce best practices in cloud platform engineering, operations, application development, and governance
- Selecting and implementing new cloud services and tools to progress the cloud roadmap
- Maintaining blueprints and reference implementations of cloud products
- Collaborating with Information Security teams to incorporate secure app patterns into Cloud platforms
- Offering guidance on cloud platforms to teams dealing with high application complexity, escalating risks and issues when necessary
- Facilitating discussions across key stakeholders to address challenges

Qualifications and Experience:
- Bachelor's degree in computer science or related field
- 10+ years of expertise in leading Cloud platforms such as AWS and Azure
- Proficiency in delivering large-scale distributed enterprise platforms focusing on performance, scale, security, reliability, and cost optimization
- Experience in DevOps and GitOps models with technologies like Terraform
- Familiarity with on-premises Private Cloud and Public Cloud platforms, including Azure and AWS
- Proficiency in native CSP orchestration stacks and container-native technologies like Kubernetes
- Experience with cloud-native logging, monitoring, and operations tools such as Datadog and Prometheus
- Expertise in areas like Cloud IAM, network and security design, cloud-native Kubernetes services, and configuration management and automation tools

This role is at the Individual Contributor level, with work shift timings from 2:00 PM to 11:00 PM IST.

Posted 3 days ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Machine Learning Engineer at Aera Technology, you will play a crucial role in designing, implementing, monitoring, and maintaining the machine learning systems that power our Aera Decision Cloud platform. Your responsibilities will involve working on new and challenging engineering problems to operationalize data science, collaborating closely with experts in Data Science, Engineering, and DevOps teams, and designing state-of-the-art Machine Learning approaches.

Your main tasks will include building core machine learning infrastructure, such as distributed systems, development tools, model serving, and inference pipelines. Additionally, you will take end-to-end ownership of developing Machine Learning systems, from data pipelines to training, hosting, and inference pipelines. It is essential to have strong programming skills in languages like C++, Go, Java, and Python, along with experience in working with large data sets and pipelines using frameworks like Dask and Ray.

Ideally, you should possess a B.E./B.Tech, M.E./M.Tech, or M.S. in Computer Science or Computer Engineering, along with 3-8 years of experience in software engineering and architecture. Having at least 2 years of experience in designing and deploying Machine Learning systems and familiarity with libraries like scikit-learn, pandas, PyTorch, TensorFlow, and Keras will be beneficial for this role. Experience in building containerized applications using Docker and Kubernetes, Agile methodology, GitOps, and Jira is also desired.

In our dynamic environment, hosted on AWS/Azure/GCP with Kubernetes infrastructure, you will have the opportunity to contribute to the transformation of decision-making processes for enterprises worldwide. If you are passionate about creating a sustainable, intelligent, and efficient world, Aera Technology is the right place for you. Join our diverse teams across different global locations and be a part of our journey to revolutionize decision intelligence.

At Aera Technology, we value our team members and offer a range of benefits to support them and their families at various life stages. In addition to a competitive salary and company stock options, we provide comprehensive medical insurance, term insurance, accidental insurance, paid time off, maternity leave, and more. We prioritize professional and personal development by offering unlimited access to online courses and people manager development programs. Our flexible working environment promotes a healthy work-life balance, and our office facilities include a fully stocked kitchen with snacks and beverages to keep you refreshed throughout the day.
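To illustrate the kind of distributed pipeline work this role mentions, a minimal Ray sketch that fans feature computation out across workers; the transformation itself is a placeholder, not Aera's actual pipeline:

    import ray

    ray.init()  # starts a local Ray runtime; in production this would attach to a cluster

    @ray.remote
    def build_features(partition):
        # Placeholder transformation; a real pipeline would clean, join, and encode here.
        return [row * 2 for row in partition]

    partitions = [[1, 2], [3, 4], [5, 6]]
    futures = [build_features.remote(p) for p in partitions]
    print(ray.get(futures))  # [[2, 4], [6, 8], [10, 12]]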

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role
We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments, both on AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities
Infrastructure Setup & Management: Design and implement cloud-native architectures on AWS and be able to manage on-premise open source environments when required. Automate infrastructure provisioning using tools like Terraform or CloudFormation. Maintain scalable environments for dev, staging, and production.
CI/CD & Release Management: Build and maintain CI/CD pipelines for backend, frontend, and AI workloads. Enable automated testing, security scanning, and artifact deployments. Manage configuration and secret management across environments.
Containerization & Orchestration: Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s). Implement service mesh, auto-scaling, and rolling updates.
Monitoring, Security, and Reliability: Implement observability (logging, metrics, tracing) using open source or cloud tools. Ensure security best practices across infrastructure, pipelines, and deployed services. Troubleshoot incidents, manage disaster recovery, and support high availability.
Model DevOps / MLOps: Set up pipelines for AI/ML model deployment and monitoring (LLMOps). Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications
Cloud & Infra: Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc. Ability to set up and manage on-premise or hybrid environments using open source tools.
DevOps & Automation: Hands-on experience with Terraform/CloudFormation. Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.
Containerization & Orchestration: Expertise with Docker and Kubernetes (EKS or self-hosted). Familiarity with Helm charts and service mesh (Istio/Linkerd).
Monitoring / Observability Tools: Experience with Prometheus, Grafana, ELK/EFK stack, CloudWatch. Knowledge of distributed tracing tools like Jaeger or OpenTelemetry.
Security & Compliance: Understanding of cloud security best practices. Familiarity with tools like Vault, AWS Secrets Manager.
Model DevOps / MLOps Tools (Preferred): Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B). Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
Knowledge of cost optimization for cloud and hybrid infrastructures. Exposure to infrastructure as code (IaC) best practices and GitOps workflows. Familiarity with serverless and event-driven architectures.

Education
Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience).

What We Offer
Opportunity to work on modern cloud-native systems and AI-powered platforms. Exposure to hybrid environments (AWS and open source on-prem). Competitive salary, benefits, and growth-oriented culture.
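As one example of the secret-management responsibility described above, a small boto3 sketch that reads a JSON secret from AWS Secrets Manager; the secret name is hypothetical and AWS credentials/region are assumed to be configured:

    import json
    import boto3

    def load_db_config(secret_id: str = "prod/app/db") -> dict:
        # "prod/app/db" is an assumed secret name for illustration only.
        client = boto3.client("secretsmanager")
        # SecretString holds the JSON payload stored for this secret.
        resp = client.get_secret_value(SecretId=secret_id)
        return json.loads(resp["SecretString"])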

Posted 4 days ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Position Title: SRE Engineer Position Type: Regular - Full-Time Position Location: New Delhi Requisition ID: 30491

Job Purpose
Reporting to the Sr Manager, DevSecOps & SRE, the Site Reliability Engineer will be responsible for improving system reliability and resilience to make it faster and easier to develop and deploy new software capabilities. SREs focus especially on building automation to reduce manual effort and prevent operations incidents.

Job Responsibilities
Work with stakeholders such as product owners and Engineering to define service level objectives (SLOs) for system operations. Track performance against SLOs in partnership with monitoring teams or other stakeholders, and ensure systems continue to meet SLOs over time. Create dashboards and reports to communicate key metrics. Create software to improve performance, scalability, and stability of systems. Collaborate with development teams to promote the concept of reliability engineering during all phases of the software development lifecycle to detect and correct performance issues and meet availability goals. Design, code, test, and deliver infrastructure software to automate manual operational work (i.e., "toil"). Participate in operational support and on-call rotation shifts for supported systems and products. Conduct blameless post mortems to troubleshoot priority incidents. Perform analytics on previous incidents to understand root causes and better predict and prevent future issues. Use automation to reduce the probability and/or impact of problem recurrence. Identify, evaluate, and recommend monitoring tools and diagnostic techniques to improve system observability. Participate in system design consulting, platform management, capacity planning and launch reviews. Collaborate and share lessons learned regarding performance and reliability issues with all stakeholders including developers, other SREs, operations teams, and project management teams. Participate in communities of practice to share knowledge and foster continuous improvement. Remain current on site reliability engineering methods and trends such as observability-driven development and chaos engineering. Drive continuous improvement in software quality and infrastructure reliability and resilience. Oversee, design, implement, and manage DevOps capabilities using continuous integration/continuous delivery toolsets and automation. The SRE engineer will focus on Application Performance Monitoring (APM), including design, solutioning, POCs, and profiling and tuning of application compute and data nodes and resources.

Some key duties of this role are: Assist in defining SRE and observability architecture and design. Analyze and implement new features of the SRE and observability platform. Full-stack monitoring across all layers (infrastructure, network, database, application, services, third party). Provide technical hands-on leadership in open-source and commercial monitoring tool selection and implementation. Implement SRE-driven automation: incident detection -> automated engagement -> triage/mitigation -> RCA/postmortems -> problem task remediation. AI-driven correlation, de-duplication, noise reduction, and auto-remediation. Provide weekly monitoring and alert analysis and continuous improvement. Create a model of the run-time environment (discovery). Profile the performance and behavior of user-defined transactions. Establish performance metrics for each of the applications'/systems' technical components (web server, app server, database, etc.). APM tool administration and support. Monitoring tool design and implementation. APM setup/usage policies and guidelines. Capacity planning and monitoring. Monitor selected application performance. Report vital statistics of application performance in production. Make recommendations for improvements with the Service Desk. Make recommendations for adjustments to runtime resources to improve the overall performance profile.

Key Qualifications & Experience
Strong problem solving and analytical skills. Strong interpersonal and written and verbal communication skills. Highly adaptable to changing circumstances. Interest in continuously learning new skills and technologies. Experience with programming and scripting languages (e.g. Java, C#, C++, Python, Bash, PowerShell). Experience with incident and response management. Experience with Agile and DevOps development methodologies. Experience with container technologies and supporting tools (e.g. Docker Swarm, Podman, Kubernetes, Mesos). Experience working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform). Experience with monitoring and observability tools (e.g. Splunk, CloudWatch, AppDynamics, New Relic, ELK, Prometheus, OpenTelemetry). Experience with configuration management systems (e.g. Puppet, Ansible, Chef, Salt, Terraform). Experience working with continuous integration/continuous deployment tools (e.g. Git, TeamCity, Jenkins, Artifactory). Experience in GitOps-based automation is a plus. Bachelor's degree (or equivalent years of experience). 5+ years of relevant work experience. SRE experience preferred. Background in manufacturing or platform/tech companies is preferred. Must have public cloud provider certifications (Azure, GCP or AWS). Having a CNCF certification is a plus.

Other Information
Travel: as required. Job is primarily performed in a hybrid office environment. McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, antiracist, inclusive, merit-based, and equitable workplace. As a global family-owned company we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger. McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to meet your needs. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with the Global Employee Privacy Policy. Job Family: Information Technology Division: Global Digital Technology Department: I and O Project Delivery Location(s): IN - India : National Capital Territory : New Delhi Company: McCain Foods(India) P Ltd
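To make the SLO-tracking duty above concrete, a worked error-budget calculation in Python for an assumed 99.9% availability target over a 30-day window; the downtime figure is illustrative:

    # Worked example: a 99.9% availability SLO over a 30-day window.
    slo_target = 0.999
    window_minutes = 30 * 24 * 60                     # 43,200 minutes in the window
    error_budget = (1 - slo_target) * window_minutes  # 43.2 minutes of allowed downtime

    downtime_so_far = 12.0                            # observed downtime in minutes (assumed)
    budget_remaining = error_budget - downtime_so_far
    burn_rate = downtime_so_far / error_budget
    print(f"Budget remaining: {budget_remaining:.1f} min, burn rate: {burn_rate:.0%}")
    # Budget remaining: 31.2 min, burn rate: 28%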

Posted 4 days ago

Apply

5.0 years

0 Lacs

Delhi, India

Remote

Position Title: Infrastructure Solution Architect Position Type: Regular - Full-Time Position Location: New Delhi Requisition ID: 32004 Job Purpose As a Cloud Infrastructure Solution Architect, you'll drive the success of our IT Architecture program through your design expertise and consultative approach. You'll collaborate with stakeholders to understand their technical requirements, designing and documenting tailored solutions. Your blend of architecture and operations experience will enable you to accurately size work efforts and determine the necessary skills and resources for projects. Strong communication, time management, and process skills are essential for success in this role. You should have deep experience in defining Infrastructure solutions: Design, Architecture and Solution Building blocks. Role Overview The cloud infrastructure architect role helps teams (such as product teams, platform teams and application teams) successfully adopt cloud infrastructure and platform services. It is heavily involved in design and implementation activities that result in new or improved cloud-related capabilities, and it brings skills and expertise to such areas as cloud technical architecture (for a workload’s use of infrastructure as a service [IaaS] and platform as a service [PaaS] components); automating cloud management tasks, provisioning and configuration management; and other aspects involved in preparing and optimizing cloud solutions. Successful outcomes are likely to embrace infrastructure-as-code (IaC), DevOps and Agile ways of working and associated automation approaches, all underpinned by the cloud infrastructure engineer’s solid understanding of networking and security in the cloud. The nature of the work involved means that the cloud infrastructure engineer will directly engage with customer teams, but will also work on cloud infrastructure platform capabilities that span multiple teams. The cloud infrastructure architect collaborates closely with other architects, product/platform teams, software developers, Cloud Engineers, site reliability engineers (SREs), security, and network specialists, as well as other roles, particularly those in the infrastructure and operations. Being an approachable team-player is therefore crucial for success, and willingness to lead initiatives is important too. The cloud infrastructure engineer also supports colleagues with complex (escalated) operational concerns in areas such as deployment activities, event management, incident and problem management, availability, capacity and service-level management, as well as service continuity. The cloud infrastructure architect is expected to demonstrate strong attention to detail and a customer-centric mindset. Inquisitiveness, determination, creativity, communicative and collaboration skills are important qualities too. Key Responsibilities Provide expert knowledge on cloud infrastructure and platforms solutions architecture, to ensure our organization achieves its goals for cloud adoption. This involves translating cloud strategy and architecture into efficient, resilient, and secure technical implementations. 
Define cloud infrastructure landing zones, regional subscriptions, and Availability Zones to ensure HA, resiliency, and reliability of infrastructure and applications. Offer cloud-engineering thought leadership in areas such as defining specific cloud use cases, cloud service providers, and/or strategic tools and technologies. Support cloud strategy by working on new cloud solutions, including analysing requirements, supporting technical architecture activities, prototyping, design and development of infrastructure artifacts, testing, implementation, and the preparation for ongoing support. Work on cloud migration projects, including analyzing requirements and backlogs, identifying migration techniques, developing migration artifacts, executing processes, and ensuring preparations for ongoing support. Design, build, deliver, maintain and improve infrastructure solutions. This includes automation strategies such as IaC, configuration-as-code, policy-as-code, release orchestration and continuous integration/continuous delivery (CI/CD) pipelines, and collaborative ways of working (e.g., DevOps). Participate in change and release management processes, carrying out complex provisioning and configuration tasks manually, where needed. Research and prototype new tools and technologies to enhance cloud platform capabilities. Proactively identify innovative ways to reduce toil, and teach, coach or mentor others to improve cloud outcomes using automation. Improve reliability, scalability and efficiency by working with product engineers and site reliability engineers to ensure well-architected and thoughtfully operationalized cloud infrastructures. This includes assisting with nonfunctional requirements, such as data protection, high availability, disaster recovery, monitoring requirements and efficiency considerations in different environments. Provide subject matter expertise for all approved IaaS and PaaS services, respond promptly to escalated incidents and requests, and build reusable artifacts ready for deployment to cloud environments. Exert influence that lifts cloud engineering competency by participating in (and, where applicable, leading) organizational learning practices, such as communities of practice, dojos, hackathons and centers of excellence (COEs). Actively participate in mentoring. Practice continuous improvement and knowledge sharing (e.g., providing KB articles, training and white papers). Participate in planning and optimization activities, including capacity, reliability, cost management and performance engineering. Establish FinOps practices: cloud cost management, scale up/down, and environment creation/deletion based on consumption. Work closely with security specialists to design, implement and test security controls, and ensure engineering activities align to security configuration guidance. Establish logging, monitoring and observability solutions, including identification of requirements, design, implementation and operationalization. Optimize infrastructure integration in all scenarios: single cloud, multicloud and hybrid. Convey the pros and cons of cloud services and other cloud engineering topics to others at differing levels of cloud maturity and experience, and in different roles (e.g., developers and business technologists). Be forthcoming and open when the cloud is not the best solution. Work closely with third-party suppliers, both as an individual contributor and as a project lead, when required. Engage with vendor technical support as the customer lead role when appropriate.
Participate in/lead problem management activities, including post-mortem incident analysis, providing technical insight, documented findings, outcomes and recommendations as part of a root cause analysis. Support resilience activities, e.g., disaster recovery (DR) testing, performance testing and tabletop planning exercises. The role holder is also expected to: Ensure that activities are tracked and auditable by leveraging service enablement systems, logging activity in the relevant systems of record, and following change and release processes. Collaborate with peers from other teams, such as security, compliance, enterprise architecture, service governance, and IT finance to implement technical controls to support governance, as necessary. Work in accordance with the organization's published standards and ensure that services are delivered in compliance with policy. Promptly respond to requests for engineering assistance from technical customers as needed. Provide engineering support, present ideas and create best-practice guidance materials. Strive to meet service-level expectations. Foster ongoing, closer and repeatable engagement with customers to achieve better, scalable outcomes. Take ownership of personal development, working with line management to identify development opportunities. Work with limited guidance, independently and/or as part of a team on complex problems, potentially requiring close collaboration with remotely based employees and third-party providers. Follow standard operating procedures, propose improvements and develop new standard operating procedures to further industrialize our approach. Advocate for simplification and workflow optimization, and follow documentation standards.

Skills And Experience
Skills and experience in the following activities/working styles are essential: Collaboration with developers (and other roles, such as SREs and DevSecOps engineers) to plan, design, implement, operationalize and problem-solve workloads that leverage cloud infrastructure and platform services. Working in an infrastructure or application support team. Cloud migration project experience (data center to Cloud IaaS, cloud native, hybrid cloud). Securing cloud platforms and cloud workloads in collaboration with security teams. Familiarity or experience with DevOps/DevSecOps. Agile practices (such as Scrum/Sprints, Customer Journey Mapping, Kanban). Proposing new standards, addressing peer feedback and advocating for improvement. Understanding of software engineering principles (source control, versioning, code reviews, etc.). Working in an environment that complies with health and manufacturing regulations. Event-based architectures and associated infrastructure patterns. Experience working with specific technical teams (R&D teams, data and analytics teams, etc.).
Experience where immutable infrastructure approaches have been used. Implementing highly available systems, using multi-AZ and multi-region approaches.

Skills And Experience In The Following Technology Areas
Experience with Azure, GCP, AWS, and SAP cloud provider services (Azure and SAP preferred). Experience with these cloud provider services is preferred: infra, data, app, API and integration services. DevOps tooling such as CI/CD (e.g., Jenkins, Jira, Confluence, Azure DevOps/ADO, TeamCity, GitHub, GitLab). Infrastructure-as-code approaches, role-specific automation tools and associated programming languages (e.g., Ansible, ARM, Chef, CloudFormation, Pulumi, Puppet, Terraform, Salt, AWS CDK, Azure SDK). Orchestration tools (e.g., Morpheus Data, env0, Cloudify, Pliant, Quali, RackN, VRA, Crossplane, ArgoCD). Knowledge of software development frameworks/languages (e.g., Spring, Java, Go, PHP, Python). Container management (e.g., Docker, Rancher, Kubernetes, AKS, EKS, GKE, RHOS, VMware Tanzu). Virtualization platforms (e.g., VMware, Hyper-V). Operating systems (e.g., Windows and Linux, including scripting experience). Database technologies and caching (e.g., Postgres, MSSQL, NoSQL, Redis, CDN). Identity and access management (e.g., Active Directory/Azure AD, Group Policy, SSO, cloud RBAC and hierarchy and federation). Monitoring tools (e.g., AWS CloudWatch, Elastic Stack (Elasticsearch/Logstash/Kibana), Datadog, LogicMonitor, Splunk). Cloud networking (e.g., subnetting, route tables, security groups, VPC, VPC peering, NACLs, VPN, transit gateways, optimizing for egress costs). Cloud security (e.g., key management services, encryption, other core security services/controls the organization uses). Landing zone automation solutions (e.g., AWS Control Tower). Policy guardrails (e.g., policy-as-code approaches, cloud provider native policy tools, HashiCorp Sentinel, Open Policy Agent). Scalable architectures, including APIs, microservices and PaaS.
Analyzing cloud spending and optimizing resources (e.g., Apptio Cloudability, Flexera One, IBM Turbonomic, NetApp Spot, VMware CloudHealth). Implementing resilience (e.g., multi-AZ, multi-region, backup and recovery tools). Cloud provider frameworks (e.g., Well-Architected). Working with architecture tools and associated artifacts.

General skills, behaviors, competencies and experience required include: Strong communication skills (both written and verbal), including the ability to adapt style to a nontechnical audience. Ability to stay calm and focused under pressure. Collaborative working. Proactive and detail-oriented, strong analytical skills, and the ability to leverage a data-driven approach. Willing to share expertise and best practices, including mentoring and coaching others. Continuous learning mindset, keen to learn and explore new areas, not afraid of starting from a novice level. Ability to present solutions, defend criticism of ideas, and provide constructive peer reviews. Ability to build consensus, make decisions based on many variables and gain support for initiatives. Business acumen, preferably industry and domain-specific knowledge relevant to the enterprise and its business units. Deep understanding of current and emerging I&O and, in particular, cloud technologies and practices. Achieve compliance requirements by applying technical capabilities, processes and procedures as required.

Job Requirements: Education and Qualifications
Essential: Bachelor's or master's degree in computer science, information systems, a related field, or equivalent work experience. Ten or more years of related experience in similar roles. Must have worked on implementing cloud at enterprise scale.
Desirable: Cloud provider/hyperscaler certifications preferred.

Must Have Skills and Experience
Strong problem solving and analytical skills. Strong interpersonal and written and verbal communication skills. Highly adaptable to changing circumstances. Interest in continuously learning new skills and technologies. Experience with programming and scripting languages (e.g. Java, C#, C++, Python, Bash, PowerShell). Experience with incident and response management. Experience with Agile and DevOps development methodologies. Experience with container technologies and supporting tools (e.g. Docker Swarm, Podman, Kubernetes, Mesos). Experience working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform). Experience with monitoring and observability tools (e.g. Splunk, CloudWatch, AppDynamics, New Relic, ELK, Prometheus, OpenTelemetry). Experience with configuration management systems (e.g. Puppet, Ansible, Chef, Salt, Terraform). Experience working with continuous integration/continuous deployment tools (e.g. Git, TeamCity, Jenkins, Artifactory). Experience in GitOps-based automation is a plus.

Qualifications
Bachelor's degree (or equivalent years of experience). 5+ years of relevant work experience. SRE experience preferred. Background in manufacturing or platform/tech companies is preferred. Must have public cloud provider certifications (Azure, GCP or AWS). Having a CNCF certification is a plus.

McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, antiracist, inclusive, merit-based, and equitable workplace.
As a global family-owned company we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger. McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to meet your needs. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with the Global Employee Privacy Policy Job Family: Information Technology Division: Global Digital Technology Department: Infrastructure Architecture Location(s): IN - India : Haryana : Gurgaon Company: McCain Foods(India) P Ltd
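As a simple illustration of the policy-guardrail and FinOps cost-attribution themes in this role, a Python sketch that flags resources missing required billing tags; the tag policy and inventory are assumed for illustration, not this employer's actual standards:

    # Illustrative guardrail: flag resources missing the tags used for cost attribution.
    REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # assumed tag policy

    def missing_tags(resource: dict) -> set:
        return REQUIRED_TAGS - set(resource.get("tags", {}))

    inventory = [
        {"id": "vm-001", "tags": {"owner": "team-a", "cost-center": "cc-42", "environment": "prod"}},
        {"id": "vm-002", "tags": {"owner": "team-b"}},
    ]
    for res in inventory:
        gaps = missing_tags(res)
        if gaps:
            print(f"{res['id']} is non-compliant, missing: {sorted(gaps)}")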

Posted 4 days ago

Apply

7.0 years

0 Lacs

Thiruvananthapuram

On-site

Required Qualifications & Skills: 7+ years in DevOps, SRE, or Infrastructure Engineering. Strong expertise in Cloud (AWS/GCP/Azure) & Infrastructure-as-Code (Terraform, CloudFormation). Proficient in Docker & Kubernetes. Hands-on with CI/CD tools & scripting (Bash, Python, or Go). Strong knowledge of Linux, networking, and security best practices. Experience with monitoring & logging tools (ELK, Prometheus, Grafana). Familiarity with GitOps, Helm charts & automation. Job Types: Full-time, Permanent Application Question(s): How many years of experience do you have working with cloud platforms (AWS, GCP, or Azure)? Have you implemented monitoring/logging using Prometheus, Grafana, ELK Stack, or Datadog? Current monthly salary? Least expected monthly salary? How early can you join? Experience: DevOps: 5 years (Required) Azure: 5 years (Required) Kubernetes: 4 years (Required) Terraform: 4 years (Required) Location: Thiruvananthapuram, Kerala (Required) Work Location: In person Speak with the employer +91 9072049595

Posted 4 days ago

Apply

4.0 - 7.0 years

0 Lacs

Ahmedabad

On-site

Job Title: Certified DevOps Engineer – AWS (Urgent) / Microsoft / Oracle / Adobe / Cisco
Experience: 4 to 7 Years
Location: Ahmedabad
Company: GMIndia Pvt. Ltd.

About GMIndia Pvt. Ltd.: GMIndia Pvt. Ltd. is an innovation-driven IT solutions company delivering future-ready technology and automation services across industries. As we grow our DevOps capabilities, we are actively hiring certified DevOps Engineers to lead cloud and infrastructure transformation. Immediate priority is given to AWS-certified professionals.

Position Overview: We are seeking a results-oriented Certified DevOps Engineer with 4–7 years of experience and valid certifications in AWS, Microsoft Azure, Oracle Cloud, Adobe Experience Cloud, or Cisco DevNet. The role demands strong experience in cloud platforms, infrastructure automation, CI/CD pipelines, and containerized deployments.

Key Responsibilities: Design and implement CI/CD pipelines to accelerate development and delivery cycles. Automate infrastructure using tools like Terraform, Ansible, or CloudFormation. Deploy, manage, and monitor systems across AWS, Azure, Oracle, Cisco, or Adobe Cloud environments. Collaborate with cross-functional teams for seamless application delivery and integration. Manage containerized deployments using Docker and Kubernetes. Implement effective system monitoring, logging, backup, and disaster recovery strategies. Ensure infrastructure security and compliance with cloud best practices.

Required Skills & Certifications: 4 to 7 years of hands-on DevOps/Cloud experience. Mandatory: valid certification in one or more of the following (only certified candidates will be considered): AWS Certified DevOps Engineer / Solutions Architect (high priority); Microsoft Certified: Azure DevOps Engineer Expert; Oracle Cloud Infrastructure (OCI) Certified; Adobe Certified Expert (Experience Cloud / AEM); Cisco Certified DevNet Professional or Specialist. Proficient in scripting (Python, Bash, PowerShell). Strong knowledge of containerization (Docker) and orchestration (Kubernetes). Experience with monitoring tools like ELK Stack, Grafana, Prometheus, CloudWatch, etc. Familiar with DevSecOps practices and secure deployment standards.

Preferred Qualifications: Experience with hybrid or multi-cloud deployments. Familiarity with GitOps, serverless architecture, or edge computing. Exposure to agile development practices and tools (JIRA, Confluence).

Why Join GMIndia Pvt. Ltd.? Urgent opportunity for AWS-certified engineers – immediate onboarding. Work on challenging, cloud-native projects with cutting-edge technologies. Supportive and collaborative work culture. Competitive salary and long-term career growth opportunities.

Apply Now if you are a certified DevOps expert ready to take on cloud innovation challenges with a growing tech leader in Ahmedabad.

Job Type: Full-time
Benefits: Provident Fund
Work Location: In person

Posted 4 days ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
The Data Analytics Strategy platform and decision tool team is responsible for the data strategy of all of CSWT and for developing the platforms that support that strategy. The Data Science platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*
We're looking for a highly skilled Container Platform Engineer to architect, implement, and manage our cloud-agnostic Data Science and Analytical Platform. Leveraging OpenShift (or other Kubernetes distributions) as the core container orchestration layer, you'll build a scalable and secure infrastructure vital for ML workloads and shared services. This role is key to establishing a robust hybrid architecture, paving the way for seamless future migration to AWS, Azure, or GCP. This individual will work closely with data scientists, MLOps engineers, and platform teams to enable efficient model development, versioning, deployment, and monitoring within a multi-tenant environment.

Responsibilities*
Responsible for developing risk solutions to meet enterprise-wide regulatory requirements. Performs monitoring and management of large systems/platforms efficiently. Contributes to story refinement and definition of requirements. Participates in estimating work necessary to realize a story/requirement through the delivery lifecycle. Mentor team members, advocate best practices, and promote a culture of continuous improvement and innovation in engineering processes. Develop efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams. Propose/build a variety of efficient data pipelines to support ML model building & deployment. Propose/build automated deployment pipelines to enable a self-help continuous deployment process for the Data Science teams. Analyze, understand, execute and resolve issues in user scripts/models/code. Perform release and upgrade activities as required. Well versed in open-source technologies and aware of emerging third-party technologies & tools in the AI/ML space. Ability to firefight, propose fixes, and guide the team through day-to-day issues in production. Ability to train partner Data Science teams on frameworks and the platform. Flexible with time and shifts to support project requirements; this does not include any night shift. This position doesn't include any L1 or L2 (first line of support) responsibility.

Requirements*
Education*: Graduation / Post Graduation: BE/B.Tech/MCA/MTech. Certifications, if any: Azure, AWS, GCP, Databricks.
Experience Range*: 9+ Years

Foundational Skills*
Platform Design & Deployment: Design and deploy a comprehensive data science tech stack on OpenShift (or other Kubernetes distributions), including support for Jupyter notebooks, model training pipelines, inference services, and internal APIs. Cloud-Agnostic Architecture: Proven ability to build a cloud-agnostic container platform capable of seamless migration from on-prem OpenShift to cloud-native Kubernetes on AWS, Azure, or GCP. Container Platform Management: Expertise in configuring and managing multi-tenant namespaces, RBAC, network policies, and resource quotas within Kubernetes/OpenShift environments. API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar tools) for managing and securing API traffic, including JWT/OAuth2-based authentication. MLOps Toolchain Support: Experience deploying and maintaining critical MLOps toolchains such as MLflow, Kubeflow, model registries, and feature stores. CI/CD & GitOps: Strong integration experience with GitOps and CI/CD tools (e.g., ArgoCD, Jenkins, GitHub Actions) for automating ML model and infrastructure deployment workflows. Microservices Deployment: Ability to deploy and maintain containerized microservices using Python frameworks (FastAPI, Flask) or Node.js to serve ML APIs. Observability: Ensure comprehensive observability across platform components using industry-standard tools like Prometheus, Grafana, and EFK/ELK stacks. Infrastructure as Code (IaC): Proficiency in automating platform provisioning and configuration using Infrastructure as Code tools (Terraform, Ansible, or Helm). Policy & Governance: Expertise with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing robust governance policies.

Desired Skills*
Lead the design, development, and implementation of scalable, high-performance applications using Python/Java/Scala. Apply expertise in Machine Learning (ML) to build predictive models, enhance decision-making capabilities, and drive business insights. Collaborate with cross-functional teams to design, implement, and optimize cloud-based architectures on AWS and Azure. Work with large-scale distributed technologies like Apache Kafka, Apache Spark, and Apache Storm to ensure seamless data processing and messaging at scale. Provide expertise in Java multi-threading, concurrency, and other advanced Java concepts to ensure the development of high-performance, thread-safe, and optimized applications. Architect and build data lakes and data pipelines for large-scale data ingestion, processing, and analytics. Ensure integration of complex systems and applications across various platforms while adhering to best practices in coding, testing, and deployment.
Collaborate closely with stakeholders to understand business requirements and translate them into technical specifications. Manage technical risk and work on performance tuning, scalability, and optimization of systems. Provide leadership to junior team members, offering guidance and mentorship to help develop their technical skills. Effective communication, Strong stakeholder engagement skills, Proven ability in leading and mentoring a team of software engineers in a dynamic environment. Security Architecture: Understanding of zero-trust security architecture and secure API design patterns. Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server. Vector Databases: Familiarity with Vector databases (e.g., Redis, Qdrant) and embedding stores. Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata. Work Timings* 11:30 AM to 8:30 PM IST Job Location* Chennai
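To illustrate the microservices-deployment skill this posting describes (serving ML APIs with Python frameworks), a minimal FastAPI sketch; the model here is a stand-in for one loaded from a registry such as MLflow:

    from typing import List

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PredictRequest(BaseModel):
        features: List[float]

    def fake_model(features):
        # Stand-in scoring function; a real service would load a registered model.
        return sum(features) / len(features)

    @app.post("/predict")
    def predict(req: PredictRequest):
        # Pydantic validates the request body before this handler runs.
        return {"score": fake_model(req.features)}

    # Run locally with: uvicorn app:app --host 0.0.0.0 --port 8080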

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Job Description
WHAT YOU'LL DO
We are looking for talented Senior Software Engineers with strong experience in .NET Core, Azure cloud services, and React.js to build modern, scalable, and high-performance enterprise applications. You will work on cloud-native solutions, microservices, and full-stack development, while driving secure DevOps practices, Infrastructure as Code (IaC), and intelligent automation. As a Senior Software Engineer, you will be responsible for designing, developing, and maintaining high-quality software solutions that meet the needs of our organization and clients. You will collaborate with cross-functional teams to analyze requirements, develop efficient code, perform rigorous testing, and deliver robust software products. The ideal candidate has a strong background in software development, a passion for technology, and a drive for continuous learning and improvement. At H&M, one of our core principles is "We are One Team." You will be part of a global team working alongside colleagues in Sweden.

Who You'll Work With
You'll be working with the Assortment Build and Visualize team within CoE Engineering, a global, cross-functional group dedicated to empowering the Assortment Office at the heart of H&M Group's customer offering. Our product focuses on building intuitive, collaborative, and flexible tools that help create relevant, well-balanced, inspiring, and sustainable assortments. By supporting both 2D and 3D visualization, we enable smooth and efficient planning processes that optimize relevance and enhance decision-making. Our team of 9 engineers, comprising full-time employees in India, Swedish consultants in Stockholm, and remote consultants, works closely together to deliver high-impact solutions through continuous improvement, strong technical expertise, and a shared commitment to H&M's assortment success.

Responsibilities
Design, develop, and maintain scalable full-stack applications using .NET Core and React.js. Build secure and high-performance microservices deployed on Azure Kubernetes Service (AKS). Develop and optimize RESTful APIs and cloud-native components, ensuring fault tolerance and resilience. Implement modern CI/CD pipelines and IaC using Azure DevOps, Terraform, or Bicep. Apply secure coding practices, integrating authentication/authorization (OAuth 2.0, OpenID Connect, JWT). Tune system performance, manage memory, and handle production-grade scalability challenges. Integrate logging and monitoring tools (e.g., Serilog, Application Insights, Grafana, ELK) for proactive issue resolution. Work collaboratively with cross-functional teams (product, DevOps, QA, and architects) to deliver business outcomes. Contribute to software engineering best practices: code reviews, test automation, branching strategies, and documentation. Adopt AI-assisted development tools and apply intelligent automation to improve productivity and code quality.

Who You Are
We are looking for people with 6-10 years of professional experience using .NET technologies. Strong proficiency in C#, ASP.NET, and .NET Core. Ideal for seasoned developers who can lead architecture, performance engineering, and DevOps implementation. Deep expertise in .NET Core internals, performance tuning, and memory optimization. Proven experience designing microservices architectures on Azure. Strong frontend engineering with React.js, Webpack, and modern JS tooling. Experience with Kubernetes, Docker, Terraform/Bicep, and GitOps pipelines.
Strong knowledge of cloud security, OAuth 2.0, and encryption strategies. Expertise in distributed systems, caching strategies (Redis), and API gateways (e.g., Azure API Management). Advanced skills in stream/event-driven architectures (Kafka, Azure Event Hubs). Strong SQL & NoSQL expertise (e.g., MongoDB, Cosmos DB) with scalable DB design. Ability to lead AI-assisted development and adopt GenAI copilots and automation tools. Strong troubleshooting and incident management in production environments. Preferred: Azure Solutions Architect certification (AZ-305), Azure Developer certification (AZ-204), or equivalent. Knowledge of API governance, versioning strategies, and service mesh. Experience mentoring team members, leading code reviews, and driving Agile practices.

And people who are… Excited about working in a fast-paced, Agile environment. Open to learning and adapting to new technologies and best practices. Team players with strong collaboration and communication skills.

Who We Are
H&M is a global company of strong fashion brands and ventures. Our goal is to prove that there is no compromise between exceptional design, affordable prices, and sustainable solutions. We want to liberate fashion for the many, and our customers are at the heart of every decision we make. We are made up of thousands of passionate and talented colleagues united by our shared culture and values. Together, we want to use our power, our scale, and our knowledge to push the fashion industry towards a more inclusive and sustainable future. Help us re-imagine fashion and together we will re-shape our industry. Learn more about H&M here.

WHY YOU'LL LOVE WORKING HERE
At H&M, we are proud to be a vibrant and welcoming company. We offer all our employees attractive benefits with extensive development opportunities around the globe. All our employees receive a staff discount card, usable on all our H&M brands in stores and online. Brands covered by the discount are H&M (Beauty and Move included), COS, Weekday, Monki, H&M HOME, & Other Stories, ARKET, and Afound. In addition to our staff discount, all our employees are included in our H&M Incentive Program – HIP. You can read more about our H&M Incentive Program here. In addition to our global benefits, all our local markets offer different competitive perks and benefits. Please note that they may differ between employment types and countries.

JOIN US
Our uniqueness comes from a combination of many things – our inclusive and collaborative culture, our strong values, and opportunities for growth. But most of all, it's our people who make us who we are. Take the next step in your career together with us. The journey starts here. We are committed to a recruitment process that is fair, equitable, and based on competency. We therefore kindly ask you not to attach a cover letter to your application.

Posted 4 days ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Title: Solutions Architect – Agentic AI Systems & Scalable Platforms
Experience: 15+ years
Location: Delhi NCR, Bangalore, Pune (Hybrid)

Job Summary: We are looking for a highly experienced Solutions Architect (15+ years) to lead the design and implementation of scalable, event-driven AI/ML platforms. This role will focus on building distributed systems, integrating multi-model AI orchestration, ensuring observability, and securing data operations across hybrid cloud environments. The ideal candidate combines deep technical acumen with excellent communication skills, capable of engaging with executive leadership and leading cross-functional engineering teams.

Must Have Skills: 15+ years of experience in architecture and software engineering, with deep expertise in distributed systems.

Core Competencies
Distributed System Design: Proven leadership in architecting resilient, scalable platforms for high-concurrency agent orchestration and state management. AI/ML System Integration: Strong experience designing AI/ML integration layers with support for multi-model orchestration, fallback strategies, and cost optimization. Event-Driven Orchestration: Expertise in implementing event-driven orchestration workflows, including human-in-the-loop decision points and rollback mechanisms. Observability Architecture: Hands-on with observability architecture, including monitoring, tracing, debugging, and telemetry for AI systems. Security-First Design: In-depth knowledge of zero-trust security architectures, with RBAC/ABAC and fine-grained access control for sensitive operations.

Technical Proficiencies
Programming: Python (async frameworks), TypeScript/JavaScript (modern frameworks), Go. Container Orchestration: Kubernetes, service mesh architectures, serverless patterns. Real-time Systems: WebSocket protocols, event streaming, low-latency architectures. Infrastructure Automation: GitOps, infrastructure as code, automated scaling policies. Performance Engineering: Distributed caching, query optimization, resource pooling.

Platform Integration Skills
API Gateway Design: Rate limiting, authentication, multi-provider abstraction. Workflow Orchestration: State machines, saga patterns, compensating transactions. Frontend Architecture: Micro-frontends, real-time collaboration features, responsive data visualization. Persistence Strategies: Polyglot persistence, CQRS patterns, event sourcing.

Track record of effective collaboration with AI/ML engineers, Data Engineers, Backend Developers, and UI/UX teams on complex platform delivery, and a lead-by-doing attitude toward resolving issues and technical roadblocks.
Demonstrated ability to produce architecture diagrams and maintain technical documentation standards. Excellent communication and stakeholder management, especially with senior and executive leadership.
Nice to Have Skills: Experience with real-time systems (WebSockets, event streaming, low-latency protocols). Exposure to polyglot persistence, event sourcing, and CQRS patterns. Experience with multi-tenant SaaS platforms and usage-based billing models. Knowledge of hybrid cloud deployments and cost attribution for AI compute workloads. Familiarity with compliance frameworks, audit trail design, and encryption strategies. Exposure to frontend architectures like micro-frontends and real-time dashboards. Experience with infrastructure as code (IaC) and performance tuning for distributed caching.
Role & Responsibilities: Architect and lead development of scalable, distributed agent orchestration systems. Design abstraction layers for multi-model AI integration with efficiency and fallback logic. Develop event-driven workflows with human oversight, compensating transactions, and rollback paths. Define observability architecture, including logging, tracing, metrics, and debugging for AI workflows. Implement zero-trust and fine-grained security controls for sensitive data operations. Create and maintain technical artifacts, including architecture diagrams, standards, and design patterns. Act as technical liaison between cross-functional teams and executive stakeholders. Guide engineering teams through complex solutioning, issue resolution, and performance optimization. Drive documentation standards and ensure architectural alignment across the delivery lifecycle.
Key Skills: Distributed systems, AI orchestration, Event-driven workflows, Kubernetes, GitOps, Python, Go, TypeScript, Observability, Zero-trust security, Architecture diagrams, Real-time systems, Hybrid cloud, CQRS, Documentation standards
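To make the multi-model orchestration and fallback-strategy requirement concrete, here is a minimal async Python sketch. The provider names and the call_model stub are hypothetical stand-ins for real model SDK calls; only the fallback pattern itself is the point.

```python
import asyncio

# Hypothetical call order: cheapest/fastest model first, stronger fallback last.
PROVIDERS = ["small-model", "large-model"]

async def call_model(provider: str, prompt: str) -> str:
    # placeholder for a real provider SDK call
    await asyncio.sleep(0.1)
    if provider == "small-model" and len(prompt) > 1000:
        raise RuntimeError("context too long for small model")
    return f"{provider} answer"

async def generate(prompt: str, timeout: float = 5.0) -> str:
    """Try providers in order; fall back on error or timeout."""
    last_err: Exception | None = None
    for provider in PROVIDERS:
        try:
            return await asyncio.wait_for(call_model(provider, prompt), timeout)
        except (asyncio.TimeoutError, RuntimeError) as err:
            last_err = err  # record the failure and try the next provider
    raise RuntimeError(f"all providers failed: {last_err}")

print(asyncio.run(generate("hello")))
```

A real orchestration layer would add per-provider cost tracking and circuit breaking, but the try-in-order loop is the core of most fallback strategies.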

Posted 4 days ago

Apply

0.0 - 4.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Required Qualifications & Skills: 7+ years in DevOps, SRE, or Infrastructure Engineering. Strong expertise in cloud (AWS/GCP/Azure) & Infrastructure-as-Code (Terraform, CloudFormation). Proficient in Docker & Kubernetes. Hands-on with CI/CD tools & scripting (Bash, Python, or Go). Strong knowledge of Linux, networking, and security best practices. Experience with monitoring & logging tools (ELK, Prometheus, Grafana). Familiarity with GitOps, Helm charts & automation.
Job Types: Full-time, Permanent
Application Question(s): How many years of experience do you have working with cloud platforms (AWS, GCP, or Azure)? Have you implemented monitoring/logging using Prometheus, Grafana, ELK Stack, or Datadog? Current monthly salary? Lowest expected monthly salary? How soon can you join?
Experience: DevOps: 5 years (Required); Azure: 5 years (Required); Kubernetes: 4 years (Required); Terraform: 4 years (Required)
Location: Thiruvananthapuram, Kerala (Required)
Work Location: In person
Speak with the employer: +91 9072049595
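For a flavor of the monitoring skills this listing names, here is a small sketch that runs a PromQL instant query against the Prometheus HTTP API from Python; the server address and the example query are assumptions, not anything specified by the employer.

```python
import requests

PROM_URL = "http://localhost:9090"  # assumed Prometheus address

def instant_query(expr: str) -> list:
    """Run a PromQL instant query via the /api/v1/query endpoint."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10
    )
    resp.raise_for_status()
    body = resp.json()
    if body["status"] != "success":
        raise RuntimeError(body)
    return body["data"]["result"]

# e.g. average "up" across all scrape targets (1.0 means everything is up)
for sample in instant_query("avg(up)"):
    print(sample["metric"], sample["value"])
```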

Posted 4 days ago

Apply

7.0 years

0 Lacs

India

On-site

About Us: At Minutes to Seconds, we match people having great skills with tailor-fitted jobs to achieve well-deserved success. We know how to match people to the right job roles to create that perfect fit. This changes the dynamics of business success and catalyses the growth of an individual. We’re passionate about doing an incredible job for our clients and job seekers. The success of individuals at the workplace determines our success.
Scope: We’re looking for a Rancher Kubernetes expert to lead the design, automation, and reliability of our on-prem and hybrid container platform. Sitting at the intersection of the Platform Engineering and Infrastructure Reliability teams, this role owns the lifecycle of Rancher-managed clusters, from bare-metal provisioning and performance tuning to observability, security, and automated operations. You’ll apply SRE principles to ensure high availability, scalability, and resilience across environments supporting mission-critical workloads.
Core Responsibilities:
Platform & Infrastructure Engineering: Design, deploy, and maintain Rancher-managed Kubernetes clusters (RKE2/K3s) at enterprise scale. Architect highly available clusters integrated with on-prem infrastructure: UCS, VxLAN, storage, DNS, and load balancers. Lead Rancher Fleet implementations for GitOps-driven cluster and workload management.
Performance Engineering & Optimization: Tune clusters for high-performance workloads on bare-metal hardware, optimizing CPU, memory, and I/O paths. Align cluster scheduling and resource profiles with physical infrastructure topologies (NUMA, NICs, etc.). Optimize CNI, kubelet, and scheduler settings for low-latency, high-throughput applications.
Security & Compliance: Implement security-first Kubernetes patterns: RBAC, Pod Security Standards, network policies, and image validation. Drive left-shifted security using Terraform, Helm, and CI/CD pipelines; align to PCI, FIPS, and CIS benchmarks. Lead infrastructure risk reviews and implement guardrails for regulated environments.
Automation & Tooling: Build and maintain IaC stacks using Terraform, Helm, and Argo CD. Develop platform automation and observability tooling using Python or Go. Ensure declarative management of infrastructure and applications through GitOps pipelines.
SRE & Observability: Apply SRE best practices for platform availability, capacity, latency, and incident response. Operate and tune Prometheus, Grafana, and ELK/EFK stacks for complete platform observability. Drive actionable alerting, automated recovery mechanisms, and clear operational documentation. Lead postmortems and drive systemic improvements to reduce MTTR and prevent recurrence.
Required Skills
· 7+ years in infrastructure, platform, or SRE roles
· Deep hands-on experience with Rancher (RKE2/K3s) in production environments
· Proficient with Terraform, Helm, Argo CD, Python, and/or Go
· Demonstrated performance tuning in bare-metal Kubernetes environments (UCS, VxLAN, MetalLB)
· Expert in Linux systems (networking, kernel tuning), Kubernetes internals, and container runtimes
· Real-world application of SRE principles in high-stakes, always-on environments
· Strong background operating Prometheus, Grafana, and Elasticsearch/Fluentd/Kibana (ELK/EFK) stacks
Preferred Qualifications
· Experience integrating Kubernetes with OpenStack and Magnum
· Knowledge of Rancher add-ons: Fleet, Longhorn, CIS Scanning
· Familiarity with compliance-driven infrastructure (PCI, FedRAMP, SOC2)
· Certifications: CKA, CKS, or Rancher Kubernetes Administrator
· Strategic thinker with strong technical judgment and execution ability
· Calm and clear communicator, especially during incidents or reviews
· Mentorship-oriented; supports team learning and cross-functional collaboration
· Self-motivated, detail-oriented, and thrives in a fast-moving, ownership-driven culture
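As a sketch of the kind of Python platform tooling this role mentions, the snippet below lists node readiness with the official Kubernetes Python client. It assumes a reachable kubeconfig (or in-cluster credentials); cluster names and any thresholds are up to the operator.

```python
from kubernetes import client, config  # official Kubernetes Python client

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Print each node's Ready condition, a common first health check.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```

A production script would typically feed this into alerting rather than stdout, but the client calls are the same.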

Posted 4 days ago

Apply

3.0 years

15 - 20 Lacs

Madurai, Tamil Nadu

On-site

Dear Candidate, greetings of the day!! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or email: kanthasanmugam.m@techmango.net
Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies, with a primary objective of delivering strategic solutions towards the goals of its business partners. We are a full-scale, leading software and mobile app development company. Techmango is driven by the mantra “Client's Vision is our Mission”, and we stay true to it: to be the technologically advanced and most loved organization, providing prime quality and cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Techmango: https://www.techmango.net/
Job Title: GCP Data Engineer
Location: Madurai
Experience: 5+ Years
Notice Period: Immediate
About TechMango: TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.
Role Summary: As a GCP Data Engineer, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.
Key Responsibilities: Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP). Define data strategy, standards, and best practices for cloud data engineering and analytics. Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery. Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery). Architect data lakes, warehouses, and real-time data platforms. Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP). Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers. Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards. Provide technical leadership in architectural decisions and future-proofing the data ecosystem.
Required Skills & Qualifications: 5+ years of experience in data architecture, data engineering, or enterprise data platforms. Minimum 3 years of hands-on experience in GCP data services. Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, Star/Snowflake schema). Experience with real-time data processing, streaming architectures, and batch ETL pipelines. Good understanding of IAM, networking, security models, and cost optimization on GCP. Prior experience in leading cloud data transformation projects. Excellent communication and stakeholder management skills.
Preferred Qualifications: GCP Professional Data Engineer / Architect certification. Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics. Exposure to AI/ML use cases and MLOps on GCP. Experience working in agile environments and client-facing roles.
What We Offer: Opportunity to work on large-scale data modernization projects with global clients. A fast-growing company with a strong tech and people culture. Competitive salary, benefits, and flexibility. Collaborative environment that values innovation and leadership.
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Application Question(s): Current CTC? Expected CTC? Notice period? (If you are serving a notice period, please mention your last working day.)
Experience: GCP Data Architecture: 3 years (Required); BigQuery: 3 years (Required); Cloud Composer (Airflow): 3 years (Required)
Location: Madurai, Tamil Nadu (Required)
Work Location: In person
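A minimal sketch of the streaming ingestion pattern named in the responsibilities (Pub/Sub into BigQuery via Apache Beam). It assumes the apache-beam[gcp] package; the project, subscription, and table names are placeholders, and the one-column schema is purely illustrative.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical resource names; replace with real project/subscription/table.
SUBSCRIPTION = "projects/my-project/subscriptions/events-sub"
TABLE = "my-project:analytics.events"

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
        | "Decode" >> beam.Map(lambda b: {"raw": b.decode("utf-8")})
        | "Write" >> beam.io.WriteToBigQuery(
            TABLE,
            schema="raw:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

Run locally with the DirectRunner for testing, or on Dataflow by passing the usual --runner=DataflowRunner pipeline options.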

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As our AI Engineer, you’ll own the design, development, and production-grade deployment of our machine learning and NLP pipelines. You’ll work cross-functionally with backend (Java/Spring Boot), data (Kafka/MongoDB/ES), and frontend (React) teams to embed AI capabilities throughout.
Responsibilities:
Build & Deploy ML/NLP Models: Design end-to-end ML pipelines for data ingestion, preprocessing, feature engineering, model training, evaluation, and monitoring. Train, deploy, and operate predictive models (classification, regression, anomaly detection) to drive actionable insights across all MCP sources. Implement NLP components such as text classification, summarization, and conversational interfaces to enhance chat-driven workflows and knowledge retrieval.
Data Engineering & Integration: Ingest, clean, and normalize data from Kafka/Mongo and third-party APIs. Define and maintain JSON-schema validations and transformation logic. Collaborate with backend services to embed AI outputs.
Platform & Service Collaboration: Work with Java/Spring Boot teams to wrap models as REST endpoints or Kafka stream processors. Ensure end-to-end monitoring, logging, and performance tuning within Kubernetes. Partner with frontend engineers to surface AI insights in React-based chat interfaces.
Continuous Improvement: Establish A/B testing, metrics, and feedback loops to tune model accuracy and latency. Stay on top of LLM and MLOps best practices to evolve our AI stack.
Qualifications:
Experience: 2–3 years in ML/AI or data science roles, preferably in SaaS.
Languages & Frameworks: Python, plus familiarity with Java & Spring Boot for service integrations.
Data & Infrastructure: Hands-on with Kafka, MongoDB, Redis, or similar. Experience containerizing in Docker and deploying on Kubernetes. JSON-path/JSONLogic or similar transformation engines.
Soft Skills: Excellent communication; able to translate complex AI concepts for product and customer teams.
Nice-to-Haves: Experience integrating LLMs or building vector search indexes for semantic retrieval. Prior work on chatbots or conversational UIs. Familiarity with the DevOps stack (AWS/Azure, k8s, GitOps, security, observability, and incident management).
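One common way to "wrap models as REST endpoints", as this listing puts it, is a FastAPI service. Below is a minimal sketch; the score_text stub stands in for a real trained classifier, and the route and field names are hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

def score_text(text: str) -> tuple[str, float]:
    # placeholder for a real model; returns (label, confidence)
    return ("positive", 0.87)

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    label, score = score_text(req.text)
    return PredictResponse(label=label, score=score)

# run with: uvicorn app:app --port 8000
```

The pydantic models double as request validation and OpenAPI documentation, which is why this pattern is popular for serving models behind Java/Spring gateways.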

Posted 4 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Engineering Services Practitioner
Project Role Description: Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, and technical knowledge and practical experience. Work across structural and stress design, qualification, configuration, and technical management.
Must have skills: 5G Wireless Networks & Technologies
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Job Title: 5G Core Network Ops Senior Engineer
Summary: We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination.
Roles and Responsibilities:
1. Fault Handling & Troubleshooting: Provide Level 2 (L2) support for 5G Core SA network functions in production environments. Nokia EDR operations & support: monitor and maintain the health of Nokia EDR systems; perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery; ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.); validate EDR formats and schemas against 3GPP and Nokia specifications. NCOM platform operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform; manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments. Analyze alarms from NetAct/Mantaray or external monitoring tools. Correlate events using Netscout, Mantaray, and PM/CM data. Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec, and handovers. Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload). Perform packet tracing (Wireshark) or core traces (PCAP, logs) and Nokia PCMD trace capturing and analysis. Perform root cause analysis (RCA) and implement corrective actions. Handle escalations from Tier-1 support and provide timely resolution.
2. Automation & Orchestration: Automate deployment, scaling, healing, and termination of network functions using NCOM. Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD). Integrate NCOM with third-party systems using open APIs and custom plugins.
3. Session & Service Investigation: Trace subscriber issues (5G attach, PDU session, QoS). Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM). Correlate user-plane drops, abnormal releases, and bearer QoS mismatches. Work on preventive measures with the L1 team for health checks & backups.
4. Configuration and Change Management: Create MOPs for required changes and validate them with Ops teams and stakeholders before rollout/implementation. Maintain detailed documentation of network configurations, incident reports, and operational procedures. Support software upgrades, patch management, and configuration changes. Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs). Audit NRF/PCF/UDM etc. configuration & databases. Validate policy rules, slicing parameters, and DNN/APN settings. Support integration of new 5G Core nodes and features into the live network.
5. Performance Monitoring: Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, and user-plane utilization. Proactively detect degrading KPI trends.
6. Security & Access Support: Application support for Nokia EDR and CrowdStrike. Assist with certificate renewals, firewall/NAT issues, and access failures.
7. Escalation & Coordination: Escalate unresolved issues to L3 teams, Nokia TAC, and OSS/Core engineering. Work with L3 and care teams on issue resolution. Ensure compliance with SLAs and contribute to continuous service improvement.
8. Reporting: Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance.
Technical Experience and Professional Attributes: 5–9 years of hands-on experience in the telecom industry. Mandatory experience with the Nokia 5G Core SA platform. Hands-on experience with Nokia EDR operations & support: monitoring and maintaining the health of Nokia EDR systems, performing log analysis, and troubleshooting issues related to EDR generation, parsing, and delivery. Experience with NCOM platform operations: operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform, plus NF deployment and troubleshooting experience covering deployment, scaling, healing, and termination of network functions using NCOM. Solid understanding of 5G packet core network protocols and interfaces such as N1, N2, N3, N6, N7, N8, GTP-C/U, and HTTPS, including the ability to trace and debug issues. Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning, and Flowone. In-depth understanding of 3GPP call flows for 5G SA and 5G NSA, call routing, number analysis, system configuration, call flows, and data roaming, plus knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI. Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based). Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces. Strong analytical and troubleshooting skills. Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing) and log management systems (e.g., Prometheus, Grafana). Knowledge of network protocols and security (TLS, IPsec). Excellent communication and documentation skills.
Educational Qualification: BE / BTech, 15 years full time education.
Additional Information: Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes). Experience with the Nokia 5G Core platform, NCOM, NCS, Nokia private cloud, and public cloud (AWS preferred), plus cloud-native environments (Kubernetes, Docker, CI/CD pipelines). Cloud certifications (AWS) or experience on AWS Cloud.
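As a tiny worked example of one KPI named above, registration success rate is simply successful registrations over attempts. The counter values and the alerting threshold in this Python sketch are hypothetical; real values would come from the PM/KPI exports the listing mentions.

```python
def registration_success_rate(attempts: int, successes: int) -> float:
    """KPI: successful 5G registrations as a percentage of attempts."""
    if attempts == 0:
        return 100.0  # no traffic in the window; treat as healthy
    return 100.0 * successes / attempts

# hypothetical counter values pulled from a PM/KPI export
rate = registration_success_rate(attempts=12_450, successes=12_312)
print(f"registration success rate: {rate:.2f}%")
if rate < 98.0:  # example alerting threshold, tuned per network
    print("ALERT: registration success rate degraded")
```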

Posted 4 days ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Site Reliability Engineer II
About Amgen: Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role: Let’s do this. Let’s change the world. We are looking for a Site Reliability Engineer/Cloud Engineer (SRE2) to work on the performance optimization, standardization, and automation of Amgen’s critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will drive operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.
Roles & Responsibilities:
System Reliability, Performance Optimization & Cost Reduction: Ensure the reliability, scalability, and performance of Amgen’s infrastructure, platforms, and applications. Proactively identify and resolve performance bottlenecks and implement long-term fixes. Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability.
Automation & Infrastructure as Code (IaC): Drive the adoption of automation and Infrastructure as Code (IaC) across the organization to streamline operations, minimize manual interventions, and enhance scalability. Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization.
Standardization of Processes & Tools: Establish standardized operational processes, tools, and frameworks across Amgen’s technology stack to ensure consistency, maintainability, and best-in-class reliability practices. Champion the use of industry standards to optimize performance and increase operational efficiency.
Monitoring, Incident Management & Continuous Improvement: Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response. Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences. Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring.
Collaboration & Cross-Functional Leadership: Partner with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle. Act as an SME for SRE principles and advocate best practices for assigned projects.
Capacity Planning & Disaster Recovery: Execute capacity planning processes to support future growth, performance, and cost management. Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures.
Must-Have Skills: Experience with AWS/Azure cloud services. Proficiency in CI/CD (Jenkins/GitLab), observability, IaC, GitOps, etc. Experience with containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability. Ability to learn new technologies quickly. Strong problem-solving and analytical skills. Excellent communication and teamwork skills.
Good-to-Have Skills: Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments. Familiarity with distributed systems, databases, and large-scale system architectures. Bachelor’s degree in Computer Science or Engineering preferred; other engineering fields considered. Databricks knowledge/exposure is good to have (with upskilling if hired).
Soft Skills: Ability to foster a collaborative and innovative work environment. Strong problem-solving abilities and attention to detail. High degree of initiative and self-motivation.
Basic Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field. 5–7 years of experience in IT infrastructure, with at least 4+ years in Site Reliability Engineering or related fields.
EQUAL OPPORTUNITY STATEMENT: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
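To illustrate the cost-optimization side of this role, here is a hedged boto3 sketch that flags a low-utilization EC2 instance via CloudWatch metrics. The instance ID and the 10% rightsizing threshold are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Hypothetical instance id; average hourly CPU over the past week.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=3600,
    Statistics=["Average"],
)
points = [d["Average"] for d in resp["Datapoints"]]
avg_cpu = sum(points) / len(points) if points else 0.0
if avg_cpu < 10.0:  # illustrative threshold for a rightsizing review
    print(f"candidate for rightsizing: avg CPU {avg_cpu:.1f}%")
```

Running this over all instances (via describe_instances pagination) is a common first pass before consolidating or downsizing workloads.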

Posted 5 days ago

Apply

3.0 years

4 Lacs

Delhi

On-site

#hiring Hey folks, we are hiring for the profile of Kubernetes Developer / Administrator / DevOps Engineer.
Job Description: Kubernetes Developer / Administrator / DevOps Engineer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: BTech / BE / MCA / MSc / MS
Salary: Up to 70k (the rest depends on the interview and experience)
Notice Period: Immediate joiners, up to 20 days
Candidates from Delhi/NCR will be preferred.
Job Description: We are looking for a skilled Kubernetes Developer, Administrator, and DevOps Engineer who can effectively manage and deploy our development images into Kubernetes environments. The ideal candidate should be highly proficient in Kubernetes, CI/CD pipelines, and containerization.
Qualifications: Minimum 3 years of experience working with Kubernetes in production environments.
Key Responsibilities: Design, deploy, and manage Kubernetes clusters for development, testing, and production environments. Build and maintain CI/CD pipelines for automated deployment of applications on Kubernetes. Manage container orchestration using Kubernetes, including scaling, upgrades, and troubleshooting. Work closely with developers to containerize applications and ensure smooth deployment to Kubernetes. Monitor and optimize the performance, security, and reliability of Kubernetes clusters. Implement and manage Helm charts, Docker images, and Kubernetes manifests.
Mandatory Skills:
Kubernetes Expertise: In-depth knowledge of Kubernetes, including deploying, managing, and troubleshooting clusters and workloads.
CI/CD Tools: Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, or similar.
Containerization: Strong experience with Docker for creating, managing, and deploying containerized applications.
Infrastructure as Code (IaC): Familiarity with Terraform, Ansible, or similar tools for managing infrastructure.
Networking and Security: Understanding of Kubernetes networking, service meshes, and security best practices.
Scripting Skills: Proficiency in scripting languages like Bash, Python, or similar for automation tasks.
Nice to Have: Experience with cloud platforms like AWS, GCP, or Azure. Knowledge of monitoring and logging tools such as Prometheus, Grafana, and the ELK stack. Familiarity with GitOps practices using Argo CD or Flux.
Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
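As an example of the routine cluster management this listing describes, automated in Python, the snippet below scales a Deployment with the official Kubernetes client; the deployment name, namespace, and replica count are hypothetical.

```python
from kubernetes import client, config  # official Kubernetes Python client

config.load_kube_config()  # assumes a reachable kubeconfig
apps = client.AppsV1Api()

# Hypothetical deployment name and namespace; patch only the scale subresource.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("scaled deployment web to 5 replicas")
```

In a GitOps setup (Argo CD or Flux, as the listing mentions) the replica count would instead be changed in the Git-tracked manifest so the controller reconciles it, rather than patched imperatively like this.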

Posted 5 days ago

Apply

12.0 - 18.0 years

4 - 8 Lacs

Pune

On-site

About Us: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects, offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.
We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.
Job Description: Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments. You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.
Roles and Responsibilities: Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations. Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation. Design and implement AIOps capabilities for proactive monitoring, root cause analysis, and noise reduction using logs, metrics, and traces. Drive adoption of Infrastructure-as-Code (IaC), GitOps, and event-driven automation across all environments. Collaborate with SREs, developers, and QA to embed intelligent automation and observability early into the SDLC. Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights. Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs. Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability. Establish standards for security, governance, and compliance in infrastructure and DevOps processes. Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.
Job Requirement: 12–18 years of total experience with a strong DevOps and cloud infrastructure background. Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring. Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker. Strong scripting expertise in Python and Ansible with experience in developing automation frameworks. Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications. Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, anomaly detection, etc. Strong understanding of SDLC, CI/CD pipelines, and DevOps governance. Experience with system components like web servers, proxies, queues, and caching mechanisms. Excellent problem-solving and architectural design skills. Strong leadership qualities with experience mentoring cross-functional teams.
Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Information Technology, or a related field. Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms. Experience working in product-based or SaaS organizations. AWS DevOps Engineer or Solutions Architect certifications are a plus.
Five Reasons Why You Should Join Zycus: 1. Industry Recognized Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites, is recognized as a Customer First Organization by Gartner, and its Procure to Pay Suite scores 4.5 out of 5 in Gartner Peer Insights for Procure-to-Pay Suites. 2. Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises. 3. Fast Growing: growing region at the rate of 30% Y-o-Y. 4. Global Enterprise Customers: work with large enterprise customers globally to drive complex global implementations on the Zycus value framework. 5. AI Product Suite: steer the next-gen cognitive product suite offering.
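The AIOps requirement above ("log/metric analysis using ML models, anomaly detection") can start as simply as a z-score filter over a metric series. Here is a minimal Python sketch; the latency samples and the 2.5-sigma threshold are illustrative, not anything Zycus specifies.

```python
import statistics

def zscore_anomalies(series: list[float], threshold: float = 2.5) -> list[int]:
    """Flag indices whose value deviates more than `threshold` sigmas from the mean."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # flat series, nothing to flag
    return [i for i, v in enumerate(series) if abs(v - mean) / stdev > threshold]

# hypothetical per-minute latency samples (ms); the spike should be flagged
latency = [120, 118, 125, 122, 119, 121, 640, 123, 120]
print(zscore_anomalies(latency))  # -> [6]
```

Production AIOps pipelines typically replace this with rolling windows, seasonality-aware models, or learned baselines, but the deviation-from-expectation idea is the same.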

Posted 5 days ago

Apply

12.0 - 18.0 years

4 - 5 Lacs

Mumbai

On-site

About Us: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects, offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.
We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.
Job Description: Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments. You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.
Roles and Responsibilities: Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations. Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation. Design and implement AIOps capabilities for proactive monitoring, root cause analysis, and noise reduction using logs, metrics, and traces. Drive adoption of Infrastructure-as-Code (IaC), GitOps, and event-driven automation across all environments. Collaborate with SREs, developers, and QA to embed intelligent automation and observability early into the SDLC. Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights. Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs. Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability. Establish standards for security, governance, and compliance in infrastructure and DevOps processes. Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.
Job Requirement: 12–18 years of total experience with a strong DevOps and cloud infrastructure background. Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring. Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker. Strong scripting expertise in Python and Ansible with experience in developing automation frameworks. Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications. Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, anomaly detection, etc. Strong understanding of SDLC, CI/CD pipelines, and DevOps governance. Experience with system components like web servers, proxies, queues, and caching mechanisms. Excellent problem-solving and architectural design skills. Strong leadership qualities with experience mentoring cross-functional teams.
Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Information Technology, or a related field. Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms. Experience working in product-based or SaaS organizations. AWS DevOps Engineer or Solutions Architect certifications are a plus.
Five Reasons Why You Should Join Zycus: 1. Industry Recognized Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites, is recognized as a Customer First Organization by Gartner, and its Procure to Pay Suite scores 4.5 out of 5 in Gartner Peer Insights for Procure-to-Pay Suites. 2. Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises. 3. Fast Growing: growing region at the rate of 30% Y-o-Y. 4. Global Enterprise Customers: work with large enterprise customers globally to drive complex global implementations on the Zycus value framework. 5. AI Product Suite: steer the next-gen cognitive product suite offering.

Posted 5 days ago

Apply

12.0 - 18.0 years

4 - 9 Lacs

Bengaluru

On-site

About Us: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects, offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.
We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.
Job Description: Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments. You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.
Roles and Responsibilities: Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations. Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation. Design and implement AIOps capabilities for proactive monitoring, root cause analysis, and noise reduction using logs, metrics, and traces. Drive adoption of Infrastructure-as-Code (IaC), GitOps, and event-driven automation across all environments. Collaborate with SREs, developers, and QA to embed intelligent automation and observability early into the SDLC. Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights. Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs. Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability. Establish standards for security, governance, and compliance in infrastructure and DevOps processes. Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.
Job Requirement: 12–18 years of total experience with a strong DevOps and cloud infrastructure background. Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring. Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker. Strong scripting expertise in Python and Ansible with experience in developing automation frameworks. Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications. Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, anomaly detection, etc. Strong understanding of SDLC, CI/CD pipelines, and DevOps governance. Experience with system components like web servers, proxies, queues, and caching mechanisms. Excellent problem-solving and architectural design skills. Strong leadership qualities with experience mentoring cross-functional teams.
Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Information Technology, or a related field. Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms. Experience working in product-based or SaaS organizations. AWS DevOps Engineer or Solutions Architect certifications are a plus.
Five Reasons Why You Should Join Zycus: 1. Industry Recognized Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites, is recognized as a Customer First Organization by Gartner, and its Procure to Pay Suite scores 4.5 out of 5 in Gartner Peer Insights for Procure-to-Pay Suites. 2. Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises. 3. Fast Growing: growing region at the rate of 30% Y-o-Y. 4. Global Enterprise Customers: work with large enterprise customers globally to drive complex global implementations on the Zycus value framework. 5. AI Product Suite: steer the next-gen cognitive product suite offering.

Posted 5 days ago

Apply

8.0 years

5 - 10 Lacs

Bengaluru

On-site

We help the world run better. At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.
What you'll do: You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive end-to-end development of services and pipelines supporting distributed data processing, data transformations, and intelligent automation. This is a unique opportunity to contribute to SAP’s evolving data platform initiatives with hands-on involvement in Java, Python, Kafka, DevOps, real-time analytics, intelligent monitoring, BTP, and Hyperscaler ecosystems.
Responsibilities: Design and develop microservices using Java, RESTful APIs, and messaging frameworks such as Apache Kafka. Design and develop UIs based on SAP UI5/Fiori (a plus). Design and develop an observability framework for customer insights. Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows; experience with Databricks is an advantage. Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation, and establishing best practices. Collaborate with cross-functional teams to enable secure, reliable, and performant communication across SAP applications. Build and maintain distributed data processing pipelines supporting large-scale data ingestion, transformation, and routing. Work closely with DevOps to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices. Guide cloud-native secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP). Collaborate with SAP’s broader data platform efforts, including Datasphere, SAP Analytics Cloud, and BDC runtime architecture. Ensure adherence to best practices in microservices architecture, including service discovery, load balancing, and fault tolerance. Stay updated with the latest industry trends and technologies to continuously improve the development process.
What you bring: 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns. Hands-on experience building ETL pipelines and working with large-scale data processing frameworks. Exposure to log aggregator tools like Splunk, ELK, etc. Experience or experimentation with tools such as Databricks, Apache Spark, or other cloud-native data platforms is highly advantageous. Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA is highly desirable. Experience designing CI/CD pipelines, containerization (Docker), Kubernetes, and DevOps best practices. Working knowledge of Hyperscaler environments such as AWS, Azure, or GCP. Passion for clean code, automated testing, performance tuning, and continuous improvement. Strong communication skills and the ability to collaborate with global teams across time zones.
Meet your Team: SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP’s Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise. Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive, and high-impact environment, enabling innovation and integration across SAP’s data platform. #DevT3
Bring out your best: SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.
We win with inclusion: SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com
For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.
EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.
Requisition ID: 430165 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid
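As a rough sketch of the Kafka-based consume-transform-produce pattern this role centers on, here is a minimal loop using the kafka-python client. The topic names, broker address, and the enrichment step are assumptions for illustration, not SAP's actual services.

```python
import json

from kafka import KafkaConsumer, KafkaProducer  # kafka-python client

# Hypothetical topics and broker address.
consumer = KafkaConsumer(
    "app2app.events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    group_id="enrichment-service",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda o: json.dumps(o).encode("utf-8"),
)

# Consume, transform, and republish each event downstream.
for message in consumer:
    event = message.value
    event["enriched"] = True  # placeholder transformation step
    producer.send("app2app.events.enriched", event)
```

A production service would add error handling, retries, and delivery guarantees (e.g., transactional producers), which is where the fault-tolerance experience the listing asks for comes in.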

Posted 5 days ago

Apply

12.0 - 18.0 years

4 - 9 Lacs

Bengaluru

On-site

About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects, offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore

We Are An Equal Opportunity Employer:

Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description

Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments. You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.

Roles and Responsibilities:

Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations.
Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation (a sketch of a simple anomaly detector follows this list).
Design and implement AIOps capabilities for proactive monitoring, root cause analysis, and noise reduction using logs, metrics, and traces.
Drive adoption of Infrastructure-as-Code (IaC), GitOps, and event-driven automation across all environments.
Collaborate with SREs, developers, and QA to embed intelligent automation and observability early into the SDLC.
Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights (see the cost-report sketch after this posting).
Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs.
Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability.
Establish standards for security, governance, and compliance in infrastructure and DevOps processes.
Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.
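To ground the anomaly-detection responsibility, here is a hedged sketch of a rolling z-score detector over a metric stream, the simplest form of the predictive alerting the posting describes. The window size, threshold, and latency samples are illustrative, not Zycus tooling.

```python
# Hedged sketch of metric anomaly detection of the kind AIOps pipelines
# use for alerting; window, threshold, and samples are illustrative.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) pairs whose z-score against a rolling
    window of recent samples exceeds the threshold."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        # stdev() needs at least two points, so warm up first.
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Made-up request latencies in ms; the 250 should be flagged.
latencies = [102, 99, 101, 100, 98, 103, 250, 101, 99]
print(list(detect_anomalies(latencies, window=5, threshold=2.5)))
# -> [(6, 250)]
```

A production detector would read from a metrics store rather than a list and feed an alerting or auto-remediation hook, but the core statistic is the same.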
Job Requirement

12–18 years of total experience with a strong DevOps and cloud infrastructure background.
Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring.
Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker.
Strong scripting expertise in Python and Ansible, with experience developing automation frameworks.
Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications.
Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, anomaly detection, etc.
Strong understanding of the SDLC, CI/CD pipelines, and DevOps governance.
Experience with system components like web servers, proxies, queues, and caching mechanisms.
Excellent problem-solving and architectural design skills.
Strong leadership qualities with experience mentoring cross-functional teams.

Preferred Qualifications:

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms.
Experience working in product-based or SaaS organizations.
AWS DevOps Engineer or Solutions Architect certifications are a plus.

Five Reasons Why You Should Join Zycus

1. Industry Recognized Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites. Zycus is also recognized as a Customer First Organization by Gartner, and Zycus's Procure to Pay Suite scores 4.5 out of 5 in Gartner Peer Insights ratings for Procure-to-Pay Suites.
2. Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises.
3. Fast Growing: growing in the region at the rate of 30% Y-o-Y.
4. Global Enterprise Customers: work with large enterprise customers globally to drive complex global implementations on the Zycus value framework.
5. AI Product Suite: steer the next-gen cognitive product suite offering.
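The cost-analysis responsibility referenced above can also be grounded with a short example. This is a hedged sketch using boto3's Cost Explorer API to rank AWS services by monthly spend; the date range and top-10 cutoff are illustrative, and running it requires credentials with ce:GetCostAndUsage permission.

```python
# Hedged sketch of cost-analysis automation of the kind the role
# describes, via boto3's Cost Explorer; dates and cutoff are illustrative.
import boto3

# Cost Explorer is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Rank services by spend to surface optimization candidates.
groups = resp["ResultsByTime"][0]["Groups"]
by_cost = sorted(
    ((g["Keys"][0], float(g["Metrics"]["UnblendedCost"]["Amount"]))
     for g in groups),
    key=lambda kv: kv[1],
    reverse=True,
)
for service, cost in by_cost[:10]:
    print(f"{service:<40} ${cost:,.2f}")
```

In practice a report like this would run on a schedule and feed the DevOps tooling the posting mentions, flagging services whose spend drifts from the prior period.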

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies