6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary:
We are seeking a hands-on Senior Release Engineer with deep expertise in release orchestration, strong technical proficiency in Jenkins, and solid working knowledge of AWS infrastructure. This role requires someone who can not only manage and coordinate releases but also build and optimize the infrastructure and pipelines that power them. The ideal candidate is detail-oriented, highly collaborative, and comfortable supporting early-morning production releases in a fast-paced environment.

Key Responsibilities:
- Lead end-to-end release management: planning, coordination, scheduling, execution, and post-release support.
- Own and maintain CI/CD pipelines using Jenkins and GitOps practices with ArgoCD.
- Build and manage the AWS infrastructure required to support robust and scalable release processes.
- Integrate and maintain quality and security tools such as SonarQube, JaCoCo, and other DevSecOps components.
- Work closely with engineering, QA, product, and DevOps to ensure smooth deployments and rollback strategies.
- Set up and manage branching and merge strategies using Bitbucket.
- Maintain traceability and visibility of releases through Jira, ensuring alignment with sprint and delivery milestones.
- Support early-morning production releases and provide hands-on troubleshooting and issue resolution.
- Lead go-live readiness assessments, go/no-go calls, and post-release reviews.
- Continuously improve release processes through automation, tooling, and process refinement.

Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related technical field.
- Expert-level knowledge of Release Management, with at least 6 years of experience in complex software delivery environments.
- Expertise in Jenkins for CI automation and pipeline optimization.
- Strong working knowledge of AWS (EC2, IAM, S3, VPC, CloudFormation or Terraform, etc.).
- Proven experience building or managing AWS infrastructure to support release pipelines.
- Solid understanding of Bitbucket for code repository management and branching strategies.
- Proficiency in Jira for tracking releases and managing change tickets.
- Experience with SonarQube, JaCoCo, and other security/quality scanning tools.
- Willingness and ability to support early-morning or off-hours production deployments.
- Strong communication and coordination skills for cross-team collaboration.

Preferred Qualifications:
- AWS certification (e.g., AWS Certified DevOps Engineer or Solutions Architect).
- Familiarity with Kubernetes and Helm charts.
- Experience in regulated environments (e.g., finance, healthcare).
- Comfort with Agile methodologies and DevSecOps practices.
Posted 5 days ago
3.0 years
0 Lacs
Delhi, Delhi
On-site
#hiring We are hiring for the profile of Kubernetes Developer / Administrator / DevOps Engineer.

Job Description: Kubernetes Developer / Administrator / DevOps Engineer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.Tech / B.E. / MCA / M.Sc. / MS
Salary: Up to 70k (final offer depends on the interview and experience)
Notice Period: Immediate joiners to 20 days
Note: Candidates from Delhi/NCR will be preferred.

We are looking for a skilled Kubernetes Developer, Administrator, and DevOps Engineer who can effectively manage and deploy our development images into Kubernetes environments. The ideal candidate should be highly proficient in Kubernetes, CI/CD pipelines, and containerization.

Qualifications: Minimum 3 years of experience working with Kubernetes in production environments.

Key Responsibilities:
- Design, deploy, and manage Kubernetes clusters for development, testing, and production environments.
- Build and maintain CI/CD pipelines for automated deployment of applications on Kubernetes.
- Manage container orchestration using Kubernetes, including scaling, upgrades, and troubleshooting.
- Work closely with developers to containerize applications and ensure smooth deployment to Kubernetes.
- Monitor and optimize the performance, security, and reliability of Kubernetes clusters.
- Implement and manage Helm charts, Docker images, and Kubernetes manifests.

Mandatory Skills:
- Kubernetes Expertise: In-depth knowledge of Kubernetes, including deploying, managing, and troubleshooting clusters and workloads.
- CI/CD Tools: Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, or similar.
- Containerization: Strong experience with Docker for creating, managing, and deploying containerized applications.
- Infrastructure as Code (IaC): Familiarity with Terraform, Ansible, or similar tools for managing infrastructure.
- Networking and Security: Understanding of Kubernetes networking, service meshes, and security best practices.
- Scripting Skills: Proficiency in scripting languages like Bash, Python, or similar for automation tasks.

Nice to Have:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of monitoring and logging tools such as Prometheus, Grafana, and the ELK stack.
- Familiarity with GitOps practices using Argo CD or Flux.

Job Types: Full-time, Contractual / Temporary
Pay: From ₹500,000.00 per year
Work Location: In person

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Senior Release Engineer, you will be responsible for leading end-to-end release management activities, including planning, coordination, scheduling, execution, and post-release support. Your role will involve owning and maintaining CI/CD pipelines using Jenkins and GitOps practices with ArgoCD. You will also be tasked with building and managing AWS infrastructure to support robust and scalable release processes. Additionally, integrating and maintaining quality and security tools such as SonarQube, JaCoCo, and other DevSecOps components will be part of your key responsibilities. Collaboration with engineering, QA, product, and DevOps teams will be essential to ensure smooth deployments and effective rollback strategies. You will set up and manage branching and merge strategies using Bitbucket, maintaining traceability and visibility of releases through Jira to align with sprint and delivery milestones. Your support for early-morning production releases, hands-on troubleshooting, and issue resolution will be critical in this fast-paced environment. Furthermore, you will lead go-live readiness assessments, go/no-go calls, and post-release reviews. Continuous improvement of release processes through automation, tooling, and process refinement will be a key focus area for you.

To qualify for this role, you must hold a Bachelor's degree in Computer Science, Engineering, or a related technical field. You should have expert-level knowledge of Release Management, with at least 6 years of experience in complex software delivery environments. Proficiency in Jenkins for CI automation and pipeline optimization, as well as a strong working knowledge of AWS services such as EC2, IAM, S3, VPC, and CloudFormation or Terraform, is required. Experience in building or managing AWS infrastructure to support release pipelines is essential.

Moreover, a solid understanding of Bitbucket for code repository management, proficiency in Jira for tracking releases and managing change tickets, and experience with SonarQube, JaCoCo, and other security/quality scanning tools are necessary. You should be willing and able to support early-morning or off-hours production deployments and possess strong communication and coordination skills for cross-team collaboration. Preferred qualifications include AWS certification (e.g., AWS Certified DevOps Engineer or Solutions Architect), familiarity with Kubernetes and Helm charts, experience in regulated environments (e.g., finance, healthcare), and comfort with Agile methodologies and DevSecOps practices.
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are looking for a DevOps Technical Lead who will play a crucial role in leading the development of an Infrastructure Agent powered by Generative AI (GenAI) technology. In this role, you will be responsible for designing and implementing an intelligent Infra Agent that can handle provisioning, configuration, observability, and self-healing autonomously. Your key responsibilities will include leading the architecture and design of the Infra Agent, integrating various automation frameworks to enhance DevOps workflows, automating infrastructure provisioning and incident remediation, developing reusable components and frameworks using Infrastructure as Code (IaC) tools, and collaborating with AI/ML engineers and SREs to create intelligent infrastructure decision-making logic. You will also be expected to implement secure and scalable infrastructure on cloud platforms such as AWS, Azure, and GCP; continuously improve agent performance through feedback loops, telemetry, and model fine-tuning; drive DevSecOps best practices, compliance, and observability; and mentor DevOps engineers while working closely with cross-functional teams.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 8 years of experience in DevOps, SRE, or Infrastructure Engineering. You must have proven experience leading infrastructure automation projects, expertise with cloud platforms like AWS, Azure, and GCP, and deep knowledge of tools such as Terraform, Kubernetes, Helm, Docker, Jenkins, and GitOps. Hands-on experience with LLMs/GenAI APIs, familiarity with automation frameworks, and proficiency in programming/scripting languages like Python, Go, or Bash are also required.

Preferred qualifications for this role include experience building or fine-tuning LLM-based agents, contributions to open-source GenAI or DevOps projects, an understanding of MLOps pipelines and AI infrastructure, and certifications in DevOps, cloud, or AI technologies.
Posted 6 days ago
18.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us:
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven, actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-style user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

We Are An Equal Opportunity Employer:
Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description:
Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments.
You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.

Roles and Responsibilities:
- Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations.
- Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation.
- Design and implement AIOps capabilities for proactive monitoring, root-cause analysis, and noise reduction using logs, metrics, and traces.
- Drive adoption of Infrastructure as Code (IaC), GitOps, and event-driven automation across all environments.
- Collaborate with SREs, developers, and QA to embed intelligent automation and observability early in the SDLC.
- Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights.
- Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs.
- Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability.
- Establish standards for security, governance, and compliance in infrastructure and DevOps processes.
- Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.

Job Requirements:
- 12–18 years of total experience with a strong DevOps and cloud infrastructure background.
- Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring.
- Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker.
- Strong scripting expertise in Python and Ansible, with experience developing automation frameworks.
- Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications.
- Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, and anomaly detection.
- Strong understanding of the SDLC, CI/CD pipelines, and DevOps governance.
- Experience with system components like web servers, proxies, queues, and caching mechanisms.
- Excellent problem-solving and architectural design skills.
- Strong leadership qualities with experience mentoring cross-functional teams.

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms.
- Experience working in product-based or SaaS organizations.
- AWS DevOps Engineer or Solutions Architect certifications are a plus.

Five Reasons Why You Should Join Zycus:
- Industry-Recognized Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites, and as a Customer First Organization; Zycus’s Procure-to-Pay Suite scores 4.5 out of 5 in Gartner Peer Insights for Procure-to-Pay Suites.
- Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises.
- Fast Growing: The region is growing at a rate of 30% year over year.
- Global Enterprise Customers: Work with large enterprise customers globally to drive complex global implementations on the Zycus value framework.
- AI Product Suite: Steer the next-gen cognitive product suite offering.
Posted 6 days ago
18.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us:
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven, actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-style user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

We Are An Equal Opportunity Employer:
Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description:
Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments.
You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.

Roles and Responsibilities:
- Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations.
- Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation.
- Design and implement AIOps capabilities for proactive monitoring, root-cause analysis, and noise reduction using logs, metrics, and traces.
- Drive adoption of Infrastructure as Code (IaC), GitOps, and event-driven automation across all environments.
- Collaborate with SREs, developers, and QA to embed intelligent automation and observability early in the SDLC.
- Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights.
- Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs.
- Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability.
- Establish standards for security, governance, and compliance in infrastructure and DevOps processes.
- Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.

Job Requirements:
- 12–18 years of total experience with a strong DevOps and cloud infrastructure background.
- Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring.
- Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker.
- Strong scripting expertise in Python and Ansible, with experience developing automation frameworks.
- Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications.
- Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, and anomaly detection.
- Strong understanding of the SDLC, CI/CD pipelines, and DevOps governance.
- Experience with system components like web servers, proxies, queues, and caching mechanisms.
- Excellent problem-solving and architectural design skills.
- Strong leadership qualities with experience mentoring cross-functional teams.

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms.
- Experience working in product-based or SaaS organizations.
- AWS DevOps Engineer or Solutions Architect certifications are a plus.

Five Reasons Why You Should Join Zycus:
- Industry-Recognized Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites, and as a Customer First Organization; Zycus’s Procure-to-Pay Suite scores 4.5 out of 5 in Gartner Peer Insights for Procure-to-Pay Suites.
- Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises.
- Fast Growing: The region is growing at a rate of 30% year over year.
- Global Enterprise Customers: Work with large enterprise customers globally to drive complex global implementations on the Zycus value framework.
- AI Product Suite: Steer the next-gen cognitive product suite offering.
Posted 6 days ago
18.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us:
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven, actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-style user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

We Are An Equal Opportunity Employer:
Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description:
Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments.
You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems.

Roles and Responsibilities:
- Architect end-to-end DevOps solutions leveraging AWS, the HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations.
- Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation.
- Design and implement AIOps capabilities for proactive monitoring, root-cause analysis, and noise reduction using logs, metrics, and traces.
- Drive adoption of Infrastructure as Code (IaC), GitOps, and event-driven automation across all environments.
- Collaborate with SREs, developers, and QA to embed intelligent automation and observability early in the SDLC.
- Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights.
- Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs.
- Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability.
- Establish standards for security, governance, and compliance in infrastructure and DevOps processes.
- Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement.

Job Requirements:
- 12–18 years of total experience with a strong DevOps and cloud infrastructure background.
- Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring.
- Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker.
- Strong scripting expertise in Python and Ansible, with experience developing automation frameworks.
- Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications.
Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, anomaly detection, etc. Strong understanding of SDLC, CI/CD pipelines, and DevOps governance. Experience with system components like web servers, proxies, queues, and caching mechanisms. Excellent problem-solving and architectural design skills. Strong leadership qualities with experience mentoring cross-functional teams. Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Information Technology, or related field. Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms. Experience working in product-based or SaaS organizations. AWS DevOps Engineer or Solutions Architect certifications are a plus. Five Reasons Why You Should Join Zycus Industry Recognized Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites. Zycus is also recognized as a Customer First Organization by Gartner. Zycus's Procure to Pay Suite Scores 4.5 out of 5 ratings in Gartner Peer Insights for Procure-to-Pay Suites. Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises Fast Growing: Growing Region at the rate of 30% Y-o-Y Global Enterprise Customers: Work with Large Enterprise Customers globally to drive Complex Global Implementation on the value framework of Zycus AI Product Suite: Steer next gen cognitive product suite offering About Us Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. 
Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore
Posted 6 days ago
18.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions; and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements. Job Description Zycus is looking for a DevOps Architect who not only brings deep expertise in modern cloud infrastructure and automation but also has an evolving acumen in AI-driven DevOps practices (AIOps) to transform how we build, monitor, and operate at scale. As a DevOps Architect, you will be responsible for designing and implementing scalable, secure, and intelligent automation frameworks for cloud infrastructure and deployments.
You will play a pivotal role in improving reliability, observability, performance, and cost optimization, while also integrating AI and ML techniques into DevOps processes to drive proactive operations and self-healing systems. Roles And Responsibilities: Architect end-to-end DevOps solutions leveraging AWS, HashiCorp stack (Terraform, Packer, Nomad), and Kubernetes for scalable cloud operations. Build intelligent CI/CD pipelines and infrastructure automation using Python and Ansible, integrating AI/ML for anomaly detection, predictive scaling, and auto-remediation. Design and implement AIOps capabilities for proactive monitoring, root cause analysis, and noise reduction using logs, metrics, and traces. Drive adoption of Infrastructure-as-Code (IaC), GitOps, and event-driven automation across all environments. Collaborate with SREs, developers, and QA to embed intelligent automation and observability early into the SDLC. Lead cost analysis and optimization strategies leveraging both DevOps tooling and AI insights. Manage and modernize systems around load balancers, proxies, caching, messaging queues, and secure APIs. Evaluate and implement AI-based toolchains or build custom scripts/models to optimize deployment health and reliability. Establish standards for security, governance, and compliance in infrastructure and DevOps processes. Mentor junior DevOps engineers and drive a culture of innovation, learning, and continuous improvement. Job Requirement 12–18 years of total experience with strong DevOps and cloud infrastructure background. Hands-on experience with AWS services and APIs, including deployment, automation, and monitoring. Proficiency with Terraform, Packer, Nomad, and container orchestration using Kubernetes and Docker. Strong scripting expertise in Python and Ansible with experience in developing automation frameworks. Proven track record of building scalable, reliable, and secure infrastructure for SaaS applications. 
Working knowledge of AIOps tools or concepts such as log/metric analysis using ML models, predictive alerting, anomaly detection, etc. Strong understanding of SDLC, CI/CD pipelines, and DevOps governance. Experience with system components like web servers, proxies, queues, and caching mechanisms. Excellent problem-solving and architectural design skills. Strong leadership qualities with experience mentoring cross-functional teams. Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Information Technology, or related field. Exposure to tools like Datadog, Prometheus, Grafana, New Relic, or AI-native observability platforms. Experience working in product-based or SaaS organizations. AWS DevOps Engineer or Solutions Architect certifications are a plus. Five Reasons Why You Should Join Zycus Industry Recognized Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites. Zycus is also recognized as a Customer First Organization by Gartner. Zycus's Procure to Pay Suite scores 4.5 out of 5 in Gartner Peer Insights for Procure-to-Pay Suites. Pioneer in Cognitive Procurement: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises. Fast Growing: Growing region at the rate of 30% Y-o-Y. Global Enterprise Customers: Work with large enterprise customers globally to drive complex global implementations on the value framework of Zycus. AI Product Suite: Steer the next-gen cognitive product suite offering.
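The AIOps responsibilities in this posting (anomaly detection on metrics feeding auto-remediation) can be illustrated with a minimal, hedged sketch. The rolling window and 3-sigma threshold below are illustrative assumptions, not details of Zycus's stack:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady latency series (ms) with one obvious spike at index 12
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 102, 250, 101]
print(detect_anomalies(latencies))  # → [12]
```

Production AIOps platforms apply far richer models over logs, metrics, and traces; this only shows the core statistical idea behind predictive alerting.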
Posted 6 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Site Reliability Engineering Manager/Cloud Engineering Manager About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Let’s do this. Let’s change the world. We are looking for a Site Reliability Engineer/Cloud Engineer (SRE2) to work on the performance optimization, standardization, and automation of Amgen’s critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will work on operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control. Roles & Responsibilities: Lead and motivate a high-performing Test Automation team to deliver exceptional results. Provide expert guidance and mentorship to the Test Automation team, fostering a culture of innovation and best practices System Reliability, Performance Optimization & Cost Reduction: Ensure the reliability, scalability, and performance of Amgen’s infrastructure, platforms, and applications. Proactively identify and resolve performance bottlenecks and implement long-term fixes. Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability. 
Automation & Infrastructure as Code (IaC): Drive the adoption of automation and Infrastructure as Code (IaC) across the organization to streamline operations, minimize manual interventions, and enhance scalability. Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization. Standardization of Processes & Tools: Establish standardized operational processes, tools, and frameworks across Amgen’s technology stack to ensure consistency, maintainability, and best-in-class reliability practices. Champion the use of industry standards to optimize performance and increase operational efficiency. Monitoring, Incident Management & Continuous Improvement: Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response. Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences. Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring. Collaboration & Cross-Functional Leadership: Partner with software engineering, and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle. Act as a SME for SRE principles and advocate for best practices for assigned Projects. Capacity Planning & Disaster Recovery: Execute capacity planning processes to support future growth, performance, and cost management. Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures. 
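As a hedged sketch of the availability arithmetic that underlies SLO-driven reliability work like the above (the 99.9% monthly target is an assumed example, not an Amgen figure):

```python
def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability SLO over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% monthly SLO leaves roughly 43.2 minutes of error budget
print(round(error_budget_minutes(0.999), 1))  # → 43.2
```

Capacity planning and incident response both key off this number: once the budget for the period is spent, reliability work takes priority over feature rollout.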
Must-Have Skills: Experienced with AWS/Azure cloud services. Good knowledge of a visualization tool such as Power BI or Tableau. Working knowledge of SQL, Python, PySpark, and Spark. Proficient in CI/CD (Jenkins/GitLab), observability, IaC, GitOps, etc. Experience with containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability. Ability to learn new technologies quickly. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Good-to-Have Skills: Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments. Familiarity with distributed systems, databases, and large-scale system architectures. Bachelor’s degree in Computer Science and Engineering preferred; other engineering fields considered. Databricks knowledge/exposure is good to have (can upskill if hired). Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Basic Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. 9–11+ years of experience in IT infrastructure, with at least 7+ years in Site Reliability Engineering or related fields. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 6 days ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Systems Engineering Consultant 34309 Type: Contract Location: Chennai Budget: ₹19 LPA Notice Period: Immediate Joiners Only Position Overview We are seeking a Systems Engineering Consultant with extensive experience in virtualization server hosting, infrastructure automation, and global platform deployment. The ideal candidate will possess in-depth knowledge of VMware, Red Hat OSV, OpenShift Virtualization, scripting (PowerShell, Ansible, YAML) , and cloud-native tooling such as Tekton and GitOps . This role involves engineering, deployment, and operations across a large-scale, global server environment. Key Responsibilities Architect, design, and administer VMware & Red Hat OSV infrastructure. Deploy and support virtualization environments, including OpenShift Virtualization, VMware vSphere ESXi, vCenter, and vROPs. Lead lifecycle management for hardware and software, including security patching and upgrades. Engineer and document procedures for monitoring, disaster recovery, and security policies. Perform global deployment of virtualization platforms and lead automation initiatives using Tekton, PowerShell, YAML, and Ansible. Support infrastructure projects including capacity expansion, platform upgrades, and technology refreshes. Provide Level 1–3 operational support and participate in rotational on-call duties. Collaborate with global cross-functional teams to support deployments and resolve platform issues. 
Technical Skills Required: 2–3 years with VMware (ESXi, vCenter, vROps); Red Hat OSV and OpenShift Virtualization are mandatory. PowerShell, Ansible, YAML, Tekton Pipelines. GitHub/GitOps, Argo CD. Infrastructure experience with Docker, PVCs, Kubernetes. Understanding of storage, networking, databases, and backups. Familiarity with SQL Server, WebSphere, Linux/Windows OS. Scripting and automation via PowerCLI and API calls. Experience Required: 10+ years in IT (development, operations, support, migrations). 5+ years in OpenShift Virtualization and the VMware ecosystem. 3+ years scripting with Ansible, PowerShell, YAML. Experience managing Git repositories and using Tekton pipelines. 3+ years of experience with Docker, Argo CD, and global VMware infrastructure. Proven experience leading virtualization or infrastructure projects in a large enterprise. Preferred Skills: Certifications: VMware DCV, Red Hat OSV, Red Hat CSA, CKA (Certified Kubernetes Administrator). Familiarity with tools such as Jira, ServiceNow, and best practices in Agile delivery. Excellent technical documentation and communication skills. Team leadership experience and the ability to work in a globally distributed team. Education: Bachelor’s Degree in Computer Science, IT, or related field (required). Certification programs in virtualization, Red Hat, or related technologies (preferred).
Posted 6 days ago
15.0 years
0 Lacs
Gurgaon
On-site
Project Role : Engineering Services Practitioner Project Role Description : Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, technical knowledge and practical experience. Work across structural and stress design, qualification, configuration and technical management. Must have skills : 5G Wireless Networks & Technologies Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Job Title: 5G Core Network Ops Senior Engineer Summary: We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination. Roles and Responsibilities: 1. Fault Handling & Troubleshooting: • Provide Level 2 (L2) support for 5G Core SA network functions in a production environment. • Nokia EDR Operations & Support: monitor and maintain the health of Nokia EDR systems. • Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery. • Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.). • Validate EDR formats and schemas against 3GPP and Nokia specifications. • NCOM Platform Operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform. • Manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments. • Analyze alarms from NetAct/Mantaray or external monitoring tools. • Correlate events using NetScout, Mantaray, and PM/CM data.
• Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec, and handover. • Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload). • Perform packet tracing (Wireshark) or core tracing (PCAP, logs), and Nokia PCMD trace capture and analysis. • Perform root cause analysis (RCA) and implement corrective actions. • Handle escalations from Tier-1 support and provide timely resolution. 2. Automation & Orchestration: • Automate deployment, scaling, healing, and termination of network functions using NCOM. • Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD). • Integrate NCOM with third-party systems using open APIs and custom plugins. 3. Session & Service Investigation: • Trace subscriber issues (5G attach, PDU session, QoS). • Use tools like EDR, Flow Tracer, Nokia Cloud Operations Manager (COM). • Correlate user-plane drops, abnormal releases, bearer QoS mismatches. • Work on preventive measures with the L1 team for health checks and backups. 4. Configuration and Change Management: • Create a MOP for required changes; validate the MOP with Ops teams and stakeholders before rollout/implementation. • Maintain detailed documentation of network configurations, incident reports, and operational procedures. • Support software upgrades, patch management, and configuration changes. • Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs). • Audit NRF/PCF/UDM, etc., configuration and databases. • Validate policy rules, slicing parameters, and DNN/APN settings. • Support integration of new 5G Core nodes and features into the live network. 5. Performance Monitoring: • Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, user-plane utilization. • Proactively detect degrading KPI trends. 6.
Security & Access Support: • Application support for Nokia EDR and CrowdStrike. • Assist with certificate renewal, firewall/NAT issues, and access failures. 7. Escalation & Coordination: • Escalate unresolved issues to L3 teams, Nokia TAC, OSS/Core engineering. • Work with L3 and care teams for issue resolution. • Ensure compliance with SLAs and contribute to continuous service improvement. 8. Reporting: • Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance. Technical Experience and Professional Attributes: • 5–9 years of hands-on experience in the telecom industry. • Mandatory experience with the Nokia 5G Core SA platform. • Hands-on experience with Nokia EDR operations and support: monitoring and maintaining the health of Nokia EDR systems. • Experience performing log analysis and troubleshooting issues related to EDR generation, parsing, and delivery. • Experience operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform. • NF deployment and troubleshooting experience: deployment, scaling, healing, and termination of network functions using NCOM. • Solid understanding of 5G packet core network protocols and interfaces such as N1, N2, N3, N6, N7, N8, GTP-C/U, and HTTPS, including the ability to trace and debug issues. • Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning and Flowone. • In-depth understanding of 3GPP call flows for 5G SA and 5G NSA, call routing, number analysis, system configuration, data roaming, and knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI. • Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based). • Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces. • Strong analytical and troubleshooting skills. • Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing).
And log management systems (e.g., Prometheus, Grafana). • Knowledge of network protocols and security (TLS, IPsec). • Excellent communication and documentation skills. Educational Qualification: • BE/BTech • 15 years full time education Additional Information: • Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes). • Experience in Nokia Platform 5G Core, NCOM, NCS, Nokia private cloud and public cloud (AWS preferred), and cloud-native environments (Kubernetes, Docker, CI/CD pipelines). • Cloud certifications (AWS) / experience on AWS Cloud.
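The KPI monitoring this role describes ultimately reduces to ratios over event counters. A minimal, illustrative sketch (the counter values and the 95% alert threshold are hypothetical, not Nokia or 3GPP figures):

```python
def kpi_success_rate(attempts: int, successes: int) -> float:
    """Success-rate KPI as a percentage; treat zero traffic as healthy."""
    if attempts == 0:
        return 100.0
    return 100.0 * successes / attempts

def breaches_threshold(rate: float, threshold: float = 95.0) -> bool:
    """True when the KPI has degraded below the alerting threshold."""
    return rate < threshold

# e.g. a registration success rate computed from hypothetical AMF counters
rate = kpi_success_rate(attempts=20_000, successes=19_720)
print(rate, breaches_threshold(rate))  # → 98.6 False
```

Dashboards like NetAct/NetScout compute the same kind of ratio per interval; trend detection then compares successive intervals rather than a single snapshot.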
Posted 6 days ago
2.0 years
2 - 9 Lacs
Gurgaon
On-site
About Gartner IT: Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team. About this role: The Cloud Centre of Excellence Team is responsible for developing Gartner’s capabilities in automating and streamlining IT infrastructure processes and tasks while improving Gartner’s capabilities and service offerings with greater self-service abilities using public cloud platforms and open-source technologies. What you’ll do: Collaborate with a cross-functional team of application developers, operations engineers, architects to understand project requirements and translate them into automated solutions that you build. Collaborate with colleagues to support and improve architecture, systems, processes, standards and tools. Lead technical discussions to ensure solutions are designed for successful deployment, security, and high availability in the cloud Design, implement, and maintain reusable compute, storage, network, and security components using infrastructure as code. Build reusable workflows / pipelines for application Deployments. Write and maintain code for automating the creation of scalable/resilient systems/infrastructure with a focus on immutability and containers. Develop, implement, and test automated data backup and recovery, and disaster recovery procedures across multiple services and platforms. Write and maintain clear, concise documentation, runbooks and operational standards including systems architecture and infrastructure diagrams. Assist development teams in the creation and understanding of automated application configurations, and maintaining the service catalog part of company’s internal developer portal. 
Ensure all solutions are cost-effective and properly instrumented with telemetry to ensure holistic monitoring. Troubleshoot, resolve, and report issues in the development, test and production environments. Design and deploy scalable, highly available, and fault tolerant distributed systems. Continuously identify, adopt, & refine best practices. Educate/mentor product teams and junior engineers. What you will need : 2+ years’ experience in AWS cloud, Kubernetes, and DevOps positions. Must have: Experience with containerized application builds and deployment orchestration using GitOps, primarily using Argo CD and Flux CD. Knowledge of infra automation and management through GitOps (Terraform / Open Tofu) is required. Exposure to Cloud Native tools for delivery such as Argo (CD, Rollouts), Kustomize, OCI and similar technologies. Good scripting experience (python, shell, groovy etc) is preferred. AWS and Kubernetes certification is a plus. Who you are: Effective time management skills and ability to meet deadlines Exceptional communication skills, to both technical and non-technical audiences Excellent organization, multitasking, and prioritization skills Ability to work independently and with a team Good communication skills and ability to work with global teams to define and deliver on projects Intellectual curiosity, passion for technology and keeping up with new trends Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. #LI-AJ4 Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. 
We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work . What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. 
Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com . Job Requisition ID:101726 By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
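As a hedged illustration of the GitOps deployment work this role centers on (Argo CD, Flux CD), the sketch below programmatically builds a minimal Argo CD `Application` manifest. The repository URL, app name, and namespaces are hypothetical placeholders, not Gartner resources:

```python
def argocd_application(name, repo_url, path, dest_namespace,
                       target_revision="main"):
    """Build a minimal Argo CD Application manifest as a plain dict;
    serialize it to YAML before committing it to a GitOps repository."""
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": repo_url,
                "path": path,
                "targetRevision": target_revision,
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": dest_namespace,
            },
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

app = argocd_application("payments", "https://example.com/org/deploys.git",
                         "apps/payments", "payments-prod")
print(app["spec"]["destination"]["namespace"])  # → payments-prod
```

With `syncPolicy.automated` set, Argo CD continuously reconciles the cluster against the Git repository, which is the self-service, declarative model the posting describes.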
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are seeking a skilled and experienced DevOps Lead to become part of our team. The ideal candidate will possess a solid background in constructing and deploying pipelines utilizing Jenkins and GitHub Actions, along with familiarity with messaging systems like ArtemisMQ and extensive expertise in 3-tier and microservices architecture, including Spring Cloud Services SCS. Proficiency in Azure cloud services and deployment models is a crucial requirement. Your responsibilities will include designing, implementing, and maintaining CI/CD pipelines using Jenkins and GitHub Actions for Java applications. Ensuring secure and efficient build and deployment processes, collaborating with development and operations teams to integrate security practices into the DevOps workflow, and managing and optimizing messaging systems specifically ArtemisMQ. You will also be tasked with architecting and implementing solutions based on 3-tier and microservices architecture, utilizing Azure cloud services for application deployment and management, monitoring and troubleshooting system performance and security issues, and staying updated with industry trends in DevSecOps and cloud technologies. Additionally, mentoring and guiding team members on DevSecOps practices and tools will be part of your role. As a DevOps Lead, you will be expected to take ownership of parts of proposal documents, provide inputs in solution design based on your expertise, plan configuration activities, conduct solution product demonstrations, and actively lead small projects. You will also contribute to unit-level and organizational initiatives aimed at delivering high-quality, value-adding solutions to customers. 
In terms of technical requirements, you should have proven experience as a DevSecOps Lead or in a similar role; strong proficiency in Jenkins and GitHub Actions for building and deploying Java applications; the ability to execute CI/CD pipeline migrations from Jenkins to GitHub Actions for Azure deployments; familiarity with messaging systems such as ArtemisMQ; and extensive knowledge of 3-tier and microservices architecture, including Spring Cloud Services (SCS). Familiarity with infrastructure-as-code tools such as Terraform or Ansible, knowledge of containerization and orchestration tools such as Docker and Kubernetes, proficiency in Azure cloud services and AI services deployment, a strong understanding of security best practices in DevOps, and excellent problem-solving skills are prerequisites. Effective communication, leadership skills, the ability to work in a fast-paced collaborative environment, and knowledge of tools such as GitOps, Podman, ArgoCD, Helm, Nexus, GitHub Container Registry, Grafana, and Prometheus are also desired. Furthermore, you should be able to develop value-creating strategies, have good knowledge of software configuration management systems, stay updated on the latest technologies and industry trends, exhibit logical thinking and problem-solving skills, understand financial processes for various project types and pricing models, identify improvement areas in current processes and suggest technological solutions, and have client-interfacing skills. Project and team management capabilities, along with knowledge of one or two industry domains, are also beneficial. Preferred skills include expertise in Azure DevOps within the Cloud Platform technology domain.
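One concrete concern in the CI/CD pipeline migrations mentioned above (Jenkins to GitHub Actions) is preserving stage dependency order: Jenkins stage ordering must map onto GitHub Actions jobs with correct `needs:` relationships. A minimal, hypothetical sketch in Python, using the standard-library topological sorter (all stage names here are invented for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical stage dependency graph for a Java build pipeline:
# each key maps to the set of stages it depends on.
stages = {
    "checkout": set(),
    "build": {"checkout"},
    "unit-test": {"build"},
    "security-scan": {"build"},
    "deploy": {"unit-test", "security-scan"},
}

def migration_order(graph):
    """Return stages in an order that respects dependencies, e.g.
    for emitting target-platform jobs with dependencies intact."""
    return list(TopologicalSorter(graph).static_order())

print(migration_order(stages))
```

Any emitted order must start at "checkout" and end at "deploy"; the parallel stages ("unit-test", "security-scan") may land in either relative order.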
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly skilled and self-driven Site Reliability Engineer to join our dynamic team. This role is ideal for someone with a strong foundation in Kubernetes, DevOps, and observability who can also support machine learning infrastructure, GPU optimization, and Big Data ecosystems. You will play a pivotal role in ensuring the reliability, scalability, and performance of our production systems, while also enabling innovation across ML and data teams. Key Responsibilities: Automation & Reliability: Design, build, and maintain Kubernetes clusters across hybrid or cloud environments (e.g., EKS, GKE, AKS). Implement and optimize CI/CD pipelines using tools like Jenkins, ArgoCD, and GitHub Actions. Develop and maintain Infrastructure as Code (IaC) using tools such as Ansible or Terraform. Monitoring & Observability: Deploy and maintain monitoring, logging, and tracing tools (e.g., Thanos, Prometheus, Grafana, Loki, Jaeger). Establish proactive alerting and observability practices to identify and address issues before they impact users. ML Ops & GPU Optimization: Support and scale ML workflows using tools like Kubeflow, MLflow, and TensorFlow Serving. Work with data scientists to ensure efficient use of GPU resources, optimizing training and inference workloads. Troubleshooting & Incident Management: Lead root cause analysis for infrastructure and application-level incidents. Participate in the on-call rotation and improve incident response processes. Scripting & Automation: Automate operational tasks and service deployment using Python, Shell, Groovy, or Ansible. Write reusable scripts and tools to improve team productivity and reduce manual effort. Learning & Collaboration: Stay up to date with emerging technologies in SRE, ML Ops, and observability. Collaborate with cross-functional teams including engineering, data science, and security to ensure system integrity and performance. Requirements: 3+ years of experience as an SRE, DevOps Engineer, or equivalent role. Strong experience with the Kubernetes ecosystem and container orchestration.
Proficiency in DevOps tooling including Jenkins, ArgoCD, and GitOps workflows. Deep understanding of observability tools, with hands-on experience using the Thanos and Prometheus stack. Experience with ML platforms (MLflow, Kubeflow) and supporting GPU workloads. Strong scripting skills in Python, Shell, or Ansible. Preferred: CKS (Certified Kubernetes Security Specialist) certification. Exposure to Big Data platforms (e.g., Spark, Kafka, Hadoop). Experience with cloud-native environments (AWS, GCP, or Azure). Background in infrastructure security and compliance. (ref:hirist.tech)
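The proactive-alerting responsibility described above is commonly implemented with SLO burn-rate alerts: fire when the observed error rate would exhaust the error budget much faster than the SLO allows. A minimal sketch in Python (the SLO target and threshold values are illustrative, not prescriptive):

```python
def should_alert(errors: int, total: int, slo_target: float = 0.999,
                 burn_rate_threshold: float = 14.4) -> bool:
    """Fire an alert when the observed error rate consumes the SLO
    error budget at >= `burn_rate_threshold` times the allowed pace.
    (14.4x is a commonly cited fast-burn threshold for a 1h window.)"""
    if total == 0:
        return False
    error_budget = 1.0 - slo_target            # e.g. 0.001 for a 99.9% SLO
    burn_rate = (errors / total) / error_budget
    return burn_rate >= burn_rate_threshold

# 2% observed errors against a 0.1% budget is a 20x burn: alert.
print(should_alert(errors=20, total=1000))
```

A real deployment would compute `errors`/`total` from time-windowed counters (e.g., Prometheus rate queries) rather than raw totals.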
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
CI/CD Release Automation DevOps Engineer - Assistant Vice President is an intermediate-level position responsible for a variety of engineering activities, including the design, acquisition, and development of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are being met within existing and planned frameworks. Responsibilities: Engineer and release functionalities in the Application Release Automation area for various technologies, including Public Cloud, Private Cloud, and traditional products. Interact with product vendors and take responsibility for packaging new releases/patches with new features or bug fixes. Maintain highly scalable infrastructure that manages concurrent releases for different applications across regions. Qualifications: About 8-12 years of relevant experience in CI/CD Release Automation DevOps engineering. Experience working in financial services or a large, complex, and/or global environment. Project management experience. Consistently demonstrates clear and concise written and verbal communication. Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices. Demonstrated analytic/diagnostic skills. Ability to work in a matrix environment and partner with virtual teams. Ability to work independently, prioritize, and take ownership of various parts of a project or initiative. Ability to work under pressure and manage tight deadlines or unexpected changes in expectations or requirements. Proven track record of operational process change and improvement. Skills & Core Role Competencies: Cloud Technologies: OpenShift, Docker, AWS, GCP. CI/CD & Automation Tools: Harness, Jenkins, Tekton, Terraform, GitLab, Ansible Tower. Languages/Scripting: Java, Spring Boot, React, SQL, Shell scripting, Python. Version Control/SCM: GitHub, Bitbucket, TFS, Artifactory, Nexus. Web/Application Servers: Tomcat,
NGINX, Node.js. Databases: Oracle, MSSQL, MongoDB. Monitoring Tools: AppDynamics, Splunk, ELK. Operating Systems: Windows and Linux variants. Other Tools: JIRA, ServiceNow, Confluence, APIM, HashiCorp Vault. Education: Bachelor’s degree/University degree or equivalent experience. This is a Senior Engineering Analyst role responsible for taking ownership of milestones in the Application Release Automation area as we implement organizational outcomes such as strategic tools adoption with simplified pipelines. The role requires enabling key vendor-delivered product features such as GitOps and Feature Flags to meet timelines for the Public Cloud team and ensure governance in the release process. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Systems & Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
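Managing concurrent releases across regions, as this role describes, amounts to a concurrency gate: cap how many application releases may be in flight per region at once. A hypothetical sketch in Python (the limit, region, and application names are invented for illustration):

```python
from collections import defaultdict

class ReleaseGate:
    """Per-region concurrency gate: at most `limit` application
    releases may run in any one region at the same time."""
    def __init__(self, limit: int = 2):
        self.limit = limit
        self.active = defaultdict(set)   # region -> set of app names

    def try_start(self, region: str, app: str) -> bool:
        if len(self.active[region]) >= self.limit:
            return False                 # caller should queue and retry
        self.active[region].add(app)
        return True

    def finish(self, region: str, app: str) -> None:
        self.active[region].discard(app)

gate = ReleaseGate(limit=2)
assert gate.try_start("emea", "payments")
assert gate.try_start("emea", "ledger")
assert not gate.try_start("emea", "billing")   # third concurrent release blocked
gate.finish("emea", "payments")
assert gate.try_start("emea", "billing")       # slot freed, release proceeds
```

A production release orchestrator would back this with a durable store and locking rather than in-process state.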
Posted 1 week ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About Gartner IT: Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team. About this role: The Cloud Centre of Excellence Team is responsible for developing Gartner’s capabilities in automating and streamlining IT infrastructure processes and tasks while improving Gartner’s capabilities and service offerings with greater self-service abilities using public cloud platforms and open-source technologies. What you’ll do: Collaborate with a cross-functional team of application developers, operations engineers, and architects to understand project requirements and translate them into automated solutions that you build. Collaborate with colleagues to support and improve architecture, systems, processes, standards, and tools. Lead technical discussions to ensure solutions are designed for successful deployment, security, and high availability in the cloud. Design, implement, and maintain reusable compute, storage, network, and security components using infrastructure as code. Build reusable workflows/pipelines for application deployments. Write and maintain code for automating the creation of scalable/resilient systems/infrastructure with a focus on immutability and containers. Develop, implement, and test automated data backup and recovery, and disaster recovery procedures across multiple services and platforms. Write and maintain clear, concise documentation, runbooks, and operational standards, including systems architecture and infrastructure diagrams. Assist development teams in the creation and understanding of automated application configurations, and maintain the service catalog as part of the company’s internal developer portal.
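Automated backup and recovery procedures like those above usually hinge on a retention policy. A simplified sketch in Python of a keep-daily-plus-weekly rotation; the exact policy (7 daily, 4 weekly) is an assumption for illustration, not a recommendation:

```python
from datetime import date, timedelta

def snapshots_to_delete(snap_dates, keep_daily=7, keep_weekly=4):
    """Backup-rotation sketch: keep the newest `keep_daily` snapshots
    plus the newest snapshot of each of the last `keep_weekly` ISO
    weeks; everything else becomes eligible for deletion."""
    ordered = sorted(snap_dates, reverse=True)      # newest first
    keep = set(ordered[:keep_daily])
    weeks_seen = []
    for d in ordered:
        wk = d.isocalendar()[:2]                    # (ISO year, ISO week)
        if wk not in weeks_seen:
            weeks_seen.append(wk)
            if len(weeks_seen) <= keep_weekly:
                keep.add(d)
    return [d for d in ordered if d not in keep]

# 30 consecutive daily snapshots ending 2024-03-30
dates = [date(2024, 3, 1) + timedelta(days=i) for i in range(30)]
old = snapshots_to_delete(dates)
assert date(2024, 3, 30) not in old   # newest snapshots always retained
```

Testing the deletion list (rather than the kept set) mirrors how a real rotation job would act: it only ever removes, never recreates.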
Ensure all solutions are cost-effective and properly instrumented with telemetry to ensure holistic monitoring. Troubleshoot, resolve, and report issues in the development, test, and production environments. Design and deploy scalable, highly available, and fault-tolerant distributed systems. Continuously identify, adopt, and refine best practices. Educate/mentor product teams and junior engineers. What you will need: 2+ years’ experience in AWS cloud, Kubernetes, and DevOps positions. Must have: Experience with containerized application builds and deployment orchestration using GitOps, primarily using Argo CD and Flux CD. Knowledge of infrastructure automation and management through GitOps (Terraform/OpenTofu) is required. Exposure to cloud-native delivery tools such as Argo (CD, Rollouts), Kustomize, OCI, and similar technologies. Good scripting experience (Python, Shell, Groovy, etc.) is preferred. AWS and Kubernetes certification is a plus. Who you are: Effective time management skills and ability to meet deadlines Exceptional communication skills, to both technical and non-technical audiences Excellent organization, multitasking, and prioritization skills Ability to work independently and with a team Good communication skills and ability to work with global teams to define and deliver on projects Intellectual curiosity, passion for technology and keeping up with new trends Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories.
We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work. What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. 
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Skills: Gloo API, Java, Go, Python, GitOps. Location: Bangalore, Pune. Experience: 8 Yrs. We are seeking a talented and motivated developer to join our team in building a new API Gateway platform. The ideal candidate will have experience in developing self-service tools and abstractions for onboarding new APIs, as well as expertise in configuration management using GitOps. Familiarity with Envoy-based gateways is highly desirable. If you have a passion for innovation and a desire to work in a collaborative environment, we want to hear from you! Key Responsibilities: Develop a new Envoy-based API Gateway platform, focusing on seamless onboarding of new APIs. Create self-service tools to enhance the developer experience. Implement configuration management practices using GitOps methodologies. Collaborate with cross-functional teams to understand integration needs and align on API requirements. Optimize and maintain the platform for performance, scalability, and reliability. Participate in code reviews, testing, and documentation efforts to ensure high-quality deliverables. Preferred Skills: Understanding of API design principles and best practices. Hands-on experience with Gloo API management. Experience with cloud-based solutions and microservices architecture.
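Self-service API onboarding with GitOps, as described above, typically validates a proposed route against the existing catalog before the change is committed to the config repository. A hypothetical sketch in Python (route paths are invented; real Gloo/Envoy route configuration is far richer than a path prefix):

```python
def validate_onboarding(existing_routes, new_route):
    """Reject a new API route that duplicates, nests under, or
    shadows an already-registered prefix. Returns a list of error
    strings; an empty list means the route may be onboarded."""
    candidate = new_route.rstrip("/") + "/"
    errors = []
    for route in existing_routes:
        prefix = route.rstrip("/") + "/"
        if candidate.startswith(prefix) or prefix.startswith(candidate):
            errors.append(f"conflicts with existing route {route}")
    return errors

registered = ["/payments/v1", "/accounts/v2"]
assert validate_onboarding(registered, "/ledger/v1") == []          # clean
assert validate_onboarding(registered, "/payments/v1/refunds")       # nested: conflict
```

Running such a check in CI on each pull request keeps invalid gateway configuration from ever reaching the GitOps-managed main branch.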
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We’re seeking someone to join our team as a Data Caching L3 Support Specialist in the Enterprise Computing team. Enterprise Technology & Services (ETS) delivers and manages essential and innovative technology infrastructure solutions to Morgan Stanley’s businesses. ETS is responsible for driving the production, operations, and engineering of our data centers, voice and data networking solutions, mainframe servers and databases, distributed computing, wireless technologies, and associated end-user capabilities. A core pillar within ETS is the ENS (Enterprise Network Services) Engineering team. This global team is responsible for the design and deployment of data and voice network infrastructure, leveraging cutting-edge technologies to provide network connectivity with the best possible user experience. This includes Morgan Stanley-owned private data center footprints across the globe, campus and branch networks, the global WAN and metro core network, public cloud connectivity, the internet plant, trading-floor networks, etc. The Core Middleware and Container Platforms team is part of the Enterprise Computing organization at Morgan Stanley. The team manages and supports a variety of products/applications including Autosys, MQ, Kafka, Gemfire, Redis, on-prem Kubernetes, and numerous internally developed products in the data transfer/distribution space. This position is for a technical operations specialist with global responsibility for managing and providing support for Data Caching (Gemfire & Redis) and distributed job scheduling (Autosys) products. In the Technology division, we leverage innovation to build the connections and capabilities that power our Firm, enabling our clients and colleagues to redefine markets and shape the future of our communities.
This is a Lead Technology Administration Office position at Director level, which is part of the job family responsible for managing administrative tasks related to technology infrastructure and services, ensuring smooth operations and support for the organization's technology needs. Since 1935, Morgan Stanley has been a global leader in financial services, always evolving and innovating to better serve our clients and our communities in more than 40 countries around the world. Interested in joining a team that's eager to create, innovate and make an impact on the world? Read on. What You'll Do In The Role The successful candidate will be involved in day-to-day handling of change, incidents, escalations, and problem management. This includes application server administration and technical troubleshooting of infrastructure and user incidents. The position requires on-call coverage outside business hours one week a month. The hours will be adjusted within the team to accommodate the extended coverage. The role is responsible for supporting Morgan Stanley’s Gemfire, Redis, and Autosys platforms on a day-to-day basis. Main job responsibilities include: Level 3 production management of Autosys, Gemfire, and Redis plants. This includes handling escalations from Level 1 and Level 2, incident management, request fulfillment, problem management, and change management. Deep-diving into complex troubleshooting, implementing changes, and serving as an escalation point for the Level 2 teams. Improving operational processes and automation by proactively identifying, analyzing, and improving upon existing processes and tools. Working in a global environment in a team that has members across the globe and provides support 24/7 in a follow-the-sun manner. Creating and maintaining best practices/policies and ensuring that they are followed. Working with external vendors and internal key stakeholders to plan and execute changes.
Participate in weekly review meetings and the squads’ agile ceremonies, and actively engage with various engineering teams to review the infrastructure. What You'll Bring To The Role Experience with Linux system administration (preferably Red Hat) Hands-on data caching support (preferably Gemfire or Redis) experience Familiarity with batch scheduling (Autosys preferred) Familiarity with scripting and orchestration (Perl, Python, Ansible, GitOps) Experience with monitoring and alerting tools and modern observability concepts Excellent written and oral English communication skills. The candidate must be capable of writing documentation, making presentations to an internal audience and interacting positively with upper management, colleagues, and customers Independent problem-solving skills, self-motivation, and a mindset for taking ownership Knowledge of operational, agile, DevOps, and SRE concepts such as SLAs, metrics, toil, SLIs/SLOs, observability, and automated deployment pipelines A minimum of 3-5 years of infrastructure production support experience, preferably in a regulated environment (e.g. finance IT) Skills Desired Public/private cloud experience Experience with containerization technologies (Kubernetes/OpenShift) Experience with work management tools such as Jira, ServiceNow, Git Experience with troubleshooting tools such as tcpdump and Wireshark Interest in and understanding of emerging IT trends What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries.
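Among the SRE concepts listed in the requirements (SLAs, SLIs/SLOs), the error-budget arithmetic is simple enough to show directly. A minimal sketch in Python:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """How many minutes of downtime a service may accrue in the
    window while still meeting its availability SLO."""
    return (1.0 - slo) * window_days * 24 * 60

# A 99.9% SLO over a 30-day window allows 43.2 minutes of downtime.
assert abs(error_budget_minutes(0.999) - 43.2) < 1e-9
```

Teams track the remaining budget against incidents: once it is spent, reliability work takes priority over feature releases until the window rolls forward.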
At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
Posted 1 week ago
0 years
0 Lacs
Delhi
Remote
ABOUT TIDE At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. ABOUT THE TEAM: Our 40+ engineering teams are working on designing, creating and running the rich product catalogue across our business areas (e.g. Payments Services, Business Services). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. It's an exceptional opportunity to make a real difference by taking ownership of engineering practices in a rapidly expanding company! We work in small autonomous teams, grouped under common domains owning the full lifecycle of some microservices in Tide's service catalogue. Our engineers self-organize, gather together to discuss technical challenges, and set their own guidelines in the different Communities of Practice regardless of where they currently stand in our Growth Framework. ABOUT THE ROLE: Contribute to our event-driven Microservice Architecture (currently 200+ services owned by 40+ teams). You will define and maintain the services your team owns (you design it, you build it, you run it, you scale it globally) Use Java 17, Spring Boot and JOOQ to build your services. Expose and consume RESTful APIs.
We value good API design and we treat our APIs as Products (in the world of Open Banking, oftentimes they are going to be public!) Use SNS + SQS and Kafka to send events Utilise PostgreSQL via Aurora as your primary datastore (we are heavy AWS users) Deploy your services to Production as often as you need to (this usually means multiple times per day!). This is enabled by our CI/CD pipelines powered by GitHub with GitHub Actions, and solid JUnit/Pact testing (new joiners are encouraged to have something deployed to production in their first 2 weeks) Experience modern GitOps using ArgoCD. Our Cloud team uses Docker, Terraform, EKS/Kubernetes to run the platform. Have DataDog as your best friend to monitor your services and investigate issues Collaborate closely with Product Owners to understand our Users' needs, Business opportunities and Regulatory requirements and translate them into well-engineered solutions WHAT WE ARE LOOKING FOR: Have some experience building server-side applications and detailed knowledge of the relevant programming languages for your stack. You don't need to know Java, but bear in mind that most of our services are written in Java, so you need to be willing to learn it when you have to change something there! Have sound knowledge of a backend framework (e.g.
Spring/Spring Boot) that you've used to write microservices that expose and consume RESTful APIs Have experience engineering scalable and reliable solutions in a cloud-native environment (the most important thing for us is understanding the fundamentals of CI/CD, practical Agile so to speak) Demonstrate a mindset of delivering secure, well-tested and well-documented software that integrates with various third-party providers and partners (we do that a lot in the fintech industry) OUR TECH STACK: Java 17, Spring Boot and JOOQ to build the RESTful APIs of our microservices Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users) Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform DataDog, Elasticsearch/Fluentd/Kibana and Rollbar to keep it running GitHub with GitHub Actions for SonarCloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines WHAT YOU WILL GET IN RETURN: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 25 Annual leaves Family & Friendly Leaves TIDEAN WAYS OF WORKING: At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. #LI-NN1 TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels.
We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard. At Tide, we thrive on diversity, embracing various backgrounds and experiences. We welcome all individuals regardless of ethnicity, religion, sexual orientation, gender identity, or disability. Our inclusive culture is key to our success, helping us build products that meet our members' diverse needs. We are One Team, committed to transparency and ensuring everyone's voice is heard. Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
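The event-driven stack described in this role (SNS+SQS and Kafka) delivers messages at least once, so consumers are typically written to be idempotent. Tide's services are Java/Spring Boot; the sketch below uses Python purely for illustration, with an in-memory dedupe store standing in for a real persistent one:

```python
class IdempotentConsumer:
    """At-least-once event handling sketch: processed event ids are
    remembered so redeliveries are skipped. A real service would
    persist the ids (e.g. in its primary datastore), not in memory."""
    def __init__(self, handler):
        self.handler = handler
        self.seen = set()

    def consume(self, event_id: str, payload) -> bool:
        if event_id in self.seen:
            return False                 # duplicate delivery: skip
        self.handler(payload)
        self.seen.add(event_id)
        return True

processed = []
consumer = IdempotentConsumer(processed.append)
assert consumer.consume("evt-1", {"amount": 10})
assert not consumer.consume("evt-1", {"amount": 10})   # redelivery ignored
assert processed == [{"amount": 10}]
```

Recording the id and applying the side effect should happen atomically in practice, otherwise a crash between the two steps reintroduces the duplicate.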
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Job Title: Azure DevOps Engineer Experience: 7+ Years Location: Remote (Onboarding for 2 weeks in Nashik, Maharashtra) Employment Type: Full-Time Job Summary: We are seeking a highly experienced Azure DevOps Engineer with strong expertise in cloud infrastructure, CI/CD automation, and modern DevOps practices. The ideal candidate will be responsible for designing, building, and maintaining scalable Azure environments and supporting deployment pipelines using best-in-class DevOps tools. Note: The candidate must be willing to travel to Nashik, Maharashtra for a 2-week onboarding period. Key Responsibilities: Provision and configure infrastructure as per Azure cloud standards using Infrastructure-as-Code (IaC). Design and implement private, public, and hybrid cloud-based solutions on Azure. Build, manage, and optimize CI/CD pipelines using Azure DevOps. Work on Kubernetes, Helm, Docker, and microservices architecture. Implement secure and scalable source control workflows using Git and GitOps principles. Develop and maintain scripts using PowerShell and Linux shell scripting for automation tasks. Build and deploy artifacts using NuGet, and manage deployments across multiple environments. Support and automate the build and deployment of Angular/React applications. Configure and maintain application servers including NGINX and IIS. Continuously improve build/release processes, enhancing efficiency and reliability. Perform debugging and root cause analysis, and resolve infrastructure and deployment issues. Monitor system performance, analyze logs, and ensure high availability and performance. Collaborate with development, QA, and IT teams to align DevOps processes with business needs. Required Skills: Strong experience in Microsoft Azure and its managed services. Proficient with CI/CD pipeline design and implementation using Azure DevOps. Deep knowledge of Kubernetes, Helm, Docker, and container orchestration.
Solid experience with PowerShell scripting and Linux-based command-line tools. Hands-on experience with NuGet, build/release pipelines, and versioned artifact deployments. Ability to build and deploy frontend apps (Angular/React) to target environments. Strong knowledge of configuring and maintaining NGINX, IIS, or similar application servers. Advanced troubleshooting and system debugging capabilities. Experience with infrastructure monitoring, log analysis, and performance tuning. Familiarity with databases, caching, message queues, and email services in cloud environments.
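Versioned artifact deployments, as required above, commonly follow semantic versioning. A minimal, illustrative sketch in Python of the version-bump step a release pipeline might run before tagging a package (e.g., a NuGet artifact):

```python
def bump_version(version: str, part: str = "patch") -> str:
    """Compute the next semantic version (MAJOR.MINOR.PATCH) for a
    build/release pipeline: patch for fixes, minor for features,
    major for breaking changes."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

assert bump_version("1.4.2") == "1.4.3"
assert bump_version("1.4.2", "minor") == "1.5.0"
assert bump_version("1.4.2", "major") == "2.0.0"
```

Keeping the bump rule in one script makes artifact versions reproducible across environments instead of being chosen by hand at release time.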
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
You are being hired as a Senior Frontend Development Engineer for an enterprise-grade, high-performance supercomputing platform based in Pune, India. The company specializes in helping enterprises and service providers build AI inference platforms for end users, built on a state-of-the-art RDU (Reconfigurable Dataflow Unit) hardware architecture. Its cloud-agnostic MLOps platform abstracts away infrastructure complexity, allowing seamless deployment, management, and scaling of foundation-model workloads at production scale.

As a Senior Software Engineer, your primary responsibility will be developing robust, intuitive frontends that enhance the user experience. You will play a crucial role in evolving the enterprise-grade AI platform, ensuring high performance, security, and longevity. This is a high-impact role at the intersection of AI infrastructure, enterprise software, and developer experience.

Your key responsibilities will include:
- Developing and maintaining dynamic, responsive UIs using React, TypeScript, and HTML/CSS
- Collaborating with backend teams to integrate with RESTful and gRPC APIs
- Simplifying complex technical workflows into user-friendly interfaces
- Upholding high standards of UI/UX, accessibility, performance, and responsiveness
- Working closely with cross-functional teams to deliver production-grade features
- Contributing to frontend architecture and codebase evolution
- Writing clean, maintainable code with unit and integration tests
- Participating in code reviews, documentation, and continuous improvement initiatives

To qualify for this role, you must have:
- 7-10 years of professional experience in frontend development
- Strong expertise in React, TypeScript, and modern JavaScript (ES6+)
- Proficiency in HTML/CSS, responsive design, and browser compatibility
- A solid understanding of API integration, authentication flows (OAuth, SSO), and state management
- Experience working within a microservices-based architecture alongside backend teams
- Familiarity with frontend build pipelines and CI/CD best practices
- A passion for user experience and meticulous attention to design detail

Preferred qualifications include experience with enterprise or developer-facing platforms; an understanding of Kubernetes, containerized environments, and cloud storage systems (S3, GCP, Azure); exposure to Stripe integration, observability tools, and security best practices; and familiarity with GitOps workflows, Helm charts, or CRDs. Comfort in full-stack environments and contributing to backend discussions will be an added advantage.
Posted 1 week ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Infrastructure/Cloud
Main location: India, Karnataka, Bangalore
Position ID: J0625-0444
Employment Type: Full Time

Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: Sr Kubernetes Admin
Position: Senior Systems Engineer
Experience: 6-8 years
Category: Software Development/Engineering
Shift: Rotational Shift (primarily 7PM-4AM IST, US hours)
Main locations: Bangalore, Chennai
Position ID: J0625-0444
Employment Type: Full Time
Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 6 years of relevant experience.

Position Description:
- 6+ years of experience managing Kubernetes clusters on Azure or OCI
- Experience creating, managing, and upgrading production-scale Kubernetes clusters; deep understanding of Kubernetes networking
- Support our Kubernetes-based projects to resolve critical and complex technical issues in a 24x7x365 support model, leveraging deep technical and product expertise along with an understanding of customers' needs
- Performing application deployments on Kubernetes clusters (using DevOps tools such as Azure DevOps, ArgoCD, GitOps)
- Securely managing Kubernetes clusters on at least one cloud provider (AWS, Azure, OpenStack, or GCP)
- Kubernetes core concepts: Deployment, ReplicaSet, DaemonSet, StatefulSet, Job
- Managing Kubernetes Secrets
- Ingress controllers (NGINX, Istio, etc.) and cloud-native load balancers
- Managing Kubernetes storage (PVs, PVCs, StorageClasses, provisioners)
- Kubernetes networking (Services, Endpoints, DNS, load balancers)
- Managing resource quotas
- Experience setting up monitoring and alerting for Kubernetes clusters using open-source tools like Grafana and Prometheus
- Deeply engage with the architecture and operations, and work to continuously improve the overall Kubernetes support experience
- Performance tuning and optimization of Kubernetes clusters
- Certification in Microsoft, AWS, and/or competing cloud technologies is desired
- Strong knowledge of Linux and Docker
- Good scripting skills (shell, Bash, and Python)
- Well versed in version control systems such as Git/Bitbucket
- Hands-on experience hardening infrastructure for security, performance, compliance, and regulatory requirements
- Familiarity with configuration management and IaC tools like Chef/Ansible/Puppet, Terraform, AWS CloudFormation, etc.
- Experience with or knowledge of monitoring tools like Datadog, Nagios, etc.
- Experience troubleshooting RHEL: checking syslog and handling different host-level issues
- Certifications such as CKA (Certified Kubernetes Administrator) or CKAD (Certified Kubernetes Application Developer)

Must-have skills:
- Worker node management (Linux and Windows)
- Containers and builds
- Azure Kubernetes Service
- ArgoCD
- GitOps
- Knowledge of Azure infrastructure

Job Qualifications: CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation.
Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs. Life at CGI: It is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
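The "Managing resource quotas" item in the Sr Kubernetes Admin description above can be illustrated with a small sketch. This helper is entirely hypothetical (the function names are ours): it performs the bookkeeping behind a namespace ResourceQuota check, normalizing Kubernetes CPU quantities to millicores and memory to MiB before comparing totals against the quota.

```python
# Hypothetical illustration of the arithmetic behind a Kubernetes namespace
# ResourceQuota check: do the summed pod requests fit within the quota?

def parse_cpu(value: str) -> int:
    """Convert a Kubernetes CPU quantity ('500m' or '2') to millicores."""
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)

def parse_mem(value: str) -> int:
    """Convert a memory quantity ('512Mi' or '2Gi') to MiB."""
    if value.endswith("Gi"):
        return int(value[:-2]) * 1024
    if value.endswith("Mi"):
        return int(value[:-2])
    raise ValueError(f"unsupported memory unit: {value}")

def fits_quota(pod_requests, quota) -> bool:
    """Return True if the summed requests stay within the namespace quota."""
    total_cpu = sum(parse_cpu(p["cpu"]) for p in pod_requests)
    total_mem = sum(parse_mem(p["memory"]) for p in pod_requests)
    return total_cpu <= parse_cpu(quota["cpu"]) and total_mem <= parse_mem(quota["memory"])

pods = [{"cpu": "500m", "memory": "512Mi"}, {"cpu": "1", "memory": "1Gi"}]
quota = {"cpu": "2", "memory": "2Gi"}
print(fits_quota(pods, quota))  # 1500m of 2000m CPU, 1536Mi of 2048Mi -> True
```

In a real cluster the API server enforces this at admission time; the sketch only shows the unit normalization an admin reasons with when sizing quotas.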
Job Title: OCI Platform
Position: Senior Systems Engineer
Experience: 6-8 years
Category: Software Development/Engineering
Shift: Rotational Shift (primarily 7PM-4AM IST, US hours)
Main locations: Bangalore, Chennai
Position ID: J0625-0445
Employment Type: Full Time
Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 6 years of relevant experience.

Position Description:
- 6+ years of experience with OCI cloud infrastructure, with solid PaaS and IaaS experience
- Good experience with IAM, identity domains, Cloud Guard/Security Zones, and cost management
- Strong knowledge of OCI core services: VCN, NSGs, subnets, Compute, load balancers, WAF, Block/Object Storage, databases
- Experience with IaC (OCI Resource Manager, Terraform)
- Experience with Oracle Cloud services such as Autonomous Database, Oracle Integration Cloud, etc.
- Hands-on experience with Terraform or OCI Resource Manager
- Proficient in scripting (shell, Python, etc.) for automation
- Knowledge of security concepts such as IAM, policies, and compartments
- Experience with monitoring tools like OCI Monitoring, Logging, or third-party solutions
- OCI certifications such as OCI Foundations or OCI Architect Associate/Professional are a plus
- Experience with DevOps CI/CD pipelines, test automation tools, and processes
- Configure backup, monitoring, disaster recovery, and high availability
- Troubleshoot and resolve cloud infrastructure issues
- Maintain documentation for OCI environments and procedures
- Drive and support system reliability, availability, scalability, and performance activities

Must-have skills:
- OCI Platform (IAM, identity domains, Cloud Guard/Security Zones, cost management)
- Network (VCN, NSGs, load balancers, WAF)
- Storage (object, block, and file)
- Data (databases)
- Compute (VMs)
- IaC (OCI Resource Manager, Terraform)
- OCI DevOps
- OCI Backup
- OCI Monitoring service and Logging Analytics
- OCI Full Stack Disaster Recovery

Job Qualifications: CGI is an equal opportunity employer.
In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation. Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs. Life at CGI: It is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world. Skills: English, Cloud Computing, Kubernetes, Kubernetes Administrator
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bengaluru, Karnataka, India
Department: QA Engineering
Job posted on: Jul 25, 2025
Employee Type: Full Time
Experience range: 4-6 years

About Us: MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One? As an API Test Engineer, you’ll join the core payments engineering team and own end-to-end test automation for the backend microservices and APIs that drive our B2B payment platform. You will work closely with developers, SREs, and product managers to ensure reliability and correctness in every transaction flow.

You will contribute to:
- Designing and implementing API-level automated test suites for validating payments, FX, wallet, ledger, and remittance services
- Driving test execution pipelines as part of our CI/CD ecosystem, ensuring fast feedback and stable deployments
- Creating contract tests from OpenAPI specifications to validate provider-consumer integration in a microservices architecture
- Instrumenting and tracking non-functional quality metrics like latency, error rate, and throughput alongside SRE/infra teams
- Collaborating with backend developers to test concurrency, retries, idempotency, and consistency across distributed services
- Maintaining test harnesses and automation tooling for schema validation, regression checks, and SLA conformance testing
- Enabling GenAI-based tooling (e.g., for generating test cases or data fixtures) to improve velocity and coverage, while ensuring test-logic integrity
- Performing root cause analysis and working with engineering teams during incidents or regression failures in production
- Promoting test-first thinking and shifting quality left through tooling, documentation, and collaborative standards
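The retry/idempotency concern above can be sketched in miniature. The in-process PaymentService below is entirely hypothetical (no MatchMove API is implied): it deduplicates requests by idempotency key, so a client retry of the same transfer is acknowledged once rather than executed twice, which is exactly the property an API-level test asserts.

```python
import uuid

# Hypothetical in-process stand-in for a payment API (not an actual service):
# demonstrates the idempotency property an API test would assert.

class PaymentService:
    def __init__(self):
        self.ledger = []   # executed transfers
        self._seen = {}    # idempotency key -> prior response

    def transfer(self, idempotency_key: str, amount: int, currency: str) -> dict:
        # A retry with the same key must return the original response
        # without executing the transfer a second time.
        if idempotency_key in self._seen:
            return self._seen[idempotency_key]
        txn = {"id": str(uuid.uuid4()), "amount": amount, "currency": currency}
        self.ledger.append(txn)
        self._seen[idempotency_key] = txn
        return txn

# The test itself: a client retry (same key) yields the same transaction
# and the ledger records exactly one debit.
svc = PaymentService()
first = svc.transfer("key-123", 5000, "SGD")
retry = svc.transfer("key-123", 5000, "SGD")
assert first == retry and len(svc.ledger) == 1
print("idempotent retry OK")
```

Against a real API, the same assertions would be made over HTTP: send the request twice with an identical `Idempotency-Key` header, then verify the second response echoes the first and the ledger shows a single transaction.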
Responsibilities:
- Build, maintain, and extend automation frameworks using tools such as Postman, REST Assured, Karate, or custom suites
- Write robust, scalable test scripts covering positive, negative, and boundary scenarios for RESTful APIs
- Automate pre-deployment smoke tests, post-deployment regression suites, and performance baselines for key services
- Validate integration points across internal systems (e.g., auth, ledger, risk, FX) and external partners (e.g., banking partners, PSPs, service providers)
- Manage and version test cases aligned with evolving OpenAPI specifications and platform SLAs
- Contribute to observability around test execution: logs, metrics, dashboards, and alerting for failed scenarios
- Maintain test environments, mock services, and test-data setup tools with high repeatability and reliability
- Work with developers and platform engineers to recommend performance improvements, schema contracts, and test-coverage expansion

Requirements:
- At least 4 years of experience in test automation with a strong focus on API testing and backend system validation
- Deep understanding of REST APIs, HTTP protocols, headers, authentication, and response validation
- Experience with test automation frameworks such as REST Assured, Karate, Postman, or custom CLI tooling
- Proficiency in scripting or development languages like Go, Java, or Python to build test utilities or harnesses
- Familiarity with API documentation standards (OpenAPI/Swagger) and automated API contract validation
- Strong foundation in test design patterns, modular test cases, and reusable assertions
- Exposure to CI/CD tools such as Jenkins, Bitbucket Pipelines, etc.
- Understanding of distributed system behaviors, asynchronous communication, retries, and service orchestration
- Detail-oriented mindset with a commitment to reproducibility, traceability, and documentation

Brownie Points:
- Experience testing payment platforms, remittance systems, or financial APIs
- Familiarity with PostgreSQL or MySQL for data validation and transaction traceability
- Basic knowledge of performance testing and tools like Locust, Artillery, JMeter, or Gatling
- Experience using GitOps-style workflows for test definition and promotion
- Exposure to GenAI-powered testing tools or prompt-based test case/script generation

MatchMove Culture: We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!

Personal Data Protection Act: By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
The DevOps + Python role is based in Bangalore and offers a hybrid work mode (2-3 days per week in the office). As an ideal candidate with over 7 years of experience, you should bring a strong set of complementary tech skills and relevant development experience. Your primary responsibility will involve Python scripting, and a solid understanding of code management and release approaches is essential. Proficiency in CI/CD pipelines, GitFlow, GitHub, and GitOps tooling (Flux, ArgoCD) is a must-have, with deeper knowledge of Flux considered a plus. A good grasp of functional programming is required, with Python as the primary language and Golang as a secondary language for the IaC platform. Hands-on experience with ABAC, RBAC, JWT, SAML, AAD, and OIDC authorization and authentication is essential, as is experience with NoSQL databases like DynamoDB (SCC heavy). Additionally, familiarity with event-driven architecture involving queues, streams, batches, and pub/subs is necessary. You should be fluent in operating Kubernetes clusters from a development perspective, including creating custom CRDs, operators, and controllers. Experience developing serverless applications on AWS and Azure is crucial, along with an understanding of monorepo, multi-repo, and other code-management approaches. A deep understanding of scalability, concurrency, network connectivity, proxies, and AWS cloud components (organizations, networks, security, IAM) is expected, with a basic understanding of Azure. Moreover, you should have a solid grasp of SDLC and of the DRY, KISS, and SOLID development principles to excel in this role.
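The JWT requirement above can be made concrete with a minimal sketch, assuming HS256 signing; this is a stdlib-only illustration (a production service would use a vetted library such as PyJWT, and would also validate claims like `exp` and `aud`, which are omitted here).

```python
import base64, hashlib, hmac, json

# Minimal HS256 JWT sign/verify sketch using only the standard library.
# Illustrative only: real systems should use a vetted JWT library.

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def sign(payload: dict, secret: bytes) -> str:
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    mac = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url_encode(mac)}"

def verify(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url_encode(expected), sig):
        return None
    return json.loads(b64url_decode(body))

token = sign({"sub": "user-42", "role": "devops"}, b"s3cret")
print(verify(token, b"s3cret"))        # payload round-trips
print(verify(token, b"wrong-secret"))  # None: signature mismatch
```

The constant-time `hmac.compare_digest` is the one non-obvious detail: a naive `==` comparison of signatures can leak timing information.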
Posted 1 week ago