
944 GitOps Jobs - Page 8

JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

0 years

0 Lacs

India

On-site

👩‍💻 The role:
- Kubernetes hands-on - help operate Kubernetes clusters on cloud and optimize workloads on the clusters; exposure to Helm charts, ArgoCD, etc.
- Automate workflows - help improve CI/CD workflows with GitHub Actions (or similar) that run tests, build Docker images, publish artifacts, handle canary releases, etc.
- Instrument services - add basic metrics, logs, and health checks to micro-services; build Grafana panels that surface p95 latency and error rates
- Learn application performance tuning - pair with senior engineers to build and deploy tooling that boosts API/page performance and database efficiency, and work on fixes
- Infrastructure-as-Code - contribute small Terraform modules (e.g., S3 buckets, IAM roles) under guidance, and learn the code-review process
- Documentation & demos - create quick-start guides and lightning-talk demos

🤩 What makes this role special?
- Accelerated exposure - work on problem statements spanning DevOps, application performance, observability, and more, with full-stack exposure
- Real impact - the dashboards, pipelines, and docs you ship will be used by every engineer, even after your internship ends
- Flexible - flexibility to work on different stacks, tools, and platforms
- Modern toolchain - Kubernetes, Terraform, GitOps, Prometheus, etc.: the same technologies top cloud-native companies use
- Mentorship culture - you'll have a dedicated buddy, weekly 1-on-1s, and structured feedback to level up fast

💝 What skills & experience do you need?
Must-haves
- Strong computer networks and computer science fundamentals
- Coursework or personal projects in Linux fundamentals and at least one programming language (Go, Python, Java, or TypeScript)
- Basic familiarity with Git workflows and CI systems (GitHub Actions, GitLab CI, or similar)
- Comfort running and debugging simple Docker containers locally
- Curiosity about cloud infrastructure, performance tuning, and security best practices
- Clear communication and a growth mindset: willing to ask questions, learn fast, document findings, and incorporate feedback
- High agency to fix issues proactively

Nice-to-haves
- Experience with any cloud platform - AWS, GCP, Azure
- Exposure to Kubernetes, Terraform, and other IaC tools
- Side projects with full-stack exposure
- Participation in hackathons and open-source contributions

➕ Bonus
- Personal interest in travel, local experiences, and hospitality
- Interest in being part of a rapidly growing startup
- Anything out-of-the-box that can surprise us
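The observability bullet above (Grafana panels surfacing p95 latency and error rates) boils down to two small computations. A minimal sketch, assuming latencies are collected in milliseconds and each request carries a success flag; the nearest-rank percentile method shown here is one common choice, not the posting's specification:

```python
def percentile(values, p):
    """Nearest-rank percentile: sort, then index at p% of the range."""
    if not values:
        raise ValueError("no samples")
    ordered = sorted(values)
    # Index of the sample sitting at the p-th percentile position.
    idx = int(p / 100 * (len(ordered) - 1))
    return ordered[idx]

def error_rate(outcomes):
    """Fraction of failed requests; outcomes is a list of booleans (True = success)."""
    if not outcomes:
        return 0.0
    return sum(1 for ok in outcomes if not ok) / len(outcomes)

# Hypothetical sample: 100 request latencies of 1..100 ms, 3 failures out of 100.
latencies_ms = list(range(1, 101))
outcomes = [True] * 97 + [False] * 3

print(percentile(latencies_ms, 95))  # 95
print(error_rate(outcomes))          # 0.03
```

In practice a Prometheus histogram and `histogram_quantile` would do this server-side; the sketch just shows what the panel is computing.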

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP - Digital Expert Support Lead
Experience: 12+ years
Location: Pune
Mandatory: Should have a stable track record. Suitable candidates will be notified the same day.

Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching.
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 12+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficient in monitoring, logging, tracing, and alerting using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.
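The resilience responsibility above (fallback logic, circuit breakers, context caching) describes a standard pattern: after N consecutive failures, stop calling the unhealthy dependency and serve a fallback until a cooldown elapses. A minimal sketch in Python; the threshold, cooldown, and fallback value are illustrative, not from the posting:

```python
import time

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures;
    retry the wrapped call only after `cooldown` seconds."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit tripped

    def call(self, fn, fallback):
        # While open and still cooling down, short-circuit to the fallback.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                return fallback
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            return fallback
        self.failures = 0
        return result

def flaky():
    """Stand-in for an unhealthy GenAI endpoint."""
    raise RuntimeError("model endpoint timed out")

breaker = CircuitBreaker(threshold=2, cooldown=60.0)
print(breaker.call(flaky, fallback="cached answer"))  # cached answer (1st failure)
print(breaker.call(flaky, fallback="cached answer"))  # cached answer (circuit opens)
```

The fallback here would be a context-cached response; production implementations usually add per-dependency state and metrics around each transition.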

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
As a member of the Tools team within the Network Services Department, you will play a key role in designing and developing innovative self-service and automation tools that enhance the productivity and quality of network infrastructure tasks. Our Network Tools portfolio comprises custom-built software solutions that empower network teams with automation capabilities, intuitive dashboards, and self-service features, ultimately driving efficiency and excellence in enterprise network infrastructure operations. This position offers an exciting opportunity to balance technical development expertise with lead responsibilities, working with modern technologies like GoLang, Python, React, and OpenShift Container Platform. You'll collaborate with cross-functional teams to translate business requirements into technical solutions while leading DevOps practices and driving technical decision-making for our automation tools ecosystem.

Responsibilities
- Minimum 5+ years overall software development experience with strong proficiency in GoLang, Python, and React (minimum 3 years each), plus experience with OpenShift, CI/CD practices, SQL proficiency, and demonstrated technical leadership capabilities.
- Proven experience in developing automation solutions (preferably network automation), and 3+ years of modern frontend development experience including React and CSS.
- Experience with or willingness to quickly learn OpenShift Container Platform architecture, application deployment, and resource management.
- DevOps experience or strong interest in learning CI/CD pipelines (Tekton), GitOps workflows (ArgoCD), and Linux/Unix environment proficiency for legacy application support.
- Experience in developing and managing APIs, with basic Perl familiarity for legacy application migration to GoLang.
- Currently an active hands-on software developer with excellent troubleshooting and analytical capabilities, plus willingness to lead technical initiatives and project coordination responsibilities.
- Self-motivated individual with strong communication skills, fluent in English, and demonstrated ability to rapidly acquire new technological skills through experimentation and innovative problem-solving.

Qualifications

MUST HAVE - SKILLS & EXPERIENCE
While we prefer candidates with all listed skills, we highly value a learning mindset and will consider strong candidates who demonstrate the ability to quickly acquire missing technical skills with a can-do, find-a-way approach.
- Minimum 5+ years overall software development experience with strong proficiency in GoLang, Python, and React (minimum 3 years each), plus SQL proficiency and demonstrated ability to lead technical initiatives.
- Proven experience in developing automation solutions (preferably network automation) and modern frontend development with CSS customization.
- Experience with or willingness to quickly learn OpenShift Container Platform architecture, application deployment, resource management, and basic CI/CD pipelines.
- Experience in developing and managing APIs, with basic Linux/Unix familiarity for legacy application support, and basic Perl knowledge for legacy migrations to GoLang.
- Currently an active hands-on software developer with excellent troubleshooting and analytical capabilities, plus willingness to lead technical initiatives and project coordination responsibilities.
- Self-motivated individual with strong communication skills, fluent in English, and demonstrated ability to rapidly acquire new technological skills through experimentation and innovative problem-solving.

Nice To Have - Skills & Experience
- Advanced DevOps experience with Tekton CI/CD pipelines, ArgoCD GitOps workflows, Git (GitHub/GitLab), and Jira project management.
- Experience with security and code quality tools (e.g., SonarQube) and AI-powered development tools (e.g., GitHub Copilot).
- Cloud-native development experience in GCP (CaaS environments) or Azure (Microsoft services integration).
- Knowledge of networking concepts (TCP/IP, Cisco equipment), observability, and AIOps concepts and tools.
- Prior experience in network infrastructure operations or CCNA certification.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Who are we?
Securin is an AI-driven, product-based cybersecurity company backed by services, focused on proactive, adversarial exposure and vulnerability management. Our mission is to help organizations reduce cyber risk by identifying, prioritizing, and remediating the issues that matter most. Powered by a seasoned team of threat researchers and its status as a CVE Numbering Authority (CNA), Securin combines artificial intelligence/machine learning, threat intelligence, and deep vulnerability research (including the Dark Web) to deliver an adversarial approach to cyber defense. We help enterprises shift from reactive patching to strategic, risk-based exposure and vulnerability management, driving smarter security decisions and faster remediation.

What do we provide?
- A chance to be on the leading edge of cybersecurity and AI
- The ability to have a direct impact on company growth and revenue strategy
- An opportunity to mentor and be mentored by experts in multiple disciplines

What do we deliver?
Securin helps organizations identify and remediate the most dangerous exposures, vulnerabilities, and risks in their environment. We deliver predictive and definitive intelligence and facilitate proactive remediation to help organizations stay a step ahead of attackers. By utilising our cybersecurity solutions, our clients can have a proactive and holistic view of their security posture and protect their assets from even the most advanced and dynamic attacks. Securin has been recognized by national and international organizations for its role in accelerating innovation in offensive and proactive security. Our combination of domain expertise, cutting-edge technology, and advanced tech-enabled cybersecurity solutions has made Securin a leader in the industry.

Job Location: IIT Madras Research Park, A Block, Third Floor, 32, Tharamani, Chennai, Tamil Nadu 600113
Work Mode: Hybrid (work from the Chennai office 2 days a week)

Responsibilities:
● Design & Development: Architect, implement, and maintain Java microservices processing high-volume data streams.
● Pipeline Engineering: Build and optimize ingestion pipelines (Kafka, Flink, Beam, etc.) to ensure low-latency, high-throughput data flow.
● Secure Coding: Embed secure coding standards (OWASP, SAST/DAST integration, threat modeling) into the SDLC; implement authentication, authorization, encryption, and audit logging.
● Performance at Scale: Identify and resolve performance bottlenecks (JVM tuning, GC optimization, resource profiling) in distributed environments.
● Reliability & Monitoring: Develop health checks, metrics, and alerts (Prometheus/Grafana), and instrument distributed tracing (OpenTelemetry).
● Collaboration: Work closely with product managers, data engineers, SREs, and security teams to plan features, review designs, and conduct security/code reviews.
● Continuous Improvement: Champion CI/CD best practices (GitOps, automated testing, blue/green deployments) and mentor peers in code quality and performance tuning.

Requirements
● Strong Java Expertise: 8+ years of hands-on experience with Java 11+; deep knowledge of concurrency, memory management, and JVM internals.
● Secure Coding Practices: Proven track record implementing OWASP Top Ten mitigations, performing threat modeling, and integrating SAST/DAST tools.
● Big Data & Streaming: Hands-on with Kafka (producers/consumers, schema registry) and Spark or Flink for stream/batch processing.
● System Design at Scale: Experience designing distributed systems (microservices, service mesh) with high availability and partition tolerance.
● DevOps & Automation: Skilled in containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines (Jenkins).
● Cloud Platforms: Production experience on AWS; familiarity with managed services (MSK, EMR, GKE, etc.).
● Testing & Observability: Expertise in unit/integration testing (JUnit, Mockito), performance testing (JMH), logging (ELK/EFK), and monitoring stacks.
● Collaboration & Communication: Effective communicator; able to articulate technical trade-offs and evangelize best practices across teams.

Qualifications:
● Bachelor's or Master's in Computer Science, Engineering, or a related field.
● 6-10 years of software development experience, with at least 3 years focused on data-intensive applications.
● Demonstrated contributions to production-critical systems serving thousands of TPS (transactions per second).
● Strong analytical and problem-solving skills; comfortable working in fast-paced, agile environments.

Nice to Have:
● Open-source contributions to streaming or security projects.
● Experience with infrastructure as code (Terraform, CloudFormation).

Why should we connect?
We are a bunch of passionate cybersecurity professionals who are building a culture of security. Today, cybersecurity is no longer a luxury but a necessity, with a global market value of $150 billion. At Securin, we live by a people-first approach. We firmly believe that our employees should enjoy what they do. We provide our employees with a hybrid work environment and competitive, best-in-industry pay, along with an environment to learn, thrive, and grow. Our hybrid working environment allows employees to work from the comfort of their homes or from the office if they choose. For the right candidate, this will feel like your second home. If you are passionate about cybersecurity just as we are, we would love to connect and share ideas.
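The pipeline-engineering requirement (low-latency, high-throughput ingestion) rests on a pattern that can be shown without a real Kafka cluster: a producer pushing into a bounded queue, where the bound provides backpressure so a slow consumer cannot be overrun. A toy stand-in for a consumer poll loop, not the Kafka API itself:

```python
import queue
import threading

def produce(q, n):
    """Producer: q.put() blocks when the queue is full (backpressure)."""
    for i in range(n):
        q.put(i)
    q.put(None)  # sentinel: no more records

def consume(q, out):
    """Consumer: drains records until the sentinel arrives."""
    while True:
        record = q.get()
        if record is None:
            break
        out.append(record * 2)  # stand-in for per-record processing

buffer = queue.Queue(maxsize=10)  # small bound to force backpressure
results = []
producer = threading.Thread(target=produce, args=(buffer, 100))
consumer = threading.Thread(target=consume, args=(buffer, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(results))  # 100
```

Kafka's consumer groups, partitions, and offset commits layer delivery guarantees on top of this basic flow; the bounded buffer is the part that keeps latency and memory in check.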

Posted 1 week ago

Apply

3.0 - 9.0 years

0 Lacs

Maharashtra

On-site

The Red Hat Solution Architecture team is looking for a Senior Specialist Solution Architect with a minimum of 9 years of experience to join the existing OpenShift SSA team in India. In this role, you will guide customers through their Red Hat journey, create opportunities, solve technical challenges, and build strong relationships with their engineering, development, and operations teams. Collaborating closely with customers to understand their business needs, you will align Red Hat's solutions to drive operational efficiency and innovation. As a Specialist Solution Architect, you will focus on the Red Hat OpenShift, Application Services, and OpenShift AI product portfolio. Your responsibilities will include delivering presentations, demos, proofs of concept, and workshops to showcase Red Hat's solutions. You will work with sales, account architects, and other Red Hat teams to help customers make informed investments, ensuring their systems are scalable, flexible, and high-performing. Your ability to manage relationships and work independently will be crucial in helping customers achieve success.

Key Responsibilities:
- Collaborate with Red Hat account teams to present technical solutions and develop sales strategies.
- Gather requirements, analyze solution architecture and design, and present solutions to meet customer needs through workshops and other supporting activities.
- Research and respond to technical sections of RFIs and RFPs.
- Build strong relationships with customer teams, including technical influencers and executives.
- Serve as a technical advisor, leading discussions and guiding the implementation of cloud-native architectures.
- Stay updated on industry trends and continuously enhance your skills.
- Contribute to the team effort by sharing knowledge, documenting customer success stories, helping maintain the team lab environment, and participating in subject matter expert team initiatives.
- Willingness to travel up to 50%.

Requirements:
- Proficiency in Kubernetes and cloud-native architectures like containers, service mesh, and GitOps.
- Experience with development tooling used for application refactoring, migration, or development, and understanding of application development methodologies.
- Understanding of virtualization technologies such as KVM and VMware.
- Experience with infrastructure activities like installation, configuration, networking, security, and high availability.
- Strong problem-solving abilities.
- Excellent communication and presentation skills.
- Minimum of 3 years in a customer-facing role; pre-sales experience is an added advantage.

This role offers the opportunity to work on exciting technologies and contribute to our customers' success.

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are hiring a Senior DevOps Engineer with 4-6 years of experience to join our growing engineering team. The ideal candidate is proficient in AWS and Azure, has a solid development background (Python preferred), and demonstrates strong experience in infrastructure design, automation, and DevOps; exposure to GCP is a plus. You will be responsible for building, managing, and optimizing robust, secure, and scalable infrastructure solutions from scratch.

Key Responsibilities
- Design and implement cloud infrastructure using AWS, Azure, and optionally GCP.
- Build and manage Infrastructure-as-Code using Terraform.
- Develop and maintain CI/CD pipelines using tools such as GitHub Actions, Jenkins, or GitLab CI.
- Deploy and manage containerized applications using Docker and Kubernetes (EKS/AKS).
- Set up and manage Kafka for distributed streaming and event processing.
- Build monitoring, logging, and alerting solutions using tools like Prometheus, Grafana, ELK, CloudWatch, and Azure Monitor.
- Ensure cost optimization and security best practices across all cloud environments.
- Collaborate with developers to debug application issues and improve system performance.
- Lead infrastructure architecture discussions and implement scalable, resilient solutions.
- Automate operational processes and drive DevOps culture and best practices across teams.

Required Skills
- 4-6 years of hands-on experience in DevOps/Site Reliability Engineering.
- Strong experience in multi-cloud environments (AWS + Azure); GCP exposure is a bonus.
- Proficient in Terraform for IaC; experience with ARM Templates or CloudFormation is a plus.
- Solid experience with Kubernetes (EKS & AKS) and container orchestration.
- Proficient in Docker and container lifecycle management.
- Hands-on experience with Kafka (setup, scaling, and monitoring).
- Experience implementing monitoring, logging, and alerting solutions.
- Expertise in cloud security, IAM, RBAC, and cost optimization.
- Development experience in Python or any backend language.
- Excellent problem-solving and troubleshooting skills.

Nice To Have
- Certifications: AWS DevOps Engineer, Azure DevOps Engineer, CKA/CKAD.
- Experience with GitOps, Helm, and service mesh.
- Familiarity with serverless architecture and event-driven systems.

Education
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

(ref:hirist.tech)

Posted 1 week ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Post: Team Lead DevOps
Experience: 6+ years
Location: Ahmedabad (Work from office)

Key Responsibilities:
- Manage, mentor, and grow a team of DevOps engineers.
- Oversee the deployment and maintenance of applications like:
  - Odoo (Python/PostgreSQL)
  - Magento (PHP/MySQL)
  - Node.js
- Design and manage CI/CD pipelines for each application using tools like Jenkins, GitHub Actions, and GitLab CI.
- Handle environment-specific configurations (staging, production, QA).
- Containerize legacy and modern applications using Docker and deploy via Kubernetes (EKS/AKS/GKE) or Docker Swarm.
- Implement and maintain Infrastructure as Code using Terraform, Ansible, or CloudFormation.
- Monitor application health and infrastructure using Prometheus, Grafana, ELK, Datadog, or equivalent tools.
- Ensure systems are secure, resilient, and compliant with industry standards.
- Optimize cloud cost and infrastructure performance.
- Collaborate with development, QA, and IT support teams for seamless delivery.
- Troubleshoot performance, deployment, or scaling issues across tech stacks.

Must-Have Skills
- 6+ years in DevOps/Cloud/System Engineering roles with real hands-on experience.
- 2+ years managing or leading DevOps teams.
- Experience supporting and deploying:
  - Odoo on Ubuntu/Linux with PostgreSQL
  - Magento with Apache/Nginx, PHP-FPM, MySQL/MariaDB
  - Node.js with PM2/Nginx or containerized setups
- Experience with AWS/Azure/GCP infrastructure in production.
- Strong scripting skills: Bash, Python, PHP CLI, or Node CLI.
- Deep understanding of Linux system administration and networking fundamentals.
- Experience with Git, SSH, reverse proxies (Nginx), and load balancers.
- Good communication skills and exposure to managing clients.

Preferred Certifications (Highly Valued)
- AWS Certified DevOps Engineer Professional
- Azure DevOps Engineer Expert
- Google Cloud Professional DevOps Engineer
- Bonus: Magento Cloud DevOps or Odoo deployment experience

Bonus Skills (Nice To Have)
- Experience with multi-region failover, HA clusters, or RPO/RTO-based design.
- Familiarity with MySQL/PostgreSQL optimization and Redis, RabbitMQ, or Celery.
- Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower.
- Knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.

(ref:hirist.tech)
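The health-monitoring responsibility above usually starts with something as simple as polling an HTTP health endpoint and checking its status. A self-contained sketch that spins up a local stand-in server exposing /healthz and polls it once; the endpoint path and JSON shape are illustrative assumptions, not from the posting:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Stand-in application exposing a /healthz endpoint."""
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

def check_health(url, timeout=2.0):
    """Return (healthy, status) for one poll of a health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.loads(resp.read())
            return resp.status == 200, payload.get("status")
    except OSError:
        return False, None

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/healthz"

healthy, status = check_health(url)
print(healthy, status)  # True ok
server.shutdown()
```

A real setup would run such a check from Prometheus blackbox exporter or a Datadog synthetic rather than a hand-rolled script, but the probe logic is the same.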

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

You are a skilled DevOps Engineer with proven experience in a DevOps engineering role, possessing a strong background in software development and system administration. Your primary responsibility will be to implement and manage CI/CD pipelines, container orchestration, and cloud services to enhance the software development lifecycle. Your impact will be seen in how you collaborate with development and operations teams to streamline processes and improve deployment efficiency.

Responsibilities:
- Implement and manage CI/CD tools such as GitLab CI, Jenkins, or CircleCI.
- Utilize Docker and Kubernetes for containerization and orchestration of applications.
- Write and maintain scripts in at least one scripting language (e.g., Python, Bash) to automate tasks.
- Manage and deploy applications using cloud services (e.g., AWS, Azure, GCP) and their respective management tools.
- Apply network protocols, IP networking, load balancing, and firewalling concepts.
- Implement infrastructure as code (IaC) practices to automate infrastructure provisioning and management.
- Utilize logging and monitoring tools (e.g., ELK stack, OpenSearch, Prometheus, Grafana) to ensure system reliability and performance.
- Apply GitOps practices using tools like Flux or ArgoCD for continuous delivery, and work with Helm and Flyte for managing Kubernetes applications and workflows.

Qualifications:
- Bachelor's or Master's degree in computer science or a related field.
- Proven experience in a DevOps engineering role.
- Strong background in software development and system administration.
- Experience with CI/CD tools and practices.
- Proficiency in Docker and Kubernetes.
- Familiarity with cloud services and their management tools.
- Understanding of networking concepts and protocols.
- Experience with infrastructure as code (IaC) practices.
- Familiarity with logging and monitoring tools.
- Knowledge of GitOps practices and tools.
- Experience with Helm and Flyte.

Preferred Qualifications:
- Experience with cloud-native architectures and microservices.
- Knowledge of security best practices in DevOps and cloud environments.
- Understanding of database management and optimization (e.g., SQL, NoSQL).
- Familiarity with Agile methodologies and practices.
- Experience with performance tuning and optimization of applications.
- Knowledge of backup and disaster recovery strategies.
- Familiarity with emerging DevOps tools and technologies.
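The scripting requirement (automating tasks in Python or Bash) often comes down to wrapping flaky operations, such as cloud API calls, in a retry with exponential backoff. A minimal decorator sketch; the attempt counts and delays are illustrative, and the sleep function is injectable so the pattern can be tested without waiting:

```python
import time
import functools

def retry(attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky callable with exponential backoff (1x, 2x, 4x, ...)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=5, base_delay=0.0)  # zero delay just for the demo
def flaky_deploy():
    """Hypothetical deployment step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "deployed"

print(flaky_deploy())  # deployed
print(calls["n"])      # 3
```

Production versions typically add jitter to the delay and retry only on specific exception types, so that permanent errors fail fast.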

Posted 1 week ago

Apply

8.0 - 18.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Site Reliability Engineer/Cloud Engineer (SRE) at Amgen, you will play a crucial role in optimizing performance, standardizing processes, and automating critical infrastructure and systems to ensure reliability, scalability, and cost-effectiveness. Working towards operational excellence through automation, incident response, and proactive performance tuning, you will collaborate closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.

Your responsibilities will include:
- Ensuring the reliability, scalability, and performance of Amgen's infrastructure, platforms, and applications by proactively identifying and resolving performance bottlenecks and implementing long-term fixes.
- Driving the adoption of automation and Infrastructure as Code (IaC) to streamline operations, minimize manual interventions, and enhance scalability.
- Establishing standardized operational processes, tools, and frameworks across Amgen's technology stack to ensure consistency, maintainability, and best-in-class reliability practices.
- Implementing and maintaining comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response.
- Partnering with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle.
- Executing capacity planning processes to support future growth, performance, and cost management, and maintaining disaster recovery strategies to ensure system reliability.

Basic qualifications for this role: a Master's degree with 8 to 10 years of experience, a Bachelor's degree with 10 to 14 years of experience, or a diploma with 14 to 18 years of experience in IT infrastructure, Site Reliability Engineering, or related fields.

Must-have skills for this position:
- Extensive experience with AWS cloud services.
- Proficiency in CI/CD (Jenkins/GitLab), observability, IaC, GitOps, etc.
- Experience with containerization (Docker) and orchestration tools (Kubernetes).
- Strong hands-on experience in SRE tasks and automation using Python or another scripting language.
- Well-versed in FinOps, Infra-Ops, and platform operations.
- Ability to learn new technologies quickly, strong problem-solving and analytical skills, and excellent communication and teamwork skills.
- Leadership skills to guide a team of 4 to 5 through technical blockers.

Good-to-have skills:
- Knowledge of cloud-native technologies.
- Strategies for cost optimization in multi-cloud environments.
- Familiarity with distributed systems, databases, and large-scale system architectures.
- Bachelor's degree in computer science and engineering preferred.

Soft skills required for this role:
- Ability to foster a collaborative and innovative work environment.
- Strong problem-solving abilities and attention to detail.
- A high degree of initiative and self-motivation.
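The availability and monitoring responsibilities above imply error-budget arithmetic: a 99.9% availability target over a 30-day window leaves 0.1% of 43,200 minutes, about 43.2 minutes, of allowed downtime. A small sketch of that calculation, with illustrative numbers:

```python
def error_budget_minutes(slo, window_days=30):
    """Allowed downtime in minutes for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60  # 43,200 for a 30-day window
    return (1.0 - slo) * total_minutes

def budget_consumed(downtime_minutes, slo, window_days=30):
    """Fraction of the error budget already burned."""
    return downtime_minutes / error_budget_minutes(slo, window_days)

# Hypothetical service: 99.9% SLO, 10 minutes of downtime so far this month.
budget = error_budget_minutes(0.999)
burn = budget_consumed(10, 0.999)
print(round(budget, 1))  # 43.2
print(round(burn, 3))    # 0.231
```

Teams commonly alert on the burn rate (budget consumed per unit time) rather than the raw total, so a fast-burning incident pages before the budget is gone.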

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Position Azure Cloud Architect Job Description Arrow Electronics is seeking a Microsoft Azure Architect to join our Cloud Engineering team at our Bangalore, India facility. The Cloud Engineer will provide technical expertise in the IAAS/PAAS/SAAS Public Cloud domain in addition to facilitating project management cycles while working directly with Arrow's internal application services group. In this role, the Engineer will maintain a high level of technical aptitude in cloud and hybrid cloud solutions while also displaying a proven ability to communicate effectively and offer excellent customer service. What You Will Be Doing Azure Networking & Troubleshooting: Expertise in Private DNS Zones, Private Endpoints, ExpressRoute, and BGP routing, with strong troubleshooting skills. Azure Best Practices: Design and implement scalable, secure, and cost-effective solutions following Azure best practices. Landing Zones: Architect and implement Azure Landing Zones for streamlined governance and secure cloud adoption. Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform and Azure DevOps (ADO) pipelines. Azure Kubernetes Service (AKS): Design, manage, and secure AKS clusters, including Helm and GitOps workflows (e.g., ArgoCD, Flux). API Management: Manage Azure API Management (APIM) for secure API exposure, OAuth, Managed Identities, and API security best practices. CI/CD Pipelines: Develop and manage CI/CD pipelines through Azure DevOps for infrastructure and API automation. Security & Identity: Implement RBAC, Azure Entra ID, B2C, and External ID, and enforce Zero Trust security models. Data & AI Services (Preferred): Work with Databricks, Machine Learning via AKS, and Azure OpenAI Services. Security Automation: Implement automated security remediations, Wiz integrations, and governance policies. WAF/CDN Solutions: Work on solutions like Akamai and Azure Front Door for enhanced security and performance. 
Leadership & Mentorship: Mentor teams, drive cloud automation initiatives, and encourage certifications in Azure and AWS. RESTful APIs: Strong understanding of RESTful APIs and API Management operational mechanisms. Cost Optimization: Evaluate workload placements based on cost, performance, and security. Troubleshooting: Skilled in resolving issues, retrieving logs, and utilizing Application Insights for performance optimization. Soft Skills: Strategic thinker with excellent problem-solving, leadership, and communication skills. What We Are Looking For Lead the effort to plan, engineer, and design the infrastructure, as well as the effort to evaluate, recommend, integrate, and coordinate enhancements to it. Work with IT Architects to ensure that modified infrastructure interacts appropriately, data conversion impacts are considered, and other areas of impact are addressed and meet the performance requirements of the project. Lead the effort to develop and configure the infrastructure from conceptualization through stabilization using various computer platforms. Lead the effort to implement the infrastructure by analyzing the current system environment and infrastructure, using technical tools and utilities, performing complex product customization, and developing implementation and verification procedures to ensure successful installation of systems hardware/software. Lead routine infrastructure analysis and evaluation of the resource requirements necessary to maintain and/or expand service levels. Provide cost estimates for new projects. Collaborate in the creation of work/design documents with team assistance. Perform "light" project management for on-boarding requests, collaborating with teams on scheduling actions and following change control processes. Assist in show-back/charge-back documentation and reports. Prior experience supporting Microsoft and/or Linux operating systems in a corporate environment, on or off premises, is required.
Prior experience designing/supporting IAAS infrastructure in a public cloud strongly preferred. Prior knowledge of Azure Active Directory, Conditional Access, and Power BI a plus. Prior experience with Azure IAAS, PAAS, networking, and security a plus. Innovative self-starter willing to learn, test, and implement new technologies on existing and new public cloud providers. Experience / Education Typically requires 12-plus years of related experience with a 4-year degree; or 3 years and an advanced degree; or equivalent work experience. Arrow Electronics, Inc. (NYSE: ARW) is an award-winning Fortune 133 company and one of Fortune Magazine’s Most Admired Companies. Arrow guides innovation forward for over 220,000 leading technology manufacturers and service providers. With 2024 sales of USD 28.1 billion, Arrow develops technology solutions that improve business and daily life. Our broad portfolio, spanning the entire technology landscape, helps customers create, make, and manage forward-thinking products that make the benefits of technology accessible to as many people as possible. Learn more at www.arrow.com. Our strategic direction of guiding innovation forward is expressed as Five Years Out, a way of thinking about the tangible future to bridge the gap between what's possible and the practical technologies to make it happen. Learn more at https://www.fiveyearsout.com/. Location: IN-KA-Bangalore, India (SKAV Seethalakshmi) GESC Time Type Full time Job Category Information Technology
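The GitOps workflows this role mentions (ArgoCD, Flux) all share one core pattern: desired state lives in Git, actual state lives in the cluster, and a controller reconciles the two. A minimal, tool-agnostic sketch of that reconciliation step in Python; all names and the state model are illustrative assumptions, not any vendor's API:

```python
# GitOps reconciliation sketch: compare desired state (from Git) with actual
# state (from the cluster) and emit only the operations needed to converge.
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the operations needed to drive `actual` toward `desired`."""
    ops = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in actual:
            ops["create"].append(name)      # in Git, missing from cluster
        elif actual[name] != spec:
            ops["update"].append(name)      # present but drifted
    for name in actual:
        if name not in desired:
            ops["delete"].append(name)      # in cluster, removed from Git
    return ops

desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
actual = {"web": {"replicas": 1}, "legacy": {"replicas": 1}}
print(reconcile(desired, actual))
# -> {'create': ['api'], 'update': ['web'], 'delete': ['legacy']}
```

Real controllers run this loop continuously, which is what makes manual cluster changes ("drift") self-healing.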

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. About The Team SAP's Business Technology Platform is shaping the future of enterprise software, creating the ability to extend and personalize SAP applications, integrate, and connect entire landscapes, and empower business users to integrate processes and experiences. Our BTP AI team aims to be on top of the latest advancements in AI and how we apply these to increase the platform value for our customers and partners. It is a multi-disciplinary team of data engineers, engagement leads, development architects and developers that aims to support and deliver AI cases in the context of BTP. The Role We are looking for a Full-Stack Engineer with a passion for strategic cloud platform topics and the field of generative AI. Generative artificial intelligence (GenAI) has emerged as a transformative force in society, with the ability to create, mimic, and innovate across a wide range of domains. This has implications for enterprise software and ultimately SAP. Become part of a multi-disciplinary team that focuses on execution and shaping the future of GenAI capabilities across our Business Technology Platform. This role requires a self-directed team player with deep coding knowledge and business acumen. In this role, you will be working closely with Architects, Data Engineers, UX designers and many others.
Your Responsibility As Full-Stack Engineer Will Be To Iterate rapidly, collaborating with product and design to build and launch PoCs and first versions of new products & features. Work with engineers across the company to ship modular and integrated products and features. You feel at home in the TypeScript/Node.js backend as well as the UI5/JavaScript/React frontend, and are comfortable with other software technologies including REST/JSON. Design AI-based solutions to complex problems and requirements in collaboration with others in a cross-functional team. Assess new technologies in the field of AI, tools, and infrastructure with which to evolve existing highly used functionalities and services in the cloud. Design, maintain, and optimize data infrastructure for data collection, management, transformation, and access. Role Requirement Experience in data-centric programming languages (e.g. Python), SQL databases (e.g. SAP HANA Cloud), data modeling, integration, and schema design is a plus. Excellent communication skills with fluency in written and spoken English. Tech you bring: SAP BTP, Cloud Foundry, Kyma, Docker, Kubernetes, SAP CAP, Jenkins, Git, GitOps; (Python) Critical thinking, innovative mindset, problem-solving mindset Engineering/master’s degree in computer science or a related field with 3+ years of professional experience in Software Development Extensive experience in the full life cycle of software development, from design and implementation to testing and deployment. Ability to thrive in a collaborative environment involving many different teams and stakeholders. You enjoy working with a diverse group of people with different expertise backgrounds and perspectives. Aware of the fast-changing AI landscape and confident to suggest new, innovative ways to achieve product features. Proficient in writing clean and scalable code using the programming languages in the AI and BTP technology stack.
Ability to adapt to evolving technologies and industry trends in GenAI while working in a cross-functional team, showing creative problem-solving skills and customer-centricity. #SAPInternalT #SAPReturnshipIndiaCareers Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. 
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com. For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their race, religion, national origin, ethnicity, age, gender (including pregnancy and childbirth), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 421073 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. About The Team SAP's Business Technology Platform is shaping the future of enterprise software, creating the ability to extend and personalize SAP applications, integrate, and connect entire landscapes, and empower business users to integrate processes and experiences. Our BTP AI team aims to be on top of the latest advancements in AI and how we apply these to increase the platform value for our customers and partners. It is a multi-disciplinary team of data engineers, engagement leads, development architects and developers that aims to support and deliver AI cases in the context of BTP. The Role We are looking for a Full-Stack Engineer with AI skills and a passion for strategic cloud platform topics and the field of generative AI. Generative artificial intelligence (GenAI) has emerged as a transformative force in society, with the ability to create, mimic, and innovate across a wide range of domains. This has implications for enterprise software and ultimately SAP. Become part of a multi-disciplinary team that focuses on execution and shaping the future of GenAI capabilities across our Business Technology Platform. This role requires a self-directed team player with deep coding knowledge and business acumen. In this role, you will be working closely with Architects, Data Engineers, UX designers and many others.
Your responsibility as Full-Stack Engineer with AI skills will be to: Iterate rapidly, collaborating with product and design to build and launch PoCs and first versions of new products & features. Work with engineers across the company to ship modular and integrated products and features. You feel at home in the TypeScript/Node.js backend as well as the UI5/JavaScript/React frontend, and are comfortable with other software technologies including REST/JSON. Design AI-based solutions to complex problems and requirements in collaboration with others in a cross-functional team. Assess new technologies in the field of AI, tools, and infrastructure with which to evolve existing highly used functionalities and services in the cloud. Design, maintain, and optimize data infrastructure for data collection, management, transformation, and access. The Roles & Responsibilities Critical thinking, innovative mindset, problem-solving mindset Engineering/master’s degree in computer science or a related field with 5+ years of professional experience in Software Development Extensive experience in the full life cycle of software development, from design and implementation to testing and deployment. Ability to thrive in a collaborative environment involving many different teams and stakeholders. You enjoy working with a diverse group of people with different expertise backgrounds and perspectives. Proficient in writing clean and scalable code using the programming languages in the AI and BTP technology stack. Aware of the fast-changing AI landscape and confident to suggest new, innovative ways to achieve product features. Ability to adapt to evolving technologies and industry trends in GenAI while working in a cross-functional team, showing creative problem-solving skills and customer-centricity. Nice to have Experience in data-centric programming languages (e.g. Python), SQL databases (e.g.
SAP HANA Cloud), data modeling, integration, and schema design is a plus. Excellent communication skills with fluency in written and spoken English. Tech you bring: SAP BTP, Cloud Foundry, Kyma, Docker, Kubernetes, SAP CAP, Jenkins, Git, GitOps; (Python) #ICC25 #SAPInternalT #SAPReturnshipIndiaCareers Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities.
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com. For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their race, religion, national origin, ethnicity, age, gender (including pregnancy and childbirth), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 423948 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Platform Engineering (Java, Spring Boot) Overview At Mastercard, we are dedicated to delivering unparalleled customer experiences by pushing the boundaries of innovation. Our Network team is seeking a Senior Software Engineer to propel our customer experience strategy forward through unwavering innovation and adept problem-solving. The quintessential candidate is focused on the customer experience journey, exuding high motivation, an insatiable intellectual curiosity, exceptional analytical acumen, and a robust entrepreneurial mindset. About The Role Design and develop software around internet traffic engineering technologies including but not limited to public and private CDNs, load balancing, DNS, DHCP and IPAM solutions. Be enthusiastic about building new platform technologies from the ground up, contributing to our high-impact environment. Develop and maintain public and private REST APIs, maintaining high code and quality standards. Provide timely and competent support for the technologies the team owns and builds. Bridge automation gaps by writing and maintaining scripts that enhance automation and improve the overall quality of our services. Demonstrate drive and curiosity by continuously learning and teaching yourself new skills. Collaborate effectively with cross-functional teams, demonstrating your technical prowess and contributing to the overall success of the projects.
All About You To excel in this role, you should have: Bachelor’s degree in Computer Science or a related technical field, or equivalent practical experience. Strong fundamentals in internet and intranet traffic engineering, OSI Layers & Protocols, DNS, DHCP, IP address management and TCP/HTTP processing. Practical understanding of data structures, algorithms, and database fundamentals. Proficient in Java, Python, SQL, NoSQL, Kubernetes, PCF, Jenkins, Chef and related platforms. Knowledgeable in cloud-native and multi-tiered application development. Experience with programming around network services, domain name servers, DHCP and IPAM solutions. Understands the fundamental principles behind CI/CD pipelines, DevSecOps, GitOps, and related best practices. Ability to write and maintain scripts for automation, showcasing your commitment to efficiency and innovation. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-250617
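The DHCP/IPAM work described in this posting ultimately comes down to address-pool bookkeeping. A minimal in-memory IPAM allocator sketch using Python's stdlib `ipaddress` module; the class, its behavior, and the hostnames are illustrative assumptions, not Mastercard's actual platform:

```python
# Minimal IPAM allocator: hand out free host addresses from a CIDR block,
# track leases by hostname, and return addresses to the pool on release.
import ipaddress

class SimpleIPAM:
    def __init__(self, cidr: str):
        self.network = ipaddress.ip_network(cidr)
        self.free = list(self.network.hosts())  # excludes network/broadcast
        self.leases = {}                        # hostname -> IPv4Address

    def allocate(self, hostname: str) -> str:
        if hostname in self.leases:             # idempotent re-request
            return str(self.leases[hostname])
        if not self.free:
            raise RuntimeError("address pool exhausted")
        ip = self.free.pop(0)
        self.leases[hostname] = ip
        return str(ip)

    def release(self, hostname: str) -> None:
        self.free.append(self.leases.pop(hostname))

ipam = SimpleIPAM("192.168.0.0/30")  # a /30 has two usable hosts: .1 and .2
print(ipam.allocate("db01"))   # -> 192.168.0.1
print(ipam.allocate("web01"))  # -> 192.168.0.2
```

A production IPAM adds persistence, lease expiry, and concurrency control, but the allocate/release core is the same.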

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

OpenShift Container Platform (OCP) Operations Team The OCP Ops Team ensures high availability, performance, and security of mission-critical OpenShift clusters. It operates under a tiered model (L1, L2, L3) to handle monitoring, support, incident response, automation, and lifecycle management. ⸻ L1 – Platform Technician (1–3 yrs) Focus: Monitoring, Daily Ops, Basic Support • 24x7 monitoring of clusters via oc CLI & Console • Execute SOPs, health checks, backups • Triage incidents & escalate to L2 • Handle basic admin tasks (RBAC, Projects, ConfigMaps) • Prepare platform health reports ⸻ L2 – Platform Analyst (3–6 yrs) Focus: Troubleshooting, Automation, Changes • Resolve issues (PVCs, services, ingress, etc.) • Apply changes via YAML/Helm/Kustomize • Cluster upgrades, patch validation • CI/CD support, namespace & RBAC automation • Manage observability tools (Prometheus, Grafana, EFK) • Participate in change/patch cycles ⸻ L3 – Platform SME (6+ yrs) Focus: Architecture, Governance, Automation • Lead cluster lifecycle, DR, upgrades • Automate with GitOps, Ansible, Terraform • Handle SEV1 incidents, RCA, compliance standards • Integrate with ArgoCD, Vault, Harbor • Guide performance tuning, mentor team ⸻ Core Tech Stack Platform: OpenShift, Kubernetes CLI Tools: oc, kubectl, Helm, Kustomize Monitoring: Prometheus, Grafana, Thanos Logging: Fluentd, EFK Stack, Loki CI/CD: Jenkins, GitLab CI, ArgoCD, Tekton Automation: Ansible, Terraform Security: Vault, SCCs, RBAC, NetworkPolicies ⸻
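The L1 health checks above are typically scripted against `oc get pods -o json`-shaped output. A minimal sketch of that check in stdlib Python; the sample payload is a trimmed illustration of the real pod-list structure, not captured cluster output:

```python
# L1-style health check: parse a pod list (as produced by `oc get pods -o json`)
# and report pods whose phase is neither Running nor Succeeded.
import json

SAMPLE = json.dumps({
    "items": [
        {"metadata": {"name": "api-1"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "job-2"}, "status": {"phase": "Succeeded"}},
        {"metadata": {"name": "web-3"}, "status": {"phase": "Failed"}},
    ]
})

def unhealthy_pods(pod_list_json: str) -> list:
    healthy = {"Running", "Succeeded"}
    pods = json.loads(pod_list_json)["items"]
    return [p["metadata"]["name"] for p in pods
            if p["status"]["phase"] not in healthy]

print(unhealthy_pods(SAMPLE))  # -> ['web-3']
```

An L1 runbook would feed this the live CLI output and escalate to L2 when the list is non-empty; real checks also inspect container-level conditions (e.g. CrashLoopBackOff), which live under `status.containerStatuses` rather than the pod phase.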

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This role is in the Platform Engineering organization, where we build the foundational capabilities on which Thomson Reuters runs. Our team is responsible for the Customer Identity & Access Management (CIAM) Platform, a consolidated identity system designed to securely manage customer authentication and coarse-grained authorization for users and services accessing any product or service offered by Thomson Reuters. CIAM is meant to consolidate and replace the many disparate, disconnected and custom-developed identity systems at Thomson Reuters with a common, highly-secure, highly-available, cloud- and standards-based, vendor-managed solution. This role will be a key leader for a brand-new team of CIAM software engineers in Bangalore, India. About The Role As an Engineering Manager you will: Manage the professional development for a team of up to 12 software engineers. Hire, grow, and mentor high performing teams. Provide leadership in technical design and architecture; determine services we should use and how they should be implemented. Mentor engineers inside and outside the team. Voice your opinion on technical decisions and build consensus. Model participation in a Scrum team and embrace the agile work model. Promote software engineering best practices. Be a “go-to” person for software engineers on your team and other teams, being willing to tackle the hard problems. Be a hands-on developer, implementing POCs and solutions. Help unblock technical issues and provide general direction to the team. Participate in all aspects of the development lifecycle: Ideation, Design, Build, Test and Operate. We embrace a DevOps culture (“you build it, you run it”); while we have dedicated 24x7 level-1 support engineers, you will be called on to assist with level-2 support. Work primarily with Java, NodeJS, React, .NET, AWS and Azure public cloud, Identity platforms (Auth0, Ping Identity, Microsoft Azure AD B2C) and identity standards (OAuth, OIDC, SAML, SCIM, etc.) 
Collaborate with engineering managers, architects, scrum masters, software engineers, DevOps engineers, product managers and project managers to deliver phenomenal software. Demonstrate and model transparency and collaboration with teams across the company. Keep up to date with emerging cloud technology trends, especially in Identity & Access Management. About You You're a fit for the role of Engineering Manager if you have: 10+ years of experience in Software Engineering with Java, .NET, or similar languages. Experience developing REST APIs. Experience developing cloud-native applications and services on Azure, AWS or GCP. Excellent problem-solving skills, with the ability to identify and resolve complex technical issues. Strong written and verbal communication skills, with the ability to communicate technical concepts to non-technical stakeholders. Bachelor's degree in Computer Science, Software Engineering or a related field. 3-4+ years of experience leading a team of Engineers and giving them direction on implementation and best practices. Experience with a major Identity Provider such as ForgeRock, Ping, Okta, or Auth0 for Workforce or Customer Identity and Access Management, and related experience with the OAuth2, OIDC, SAML and SCIM standards. Experience with automation and CI/CD tools using CloudFormation, Terraform or GitOps. What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. 
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
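The identity standards this CIAM role centers on (OAuth2, OIDC) exchange claims as JWTs. A stdlib-only sketch of reading a JWT's payload segment; it deliberately skips signature verification, which a real CIAM service must never do, and the issuer and subject values are made up for the example:

```python
# Decode the middle (payload) segment of a JWT. A JWT is three base64url
# segments joined by dots: header.payload.signature, with '=' padding stripped.
import base64
import json

def jwt_payload(token: str) -> dict:
    """Return the payload claims of a JWT WITHOUT verifying its signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _seg(obj) -> str:
    """base64url-encode a dict as one unsigned JWT segment (demo only)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Build a demo token just to exercise the decoder.
token = ".".join([
    _seg({"alg": "none"}),
    _seg({"sub": "user-42", "iss": "https://idp.example.com"}),
    "",
])
print(jwt_payload(token)["sub"])  # -> user-42
```

In production, libraries validate the signature, `iss`, `aud`, and `exp` claims before any claim is trusted; this sketch only shows the wire format.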

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be joining as a GCP Data Architect at TechMango, a rapidly growing IT Services and SaaS Product company located in Madurai and Chennai. With over 12 years of experience, you are expected to start immediately and work from the office. TechMango specializes in assisting global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. In this role, you will be leading data modernization efforts for a prestigious client, Livingston, in a highly strategic project. As a GCP Data Architect, your primary responsibility will be to design and implement scalable, high-performance data solutions on Google Cloud Platform. You will collaborate closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals. Key Responsibilities: - Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP) - Define data strategy, standards, and best practices for cloud data engineering and analytics - Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery - Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery) - Architect data lakes, warehouses, and real-time data platforms - Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP) - Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers - Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards - Provide technical leadership in architectural decisions and future-proofing the data ecosystem Required Skills & Qualifications: - 10+ years of experience in data architecture, data engineering, or enterprise data platforms - Minimum 3-5 years of hands-on experience in GCP Data Service - Proficient in: 
BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner - Python / Java / SQL - Data modeling (OLTP, OLAP, Star/Snowflake schema) - Experience with real-time data processing, streaming architectures, and batch ETL pipelines - Good understanding of IAM, networking, security models, and cost optimization on GCP - Prior experience in leading cloud data transformation projects - Excellent communication and stakeholder management skills Preferred Qualifications: - GCP Professional Data Engineer / Architect Certification - Experience with Terraform, CI/CD, GitOps, Looker / Data Studio / Tableau for analytics - Exposure to AI/ML use cases and MLOps on GCP - Experience working in agile environments and client-facing roles What We Offer: - Opportunity to work on large-scale data modernization projects with global clients - A fast-growing company with a strong tech and people culture - Competitive salary, benefits, and flexibility - Collaborative environment that values innovation and leadership.
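The star-schema modeling this role asks for can be illustrated in plain Python: a fact table of sales rows keyed into a small dimension table, then aggregated by a dimension attribute. The table contents are made up for the example:

```python
# Star schema in miniature: fact rows hold measures plus a foreign key into a
# dimension table; analytics queries join and aggregate across them.
from collections import defaultdict

dim_store = {  # dimension: store_id -> descriptive attributes
    1: {"region": "South", "city": "Chennai"},
    2: {"region": "South", "city": "Madurai"},
    3: {"region": "West",  "city": "Pune"},
}
fact_sales = [  # fact: one row per sale, referencing the dimension by key
    {"store_id": 1, "amount": 100},
    {"store_id": 2, "amount": 250},
    {"store_id": 3, "amount": 75},
]

revenue_by_region = defaultdict(int)
for row in fact_sales:
    region = dim_store[row["store_id"]]["region"]  # the star-schema join
    revenue_by_region[region] += row["amount"]

print(dict(revenue_by_region))  # -> {'South': 350, 'West': 75}
```

In BigQuery the same shape becomes a `JOIN ... GROUP BY` over denormalized or clustered tables; the modeling decision (which attributes live in dimensions vs. facts) is identical.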

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

At Tide, we are dedicated to developing a business management platform that aims to streamline operations for small businesses, saving them both time and money. Our services include business accounts, banking solutions, as well as a range of connected administrative tools such as invoicing and accounting. Since its inception in 2017, Tide has grown to serve over 1 million small businesses worldwide, catering to SMEs in the UK, India, and Germany. With our headquarters in central London and additional offices in Sofia, Hyderabad, Delhi, Berlin, and Belgrade, Tide boasts a team of over 2,000 employees. As a rapidly expanding company, we are continually venturing into new products and markets, and we are always on the lookout for individuals who are enthusiastic and motivated to join us in our mission to empower small businesses in saving time and resources. Our teams operate in small, autonomous units, each specializing in specific domains and taking ownership of the complete lifecycle of various microservices within Tide's service catalogue. Engineers at Tide have the freedom to self-organize, collaborate on technical challenges, and establish guidelines within the different Communities of Practice, regardless of their current position within our Growth Framework. In this role, you will play a key part in shaping our event-driven Microservice Architecture, which currently comprises over 200 services managed by 40+ teams. Your responsibilities will include designing, building, running, and globally scaling the services owned by your team. You will utilize Java 17, Spring Boot, and JOOQ to develop these services, while ensuring the exposure and consumption of RESTful APIs. We place great emphasis on excellent API design and treat our APIs as valuable products, particularly in the realm of Open Banking where public APIs are common. 
Additionally, you will work with SNS+SQS and Kafka for event transmission, leverage PostgreSQL via Aurora as your primary datastore, and deploy services to Production frequently using CI/CD pipelines powered by GitHub and GitHub Actions. Familiarity with modern GitOps using ArgoCD, Docker, Terraform, EKS/Kubernetes, and tools like DataDog for monitoring services will be essential for this role. Collaborating with Product Owners to understand user needs, business opportunities, and regulatory requirements, and translating them into well-engineered solutions, will be a crucial aspect of your work. We are seeking individuals with experience in building server-side applications, proficiency in relevant programming languages, and a solid understanding of backend frameworks like Spring/Spring Boot. If you have a background in engineering scalable and reliable solutions in a cloud-native environment, possess a mindset focused on delivering secure, well-tested, and well-documented software, and are eager to learn and adapt to new technologies, we encourage you to join us at Tide. Our tech stack includes Java 17, Spring Boot, JOOQ, SNS+SQS, Kafka, MySQL, PostgreSQL, Docker, Terraform, EKS/Kubernetes, DataDog, and more. In return for your contributions, we offer a competitive salary, health insurance for yourself and your family, life insurance, OPD benefits, mental wellbeing support, learning and development budget, WFH setup allowance, ample annual leaves, and family-friendly leave policies. Tide promotes a flexible workplace model that accommodates both in-person and remote work to suit the diverse needs of our teams. While we support remote work, we believe in the value of face-to-face interactions to nurture team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, encouraging regular in-person gatherings to cultivate a strong sense of community.
Tide is committed to fostering a transparent and inclusive environment where every individual's voice is valued. As One Team, we strive to create a workplace where everyone feels heard and respected. Your personal data will be processed by Tide for recruitment purposes in accordance with Tide's Recruitment Privacy Notice.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

As a global leader in cybersecurity, CrowdStrike is dedicated to protecting the people, processes, and technologies that drive modern organizations. Since 2011, our unwavering mission has been to prevent breaches, and we have revolutionized modern security with the most advanced AI-native platform in the world. Our diverse customer base spans all industries, relying on CrowdStrike to ensure the continuity of their businesses, the safety of their communities, and the progression of their lives. We are a mission-driven company that fosters a culture empowering every CrowdStriker with the autonomy and flexibility to steer their careers. CrowdStrike is constantly seeking passionate individuals to join our team who exhibit boundless enthusiasm, an unwavering focus on innovation, and a deep commitment to our customers, community, and each other. Are you ready to be part of a mission that makes a difference? The future of cybersecurity commences with you. The CrowdStrike Information Technology team is currently seeking a Staff IT Monitoring Engineer/Site Reliability Engineer (SRE) to take charge of designing, implementing, and evolving our enterprise monitoring and observability platforms. In this pivotal role, you will be responsible for architecting scalable monitoring solutions, leading reliability initiatives, and serving as a technical authority on monitoring best practices. Your duties will involve mentoring junior team members, collaborating with cross-functional teams to establish Service Level Objectives (SLOs), and playing a vital role in major incident management. This position necessitates advanced technical prowess, strategic thinking, and the ability to strike a balance between operational excellence and innovation.

**What You'll Need**
**Required Skills and Qualifications:**
- Possess 10+ years of experience with enterprise monitoring platforms and observability tools such as LogicMonitor, DataDog, LogScale, Zscaler Digital Experience (ZDX), and ThousandEyes.
- Demonstrate advanced proficiency in multiple scripting/programming languages, including Python, Go, and Bash.
- Exhibit expert knowledge of modern monitoring ecosystems such as Prometheus, Grafana, and ELK.
- Showcase experience in architecting monitoring solutions at scale across hybrid environments.
- Have a strong background in SRE practices encompassing SLO definition, error budgets, and reliability engineering.
- Possess advanced knowledge of cloud platforms like AWS and GCP, and their native monitoring capabilities.
- Demonstrate expertise in log aggregation, metrics and Key Performance Indicators (KPIs) collection, and distributed tracing implementations.
- Experience in designing and implementing automated remediation systems.
- Strong understanding of Infrastructure as Code and GitOps principles.
- Proven ability to mentor junior engineers and provide technical leadership.

**Shift timings:** 12 PM - 9 PM IST

In summary, we are looking for a seasoned professional to lead our monitoring and observability initiatives, drive reliability improvements, and play a pivotal role in incident management. If you possess the required skills and are passionate about making a difference in the cybersecurity landscape, we welcome you to join our team at CrowdStrike.
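The SRE practices listed above (SLO definition, error budgets) come down to simple arithmetic: an availability target fixes how much downtime a service may spend in a window. A minimal sketch, using an illustrative 99.9% target and 30-day window rather than values from the listing:

```python
# Error-budget arithmetic: an availability SLO implies a downtime budget.
# The 99.9% target and 30-day window below are illustrative assumptions.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of downtime permitted by an availability SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means budget blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(round(error_budget_minutes(0.999), 1))    # 43.2 minutes per 30 days
print(round(budget_remaining(0.999, 10.8), 2))  # 0.75
```

A team burning the budget faster than the window elapses would freeze risky releases, which is the usual policy tied to these numbers.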

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

You should have a minimum of 5 years of experience in DevOps, SRE, or Infrastructure Engineering. Your expertise should include a strong command of Azure Cloud and Infrastructure-as-Code using tools such as Terraform and CloudFormation. Proficiency in Docker and Kubernetes is essential. You should be hands-on with CI/CD tools and scripting languages like Bash, Python, or Go. A solid knowledge of Linux, networking, and security best practices is required. Experience with monitoring and logging tools such as ELK, Prometheus, and Grafana is expected. Familiarity with GitOps, Helm charts, and automation will be an advantage. Your key responsibilities will involve designing and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and GitHub Actions. You will be responsible for automating infrastructure provisioning through tools like Terraform, Ansible, and Pulumi. Monitoring and optimizing cloud environments, implementing containerization and orchestration with Docker and Kubernetes (EKS/GKE/AKS), and maintaining logging, monitoring, and alerting systems (ELK, Prometheus, Grafana, Datadog) are crucial aspects of the role. Ensuring system security, availability, and performance tuning, managing secrets and credentials using tools like Vault and Secrets Manager, troubleshooting infrastructure and deployment issues, and implementing blue-green and canary deployments will be part of your responsibilities. Collaboration with developers to enhance system reliability and productivity is key. Preferred skills include certification as an Azure DevOps Engineer, experience with multi-cloud environments, microservices, and event-driven systems, as well as exposure to AI/ML pipelines and data engineering workflows.
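The blue-green and canary deployments mentioned above hinge on an automated promote-or-rollback decision. A hedged sketch of such a gate in plain Python; the metric (error rate), tolerance, and traffic numbers are hypothetical rather than taken from any particular tool:

```python
# Canary-analysis gate sketch: compare canary error rate against baseline
# and decide whether to promote. Thresholds and numbers are hypothetical.

def canary_verdict(baseline_errors: int, baseline_total: int,
                   canary_errors: int, canary_total: int,
                   tolerance: float = 0.01) -> str:
    """Promote the canary unless its error rate exceeds baseline + tolerance."""
    if canary_total == 0:
        return "hold"  # no traffic routed to the canary yet; keep waiting
    baseline_rate = baseline_errors / baseline_total if baseline_total else 0.0
    canary_rate = canary_errors / canary_total
    return "promote" if canary_rate <= baseline_rate + tolerance else "rollback"

print(canary_verdict(50, 10_000, 6, 1_000))   # 0.6% vs 0.5% + 1% tolerance
print(canary_verdict(50, 10_000, 40, 1_000))  # 4% error rate: too high
```

Real progressive-delivery tools run this comparison over several metrics and intervals before shifting the remaining traffic.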

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Primary Skills: Strong experience with Docker and Kubernetes for container orchestration. Configure and maintain Kubernetes deployments, services, ingresses, and other resources using YAML manifests or GitOps workflows. Experience in microservices-based architecture design. Understanding of the SDLC, including CI and CD pipeline architecture. Experience with configuration management (Ansible). Experience with Infrastructure as Code (Terraform/Pulumi/CloudFormation). Experience with Git and version control systems. Secondary Skills: Experience with CI/CD pipelines using Jenkins, AWS CodePipeline, or GitHub Actions. Experience with building and maintaining Dev, Staging, and Production environments. Familiarity with scripting languages (e.g., Python, Bash) for automation. Monitoring and logging tools like Prometheus and Grafana. Knowledge of Agile and DevOps methodologies. Incident management and root cause analysis. Excellent problem-solving and analytical skills. Excellent communication skills. Mandatory work from office.
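The Deployment maintenance described above centers on manifests whose structure is easy to get subtly wrong (selector labels must match template labels). A hedged sketch that builds such a manifest as a Python dict to make that constraint explicit; the name, image, and port are hypothetical, and in a GitOps repo the same content would be committed as YAML:

```python
# Sketch of a Kubernetes Deployment manifest built as a dict.
# Names, image, and port are hypothetical illustrations.

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector.matchLabels and template labels share one dict,
            # so they can never drift apart.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 8080}],
                    }]
                },
            },
        },
    }

m = deployment_manifest("orders-api", "registry.example.com/orders-api:1.4.2", 3)
print(m["spec"]["replicas"])  # 3
```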

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Description
Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: SSO Engineer
Position: Lead Analyst
Experience: 8 - 10 Years
Category: Software Development/Engineering
Shift: General (5 Days work from Office)
Main location: India, Tamil Nadu, Chennai/Karnataka, Bangalore
Position ID: J0325-0698
Employment Type: Full Time
Education Qualification: Bachelor's degree in Computer Science or related field or higher with minimum 8 years of relevant experience.

Role Description: Works independently under limited supervision and applies knowledge of subject matter in Applications Development. Possesses sufficient knowledge and skills to effectively deal with issues and challenges within the field of specialization to develop simple application solutions. Second-level professional with direct impact on results and outcome.

Your future duties and responsibilities: Working experience in the Information Security Technology field. Working knowledge of IAM technologies (mainly SAML/OAuth/OIDC/RADIUS/Kerberos, etc.). Working experience with application migration to modern authentication protocols. Understanding of, and the ability to execute on, the full technical stack. Experience with industry-standard IT infrastructure (web, middleware, operating systems).
Strong technical problem-solving skills with the ability to detect underlying patterns, identify root causes, and design optimal process solutions. Understanding of the technical implementation of legacy applications. Work across functions to improve IAM solutions to enhance compliance requirements and best practices. Participate in meetings to understand and recommend the best technical, most cost-effective solutions. Update the knowledge base and documentation to stay current with the latest versions, configuration changes, and development initiatives. Assist with the preparation of business-requested reports.

Required Qualifications To Be Successful In This Role
Must-Have Skills: 8+ years of work experience in Identity and Access Management, preferably using the ForgeRock or Ping Identity access management solutions. Expertise in integrating web and mobile applications for single sign-on. Experience with technologies such as SAML 2.0, OAuth 2.0, OpenID Connect, and Role-Based Access Control (RBAC). Experience with LDAP. Experience with Java, JavaScript, Node.js, Quarkus, web servers, application servers, and directory servers.
Good-to-Have Skills: Basic knowledge of CI/CD, GitOps, Git/GitLab, Docker, and Kubernetes technologies. Basic knowledge of AWS or another cloud.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation. Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs. Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees.
We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
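The OAuth 2.0 / OpenID Connect work in the SSO role above revolves around ID tokens, which are JWTs: three base64url segments (header.payload.signature). A minimal stdlib sketch of decoding a token's claims; the token here is constructed locally with hypothetical claims, and the signature is deliberately not verified, which a production SSO integration must always do:

```python
# Sketch: decode the (unverified) payload of a JWT-format OIDC ID token.
# The token below is built locally for illustration; claims are hypothetical.
import base64
import json

def b64url(data: bytes) -> str:
    """base64url-encode without the trailing '=' padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT. Does NOT verify the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
claims = {"sub": "user-42", "iss": "https://idp.example.com", "aud": "web-app"}
token = f"{header}.{b64url(json.dumps(claims).encode())}.fake-signature"

print(jwt_claims(token)["sub"])  # user-42
```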

Posted 2 weeks ago

Apply

14.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: L4 SME – Red Hat OpenShift Enterprise Admin (Telco Domain – Bare Metal)
Location: Offshore – Noida / Other Offshore India Delivery Centers
Experience Level: 10–14 Years
Role Type: Individual Contributor with Senior Technical Ownership

Role Summary
We are seeking a Senior-Level SME (L4) in Red Hat OpenShift Enterprise Administration with deep, hands-on expertise in bare metal OpenShift deployments. The SME will play a key role in managing and stabilizing Telco-grade workloads for a global Tier-1 telecommunications provider, ensuring platform resilience, lifecycle governance, and continuous integration across critical services.

Key Responsibilities
Act as the OpenShift Bare Metal Admin SME supporting complex Telco workloads hosted on private cloud infrastructure. Lead end-to-end deployment, scaling, and lifecycle operations of Red Hat OpenShift on bare metal environments. Own upgrade readiness and rollback strategies, working closely with OEM vendors for platform bug RCA and patch governance. Enable monitoring, alerting, compliance enforcement, and automation for large-scale cluster environments. Collaborate with network, security, and architecture teams to ensure platform alignment with Telco service and regulatory requirements. Ensure integration with CI/CD pipelines, GitOps, observability frameworks (Prometheus/Grafana), and ITSM tools. Drive operational maturity through SOP creation, audit compliance, RCA publishing, and incident retrospectives. Mentor junior engineers and review configurations impacting production-grade OpenShift clusters.

Required Skills
8+ years of direct experience managing Red Hat OpenShift v4.x on bare metal. Strong expertise in Kubernetes, SDN, CNI plugins, CoreOS, and container runtimes (CRI-O/containerd). Experience with BIOS provisioning, PXE boot environments, and bare metal cluster node onboarding. Familiarity with Telco Cloud principles, especially NFV/CNF workloads. Proficiency in RHEL administration, Ansible automation, Terraform (optional), and compliance remediation. Understanding of storage backends (Ceph preferred), NTP/DNS/syslog integration, and cluster certificate renewal processes. Working knowledge of Red Hat Satellite, Quay, and multi-tenant OpenShift workloads. Hands-on with security features like SELinux, SCCs, RBAC, and namespace isolation.

Preferred Certifications
Red Hat Certified Specialist in OpenShift Administration (EX280) – Highly Preferred
Red Hat Certified System Administrator (RHCSA)
Certified Kubernetes Administrator (CKA) – Bonus

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:
A results-oriented Senior Architect with a proven track record of delivering end-to-end cloud solutions across infrastructure, data, DevOps, and AI domains. Skilled in architecting, implementing, and governing secure, scalable, and high-performing Azure architectures that align with both technical requirements and business objectives. Brings deep expertise in Azure IaaS and PaaS services, DevOps automation using Azure DevOps, and AI integration through Azure OpenAI and Copilot Studio, enabling intelligent, modern, and future-ready enterprise solutions. Expertise spans Azure infrastructure management, CI/CD automation, Infrastructure as Code (ARM), Azure Data Factory (ADF) pipelines, and enterprise AI adoption. Demonstrated ability to build and support scalable, secure, and cost-optimized Azure environments aligned with governance and compliance standards. Strong background in SQL Server administration—handling deployment, upgrades (in-place and side-by-side), performance tuning, backup/restore strategies, high availability, and security hardening both on Azure VMs and PaaS SQL offerings. Experienced in migrating databases across environments using native tools, scripting, and automation workflows. Combines deep cloud expertise with solid development and scripting skills (PowerShell) to enable automation, integration, and operational excellence. Adept at collaborating with cross-functional teams, mentoring junior engineers, and aligning technical solutions with evolving business goals.

Key Accountabilities
Design and manage scalable, secure, and highly available Azure infrastructure environments. Implement and maintain Azure IaaS resources such as Virtual Machines, NSGs, Load Balancers, VNETs, VPN Gateways, and ExpressRoute. Perform cost optimization, monitoring, backup/recovery, patching, and capacity planning. Implement governance using Azure Policies, RBAC, and Management Groups.
Design and configure Azure PaaS services such as Azure App Services, Azure SQL, Azure Web Apps, Azure Functions, Storage Accounts, Key Vault, and Logic Apps. Ensure high availability and DR strategies for PaaS components. Design multi-tier, cloud-native application architectures on Azure. Troubleshoot PaaS performance and availability issues. Integrate Azure OpenAI capabilities into applications and business workflows. Develop use cases such as chatbot assistants, intelligent search, summarization, document Q&A, etc. Leverage Copilot Studio to build and deploy enterprise AI copilots integrated with data sources. Ensure responsible AI and compliance alignment. Design and manage data pipelines and orchestrations in ADF for ETL/ELT processes. Integrate with Azure Data Lake, Azure SQL, Blob Storage, and on-prem data sources. Build and manage CI/CD pipelines using Azure DevOps. Automate infrastructure deployment using ARM templates. Configure and manage release gates, approvals, secrets, and environments. Implement Infrastructure as Code (IaC) and GitOps best practices. Implement identity management using Azure AD, MFA, and Conditional Access. Manage secrets using Azure Key Vault and secure access via Managed Identities. Develop reusable, parameterized ARM template modules for consistent deployments. Maintain template versioning using Git repositories. Use templates in Azure DevOps pipelines and automate deployment validations. Align templates with security and compliance baselines (e.g., Azure Landing Zones). Collaborate with architects, developers, data engineers, and security teams to design solutions. Lead technical discussions and present solutions to stakeholders. Mentor junior engineers and conduct code reviews. Stay updated with the Azure roadmap and guide on service adoption.

Essential Education
Bachelor's (or higher) degree from a recognized institute of higher learning, ideally focused in Computer Science, MIS/IT, or other STEM related subjects.
Essential Experience And Job Requirements
Technical capability:
Primary Skills: Azure IaaS, PaaS & Core Services; Azure OpenAI / Copilot Studio; SQL Server & Azure Data Factory (ADF)
Secondary Skills: Security & Governance; Monitoring & Observability; DevOps & CI/CD
Business capability: Service Delivery & Management; Domain expertise – Legal and Ethics & Compliance
Leadership and EQ: For those in team leadership positions (whether activity or line management): Always getting the basics right, from quality development conversations to recognition and ongoing performance feedback. Has the ability to develop, coach, mentor and inspire others. Ensures team compliance with BP's Code of Conduct and demonstrates strong leadership of BP's Leadership Expectations and Values & Behaviours. Creates an environment where people are listening and speak openly about the good, the bad, and the ugly, so that everyone can understand and learn.
All role holders: Embraces a culture of change and agility, evolving continuously, adapting to our changing world. An effective team player who looks beyond own area/organisational boundaries to consider the bigger picture and/or perspective of others. Is self-aware and actively seeks input from others on impact and effectiveness. Applies judgment and common sense – able to use insight and good judgement to enable commercially sound, efficient and pragmatic decisions and solutions and to respond to situations as they arise. Ensures personal compliance with BP's Code of Conduct and demonstrates strong leadership of BP's Leadership Expectations and Values & Behaviours. Cultural fluency – actively seeks to understand cultural differences and sensitivities.
Travel Requirement: No travel is expected with this role
Relocation Assistance: This role is not eligible for relocation
Remote Type: This position is a hybrid of office/remote working
Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSql data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We're Hiring: OpenShift SME / Platform Engineer – L3
Location: Noida (On-site)
Company: Nuvem Labs
Experience: 8+ years in Red Hat OpenShift / Kubernetes Platform Operations

About the Role
Nuvem Labs is seeking a skilled and experienced OpenShift SME / Platform Engineer to lead operations, performance optimization, and high-availability design of Red Hat OpenShift clusters across enterprise environments. You will be responsible for the health, scalability, and governance of OpenShift platforms while supporting complex troubleshooting and mentoring junior team members. You will act as the technical lead for OpenShift platform operations in a managed services model, working across hybrid cloud environments and enterprise infrastructure.

Roles & Responsibilities
- Ensure cluster health and performance of production-grade OpenShift environments.
- Troubleshoot issues across control plane, nodes, SDN, storage (PVC), ingress/egress, and etcd.
- Plan and execute platform upgrades, patching, scaling, and multi-node infra management.
- Define and manage RBAC, quotas, Operators, MachineSets, and custom resource definitions (CRDs).
- Integrate and manage observability tools (Prometheus, Alertmanager, Grafana, EFK/ELK).
- Work with DevSecOps teams to automate security, backup, and governance policies.
- Own escalation management, interface with Red Hat support, and lead L2/L1 mentoring.
- Support disaster recovery, compliance, and business continuity planning.

Required Skills
- 5+ years in managing OpenShift or Kubernetes platforms at enterprise scale.
- Hands-on experience with: Red Hat OpenShift (v4.x), OCP CLI, Operators, and Service Mesh; container runtimes (CRI-O), etcd, SDN/CNI (OVN/Kuryr), ingress controllers; storage (CSI, PVC, RWX, dynamic provisioning); GitOps (ArgoCD, Flux); Linux OS (RHEL 8+), scripting (Bash/Python/YAML), automation (Ansible/Terraform); CI/CD tools and pipeline troubleshooting (Jenkins, Tekton, GitLab).
- Experience in multi-node, multi-cluster, or hybrid OpenShift environments.

Preferred Qualifications
- Certifications: Red Hat Certified Specialist in OpenShift Administration (EX280), RHCE, CKA, CKAD
- Familiarity with: multi-tenant clusters, infra node pools, and pod security policies; cloud integration: OpenShift on Azure (ARO), AWS (ROSA), or OpenStack; GitOps, infrastructure as code, and backup/restore design (Velero, OADP)
- Experience working with Red Hat support and enterprise customers under strict SLAs.

Why Join Nuvem Labs?
- Drive next-gen OpenShift platform operations and automation at scale.
- Work with global customers on cutting-edge container platforms and CI/CD ecosystems.
- Grow into Platform Architect or Practice Head roles.
- Competitive salary + skill enhancement + long-term career visibility.

Interested? Send your CV to: careers@nuvemlabs.in

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

Immediate joining (WFH). InfraSingularity aims to revolutionize the Web3 ecosystem as a pioneering investor and builder. Our long-term vision is to establish ourselves as the first-of-its-kind in this domain, spearheading the investment and infrastructure development for top web3 protocols. At IS, we recognize the immense potential of web3 technologies to reshape industries and empower individuals. By investing in top web3 protocols, we aim to fuel their growth and support their journey towards decentralization. Additionally, our plan to actively build infrastructure with these protocols sets us apart, ensuring that they have the necessary foundations to operate in a decentralized manner effectively. We embrace collaboration and partnership as key drivers of success. By working alongside esteemed web3 VCs like WAGMI and more, we can leverage their expertise and collective insights to maximize our impact. Together, we are shaping the future of the Web3 ecosystem, co-investing, and co-building infrastructure that accelerates the adoption and growth of decentralized technologies. Together with our portfolio of top web3 protocols (Lava, Sei, and Anoma) and our collaborative partnerships with top protocols (EigenLayer, Avail, PolyMesh, and Connext), we are creating a transformative impact on industries, society, and the global economy. Join us on this groundbreaking journey as we reshape the future of finance, governance, and technology. About the Role: We are looking for a Senior Site Reliability Engineer (SRE) to take ownership of our multi-cloud blockchain infrastructure and validator node operations. This role is critical in ensuring high performance, availability, and resilience across a range of L1/L2 blockchain protocols. If you're passionate about infrastructure automation, system reliability, and emerging Web3 technologies, we’d love to talk.
What You’ll Do
- Own and operate validator nodes across multiple blockchain networks, ensuring uptime, security, and cost-efficiency.
- Architect, deploy, and maintain infrastructure on AWS, GCP, and bare metal for protocol scalability and performance.
- Implement Kubernetes-native tooling (Helm, FluxCD, Prometheus, Thanos) to manage deployments and observability.
- Collaborate with our Protocol R&D team to onboard new blockchains and participate in testnets, mainnets, and governance.
- Ensure secure infrastructure with best-in-class secrets management (HashiCorp Vault, KMS) and incident response protocols.
- Contribute to a robust monitoring and alerting stack to detect anomalies, performance drops, or protocol-level issues.
- Act as a bridge between software, protocol, and product teams to communicate infra constraints or deployment risks clearly.
- Continuously improve deployment pipelines using Terraform, Terragrunt, and GitOps practices.
- Participate in on-call rotations and incident retrospectives, driving post-mortem analysis and long-term fixes.

Our Stack
- Cloud & Infra: AWS, GCP, bare-metal
- Containerization: Kubernetes, Helm, FluxCD
- IaC: Terraform, Terragrunt
- Monitoring: Prometheus, Thanos, Grafana, Loki
- Secrets & Security: HashiCorp Vault, AWS KMS
- Languages: Go, Bash, Python, Typescript
- Blockchain: Ethereum, Polygon, Cosmos, Solana, Foundry, OpenZeppelin

What You Bring
- 4+ years of experience in SRE/DevOps/Infra roles, ideally within FinTech, Cloud, or high-reliability environments.
- Proven expertise managing Kubernetes in production at scale.
- Strong hands-on experience with Terraform, Helm, and GitOps workflows.
- Deep understanding of system reliability, incident management, fault tolerance, and monitoring best practices.
- Proficiency with Prometheus and PromQL for custom dashboards, metrics, and alerting.
- Experience operating secure infrastructure and implementing SOC2/ISO27001-aligned practices.
- Solid scripting in Bash, Python, or Go.
- Clear and confident communicator, capable of interfacing with both technical and non-technical stakeholders.

Nice-to-Have
- First-hand experience in Web3/blockchain/crypto environments.
- Understanding of staking, validator economics, slashing conditions, or L1/L2 governance mechanisms.
- Exposure to smart contract deployments or working with Solidity, Foundry, or similar toolchains.
- Experience with compliance-heavy or security-certified environments (SOC2, ISO 27001, HIPAA).

Why Join Us?
- Work at the bleeding edge of Web3 infrastructure and validator tech.
- Join a fast-moving team that values ownership, performance, and reliability.
- Collaborate with protocol engineers, researchers, and crypto-native teams.
- Get exposure to some of the most interesting blockchain ecosystems in the world.
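The Prometheus/PromQL proficiency this listing asks for usually centers on latency quantiles. A stdlib sketch that computes a p95 directly from sampled latencies, the same quantity a PromQL `histogram_quantile(0.95, ...)` query estimates from histogram buckets; the sample values below are made up:

```python
# p95 latency from raw samples, illustrating what a latency-quantile
# alert measures. Sample values are hypothetical.
import statistics

def p95(latencies_ms):
    """95th percentile using the 'inclusive' (linear interpolation) method."""
    qs = statistics.quantiles(latencies_ms, n=100, method="inclusive")
    return qs[94]  # the 95th cut point

samples = [12, 14, 15, 15, 16, 18, 21, 25, 40, 220]  # one slow outlier
print(p95(samples))  # 139.0
```

Note how a single outlier dominates the p95 even though the median stays low, which is why SRE alerting favors tail quantiles over averages.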

Posted 2 weeks ago

Apply