5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As an AI Ops Expert, you will be responsible for taking full ownership of deliverables with defined quality standards, timelines, and budget constraints. Your primary role will involve designing, implementing, and managing AIOps solutions to automate and optimize AI/ML workflows. Collaborating with data scientists, engineers, and stakeholders is essential to ensure the seamless integration of AI/ML models into production environments. Your duties will also include monitoring and maintaining the health and performance of AI/ML systems, developing and maintaining CI/CD pipelines specifically tailored for AI/ML models, and implementing best practices for model versioning, testing, and deployment. In case of issues related to AI/ML infrastructure or workflows, you will troubleshoot and resolve them effectively. To excel in this role, you are expected to stay abreast of the latest AIOps, MLOps, and Kubernetes tools and technologies. Your strong skills should include proficiency in Python with experience in FastAPI, hands-on expertise in Docker and Kubernetes (or AKS), familiarity with MS Azure and its AI/ML services such as Azure ML Flow, and the ability to use DevContainer for development. You should also possess knowledge of CI/CD tools such as Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps, experience with containerization and orchestration tools like Docker and Kubernetes, proficiency in Infrastructure as Code (Terraform or equivalent), familiarity with machine learning frameworks like TensorFlow, PyTorch, or scikit-learn, and exposure to data engineering tools such as Apache Kafka, Apache Spark, or similar technologies.
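For context, the Python/FastAPI and Kubernetes skills this posting lists typically show up in small model-serving services like the sketch below. It is illustrative only; the endpoint names, request fields, and scoring logic are assumptions, not details from the role.

```python
# A minimal, illustrative FastAPI inference service; endpoint names, request
# fields, and the scoring logic are hypothetical, not taken from the posting.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-inference-service")

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

@app.get("/healthz")
def healthz() -> dict:
    # Liveness endpoint that a Kubernetes probe could call.
    return {"status": "ok"}

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Placeholder scoring; a real AIOps setup would load a versioned model here.
    score = sum(req.features) / (len(req.features) or 1)
    return PredictResponse(score=score)
```

A service like this would typically be run with uvicorn inside a Docker image, with the /healthz route wired to the container's liveness and readiness probes.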
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As an Ignition Application Administrator at EY, you will be a key member of the Enterprise Services Data team. Your role will involve collaborating closely with peer platform administrators, developers, Product/Project Seniors, and Customers to administer the existing analytics platforms. While focusing primarily on Ignition, you will also be cross-trained on other tools such as Qlik Sense, Tableau, PowerBI, SAP Business Objects, and more. Your willingness to tackle complex problems and find innovative solutions will be crucial in this role. In this position, you will have the opportunity to work in a start-up-like environment within a Fortune 50 company, driving digital transformation and leveraging insights to enhance products and services. Your responsibilities will include installing and configuring Ignition, monitoring the platform, troubleshooting issues, managing data source connections, and contributing to the overall data platform architecture and strategy. You will also be involved in integrating Ignition with other ES Data platforms and Business Unit installations. To succeed in this role, you should have at least 3 years of experience in customer success or a customer-facing engineering capacity, along with expertise in large-scale implementations and complex solutions environments. Experience with Linux command line, cloud operations, Kubernetes application deployment, and cloud platform architecture is essential. Strong communication skills, both interpersonal and written, are also key for this position. Ideally, you should hold a BA/BS Degree in technology, computing, or a related field, although relevant work experience may be considered in place of formal education. The position may require flexibility in working hours, including weekends, to meet deadlines and fulfill application administration obligations. Join us at EY and contribute to building a better working world by leveraging data, technology, and your unique skills to drive innovation and growth for our clients and society.,
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
As an OpenShift Admin with 6 to 8 years of relevant experience, you will be responsible for building automation to support product development and data analytics initiatives. In this role, you will develop and maintain strong customer relationships to ensure effective service delivery and customer satisfaction. Regular interaction with customers will be essential to refine requirements, gain agreement on solutions and deliverables, provide progress reports, monitor satisfaction levels, identify and resolve concerns, and seek cooperation to achieve mutual objectives. To be successful in this role, you must have a minimum of 6 years of experience as an OpenShift Admin, with expertise in Kubernetes Administration, Automation tools such as Ansible, AWS EKS, Argo CD, and Linux administration. Extensive knowledge and experience with OpenShift and Kubernetes are crucial for this infrastructure-focused position. You should be experienced in deploying new app containers from scratch in OpenShift or Kubernetes, as well as upgrading OpenShift and working with observability in these environments. Additional skills that would be beneficial for this role include experience with Anthos/GKE for Hybrid Cloud, HashiCorp Terraform, and HashiCorp Vault. As an OpenShift Admin, you will be expected to create, maintain, and track designs at both high and detailed levels, identify new technologies for adoption, conduct consistent code reviews, and propose changes where necessary. You will also be responsible for provisioning infrastructure, developing automation scripts, monitoring system performance, integrating security and compliance measures, documenting configurations and processes, and deploying infrastructure as code and applications using automation and orchestration tools. The hiring process for this position will consist of screening rounds conducted by HR, followed by two technical rounds, and a final HR round. If you are someone with a strong background in OpenShift Administration and related technologies, and you are passionate about driving innovation and excellence in infrastructure management, we encourage you to apply for this role in our Pune office.,
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a DevOps engineer at C1X AdTech Private Limited, a global technology company, your primary responsibility will be to manage the infrastructure, support development pipelines, and ensure system reliability. You will play a crucial role in automating deployment processes, maintaining server environments, monitoring system performance, and supporting engineering operations throughout the development lifecycle. Our objective is to design and manage scalable, cloud-native infrastructure using GCP services, Kubernetes, and Argo CD for high-availability applications. Additionally, you will implement and monitor observability tools such as Elasticsearch, Logstash, and Kibana to ensure full system visibility and support performance tuning. Enabling real-time data streaming and processing pipelines using Apache Kafka and GCP DataProc will be a key aspect of your role. You will also be responsible for automating CI/CD pipelines using GitHub Actions and Argo CD to facilitate faster, secure, and auditable releases across development and production environments. Your responsibilities will include building, managing, and monitoring Kubernetes clusters and containerized workloads using GKE and Argo CD, designing and maintaining CI/CD pipelines using GitHub Actions integrated with GitOps practices, configuring and maintaining real-time data pipelines using Apache Kafka and GCP DataProc, managing logging and observability infrastructure using Elasticsearch, Logstash, and Kibana (ELK stack), setting up and securing GCP services including Artifact Registry, Compute Engine, Cloud Storage, VPC, and IAM, implementing caching and session stores using Redis for performance optimization, and monitoring system health, availability, and performance with tools like Prometheus, Grafana, and ELK. Collaboration with development and QA teams to streamline deployment processes and ensure environment stability, as well as automating infrastructure provisioning and configuration using Bash, Python, or Terraform will be essential aspects of your role. You will also be responsible for maintaining backup, failover, and recovery strategies for production environments. To qualify for this position, you should hold a Bachelor's degree in Computer Science, Engineering, or a related technical field with at least 4-8 years of experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering. Strong experience with Google Cloud Platform (GCP) services including GKE, IAM, VPC, Artifact Registry, and DataProc is required. Hands-on experience with Kubernetes, Argo CD, and GitHub Actions for CI/CD workflows, proficiency with Apache Kafka for real-time data streaming, experience managing ELK Stack (Elasticsearch, Logstash, Kibana) in production, working knowledge of Redis for distributed caching and session management, scripting/automation skills using Bash, Python, Terraform, etc., solid understanding of containerization, infrastructure-as-code, and system monitoring, and familiarity with cloud security, IAM policies, and audit/compliance best practices are also essential qualifications for this role.,
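As an illustration of the Kafka-based streaming work this posting describes, a minimal Python producer/consumer pair might look like the sketch below; the broker address, topic name, and message fields are placeholders, not values from C1X's environment.

```python
# Illustrative Kafka producer/consumer pair using the kafka-python client;
# broker address, topic, and payload fields are assumptions for the sketch.
import json
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "events.example"    # hypothetical topic name

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"event": "page_view", "user_id": 123})
producer.flush()  # ensure the message is delivered before consuming

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value)
```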
Posted 1 week ago
8.0 - 10.0 years
27 - 37 Lacs
Chennai
Work from Office
Required skills: Kubernetes, Helm, Operator, Argo CD, AKS, and Kubernetes security (all skills mandatory). Good experience in Kubernetes; hands-on experience with Helm, Operator, Argo CD, and AKS; good knowledge of Kubernetes security.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As an AI Ops Expert, you will be responsible for the delivery of projects with defined quality standards within set timelines and budget constraints. Your role will involve managing the AI model lifecycle, versioning, and monitoring in production environments. You will be tasked with building resilient MLOps pipelines and ensuring adherence to governance standards. Additionally, you will design, implement, and oversee AIOps solutions to automate and optimize AI/ML workflows. Collaboration with data scientists, engineers, and stakeholders will be essential to ensure seamless integration of AI/ML models into production systems. Monitoring and maintaining the health and performance of AI/ML systems, as well as developing and maintaining CI/CD pipelines for AI/ML models, will also be part of your responsibilities. Troubleshooting and resolving issues related to AI/ML infrastructure and workflows will require your expertise, along with staying updated on the latest AIOps, MLOps, and Kubernetes tools and technologies. To be successful in this role, you must possess a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of relevant experience. Your proven experience in AIOps, MLOps, or related fields will be crucial. Proficiency in Python and hands-on experience with FastAPI are required, as well as strong expertise in Docker and Kubernetes (or AKS). Familiarity with MS Azure and its AI/ML services, including Azure ML Flow, is essential. Additionally, you should be proficient in using DevContainer for development and have knowledge of CI/CD tools like Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps. Experience with containerization and orchestration tools, Infrastructure as Code (Terraform or equivalent), strong problem-solving skills, and excellent communication and collaboration abilities are also necessary. Preferred skills for this role include experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn, as well as familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar. Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack, along with an understanding of data versioning tools like DVC or MLflow, would be advantageous. Proficiency in Azure-specific tools and services like Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, Azure Monitor, and Application Insights is also preferred. Joining our team at Société Générale will provide you with the opportunity to be part of a dynamic environment where your contributions can make a positive impact on the future. You will have the chance to innovate, collaborate, and grow in a supportive and stimulating setting. Our commitment to diversity and inclusion, as well as our focus on ESG principles and responsible practices, ensures that you will have the opportunity to contribute meaningfully to various initiatives and projects aimed at creating a better future for all. If you are looking to be directly involved, develop your expertise, and be part of a team that values collaboration and innovation, you will find a welcoming and fulfilling environment with us at Société Générale.
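Since this posting highlights model versioning and tools like MLflow, a minimal sketch of what tracking and registering a model can look like in Python is shown below; the experiment name, model name, and parameters are hypothetical, not part of the role.

```python
# Sketch of model tracking and versioning with MLflow; experiment name, model
# name, and parameters are placeholders chosen for illustration.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_experiment("example-experiment")

X, y = make_classification(n_samples=200, n_features=5, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering creates a new model version that deployment pipelines can
    # promote; this requires a tracking backend that supports the model registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="example-classifier")
```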
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Senior DevOps Engineer at TechBlocks, you will be responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments. With 5-8 years of experience in DevOps engineering roles, your expertise in CI/CD, infrastructure automation, and Kubernetes will be crucial for the success of our projects. In this role, you will own the CI/CD strategy and configuration, implement DevSecOps practices, and drive an automation-first culture within the team. Your key responsibilities will include designing and implementing end-to-end CI/CD pipelines using tools like Jenkins, GitHub Actions, and Argo CD for production-grade deployments. You will also define branching strategies and workflow templates for development teams, automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests, and manage secrets lifecycle using Vault for secure deployments. Collaborating with engineering leads, you will review deployment readiness, ensure quality gates are met, and integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows. Monitoring infrastructure health and capacity planning using tools like Prometheus, Grafana, and Datadog, you will implement alerting rules, auto-scaling, self-healing, and resilience strategies in Kubernetes. Additionally, you will drive process documentation, review peer automation scripts, and provide mentoring to junior DevOps engineers. Your role will be pivotal in ensuring the reliability, scalability, and security of our systems while fostering a culture of innovation and continuous learning within the team. TechBlocks is a global digital product engineering company with 16+ years of experience, helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. We believe in the power of technology and the impact it can have when coupled with a talented team. Join us at TechBlocks and be part of a dynamic, fast-moving environment where big ideas turn into real impact, shaping the future of digital transformation.,
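As an example of the Vault-based secrets management this posting mentions, a small Python sketch using the hvac client (one common way to talk to Vault) is shown below; the Vault address, token source, and secret path are placeholders rather than TechBlocks' actual configuration.

```python
# Illustrative read of a secret from HashiCorp Vault via the hvac client;
# the Vault address, token, and secret path are assumptions for this sketch.
import os

import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ.get("VAULT_TOKEN", ""),
)

if client.is_authenticated():
    # KV v2 secrets engine read; a CI/CD job would inject the result at deploy time.
    secret = client.secrets.kv.v2.read_secret_version(path="app/example")
    data = secret["data"]["data"]
    print("fetched secret keys:", list(data.keys()))
else:
    print("Vault authentication failed; check VAULT_ADDR/VAULT_TOKEN")
```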
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Lead of DevOps & Cloud Engineering at Guidepoint, you will play a crucial role in managing and leading a team of Sys. Admin and DevOps engineers. Your primary responsibility will be to develop and execute a robust Development Operations (DevOps) strategy to ensure high-quality software deployments and maintain the overall health and performance of applications. This position is based in our Pune office and offers a hybrid work environment. Your key responsibilities will include: - Taking complete ownership of DevOps/DevSecOps pipelines to support Enterprise Production, Development, and Test deployments - Managing CI/CD pipelines using tools such as Azure DevOps, Argo CD, Keda, and Velero for backups and disaster recovery - Leading and overseeing a team of experienced DevOps engineers across global teams - Provisioning, configuring, optimizing, and supporting all components of the Web Applications Stack, including Linux, Apache, SQL, PHP, Solr, ElasticSearch, Redis, and Couchbase - Right-sizing and optimizing infrastructure for AKS and Azure VMs to balance performance and cost - Securing infrastructure by applying patches and updates regularly - Configuring and managing monitoring and alert systems for Infrastructure and Application - Providing production support by troubleshooting and triaging issues - Contributing to overall engineering initiatives as part of the engineering leadership team To be successful in this role, you should have: - 5+ years of experience as a Senior DevOps Engineer - 3+ years of experience in managing a team of DevOps/DevSecOps Engineers - Expertise in CI/CD tools like Terraform, Ansible, Azure DevOps Pipelines, Argo CD, Keda, and Velero - Hands-on experience with various technologies including Linux, PHP, Apache, Nginx, Solr, Elastic Search Clusters, Redis, Couchbase, SQL Server, RabbitMQ, and git version control - Knowledge of Datadog for Infrastructure and Application monitoring - Preferred experience with IaaS and PaaS Cloud services (Azure), Containerization Technology (Docker, Kubernetes, ASR), and Azure IaaS/PaaS components In addition to technical skills, we are looking for candidates with strong interpersonal and communication skills, problem-solving abilities, and a proactive work style. You should be a collaborative team player, exhibit effective leadership skills, and be able to manage projects while upholding quality standards and meeting deadlines. Guidepoint offers competitive compensation, employee medical coverage, a central office location, an entrepreneurial environment, autonomy, and a casual work setting. Join Guidepoint, a leading research enablement platform, and be part of a diverse and inclusive community that values different perspectives and contributions.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Project Manager in the Production IT Digital Manufacturing team at EDAG Production Solutions India, you will have the opportunity to play a key role in the implementation, extension, and rollout of Manufacturing Execution System/Manufacturing Operations Management (MES/MOM) projects. Your responsibilities will include technical conceptual design, development, and implementation of global systems used in digital manufacturing, as well as collaboration on large agile IT projects within the Digital Manufacturing environment. To succeed in this role, you should have completed university studies in technical computer science or general computer science, or possess a comparable qualification. With 5-7 years of experience, you should be well-versed in the development of manufacturing operations management systems, designing IT solutions for the process industry, and requirements engineering. Knowledge of IATF standards, automotive or chemical industry experience, as well as familiarity with modern IT delivery processes and agile requirements engineering will be beneficial. Experience in cloud platforms, cloud-native technologies, technical management and rollout of MES/MOM projects, and integration and project management between automation levels, process control levels, MES, and SAP are desirable skills for this role. Your ability to work collaboratively in interdisciplinary projects and your networked end-to-end mindset will be crucial for success in this position. At EDAG Production Solutions India, we value diversity and believe that gender, age, nationality, and religion are not relevant factors. What matters most to us is your passion, expertise, and commitment to driving digital manufacturing forward. If you are ready for the next challenge in your career and possess the necessary qualifications and skills, we encourage you to apply by sending your documents via email, marked "Production IT Digital Manufacturing." Join us in shaping the future of digital manufacturing and be part of a dynamic and multicultural team that is dedicated to creating innovative solutions and driving growth together.
Posted 2 weeks ago
2.0 - 7.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Hello Candidate, Greetings from Hungry Bird IT Consulting Services Pvt Ltd. We are hiring a Software Engineer - Finance for our client. Job Title: Software Engineer, Finance Domain. Location: Hybrid - Hyderabad. Department: Engineering. Team: Finance Technology. Reports to: Engineering Manager / Tech Lead. Employment Type: Full-Time. About the Role: We are looking for a skilled Software Engineer to join our Finance Engineering team. You'll design, build, and maintain microservices and applications that power key financial workflows. This role is ideal for engineers with a solid backend foundation, familiarity with frontend technologies, and experience or interest in financial systems. You will work across the stack and collaborate closely with product managers, DevOps, and other engineers to build resilient, scalable systems using modern cloud-native tools and best practices. Core Responsibilities: Design and own microservices within the financial domain using Python and FastAPI. Implement robust unit, integration, and E2E testing strategies. Work with GraphQL and REST APIs to integrate external/internal services. Build secure, scalable backend systems deployed on AWS using Terraform, Docker, and Kubernetes. Collaborate with DevOps to enhance CI/CD pipelines using GitHub Actions. Monitor system health and performance using OpenTelemetry or equivalent observability tools. Mentor junior engineers and contribute to code reviews and system design discussions. Translate finance-specific requirements into technical solutions with product and stakeholders. For Less Experienced Engineers: Consume and integrate GraphQL/REST APIs into backend and frontend features. Write basic unit tests and learn to extend test coverage under mentorship. Work with Git, GitHub, and follow structured GitHub workflow practices. Pair with senior engineers to gain exposure to Kubernetes, Argo CD, and financial systems. Continuously develop skills across backend and cloud technologies. Tech Stack: Languages: Python (FastAPI), TypeScript, JavaScript, Node.js. Frontend: React (basic usage). APIs: GraphQL, REST. Cloud & Infra: AWS, Docker, Terraform, Kubernetes, Argo CD. CI/CD & Observability: GitHub Actions, OpenTelemetry. Version Control: Git, GitHub. Minimum Qualifications: Solid programming skills in Python; familiarity with TypeScript/JavaScript, Node.js, React. Experience consuming REST and/or GraphQL APIs. Proficient with Git and common GitHub workflows. Familiarity with AWS and Docker. Exposure to CI/CD and testing best practices. Strong problem-solving skills and attention to detail. Ability to work in a collaborative, fast-paced environment. Preferred for Senior Candidates: Experience designing and owning microservices. Strong background in unit, integration, and E2E testing. Hands-on with AWS, Terraform, Kubernetes, and Argo CD. Experience with GitHub Actions, OpenTelemetry, and system monitoring. Previous experience working in the financial industry or on financial systems. Track record of mentoring engineers and driving technical initiatives. Nice to Have: Experience with event-driven architecture or message brokers (e.g., Kafka, SQS). Domain expertise in accounting, payments, or reconciliation workflows. Contributions to open-source or finance tech communities. (Interested candidates can share their CV with us at shreya@hungrybird.in or reach us at +919701432176.) PLEASE MENTION THE RELEVANT POSITION IN THE SUBJECT LINE OF THE EMAIL. Example: KRISHNA, HR MANAGER, 7 YEARS, 20 DAYS NOTICE.
Name: Position applying for: Total experience: Notice period: Current Salary: Expected Salary: Thanks and Regards Shreya +91 9701432176
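To illustrate the FastAPI and testing stack described in the posting above, a small pytest-style unit test using FastAPI's TestClient is sketched below; the route, payload, and handler are hypothetical, not part of the actual finance services.

```python
# Illustrative FastAPI endpoint plus a pytest unit test using TestClient;
# the /payments route, fields, and handler logic are assumptions for the sketch.
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Payment(BaseModel):
    amount: float
    currency: str = "USD"

@app.post("/payments")
def create_payment(payment: Payment) -> dict:
    # Placeholder handler; a real service would persist and reconcile the payment.
    return {"status": "accepted", "amount": payment.amount, "currency": payment.currency}

client = TestClient(app)

def test_create_payment() -> None:
    response = client.post("/payments", json={"amount": 10.5})
    assert response.status_code == 200
    body = response.json()
    assert body["status"] == "accepted"
    assert body["currency"] == "USD"  # default applied by the Pydantic model
```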
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Project Manager in Production IT Digital Manufacturing at EDAG Production Solutions India in Gurgaon, you will play a key role in the implementation, extension, and rollout of MES/MOM systems from conception to go-live. Your responsibilities will include technical conceptual design, development, and further enhancement of global systems used in digital manufacturing, as well as collaboration on large agile IT projects. You will also contribute to the strategic design of cloud-based system architecture and the development of cross-service concepts. To excel in this role, you should hold a university degree in technical computer science or a related field, along with 5-7 years of experience. Your experience in developing manufacturing operation management systems, designing IT solutions for the process industry, and knowledge of IATF standards will be valuable. Proficiency in requirements engineering, agile methodologies, and cloud platforms such as Azure is essential. Additionally, familiarity with modern IT delivery processes like DevOps and technologies like Kubernetes and microservices architecture will be beneficial. Your role will involve technical management and rollout of MES/MOM systems, coordination between automation levels, process control, and SAP, and integration of various systems. At EDAG, we value diversity and focus on your skills and expertise above all else. If you are seeking a challenging opportunity to drive digital manufacturing initiatives forward, we encourage you to apply by sending your documents via email, marked "Production IT Digital Manufacturing." Join us at EDAG Production Solutions India and be part of a dynamic team passionate about innovation and excellence.,
Posted 2 weeks ago
5.0 - 8.0 years
8 - 12 Lacs
Coimbatore
Work from Office
Skills: DevOps tools like Argo CD, Jenkins, GitLab runners, CI/CD. DevOps Engineer with strong experience in setting up, integrating, and managing the various tools IT teams need to implement DevOps, including CI/CD tooling and monitoring tools: Git, Artifactory, Jenkins, Containers, Elastic, Logstash, Kibana, OpenShift, ServiceNow. Responsibilities: Able to understand and provide expert-level advice to implement DevOps for a given team(s). With strong hands-on experience in configuring the tools, able to administer and maintain them. Advanced-level expertise with scripting languages like Python, Groovy, Gradle, Maven, etc. Ensuring the issues reported in tools are resolved within our committed service level agreements. Manage documentation/knowledge base for the tools to ensure smooth support. Maintain strong relationships with our internal teams and customers for the delivery of support. Have a mindset of continuous improvement, in terms of efficiency of support processes and customer satisfaction. Requirements: Bachelor's degree in Engineering, or equivalent experience in a related field. 6-10 years of experience in design, development, and maintenance of any products/tools. Minimum 5 years of extensive experience in managing and maintaining the tools. Expertise in installing, configuring, performance testing, and troubleshooting of any DevOps tools. Strong understanding of development methodologies: Waterfall, Agile, Iterative. Experience with code repositories, esp. Git/GitHub/GitLab. Scripting/coding with Gradle, Groovy, Ansible, and Grafana are added advantages. Ability to actively scan for technologies that offer new opportunities to differentiate; technology trend analysis. Familiar working with Linux and Windows environments. Behavioral Skills: Strong verbal and written communication skills. Good problem-solving and analytical skills. Proactive in learning and taking ownership. Customer-orientated focus. Team player, with the ability to work in a fast-paced environment with a positive and adaptable approach. Must have the ability to handle multiple concurrent activities and have a flexible, positive attitude.
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Coimbatore
Work from Office
Skills: DevOps tools like Argo CD, Jenkins, GitLab runners, CI/CD. DevOps Engineer with strong experience in setting up, integrating, and managing the various tools IT teams need to implement DevOps, including CI/CD tooling and monitoring tools: Git, Artifactory, Jenkins, Containers, Elastic, Logstash, Kibana, OpenShift, ServiceNow. Responsibilities: Able to understand and provide expert-level advice to implement DevOps for a given team(s). With strong hands-on experience in configuring the tools, able to administer and maintain them. Advanced-level expertise with scripting languages like Python, Groovy, Gradle, Maven, etc. Ensuring the issues reported in tools are resolved within our committed service level agreements. Manage documentation/knowledge base for the tools to ensure smooth support. Maintain strong relationships with our internal teams & customers for the delivery of support. Have a mindset of continuous improvement, in terms of efficiency of support processes and customer satisfaction. Requirements: Bachelor's degree in Engineering, or equivalent experience in a related field. 6-10 years of experience in design, development, and maintenance of any products/tools. Minimum 5 years of extensive experience in managing & maintaining the tools. Expertise in installing, configuring, performance testing, and troubleshooting of any DevOps tools. Strong understanding of development methodologies: Waterfall, Agile, Iterative. Experience with code repositories, esp. Git/GitHub/GitLab. Scripting/coding with Gradle, Groovy, Ansible, and Grafana are added advantages. Ability to actively scan for technologies that offer new opportunities to differentiate; technology trend analysis. Familiar working with Linux and Windows environments. Behavioural Skills: Strong verbal and written communication skills. Good problem-solving and analytical skills. Proactive in learning & taking ownership. Customer-orientated focus. Team player, with the ability to work in a fast-paced environment with a positive and adaptable approach. Must have the ability to handle multiple concurrent activities and have a flexible, positive attitude.
Posted 3 weeks ago
8.0 - 13.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Overview - Our Hosting team manages and supports the infrastructure of all our platforms, from hardware to software, to operating systems to PowerSchool products. This collaborative team helps meet the needs of an evolving technology and business model with a specialized focus on protecting customer data and keeping information secure. We work closely with product engineering teams to deliver products into production environments across Azure and AWS. Description - Design, develop, and operate Infrastructure-as-Code automation for Terraform, K8s, and Snowflake, while also executing customer tenant migrations in Analytics and Insights. Manage and optimize K8s clusters and workloads using Argo CD, Flux, and Helm. Configure and support dynamic connector configurations. Work with the Product Engineering teams in building out the specifications and provide scalable, reliable platforms for the automation/delivery platform. UI development for internal dashboards and management applications. CI/CD pipeline engineering and support. Environment support, including production support. Participate in on-call schedules. Requirements - Minimum of 8+ years of relevant and related work experience. Bachelor's degree or equivalent, or equivalent years of relevant work experience; additional experience may be substituted for an advanced degree. Advanced knowledge and experience with Kubernetes, Flux, Terraform, or equivalent technologies. Advanced knowledge of AWS services, including EKS, EFS, RDS, ECS, etc. Working knowledge of monitoring, logging, and alerting tools like Grafana and Prometheus. Strong Java, Python, and Git experience.
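As a small sketch (not PowerSchool's actual tooling) of the kind of Kubernetes workload inspection such a hosting role involves, the snippet below uses the official Python client to summarize deployments in a cluster; the namespace and kubeconfig source are assumptions.

```python
# Illustrative Kubernetes workload summary using the official Python client;
# the namespace and local kubeconfig are assumptions for the sketch.
from kubernetes import client, config

def summarize_deployments(namespace: str = "default") -> None:
    # Loads credentials from the local kubeconfig (e.g., for an EKS or GKE cluster).
    config.load_kube_config()
    apps_v1 = client.AppsV1Api()
    deployments = apps_v1.list_namespaced_deployment(namespace=namespace)
    for dep in deployments.items:
        ready = dep.status.ready_replicas or 0
        desired = dep.spec.replicas or 0
        images = [c.image for c in dep.spec.template.spec.containers]
        print(f"{dep.metadata.name}: {ready}/{desired} ready, images={images}")

if __name__ == "__main__":
    summarize_deployments()
```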
Posted 3 weeks ago
8.0 - 12.0 years
0 - 0 Lacs
Noida
Hybrid
Company Overview BOLD is an established and fast-growing product company that transforms work lives. Since 2005, we've helped more than 10,000,000 folks from all over America (and beyond!) reach higher and do better. A career at BOLD promises great challenges, opportunities, culture, and environment. With our headquarters in Puerto Rico and offices in San Francisco and India, we're a global organization on a path to change the career industry. Key Responsibilities: Manage source code repositories, support teams to understand version control tools, and resolve any issues. Continuous integration, build & deployment automation using tools like Jenkins, Bitbucket Pipelines, etc. Integrate test suites in the build & deployment automation process. Build and deploy Docker images on Kubernetes, VMs & WebApps. Create and update deployment scripts. Troubleshoot/fix compilation and deployment issues. Ensure timely, reliable, and smooth deployment of releases and hotfixes. Optimize the build & deployment process to make it more efficient. Document deployment instructions and keep the documents updated. Handle and troubleshoot deployments based on .Net and NodeJS frameworks. Required Skills: Cloud: Azure, Azure Services (WebApps / Function Apps). IaaS: Terraform. CI/CD: Jenkins. Artifactory: Azure Container Registry, JFrog Artifactory, etc. Code Analysis: SonarQube. Containerization: Docker / Kubernetes. Additional Skills (Good to Have): CI/CD: ArgoCD, GitHub. Scripting: Python / PowerShell / Shell Scripting / Groovy. Configuration Management: Ansible, SaltStack. Build Tools: NAnt, MSBuild, Maven. Web servers: Nginx, IIS, Azure (WebApps / Function Apps). Languages: good-to-have knowledge of .Net / Python / Node / React based projects.
Posted 3 weeks ago
12.0 - 20.0 years
45 - 55 Lacs
Hyderabad
Hybrid
Job Location: Hyderabad. Timing: 12:30 PM to 9:30 PM (Hybrid). Notice Period: Immediate to 15 days. Please Note: Only local candidates, as a face-to-face final round is required. Summary: As DevOps Manager, you will be responsible for leading the DevOps function, hands-on in technology, while managing people, process improvements, and automation strategy. You will set the vision for DevOps practices at Techblocks India and drive cross-team efficiency. Experience Required: 12+ years total experience, with 3-5 years in DevOps leadership roles. Technical Knowledge and Skills: Mandatory: Cloud: GCP, AWS, or Azure (complete stack from IAM to GKE). CI/CD: End-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD). IaC: Terraform, Helm. Containers: Docker, Kubernetes. DevSecOps: Vault, Trivy, OWASP. Nice to Have: FinOps exposure for cost optimization. Big Data tools familiarity (BigQuery, Dataflow). Familiarity with Kong, Anthos, Istio. Scope: Lead the DevOps team across multiple pods and products. Define the roadmap for automation, security, and CI/CD. Ensure operational stability of deployment pipelines. Roles and Responsibilities: Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices. Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations. Champion DevSecOps practices by embedding security controls throughout the pipeline, ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement. Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies. Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices. Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes. Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards. Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits. Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships. Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions. Foster a culture of continuous learning, innovation, and ownership, driving internal tech talks, hackathons, and community engagement.
Posted 3 weeks ago
2.0 - 6.0 years
3 - 8 Lacs
Bengaluru
Work from Office
2+ years of hands-on experience in DevOps, with strong expertise in infrastructure automation and cloud-native technologies. Proficient in Terraform for infrastructure provisioning and Argo CD for GitOps-based continuous deployment. Solid understanding of cloud platforms including GCP, AWS, and Azure; Azure experience is a strong plus. Must have experience in setting up and managing monitoring and alerting using tools like Prometheus and Grafana. Responsible for ensuring high system uptime, continuous monitoring, and timely detection and notification of system anomalies. Collaborate with product managers to define and execute the DevOps roadmap for Salesken's services. Drive end-to-end execution of DevOps projects and report on progress and system health at an executive level. Design, implement, and enhance CI/CD pipelines to support reliable and frequent deployments. Perform root cause analysis of operational issues and work closely with development teams to implement fixes and improvements. Manage capacity planning and lead infrastructure enhancement projects, including design, budgeting, and execution. Build and maintain platforms for log processing, metrics collection, and data visualization to support observability and performance tracking. Cloud certifications are a plus.
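To illustrate the Prometheus-based monitoring and alerting this posting asks for, a minimal Python instrumentation sketch using the prometheus_client library is shown below; the metric names, port, and simulated workload are assumptions, not Salesken's setup.

```python
# Minimal service-metrics sketch with prometheus_client; metric names, port,
# and the simulated workload are placeholders for illustration.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                      # records request duration
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
        status = "200" if random.random() > 0.05 else "500"
        REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus can scrape metrics from :8000/metrics
    while True:
        handle_request()
```

A Prometheus scrape job pointed at this endpoint, plus Grafana dashboards and alert rules on the request and latency series, is the usual shape of the monitoring stack the posting describes.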
Posted 3 weeks ago
14.0 - 24.0 years
10 - 20 Lacs
Chennai
Hybrid
We are looking for an experienced Solution Architect with deep expertise in telecom cloud orchestration, network virtualization, automation, and cutting-edge technologies like GenAI and CAMARA. The ideal candidate will have strong proficiency in ETSI MANO architecture, Kubernetes, CI/CD, and on-premises deployments, including OpenStack, along with familiarity with leading public cloud platforms. This role demands a focus on scalable, secure, and innovative solutions while meeting customer requirements and industry standards. Key Responsibilities: Design and implement robust Telco Cloud Orchestration solutions to deploy and manage virtualised (VNFs) and containerised (CNFs) network functions on public cloud and OpenStack. Architect scalable, secure, and highly available Kubernetes clusters tailored to telecom workloads. Implement orchestration strategies for CNFs, ensuring lifecycle management, observability, fault tolerance, and SLA compliance. Enable API-driven integration for the Telco Cloud platform with DCGW, firewalls, load balancers, inventory systems, and other infrastructure elements. Design and deploy CI/CD pipelines using tools like Gitea, FluxCD, Crossplane, Jenkins, and Argo Workflow to automate CNF onboarding, provisioning, and upgrades. Integrate with service orchestrators, resource orchestrators, and third-party tools to ensure seamless end-to-end workflows. Design and enforce security best practices, including IAM roles, pod-level security policies, network segmentation, and encryption mechanisms. Ensure compliance with security standards and frameworks for both cloud and on-premises deployments. Hands-on experience with OpenStack components (Nova, Swift, Glance, Keystone, Cinder) for on-premises deployment and management. Leverage GenAI and CAMARA frameworks to drive innovation and optimise orchestration solutions. Explore AI/ML integration to enhance observability, fault prediction, and automation. Develop high-quality technical documentation, including architecture diagrams, HLDs/LLDs, and deployment procedures. Present technical solutions and architectural blueprints to diverse stakeholders, including customers, executives, and technical teams. Mentor and guide cross-functional teams to deliver scalable and secure solutions. Collaborate with product management, customers, and sales teams to align technical solutions with business goals. Conduct workshops, thought leadership sessions, and PoCs to demonstrate Tata Elxsi's Telco Cloud Orchestration and Automation Platform capabilities. Required Skills and Qualifications: Strong proficiency in ETSI MANO architecture and telecom cloud orchestration processes. Extensive experience with Kubernetes and managing AWS EKS clusters for deploying telco network functions (e.g., RAN, Core). Hands-on experience with on-premises deployment tools like OpenStack and related components. Strong proficiency in Telco Cloud Orchestration solutions to deploy and manage virtualised (VNFs) and containerised (CNFs) network functions on public cloud and OpenStack. Architect scalable, secure, and highly available Kubernetes clusters tailored to telecom workloads. Expertise in implementing orchestration strategies for CNFs, ensuring lifecycle management, observability, fault tolerance, and SLA compliance.
Experience in developing high-quality technical documentation, including architecture diagrams, HLDs/LLDs Proficiency in CI/CD tools and workflows, including Gitea, FluxCD, Crossplane, Jenkins, and Argo Workflow. Strong understanding of infrastructure as code and automation tools like Terraform and Ansible. Deep understanding of 4G/5G network functions, service configurations, and troubleshooting. Familiarity with TMF APIs (e.g., TMF638, TMF639, TMF652) and service/resource orchestration standards. Experience with networking protocols (e.g., LDAP, NTP, DHCP, DNS, VPN) and network elements (e.g., routers, switches, firewalls). Strong knowledge of designing secure, scalable solutions for both cloud and on-premises environments. Familiarity with CNI plugins (e.g., Calico, Cilium), network observability, and traffic routing in Kubernetes. Proficiency in working with public cloud platforms, including AWS, Azure, Google Cloud Platform (GCP), and their telecom-relevant services. Experience with cloud services like AWS VPC, Azure Virtual WAN, GCP Anthos, IAM, load balancers, CloudWatch, Auto Scaling, S3, and BigQuery. Expertise in managing hybrid cloud solutions integrating public and private cloud infrastructure. Experience with GenAI, CAMARA, and leveraging AI/ML to enhance orchestration processes and operational efficiency. Excellent communication and interpersonal skills, with the ability to convey complex solutions in clear, concise terms. Strong problem-solving and troubleshooting skills to address complex customer challenges. Develop and deliver tailored solutions based on Tata Elxsi's Telco Cloud Orchestration and Automation Platform capabilities, addressing customer needs and aligning with business goals. Maintain detailed technical knowledge of Tata Elxsi's Telco Cloud Orchestration and Automation Platform solutions and present them effectively to customers. Support pre-sales activities, including preparing RFx responses, delivering live demos, and conducting PoCs. Oversee the design and implementation of solutions, ensuring seamless integration and operational readiness. Provide feedback to product management, identifying areas for innovation and improvement based on customer engagement and industry trends. Present Tata Elxsi's Telco Cloud Orchestration and Automation Platform roadmap, architecture blueprints, and competitive advantages to customers and stakeholders.
Posted 4 weeks ago
8.0 - 13.0 years
40 - 45 Lacs
Pune
Hybrid
Overview: Guidepoint's Engineering team thrives on problem-solving and creating happier users. As Guidepoint works to achieve its mission of making individuals, businesses, and the world smarter through personalized knowledge-sharing solutions, the engineering team is taking on challenges to improve our internal CRM system and create new products to optimize the seamless delivery of our services. As a Lead of DevOps & Cloud Engineering, you will be a core member of the engineering leadership team. In this role, you will manage and lead a team of Sys. Admin and DevOps engineers. You will be responsible for developing and executing a Development Operations (DevOps) strategy to ensure quality software deployments and overall application health and performance. This is a hybrid position out of our Pune office. What You'll Do: Complete ownership of DevOps/DevSecOps pipelines, supporting Enterprise Production, Development, and Test deployments. Managing CI/CD pipelines using Azure DevOps, Argo CD, Keda, and Velero for backups and DR. Lead and manage a team of experienced DevOps engineers in our global team. Provision, install, configure, optimize, maintain, and support all components of the Web Applications Stack (Linux, Apache, SQL, PHP, Solr, ElasticSearch, Redis, Couchbase). Right-size and optimize infrastructure, balancing performance and cost for AKS and Azure VMs. Secure all infrastructure by applying patches and updates at regular intervals. Configure and manage monitoring and alerting of issues with Infrastructure and Application. Provide production support by triaging and troubleshooting. Contribute to overall engineering initiatives as a member of the engineering leadership team. What You Have: 5+ years of experience working as a Senior DevOps Engineer. 3+ years of experience managing a team of DevOps/DevSecOps Engineers. Datadog experience with Infrastructure and Application monitoring and observability tools. Solid understanding and hands-on experience with CI/CD tools: Terraform, Ansible, Azure DevOps Pipelines, Argo CD. Keda and Velero expertise. Hands-on experience and knowledge of Linux (CentOS), PHP (configs/modules), Apache, Nginx, Solr, Elastic Search clusters, Redis, Couchbase, SQL Server, and RabbitMQ. Well-versed in git version control and Artifactory. Soft Skills Required: Maintain problem-solving skills and a proactive work style. Strong interpersonal & communication skills. Independent contributor with drive. Collaborative and teamwork-oriented. Accountable for work and takes ownership of tasks. Strategic thinker. Manages projects and upholds quality on a deadline. Exhibits judgement when it comes to prioritization and overall team management. Proven effective leadership skills. What We Offer: Competitive compensation. Employee medical coverage. Central office location. Entrepreneurial environment, autonomy, and fast decisions. Casual work environment.
Posted 1 month ago
10.0 - 15.0 years
30 - 45 Lacs
Gurugram
Work from Office
We are looking for a Lead Engineer DevOps & Platform Engineering to drive engineering transformation across our global cloud infrastructure. In this high-impact role, you'll take end-to-end ownership of DevOps architecture and implementation while also leading core platform engineering initiatives that accelerate product delivery, improve resilience, and scale our operations across GCP and Azure. Your Role & Responsibilities: You will play a key role in shaping the platform engineering strategy, aligning engineering execution with business outcomes, and delivering reusable, intelligent, and secure platforms. You'll work across multiple squads and domains to build a high-performing ecosystem that powers next-generation software delivery. Your responsibilities will include: Lead and implement the vision for DevOps and platform engineering capabilities across GCP and Azure environments. Drive engineering-wide initiatives on multi-tenancy optimization, environment reduction, and infrastructure reuse. Own and evolve our CI/CD automation pipelines, Kubernetes infrastructure, and GitOps-based delivery workflows using ArgoCD, Helm, Terraform, and Kustomize. Architect and scale multi-tenant platforms with strong isolation, configurability, and security baked in. Define and drive platform engineering strategy and OKRs, including targets for speed, scale, resilience, and developer enablement. Establish best-in-class observability, alerting, and SLO frameworks, and improve incident lifecycle response using tools like Grafana, Prometheus, Loki, and InfluxDB. Champion DevSecOps practices by embedding security, compliance, and policy-as-code in all platform components. Build internal tools, reusable templates, and developer self-service capabilities to scale productivity. Collaborate with engineering managers, product leaders, and architects to align platform goals with business priorities. Mentor engineers and foster a culture of engineering excellence and continuous improvement. Bonus: Lead GenAI experimentation for platform use cases like pipeline insights, log triage, changelog generation, and automated documentation. Skills & Experience Required: 10+ years of experience in DevOps, SRE, or Platform Engineering with cloud-native infrastructure. Hands-on expertise in GCP and Azure, Kubernetes, ArgoCD, Terraform, Helm, and GitOps workflows. Demonstrated success in leading platform-wide engineering initiatives and aligning them with business/technical strategy. Deep knowledge of multi-tenant architecture, infrastructure standardization, and cost-efficient scaling. Strong experience in observability tooling, SLOs, and production readiness practices. Advanced understanding of DevSecOps, compliance automation, and security-by-design principles. Strong communication, collaboration, and leadership skills across cross-functional teams. Good to have: Exposure to LLMs/GenAI in engineering workflows and intelligent automation. What You Bring: A builder's mindset with a strong focus on platform scalability, developer productivity, and resilience. Ability to lead strategic initiatives from design through execution. Bias for automation, reuse, and simplification. Passion for mentoring, engineering leadership, and continuous learning. Experience driving both technical depth and cross-team impact.
Posted 1 month ago
12.0 - 21.0 years
30 - 45 Lacs
Hyderabad, Ahmedabad, Bengaluru
Hybrid
Job Description: Summary: As DevOps Manager, you will be responsible for leading the DevOps function hands-on in technology while managing people, process improvements, and automation strategy. You will set the vision for DevOps practices at Techblocks India and drive cross-team efficiency. Experience Required: 12+ years total experience, with 3-5 years in DevOps leadership roles. Technical Knowledge and Skills: Mandatory: Cloud: GCP (complete stack from IAM to GKE). CI/CD: End-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD). IaC: Terraform, Helm. Containers: Docker, Kubernetes. DevSecOps: Vault, Trivy, OWASP. Nice to Have: FinOps exposure for cost optimization. Big Data tools familiarity (BigQuery, Dataflow). Familiarity with Kong, Anthos, Istio. Scope: Lead the DevOps team across multiple pods and products. Define the roadmap for automation, security, and CI/CD. Ensure operational stability of deployment pipelines. Roles and Responsibilities: Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices. Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations. Champion DevSecOps practices by embedding security controls throughout the pipeline, ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement. Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies. Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices. Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes. Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards. Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits. Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships. Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions. Foster a culture of continuous learning, innovation, and ownership, driving internal tech talks, hackathons, and community engagement. Contact: gopi.c@acesoftlabs.com or 9701923036
Posted 1 month ago
0.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Consultant - Java Developer. In this Full Stack Developer role, you will partner with business, UX design, architecture, and back-end teams to create world-class eCommerce software. You will play an integral role in the planning, development, testing, and deployment of all software solutions. You will also drive process improvements to deliver software faster. Responsibilities: Experience in enterprise/web/cloud application software development. Experience in developing Java applications in Spring Boot and Core Java. Experience in NodeJS, its frameworks (e.g., Express, Nest, etc.), and JavaScript. Experience in creating Angular 10+ UI applications. Experience in building APIs and services using REST. Working knowledge of functional, imperative, and object-oriented languages and methodologies. Experience with automation technologies (e.g., Maven or Gradle, Jenkins, etc.). Experience in building, testing, and deploying code to run on cloud infrastructure. Experience in developing SQL/NoSQL database schemas. Experience with Google Kubernetes Engine/AWS, cluster creation, and Argo CD configuration. Qualifications we seek in you! Minimum Qualifications: BE / B.Tech / M.Tech / MCA. Experience in GraphQL. Experience with GCP. Preferred qualifications: Experience with GraphQL. Multiple years of experience utilizing CI/CD tools such as CircleCI/Harness. Excellent verbal, written communication, and collaboration skills. Adept at growth mindset (agility and developing yourself and others) skills. Adept at execution and delivery (planning, delivering, and supporting) skills. Experience in team leadership. Experience leveraging container-based technologies. Experience in Scrum/Agile development methodologies. Experience working with complex systems and solving challenging/analytical problems. Exposure to the Health Care domain. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
8.0 - 12.0 years
30 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Summary: We are seeking a highly experienced Senior Azure DevOps Engineer with 8+ years of proven experience in designing, implementing, and managing Azure infrastructure and DevOps pipelines. The ideal candidate will have extensive hands-on expertise in Terraform, YAML deployments, Helm charts, and Azure services, and will play a key role in migrating workloads from AWS to Azure. Strong troubleshooting skills, deep knowledge of Azure infrastructure, and the ability to work collaboratively across teams are essential. Key Responsibilities: Design, develop, and manage Infrastructure as Code (IaC) using Terraform for provisioning Azure services. Implement and maintain CI/CD pipelines using Azure DevOps, GitHub Actions, Argo CD, and Bamboo. Deploy applications and infrastructure using YAML, Helm charts, and native Azure deployment tools. Provide technical leadership in migrating workloads from AWS to Azure, ensuring optimal performance and security. Manage and support containerized applications using Kubernetes and Helm in Azure environments. Design robust, scalable, and secure Azure infrastructure solutions (compute, storage, network, database, and monitoring). Troubleshoot deployment, integration, and infrastructure issues across cloud environments. Collaborate with cross-functional teams to deliver infrastructure and DevOps solutions aligned with project goals. Support monitoring and performance optimization using Azure Monitor and other tools. Required Qualifications & Skills: 8+ years of hands-on experience in Azure infrastructure and DevOps engineering. Deep expertise in Terraform, YAML, and Azure CLI/ARM templates. Strong hands-on experience with core Azure services: compute, networking, storage, app services, etc. Experience with CI/CD tools such as GitHub Actions, Azure DevOps, Bamboo, and Argo CD. Proficient in managing and deploying applications using Helm charts and Kubernetes. Proven experience in migrating cloud workloads from AWS to Azure. Strong knowledge of Azure IaaS/PaaS, containerization, and DevOps best practices. Excellent troubleshooting and debugging skills across build, deployment, and infrastructure pipelines. Strong verbal and written communication skills for collaboration and documentation. Preferred Certifications (Nice to Have): AZ-400 Designing and Implementing Microsoft DevOps Solutions HashiCorp Certified: Terraform Associate Good to Have Skills: Experience with Argo CD, Bamboo, Tekton, and other CI/CD tools. Familiarity with AWS services to support migration projects. Location: Hyderabad/ Bangalore/ Coimbatore/ Pune
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Mumbai
Work from Office
Responsibilities: Perform systems analysis and design. Design and develop moderate to complex applications. Develop and ensure creation of application documents. Define and produce integration builds. Monitor emerging technology trends. Primary Skills: Experience with core Java concepts and Java EE. Experience with Spring and Spring Boot. Experience with CI/CD code automation and DevOps. Experience with OpenShift. Experience with GCP. Experience with Machine Learning & Artificial Intelligence. Good understanding of software architecture and design principles. Ability to design scalable, maintainable, and efficient Java & .Net applications. Understanding of data structures, programming logic, and design. Understanding of application design patterns. Excellent written & verbal communication skills. Excellent attention to detail. Secondary Skills: Experience using ArgoCD. Expertise with Structured Query Language (SQL). Experience using Angular. Experience using OAuth/OIDC. Qualifications: 3+ years of experience. Bachelor's Degree or international equivalent.
Posted 1 month ago
6.0 - 9.0 years
10 - 18 Lacs
Bengaluru
Hybrid
Experience: 6 to 9 years. Location: Bangalore. Notice Period: Immediate or 15 days. Senior DevOps Engineer.
Posted 1 month ago