
421 GitOps Jobs - Page 13

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Calcutta

On-site

We are seeking a DevOps Engineer with 3+ years of experience specializing in AWS, Git, and VPS management. The ideal candidate will be responsible for automating deployments, managing cloud infrastructure, and optimizing CI/CD pipelines for seamless development and operations.

Key Responsibilities:
✅ AWS Infrastructure Management – Deploy, configure, and optimize AWS services (EC2, S3, RDS, Lambda, etc.).
✅ Version Control & GitOps – Manage repositories, branching strategies, and workflows using Git/GitHub/GitLab.
✅ VPS Administration – Configure, maintain, and optimize VPS servers for high availability and performance.
✅ CI/CD Pipeline Development – Implement automated Git-based CI/CD workflows for smooth software releases.
✅ Containerization & Orchestration – Deploy applications using Docker and Kubernetes.
✅ Infrastructure as Code (IaC) – Automate deployments using Terraform or CloudFormation.
✅ Monitoring & Security – Implement logging, monitoring, and security best practices.

Required Skills & Experience:
• 3+ years of experience in AWS, Git, and VPS management.
• Strong knowledge of AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.).
• Expertise in Git and GitOps workflows.
• Hands-on experience with VPS hosting, Nginx, Apache, and server management.
• Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI).
• Knowledge of Infrastructure as Code (Terraform, CloudFormation).
• Strong scripting skills (Bash, Python, or Go).

Preferred Qualifications:
• Experience with server security hardening on VPS servers.
• Familiarity with AWS Lambda and serverless architecture.
• Knowledge of DevSecOps best practices.

Job Types: Full-time, Permanent, Contractual / Temporary
Benefits: Provident Fund
Schedule: Day shift
Work Location: In person
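The Git-based CI/CD workflows this listing calls for usually start with a branch-to-environment convention. A minimal sketch of that idea; the branch names and environment mapping here are hypothetical illustrations, not taken from the posting:

```python
def target_environment(branch: str) -> str:
    """Map a Git branch name to a deployment environment.

    Hypothetical convention for illustration: main deploys to
    production, release branches to staging, and feature/bugfix
    branches to a shared development environment.
    """
    if branch == "main":
        return "production"
    if branch.startswith("release/"):
        return "staging"
    if branch.startswith(("feature/", "bugfix/")):
        return "development"
    return "none"  # unrecognized branches are not auto-deployed
```

A CI job would call this once per push to decide which deploy stage, if any, to trigger.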

Posted 2 weeks ago


25.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


The Company

PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy.

We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers.

We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other.

We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade.

Our beliefs are the foundation for how we conduct business every day.
We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work as one global team with our customers at the center of everything we do, and they push us to take care of ourselves, each other, and our communities.

Job Description Summary:

Meet our Team: The PayPal SRE team aims to create the most reliable commerce platform on the planet. We engineer AI-driven reliability platforms that measure, monitor, and protect the experience of PayPal merchants and customers. You will be part of a production operations team within a new SRE organization focused entirely on merchant experience. This team will add to our platform of world-class automation and observability tools to create features that give us greater visibility and faster response to any issue affecting merchant experience.

Your Way to Impact:
• Work directly with Product Development teams on features, operations, and reliability engineering to improve the outcomes our merchants deserve.
• Learn quickly, and triage and remediate production system and application incidents while practicing balanced incident response.

Your Day to Day:
• Serve on a shift-based team protecting our merchants as a first responder for our systems and applications.
• Facilitate constructive retrospective sessions to help us enhance the whole lifecycle of operational response, from inception and design through deployment, operation, and refinement.
• Ensure proper documentation of incidents affecting our merchants, with the appropriate data collected for use by our various reliability platforms.
• Develop and improve production monitoring and management capabilities using existing platforms and tools.
• Work with the Production Operations Manager to take on problems, escalate, and resolve critical site incidents.
• Be the primary enabler in the reduction and elimination of issues affecting merchant experience.
What you need to bring:
• 8+ years of experience in site reliability engineering.
• Desire to be part of a shift-based team with hands-on responsibility to protect the experiences of our merchants.
• Skills in Python or other industry-standard software development languages (e.g., Java, Node).
• Strong experience with monitoring applications (Splunk/Datadog).
• Good understanding and working knowledge of networking principles, internet fundamentals, operating systems, and application stacks; comfortable using the Linux command line.
• Experience with GitOps/operations-as-code preferred, with knowledge of Terraform and Artifactory.
• Experience with GCP and related services a strong plus.

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits: At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset: you. That's why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee share options, health and life insurance, and more.
To learn more about our benefits, please visit https://www.paypalbenefits.com

Who We Are: To learn more about our culture and community, visit https://about.pypl.com/who-we-are/default.aspx

Commitment to Diversity and Inclusion: PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com.

Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For general requests for consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.

REQ ID: R0114422
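The SLO/SLI design work this SRE listing describes usually involves error-budget bookkeeping: an SLO of 99.9% permits a fixed number of failed requests per window, and alerting tracks how much of that budget remains. A minimal sketch of the arithmetic, assuming a simple request-success SLO (the function and numbers are illustrative, not PayPal's):

```python
def error_budget_remaining(slo: float, total: int, errors: int) -> float:
    """Fraction of the error budget left for a measurement window.

    slo:    target success ratio, e.g. 0.999 for "three nines"
    total:  requests served in the window
    errors: failed requests in the window
    """
    if total == 0:
        return 1.0  # nothing served, nothing spent
    allowed = (1.0 - slo) * total  # failures the SLO permits
    if allowed == 0:
        return 0.0 if errors else 1.0
    return max(0.0, 1.0 - errors / allowed)
```

For example, at a 99.9% SLO over one million requests, 1,000 failures are allowed; 500 observed failures leaves half the budget, a common trigger point for slowing down releases.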

Posted 2 weeks ago


4.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


Job Summary: We are seeking a skilled DevOps Engineer to design, deploy, and maintain scalable cloud-based infrastructure and applications. The ideal candidate will have hands-on experience with AWS, Kubernetes, containerization, and database management, with a focus on automation, reliability, and efficiency. You'll collaborate with cross-functional teams to ensure seamless integration of development and operations, driving innovation and operational excellence. You will be responsible for automating and streamlining our deployment processes, managing our Kubernetes environment, optimizing resource utilization, and implementing robust monitoring solutions.

Experience: 4 - 6 years

Responsibilities:
• Design, build, deploy, and manage scalable and reliable cloud infrastructure (AWS) using Infrastructure as Code (IaC) tools.
• Build and optimize containerized environments using Docker, Kubernetes, and orchestration tools for applications like MongoDB, PostgreSQL, React, Node.js (TypeScript), .NET Core, and PHP.
• Implement and monitor CI/CD pipelines for continuous delivery and deployment.
• Ensure high availability, scalability, and cost optimization of cloud resources.
• Manage database replication and disaster recovery strategies for MongoDB and PostgreSQL.
• Monitor cloud infrastructure and application performance using tools like Prometheus, Grafana, or CloudWatch.
• Troubleshoot performance bottlenecks and optimize resource utilization (CPU, memory, storage).
• Collaborate with developers, SREs, and QA teams to align DevOps practices with business goals.
• Stay updated on emerging cloud technologies and best practices.
• Implement and enforce security best practices across our infrastructure and applications.
• Automate repetitive tasks and processes using scripting and automation tools.
• Contribute to the documentation of infrastructure, processes, and best practices.
• Analyze resource utilization patterns and implement strategies for cost optimization and efficiency.
Qualifications:

Must Have:
• 4+ years of experience in a DevOps, SRE, cloud engineering, or similar role.
• Extensive knowledge and hands-on experience with the Amazon Web Services (AWS) cloud platform, including services like EC2, ECS/EKS, S3, RDS, VPC, IAM, etc.
• Strong understanding and practical experience with Kubernetes for container orchestration and management.
• Deep understanding of resource utilization concepts and methodologies for optimizing performance and cost.
• Proven experience containerizing various applications and technologies, including MongoDB, PostgreSQL, React, Node.js with TypeScript, .NET Core APIs, and PHP, using Docker or similar technologies.
• Experience setting up and managing database replication for MongoDB and PostgreSQL.
• Proficient in implementing and managing application and infrastructure monitoring solutions using tools like Prometheus, Grafana, CloudWatch, or similar.
• Strong understanding of networking principles and security best practices in cloud environments.
• Excellent problem-solving and troubleshooting skills.
• Ability to work independently and as part of a team.
• Good communication and collaboration skills.
• Scripting: Strong scripting skills in Bash, Python, or PowerShell for automation.

Good To Have:
• Experience with Infrastructure-as-Code (IaC) tools such as Helm, Terraform, Ansible, CloudFormation, or similar.
• Proficiency in scripting languages such as Python, Bash, or Go.
• Experience with CI/CD tools like Jenkins, GitLab CI, CircleCI, or GitHub Actions.
• Knowledge of configuration management tools like Ansible, Chef, or Puppet.
• Experience with log management and analysis tools like the ELK stack (Elasticsearch, Logstash, Kibana) or Splunk.
• Experience with serverless technologies like AWS Lambda and API Gateway.
• Understanding of agile development methodologies.
• Experience with database administration tasks.
• Cloud Security: Knowledge of IAM roles, encryption, and compliance (e.g., GDPR, SOC 2).
• Familiarity with performance testing and optimization techniques.
• Cloud Cost Optimization: Experience with cost management tools (e.g., AWS Cost Explorer).

Nice To Have:
• Certifications: AWS Certified Solutions Architect, Kubernetes (CKA/CKAD), or Azure/GCP certifications.
• Cloud Security Tools: Experience with tools like AWS WAF, Shield, or cloud-native security frameworks.
• Experience with: serverless databases (e.g., DynamoDB, Aurora Serverless); automated testing frameworks for infrastructure (e.g., InSpec, Terraform Validate); GitOps practices (e.g., Flux, Argo CD).

Company Benefits: Employees at Blobstation enjoy a full range of benefits, such as:
• 5 days a week
• Health Insurance
• Sponsorship towards training & certification
• Flexible working hours
• Flexibility to work from home
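Several duties in this listing (resource utilization, cost optimization) come down to setting sane Kubernetes resource requests and limits on each workload. A hedged sketch that builds a minimal Deployment manifest as a plain Python dict; the names, image, and sizes are made-up defaults, and equating requests with limits is a simplification (real sizing comes from observed utilization, e.g. Prometheus data):

```python
def deployment_manifest(name, image, replicas=2, cpu="250m", memory="256Mi"):
    """Build a minimal Kubernetes Deployment manifest as a dict.

    In practice this dict would be serialized to YAML and applied
    via a CI/CD pipeline or a GitOps controller.
    """
    container = {
        "name": name,
        "image": image,
        "resources": {
            "requests": {"cpu": cpu, "memory": memory},
            "limits": {"cpu": cpu, "memory": memory},
        },
    }
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [container]},
            },
        },
    }
```

Templating manifests in code like this is one way teams keep resource settings consistent across 40-odd services instead of hand-editing YAML.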

Posted 2 weeks ago


10.0 years

0 Lacs

Mohali district, India

On-site


About the Role: We are looking for a highly experienced and innovative Senior DevSecOps & Solution Architect to lead the design, implementation, and security of modern, scalable solutions across cloud platforms. The ideal candidate will bring a unique blend of DevSecOps practices, solution architecture, observability frameworks, and AI/ML expertise, with hands-on experience in data and workload migration from on-premises to cloud or cloud-to-cloud. You will play a pivotal role in transforming and securing our enterprise-grade infrastructure, automating deployments, designing intelligent systems, and implementing monitoring strategies for mission-critical applications.

DevSecOps Leadership:
• Own CI/CD strategy, automation pipelines, IaC (Terraform, Ansible), and container orchestration (Docker, Kubernetes, Helm).
• Champion DevSecOps best practices, embedding security into every stage of the SDLC.
• Manage secrets, credentials, and secure service-to-service communication using Vault, AWS Secrets Manager, or Azure Key Vault.
• Conduct infrastructure hardening, automated compliance checks (CIS, SOC 2, ISO 27001), and vulnerability management.

Solution Architecture:
• Architect scalable, fault-tolerant, cloud-native solutions (AWS, Azure, or GCP).
• Design end-to-end data flows, microservices, and serverless components.
• Lead migration strategies for on-premises-to-cloud or cloud-to-cloud transitions, ensuring minimal downtime and security continuity.
• Create technical architecture documents, solution blueprints, BOMs, and migration playbooks.

Observability & Monitoring:
• Implement modern observability stacks: OpenTelemetry, ELK, Prometheus/Grafana, Datadog, or New Relic.
• Define golden signals (latency, errors, saturation, traffic) and enable APM, RUM, and log aggregation.
• Design SLOs/SLIs and establish proactive alerting for high-availability environments.
AI/ML Engineering & Integration:
• Integrate AI/ML into existing systems for intelligent automation, data insights, and anomaly detection.
• Collaborate with data scientists to operationalize models using MLflow, SageMaker, Azure ML, or custom pipelines.
• Work with LLMs and foundation models (OpenAI, Hugging Face, Bedrock) for POCs or production-ready features.

Migration & Transformation:
• Lead complex data migration projects across heterogeneous environments: legacy systems to cloud, or inter-cloud (e.g., AWS to Azure).
• Ensure data integrity, encryption, schema mapping, and downtime minimization throughout migration efforts.
• Use tools such as AWS DMS, Azure Data Factory, GCP Transfer Services, or custom scripts for lift-and-shift and re-architecture.

Required Skills & Qualifications:
• 10+ years in DevOps, cloud architecture, or platform engineering roles.
• Expert in AWS and/or Azure, including IAM, VPC, EC2, Lambda/Functions, S3/Blob, API Gateway, and container services (EKS/AKS).
• Proficient in infrastructure as code: Terraform, CloudFormation, Ansible.
• Hands-on with Kubernetes (k8s), Helm, and GitOps workflows.
• Strong programming/scripting skills in Python, Shell, or PowerShell.
• Practical knowledge of AI/ML tools and libraries (TensorFlow, PyTorch, scikit-learn) and model lifecycle management.
• Demonstrated success in large-scale migrations and hybrid architecture.
• Solid understanding of application security, identity federation, and compliance.
• Familiar with agile practices, project estimation, and stakeholder communication.

Nice to Have:
• Certifications: AWS Solutions Architect, Azure Architect, Certified Kubernetes Administrator, or similar.
• Experience with Kafka, RabbitMQ, and event-driven architecture.
• Exposure to n8n, OpenFaaS, or AI agents.
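One concrete piece of the data-integrity work this role describes for migrations is checksum verification between source and target. A minimal sketch using content hashes; the file layout and helper names are hypothetical, and real migrations would stream large objects rather than hold them in memory:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content fingerprint used to compare source and target copies."""
    return hashlib.sha256(data).hexdigest()

def verify_migration(source_files: dict, target_files: dict) -> list:
    """Return names of files that are missing or differ after migration.

    Both arguments map file name -> raw bytes. An empty result means
    every source object arrived intact.
    """
    mismatched = []
    for name, content in source_files.items():
        if (name not in target_files
                or sha256_of(content) != sha256_of(target_files[name])):
            mismatched.append(name)
    return sorted(mismatched)
```

Tools like AWS DMS report similar validation results; a custom check like this is common for lift-and-shift of object storage.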

Posted 2 weeks ago


8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Hi All, Greetings of the day!

We are hiring for one of our reputed clients in Pune (Yerwada).

Looking for a Jr. Enterprise Architect with 8+ years of experience, including 2+ years as a Cloud Solution Architect, and the following skills:
• Any cloud experience (AWS/Azure/GCP)
• Cost optimization
• IaC (Infrastructure as Code)
• GitOps
• Cloud Security

Interested candidates, please apply here: alisha.sh@peoplefy.com

Posted 2 weeks ago


5.0 - 7.0 years

12 - 14 Lacs

Bengaluru

Work from Office


About the Role: We are seeking a motivated and skilled OpenShift DevOps Engineer to join our team. In this role, you will be responsible for building, deploying, and maintaining our applications on the OpenShift platform using CI/CD best practices. You will work closely with developers and other operations team members to ensure smooth and efficient delivery of software updates.

Responsibilities:
• Collaborate with customers to understand their specific requirements.
• Stay up to date with industry trends and emerging technologies.
• Prepare and maintain documentation for processes and procedures.
• Participate in on-call support and incident response, as needed.
• Good knowledge of virtual networking and storage configuration.
• Working experience with Linux.
• Hands-on experience with Kubernetes services, load balancing, and networking modules.
• Proficient in security, firewall, and storage concepts.
• Implement and manage OpenShift environments, including deployment configurations, cluster management, and resource optimization.
• Design and implement CI/CD pipelines using tools like OpenShift Pipelines, GitOps, or other industry standards.
• Automate build, test, and deployment processes for applications on OpenShift.
• Troubleshoot and resolve issues related to OpenShift deployments and CI/CD pipelines.
• Collaborate with developers and other IT professionals to ensure smooth delivery of software updates.
• Stay up to date on the latest trends and innovations in OpenShift and CI/CD technologies.
• Participate in the continuous improvement of our DevOps practices and processes.

Qualifications:
• Bachelor's degree in Computer Science or a related field (or equivalent work experience).
• Familiarity with infrastructure as code (IaC) tools (e.g., Terraform, Ansible).
• Excellent problem-solving, communication, and teamwork skills.
• Experience working in Agile/Scrum or other collaborative development environments.
• Flexible to work in a 24/7 support environment.
• Proven experience as a DevOps Engineer or similar role.
• Strong understanding of OpenShift platform administration and configuration.
• Experience with CI/CD practices and tools, preferably OpenShift Pipelines, GitOps, or similar options.
• Experience with containerization technologies (Docker, Kubernetes).
• Experience with scripting languages (Python, Bash).
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Ability to work independently and as part of a team.

Good to have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Infrastructure as Code (IaC) tools (Terraform, Ansible).
• Experience with security best practices for DevOps pipelines.

Mandatory Key Skills: DevOps, OpenShift platform administration, CI/CD, GitOps, Docker, Kubernetes, Python, Bash, cloud, IaC, Terraform, Ansible, Agile, Scrum
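The CI/CD pipelines this role builds (build, test, push, deploy) form a dependency graph, and a pipeline engine must execute the stages in topological order. A small sketch using Python's standard-library `graphlib`; the stage names are illustrative, not from any particular OpenShift Pipelines definition:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each stage maps to the set of stages it depends on (hypothetical names).
stages = {
    "build": set(),
    "unit-test": {"build"},
    "image-push": {"unit-test"},
    "deploy-staging": {"image-push"},
    "deploy-prod": {"deploy-staging"},
}

# static_order() yields a valid execution order for the whole pipeline;
# a cycle in the graph would raise CycleError instead.
order = list(TopologicalSorter(stages).static_order())
```

Real engines like Tekton resolve the same kind of graph, and additionally run independent branches in parallel.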

Posted 2 weeks ago


8.0 - 11.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About Us: Airtel Payments Bank, India's first payments bank, is a completely digital and paperless bank. The bank aims to take basic banking services to the doorstep of every Indian by leveraging Airtel's vast retail network in a quick and efficient manner. At Airtel Payments Bank, we're transforming the way banking operates in the country. Our core business is banking, and we've set out to serve each unbanked and underserved Indian. Our products and technology aim to take basic banking services to the doorstep of every Indian.

We are a fun-loving, energetic, and fast-growing company that breathes innovation. We encourage our people to push boundaries and evolve from skilled professionals of today to risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for people who are thinkers and doers; people with passion, curiosity, and conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

Job Summary: We are looking for a Lead TechOps Engineer to join our team in managing and scaling containerized applications using Docker, Kubernetes, and OpenShift. You will be responsible for maintaining production environments, implementing automation, and ensuring platform stability and performance.

Key Skills for TechOps Engineer (Docker, Kubernetes, OpenShift):

1. Containerization & Orchestration
• Expertise in Docker: building, managing, and debugging containers.
• Proficient in Kubernetes (K8s): deployments, services, ingress, Helm charts, namespaces.
• Experience with Red Hat OpenShift: operators, templates, routes, integrated CI/CD.

2. CI/CD and DevOps Toolchain
• Jenkins, GitLab CI/CD, and other CI/CD pipelines.
• Familiarity with GitOps practices.

3. Monitoring & Logging
• Experience with Prometheus, Grafana, the ELK stack, or similar tools.
• Understanding of health checks, metrics, and alerts.

4. Infrastructure as Code
• Hands-on with Terraform, Ansible, or Helm.
• Version control using Git.

5. Networking & Security
• K8s/OpenShift networking concepts (services, ingress, load balancers).
• Role-Based Access Control (RBAC), Network Policies, and Secrets management.

6. Scripting & Automation
• Proficiency in Bash, Python, or Go for automation tasks.

7. Cloud Platforms (Optional but Valuable)
• Experience with AWS, GCP, or Azure managed Kubernetes services (EKS, GKE, AKS).

Responsibilities:
• Design, implement, and maintain Kubernetes/OpenShift clusters.
• Build and deploy containerized applications using Docker.
• Manage CI/CD pipelines for smooth application delivery.
• Monitor system performance and respond to alerts or issues.
• Develop infrastructure as code and automate repetitive tasks.
• Work with developers and QA to support and optimize the application lifecycle.

Requirements:
• 8-11 years of experience in TechOps/DevOps/SRE roles.
• Strong knowledge of Docker, Kubernetes, and OpenShift.
• Experience with CI/CD tools like Jenkins.
• Proficiency in scripting (Bash, Python) and automation tools (Ansible, Terraform).
• Familiarity with logging and monitoring tools (Prometheus, ELK, etc.).
• Knowledge of networking, security, and best practices in container environments.
• Good communication and collaboration skills.

Nice to Have:
• Certifications (CKA, Red Hat OpenShift, etc.).
• Experience with public cloud providers (AWS, GCP, Azure).
• GitOps and service mesh (Istio, Linkerd) experience.

Why Join Us? Airtel Payments Bank is transforming from a digital-first bank to one of the largest fintech companies. There could not be a better time to join us and be a part of this incredible journey than now. We at Airtel Payments Bank don't believe in an all-work-and-no-play philosophy. For us, innovation is a way of life, and we are a happy bunch of people who have built together an ecosystem that drives financial inclusion in the country by serving 300 million financially unbanked, underbanked, and underserved people of India. Some defining characteristics of life at Airtel Payments Bank are Responsibility, Agility, Collaboration, and Entrepreneurial development; these also reflect in our core values, which we fondly call RACE.
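The "health checks, metrics, and alerts" skills above reduce, at their simplest, to comparing current metric values against alert thresholds, which is what Prometheus alerting rules express declaratively. A toy sketch of that evaluation; the metric names and thresholds are invented for illustration:

```python
def evaluate_alerts(metrics: dict, thresholds: dict) -> list:
    """Return the names of metrics that breached their alert threshold.

    metrics:    current readings, e.g. {"cpu_pct": 91.0, "p99_latency_ms": 240}
    thresholds: maximum allowed values, e.g. {"cpu_pct": 85.0}
    Metrics without a configured threshold are ignored.
    """
    return sorted(name for name, value in metrics.items()
                  if name in thresholds and value > thresholds[name])
```

In a real stack the comparison lives in Prometheus rules and the result is routed by Alertmanager, but the shape of the decision is the same.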

Posted 2 weeks ago


2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About Us: Airtel Payments Bank, India's first payments bank, is a completely digital and paperless bank. The bank aims to take basic banking services to the doorstep of every Indian by leveraging Airtel's vast retail network in a quick and efficient manner. At Airtel Payments Bank, we're transforming the way banking operates in the country. Our core business is banking, and we've set out to serve each unbanked and underserved Indian. Our products and technology aim to take basic banking services to the doorstep of every Indian.

We are a fun-loving, energetic, and fast-growing company that breathes innovation. We encourage our people to push boundaries and evolve from skilled professionals of today to risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for people who are thinkers and doers; people with passion, curiosity, and conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

Job Summary: We are looking for a skilled TechOps Engineer to join our team in managing and scaling containerized applications using Docker, Kubernetes, and OpenShift. You will be responsible for maintaining production environments, implementing automation, and ensuring platform stability and performance.

Key Skills for TechOps Engineer (Docker, Kubernetes, OpenShift):

1. Containerization & Orchestration
• Expertise in Docker: building, managing, and debugging containers.
• Proficient in Kubernetes (K8s): deployments, services, ingress, Helm charts, namespaces.
• Experience with Red Hat OpenShift: operators, templates, routes, integrated CI/CD.

2. CI/CD and DevOps Toolchain
• Jenkins, GitLab CI/CD, and other CI/CD pipelines.
• Familiarity with GitOps practices.

3. Monitoring & Logging
• Experience with Prometheus, Grafana, the ELK stack, or similar tools.
• Understanding of health checks, metrics, and alerts.

4. Infrastructure as Code
• Hands-on with Terraform, Ansible, or Helm.
• Version control using Git.

5. Networking & Security
• K8s/OpenShift networking concepts (services, ingress, load balancers).
• Role-Based Access Control (RBAC), Network Policies, and Secrets management.

6. Scripting & Automation
• Proficiency in Bash, Python, or Go for automation tasks.

7. Cloud Platforms (Optional but Valuable)
• Experience with AWS, GCP, or Azure managed Kubernetes services (EKS, GKE, AKS).

Responsibilities:
• Design, implement, and maintain Kubernetes/OpenShift clusters.
• Build and deploy containerized applications using Docker.
• Manage CI/CD pipelines for smooth application delivery.
• Monitor system performance and respond to alerts or issues.
• Develop infrastructure as code and automate repetitive tasks.
• Work with developers and QA to support and optimize the application lifecycle.

Requirements:
• 2-5 years of experience in TechOps/DevOps/SRE roles.
• Strong knowledge of Docker, Kubernetes, and OpenShift.
• Experience with CI/CD tools like Jenkins.
• Proficiency in scripting (Bash, Python) and automation tools (Ansible, Terraform).
• Familiarity with logging and monitoring tools (Prometheus, ELK, etc.).
• Knowledge of networking, security, and best practices in container environments.
• Good communication and collaboration skills.

Nice to Have:
• Certifications (CKA, Red Hat OpenShift, etc.).
• Experience with public cloud providers (AWS, GCP, Azure).
• GitOps and service mesh (Istio, Linkerd) experience.

Why Join Us? Airtel Payments Bank is transforming from a digital-first bank to one of the largest fintech companies. There could not be a better time to join us and be a part of this incredible journey than now. We at Airtel Payments Bank don't believe in an all-work-and-no-play philosophy. For us, innovation is a way of life, and we are a happy bunch of people who have built together an ecosystem that drives financial inclusion in the country by serving 300 million financially unbanked, underbanked, and underserved people of India. Some defining characteristics of life at Airtel Payments Bank are Responsibility, Agility, Collaboration, and Entrepreneurial development; these also reflect in our core values, which we fondly call RACE.

Posted 2 weeks ago


6.0 years

0 Lacs

Rajkot, Gujarat, India

On-site


We are looking for a skilled DevOps Engineer with at least 6 years of experience to join our dynamic team. The ideal candidate will play a crucial role in ensuring the smooth and efficient development, deployment, and maintenance of applications in a highly scalable and resilient infrastructure. You will collaborate closely with developers, QA engineers, and other stakeholders to drive automation, optimize processes, and maintain system reliability.

Key Responsibilities:
• Infrastructure Management: Design, implement, and manage cloud infrastructure using AWS services (EC2, S3, RDS, etc.). Work with EKS and ECS to manage containerized applications. Build and maintain Docker images for scalable and reliable deployments.
• CI/CD Pipelines: Create, maintain, and enhance CI/CD pipelines using Bitbucket Pipelines. Automate deployment workflows and ensure seamless integration with development teams.
• Application Deployment: Use Argo CD for managing GitOps-based application deployments. Ensure zero-downtime deployments and rollback strategies.
• Monitoring and Logging: Implement and maintain monitoring solutions using Grafana. Manage log aggregation and analysis with Fluent Bit or similar tools.
• Caching and Messaging Systems: Configure and maintain caching solutions like Redis. Handle messaging systems such as RabbitMQ to enable robust event-driven architectures.
• System Optimization: Optimize system performance, scalability, and cost-efficiency. Ensure high availability and disaster recovery planning for critical systems.

Required Skills and Qualifications:
• 6-10 years of hands-on experience in DevOps, infrastructure automation, or related roles.
• Strong knowledge of AWS services and best practices.
• Proficiency with container orchestration tools like EKS and ECS, and container technologies like Docker.
• Experience building CI/CD pipelines using Bitbucket Pipelines or similar tools.
• Working knowledge of Argo CD for continuous delivery and deployment.
• Experience with monitoring tools like Grafana and log aggregators like Fluent Bit.
• Familiarity with caching technologies like Redis and messaging systems like RabbitMQ.
• Solid understanding of networking, system security, and cloud-native architecture principles.
• Experience with Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation (optional but preferred).

Preferred Skills:
• Scripting and automation using Python, Bash, or similar languages.
• Knowledge of security best practices in DevOps.
• Experience with other CI/CD tools like Jenkins or GitLab CI (added advantage).
• Familiarity with Kubernetes operators and custom resource definitions (CRDs).

Early Joiners Preferred
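The Redis caching mentioned in this listing is, at its core, a key-value store with per-key expiry (what Redis calls SETEX/TTL). A toy in-process sketch of that idea; this is not a Redis client, and the injectable clock exists only so the expiry behaviour is easy to test without sleeping:

```python
import time

class TTLCache:
    """Minimal in-process cache with per-key expiry, Redis-SETEX style."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        """Store value and mark it to expire ttl_seconds from now."""
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key, default=None):
        """Return the value, or default if the key is absent or expired."""
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if self._clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value
```

Redis adds persistence, eviction policies, and network access on top; the TTL semantics are the part worth internalizing.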

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Overview
As an LLM (Large Language Model) Engineer, you will be responsible for designing, optimizing, and standardizing the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission will focus on modernizing legacy machine learning codebases (including 40+ models) for a major retail client, enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You'll work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI systems.

Responsibilities:
Lead the standardization and modernization of legacy ML codebases by aligning to current LLM architecture best practices.
Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns.
Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open-source stacks (e.g., LLaMA, Mistral, Falcon).
Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in-context learning methods.
Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking.
Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure).
Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance.
Mentor engineering teams in code best practices, versioning, and LLM lifecycle management.

Skills:
Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization.
Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs.
Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, K8s, Weights & Biases, and model-serving platforms.
Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases like FAISS, pgvector, or Milvus.
Proficient in Python (must), with working knowledge of shell scripting, YAML, and JSON schema standardization.
Experience managing compute, memory, and storage requirements of LLMs across deployment environments.

Qualifications & Experience:
5+ years in ML/AI engineering with at least 2 years working on LLMs or NLP-heavy systems.
Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind.
Clear communicator who collaborates well with business, data science, and DevOps teams.
Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing.
Curious and future-facing, always exploring new techniques and pushing the envelope on GenAI innovation.
Passionate about data ethics, responsible AI, and building inclusive systems that scale.

(ref:hirist.tech)
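The retrieval-augmented generation (RAG) work described above hinges on ranking stored text chunks by embedding similarity before placing them in the prompt's context window. A toy sketch of that retrieval step in plain Python, with made-up two-dimensional "embeddings" standing in for a real vector database like FAISS, pgvector, or Milvus:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the k chunk texts most similar to the query embedding.

    `store` is a list of (embedding, text) pairs. A real RAG system
    would use an approximate-nearest-neighbor index instead of this
    linear scan, but the ranking idea is the same.
    """
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# Illustrative store with hand-picked vectors, not real embeddings.
store = [
    ([1.0, 0.0], "returns policy"),
    ([0.9, 0.1], "refund window"),
    ([0.0, 1.0], "store hours"),
]
assert retrieve([1.0, 0.05], store, k=2) == ["returns policy", "refund window"]
```

The retrieved chunks would then be concatenated into the prompt ahead of the user's question; few-shot examples are assembled into the context the same way.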

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote


About The Job
The Red Hat India Services team is looking for a Consultant to join us in Chennai, India. In this role, you will help us ensure that our engagements are not just a technology implementation, but an organizational transformation. As a consultant, you will work with our lead architect in our engagements, co-creating innovative software solutions using emerging open source technology and modern software design methods in an agile environment. You’ll be coached by the team to facilitate the design and technical delivery of our solutions. As you do so, you’ll create enthusiasm for building great software using principles of open source and agile culture. You'll support everything from scoping to delivering the engagements. Successful applicants must reside in a city where Red Hat has mentioned the location.

What Will You Do
Installation and configuration of JBoss EAP web servers: understanding EAP, installing EAP, patching, and upgrades.
Configuration of standalone mode, domain mode, and high availability.
Standalone directory structure and the management interface.
Assigning the domain controller and host controllers.
Understanding the CLI tool, DMR syntax, and the datasource subsystem.
DB connection pools and deploying JDBC drivers.
Understanding the logger hierarchy (logging subsystem) and the messaging subsystem.
Default log file handlers, the syslog handler, and the periodic rotating file handler.
Securing EAP (security domains), LDAP security realms, and JMS queues.
JVM configuration in both standalone and domain mode.
JBoss directory structure and an overview of the web subsystem.
Infinispan and mod_cluster subsystems.
Deployments of web applications.
Knowledge of how to deploy source code into running, scalable containers and virtual machines in an automated fashion at enterprise scale.
Successful, collaborative delivery of customer requirements using Red Hat OpenShift.
Knowledge of how a customer use case can be developed into a project plan and how those requirements align with Red Hat’s technologies.
Understanding of how Red Hat’s technologies can transform software delivery (DevOps/GitOps) practices at large organizations.

What Will You Bring
At least 5 years of experience with JBoss EAP.
Understanding client requirements and preparing requirement and design documents based on client input.
Knowledge of the software development process.
Controlling migrations of programs, database changes, reference data changes, and menu changes through the development life cycle.
Sound knowledge of virtualization and OS environment (RHEL and Windows) dependencies for JBoss EAP/web servers.
Experience with or knowledge of Red Hat’s technologies, like Red Hat OpenShift Container Platform.
Experience working with community technologies like Argo CD, Tekton Pipelines, Helm, or Jenkins.

About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone.
When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Surat, Gujarat, India

Remote


Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time

About The Role
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.

Key Responsibilities
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.

Required Skills & Qualifications
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.

Experience
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.

Nice To Have
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).

Perks & Benefits
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.

Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.

You can directly contact us: Nine three one six one two zero one three two

Skills: Amazon Web Services (AWS), Windows Azure, and Google Cloud Platform (GCP)
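The automation work described above (provisioning cloud resources, managing deployments) almost always wraps flaky remote calls in retry logic. A common pattern is exponential backoff with a cap; here is a small sketch that only computes the wait schedule, with values chosen for illustration rather than taken from any specific tool:

```python
def backoff_delays(base=1.0, factor=2.0, retries=5, cap=30.0):
    """Exponential backoff schedule: base * factor**attempt, capped.

    Real automation code would add random jitter and wrap an actual
    API call in the retry loop; this only shows the delay math.
    """
    return [min(base * factor ** n, cap) for n in range(retries)]

assert backoff_delays() == [1.0, 2.0, 4.0, 8.0, 16.0]
assert backoff_delays(retries=7)[-1] == 30.0  # later attempts hit the cap
```

The cap keeps worst-case waits bounded; jitter (not shown) prevents many clients from retrying in lockstep.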

Posted 2 weeks ago

Apply

3.0 - 5.0 years

6 - 11 Lacs

Pune

Work from Office


Responsibilities/Duties
Develop enterprise-level software solutions according to technical specifications.
Design and implement server-side applications using Node.js, Express, and MongoDB.
Optimize server-side code for maximum performance and scalability.
Identify and troubleshoot performance and security issues in the backend infrastructure.
Implement data storage solutions using MongoDB and ensure proper database integration.
Analyse the work and create tasks and sub-tasks with time estimations.
Write unit tests to meet code coverage targets.
Create, plan, and manage tasks in JIRA.
Plan, manage, and control code quality, and execute code reviews.

Criteria for the Role
Experience developing/architecting backend applications with Node.js and MongoDB.
3 to 5+ years of work experience.
Ability to effectively manage time and prioritize work.
Proficiency in GitOps.
Knowledge of building and securing REST APIs is compulsory.
Knowledge of GraphQL and Redis is a plus.
Knowledge of web servers like Apache and NGINX is a plus.
Knowledge of CI/CD configuration is a plus.
Strong oral and written communication skills, including technical documentation.

Competency
Should be a university graduate.
Good communication skills.
Problem-solving and analytical skills.
Excellent at clear and concise written and verbal communication.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We help the world run better

At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. SAP started in 1972 as a team of five colleagues with a desire to do something new. Together, they changed enterprise software and reinvented how business was done. Today, we remain true to our roots. That’s why we engineer solutions to fuel innovation, foster equality and spread opportunity for our employees and customers across borders and cultures. SAP values the entrepreneurial spirit, fostering creativity and building lasting relationships with our employees. We believe that together we can transform industries, grow economies, lift up societies and sustain our environment.

Experience (Role Requirements)
What you bring: Because we work on the cutting edge, we need someone who is eager to learn, a creative problem solver, resourceful in getting things to work, and productive working independently or collaboratively.
2+ years of experience.
Ability to work well in a team as well as independently, with a positive, self-motivated, can-do attitude.
Source control systems such as Git.
Experience in Linux environments, troubleshooting, etc.
Experience with containers and orchestration platforms (Docker, Kubernetes, or others).

In your average week, you will:
Act as first-level contact in case of issues, judging the issues and involving the right teams.
Monitoring: provide input for existing monitoring approaches and define improvements.
Optimize the build pipeline.
Work closely with our engineering and operations team to build, integrate, test and deploy your amazing innovations and algorithms into our production systems.
Assess new technology projects and tools, or migrate our environment to new versions to keep up with the rapid pace of change.
Contribute and gain experience in operating enterprise-scale cloud applications with a DevOps culture at heart.
Actively drive ITIL best practices to achieve operational excellence and efficiency and maximize uptime.
Actively drive customer escalations and 24x7 operational topics, with expert live-site handling, providing on-call support during off-business hours.
Check out new technologies and introduce them to others to promote innovation and create business value.
Continuously gain more insights into the product and share the learning with the team.
Learn new technologies and tools and infuse the same in the team to enhance operations.

The required skills and competencies:
Cloud-native development technologies and typically used software and architecture on cloud platforms like AWS, Cloud Foundry, or Azure.
GitOps way of infrastructure automation, with experience on Gardener, Kyma runtime, Grafana and Argo CD.
Tools: Kibana, Docker, Cloud Foundry, Redis, build pipelines, Travis or Jenkins, K8s, Prometheus, PagerDuty.
Further skills: familiar with Unix, scripting (e.g. Bash, Perl, Python), networking & security.

Meet Your Team
The Cloud Lifecycle Management - Application Management (CLM AM) team is providing central tools and architectures for provisioning and operating various SAP Cloud solutions. One of our main tools is the Service Provider Cockpit (SPC), which is the de-facto standard suite for service operations in SAP’s major cloud units like S/4HANA, C/4HANA, HANA Enterprise Cloud, HCM and others. In addition to managing SAP’s inhouse IaaS platforms, this tool is also used to orchestrate workloads on all major hyperscalers (Azure, AWS, GCP, Ali Cloud). The team drives the design, implementation and “productization” of the key lifecycle management services required to drive operations excellence for SAP’s cloud delivery. We have highly talented and motivated team members from across the globe like Ecuador, Mexico, Cambodia, China, India, Pakistan, Iran, Kosovo, Albania, Russia and Germany.

Success is what you make it. At SAP, we help you make it your own. A career at SAP can open many doors for you. If you’re searching for a company that’s dedicated to your ideas and individual growth, recognizes you for your unique contributions, fills you with a strong sense of purpose, and provides a fun, flexible and inclusive work environment – apply now.

Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development.
Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion
SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com

For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.

EOE AA M/F/Vet/Disability
Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.
Requisition ID: 427776 | Work Area: Software-Development Operations | Expected Travel: 0 - 10% | Career Status: Graduate | Employment Type: Regular Full Time | Additional Locations: .

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


About Tide At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. About The Team The cloud engineering team at Tide are responsible for managing all our Cloud Infrastructure. This is mainly in AWS, but we also have smaller workloads in both Azure and GCP. About The Role We are looking for a highly experienced Principal Cloud Engineer to join our fully remote team. The ideal candidate will have a deep understanding of cloud computing platforms and technologies, with specific experience with Kubernetes, AWS, Argo CD, APIs, containers, cloud security, Agile ways of working, and hands-on coding. They will also be able to lead and mentor other engineers, and help to architect and implement cloud-based solutions. 
As a Principal Cloud Engineer You’ll
Design, build, and maintain cloud-based solutions.
Lead and mentor other engineers.
Stay up-to-date on the latest cloud computing technologies.
Troubleshoot and resolve cloud-based and networking issues.
Work with other teams to ensure that cloud-based solutions meet the needs of the business, with strong collaboration skills.

What We Are Looking For
Degree in Computer Science or a related field.
10+ years of experience in distributed computing.
Experience with a variety of cloud computing platforms and technologies, including AWS, Kubernetes, Terraform and GitHub.
Experience with Argo CD, APIs, containers, cloud security, Agile ways of working, and hands-on coding in Python, Java or Go.
GitOps as a deployment methodology.
Strong problem-solving and analytical skills.
Excellent written and verbal communication skills.
Ability to work independently and as part of a team.
Experience in pair coding.
Ability to automate: Infrastructure as Code.

OUR TECH STACK
Tide’s Cloud environment is 100% containerised using AWS EKS. All platform infrastructure is managed via IaC using Terraform and Terragrunt. Deployments are done via Argo CD using a GitOps approach, with assistance from Helm and Crossplane to manage any custom infrastructure like DBs or S3 buckets required by each container. All our source code is hosted in GitHub, using GitHub Actions as our CI/CD provider.

What You’ll Get In Return
Self & family health insurance
Term & life insurance
OPD benefits
Mental wellbeing through Plumm
Learning & development budget
WFH setup allowance
15 days of privilege leaves
12 days of casual leaves
12 days of sick leaves
3 paid days off for volunteering or L&D activities
Stock options

TIDEAN WAYS OF WORKING
At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams.
While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community.

TIDE IS A PLACE FOR EVERYONE
At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members’ diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone’s voice is heard.

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
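The GitOps deployment approach this role centers on (Argo CD keeping the cluster in sync with Git) boils down to a reconcile loop: compare desired state from Git with actual cluster state and compute the difference. A toy model of that loop, with app names and versions standing in for real manifests (all names illustrative):

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions a GitOps controller would take so the cluster
    (`actual`) matches Git (`desired`).

    Keys are app names, values are deployed versions. This is a toy
    model of what a controller like Argo CD runs continuously.
    """
    actions = {}
    for app, version in desired.items():
        if actual.get(app) != version:
            actions[app] = f"deploy {version}"   # missing or out of date
    for app in actual:
        if app not in desired:
            actions[app] = "prune"  # removed from Git => removed from cluster
    return actions

desired = {"api": "v2", "web": "v1"}   # what Git says should run
actual = {"api": "v1", "legacy": "v9"}  # what the cluster reports
assert reconcile(desired, actual) == {
    "api": "deploy v2", "web": "deploy v1", "legacy": "prune",
}
```

The key property is that the loop is idempotent: once actual matches desired, reconcile returns no actions, so it is safe to run on every sync interval.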

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We Are:
At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are:
You are a forward-thinking Cloud DevOps Engineer with a passion for modernizing infrastructure and enhancing the capabilities of CI/CD pipelines, containerization strategies, and hybrid cloud deployments. You thrive in environments where you can leverage your expertise in cloud infrastructure, distributed processing workloads, and AI-driven automation. Your collaborative spirit drives you to work closely with development, data, and GenAI teams to build resilient, scalable, and intelligent DevOps solutions. You are adept at integrating cutting-edge technologies and best practices to enhance both traditional and AI-driven workloads. Your proactive approach and problem-solving skills make you an invaluable asset to any team.

What You’ll Be Doing:
Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments.
Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting.
Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management.
Implementing progressive delivery strategies (Canary, Blue-Green, Feature Flags).
Containerizing applications using Docker & Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters).
Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul).
Implementing AI-enhanced observability for containerized services using AIOps-based monitoring.
Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK.
Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray.
Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infra recommendations.

The Impact You Will Have:
Enhancing the efficiency and reliability of CI/CD pipelines and deployments.
Driving the adoption of AI-driven automation to reduce downtime and improve system resilience.
Enabling seamless application portability across on-prem and cloud environments.
Implementing advanced observability solutions to proactively detect and resolve issues.
Optimizing resource allocation and job scheduling for distributed processing workloads.
Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads.

What You’ll Need:
5+ years of experience in DevOps, Cloud Engineering, or SRE.
Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.).
Strong experience with Kubernetes, container orchestration, and service meshes.
Proficiency in Terraform, CloudFormation, Pulumi, or Infrastructure as Code (IaC) tools.
Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem).
Strong scripting skills in Python, Bash, or Go.
Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar).

Who You Are:
You are a collaborative and innovative professional with a strong technical background and a passion for continuous learning. You excel in problem-solving and thrive in dynamic environments where you can apply your expertise to drive significant improvements. Your excellent communication skills enable you to work effectively with diverse teams, and your commitment to excellence ensures that you consistently deliver high-quality results.

The Team You’ll Be A Part Of:
You will join a dynamic team focused on optimizing cloud infrastructure and enhancing workloads to contribute to overall operational efficiency.
This team is dedicated to driving the modernization and optimization of Infrastructure CI/CD pipelines and hybrid cloud deployments, ensuring that Synopsys remains at the forefront of technological innovation.

Rewards and Benefits:
We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.
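One of the progressive delivery strategies this role lists, the canary release, shifts traffic to a new version in steps and rolls back if error rates rise. A simplified sketch of the promotion decision, with step sizes and thresholds chosen for illustration rather than taken from any specific service mesh or rollout controller:

```python
def next_canary_weight(current, error_rate, threshold=0.01, step=20):
    """Decide the canary's next traffic percentage.

    While the observed error rate stays under the threshold, traffic
    advances by `step` percent; any breach drops the canary to 0,
    i.e. all traffic returns to the stable version (rollback).
    """
    if error_rate > threshold:
        return 0                     # roll back: all traffic to stable
    return min(current + step, 100)  # promote gradually to full traffic

# A healthy canary walks from 0% to 100% over five evaluation windows.
weight = 0
for observed_errors in [0.001, 0.002, 0.0, 0.005, 0.0]:
    weight = next_canary_weight(weight, observed_errors)
assert weight == 100

# An error spike at any stage sends traffic back to stable immediately.
assert next_canary_weight(60, 0.05) == 0
```

Real controllers add hold periods and multiple metrics per step, but the advance-or-abort decision shown here is the core of the strategy.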

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Chennai

Work from Office


What you'll be doing
You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies.

Leveraging monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Design and configure alerts to ensure timely responses to critical events.
Working with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
Implementing security best practices for cloud and containerized services and ensuring adherence to security protocols. Configure IAM roles, VPC security, encryption, and compliance policies.
Continuously optimizing cloud infrastructure for performance, scalability, and cost-effectiveness. Use tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
Working closely with the engineering and operations teams to design and implement cloud-based solutions.
Maintaining detailed documentation of cloud architecture and platform configurations, and regularly providing status reports and performance metrics.

What we're looking for...
You'll need to have:
Bachelor's degree or one or more years of work experience.
- Years of experience in Kubernetes administration
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI
- GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory
- Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards
- Backend development experience with languages including Golang (preferred), Spring Boot, and Python
- Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices
- Familiarity with cloud cost optimization (e.g., Kubecost)
- Strong experience with infrastructure components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.)
- Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with Service Mesh lifecycle management, and with configuring and troubleshooting applications deployed on Service Mesh and Service Mesh related issues
- Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA Gatekeeper policies
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc.
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.)
- Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting
- Certified Kubernetes Administrator (CKA)
- Demonstrated very strong troubleshooting and problem-solving skills
- Excellent verbal and written communication skills

Even better if you have one or more of the following:
- Certified Kubernetes Application Developer (CKAD)
- Red Hat Certified OpenShift Administrator
- Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Database experience (RDBMS, NoSQL, etc.)
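The requirements above include inspecting cluster resources and determining cluster health. As a minimal illustrative sketch (not part of the listing), node readiness can be summarized from `kubectl get nodes -o json` output; the sample payload below is invented for demonstration:

```python
import json

def node_health(nodes_json: str) -> dict:
    """Summarize node readiness from `kubectl get nodes -o json` output."""
    items = json.loads(nodes_json)["items"]
    ready, not_ready = [], []
    for node in items:
        name = node["metadata"]["name"]
        # Each node reports a list of conditions; "Ready" == "True" means schedulable.
        conditions = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        (ready if conditions.get("Ready") == "True" else not_ready).append(name)
    return {"ready": ready, "not_ready": not_ready}

# Invented sample payload standing in for real kubectl output.
sample = json.dumps({"items": [
    {"metadata": {"name": "worker-1"},
     "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "worker-2"},
     "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
]})
print(node_health(sample))  # → {'ready': ['worker-1'], 'not_ready': ['worker-2']}
```

In practice this kind of check is usually done with `kubectl` directly or the official Kubernetes client library; the sketch only shows the shape of the data being inspected.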

Posted 2 weeks ago

Apply

5.0 - 6.0 years

15 - 16 Lacs

Chennai

Work from Office


Job Description: We are looking for a highly skilled DevOps Engineer with strong experience in Red Hat OpenShift Container Platform (v4.x) and related DevOps tools like Argo CD, Jenkins, and Red Hat Data Grid. The ideal candidate will be responsible for automation, managing containerized environments, and ensuring robust CI/CD pipelines across hybrid cloud infrastructure supporting our fintech solutions.

Key Responsibilities:
- OpenShift Platform Engineering: Deploy, manage, and maintain apps on OpenShift v4.x. Manage Operators, Helm charts, and OpenShift GitOps (Argo CD). Handle Red Hat Data Grid deployments. Perform OCP upgrades, patching, and troubleshooting.
- CI/CD & Automation: Implement CI/CD pipelines using Jenkins, Argo CD, GitHub Actions. Ensure seamless code integration and automated deployment.
- Infrastructure as Code (IaC): Automate infrastructure using Terraform, Ansible, CloudFormation. Manage infrastructure on AWS, Azure, or GCP.
- Monitoring & Optimization: Set up observability stacks (Prometheus, Grafana, ELK, Splunk). Troubleshoot and optimize system performance.
- Security & Collaboration: Apply DevSecOps best practices and ensure compliance. Collaborate with development and DevOps teams for solution implementation.

Desired Candidate Profile:
- Technical Skills: Red Hat OpenShift (v4.x) administration & operations. CI/CD tools: Jenkins, Argo CD, GitHub Actions, GitLab CI/CD. Kubernetes, Docker, Helm, GitOps. Red Hat Data Grid or other in-memory data grids. IaC tools: Terraform, Ansible, CloudFormation. Monitoring tools: Prometheus, Grafana, ELK, Splunk. Scripting: Bash, Python, or Shell.
- Soft Skills: Excellent analytical and problem-solving skills. Strong communication and collaboration abilities. Ability to work independently and with customer DevOps teams.
- Education: BE / B.Tech / MCA or equivalent in Computer Science or related fields.

Work Location: Chennai
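In a GitOps setup like the Argo CD pipelines described above, a release is typically promoted by committing a manifest change to Git rather than pushing to the cluster directly. A minimal sketch of the manifest-edit step (the registry name and manifest are hypothetical):

```python
import re

def bump_image_tag(manifest: str, image: str, new_tag: str) -> str:
    """Rewrite `image: <repo>:<tag>` lines for the given repo to a new tag.

    This is the edit a CI job would commit back to the GitOps repo, after
    which Argo CD (or Flux) reconciles the cluster to the new version.
    """
    pattern = re.compile(rf"(image:\s*{re.escape(image)}):[\w.\-]+")
    return pattern.sub(rf"\1:{new_tag}", manifest)

# Hypothetical deployment fragment for illustration.
manifest = """
containers:
  - name: payments
    image: registry.example.com/payments:v1.4.2
"""
updated = bump_image_tag(manifest, "registry.example.com/payments", "v1.5.0")
print(updated)
```

Real pipelines usually delegate this step to tools such as Kustomize or Argo CD Image Updater; the sketch only shows the commit-driven promotion idea.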

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


**This description is available in English only** What is Equisoft? Equisoft is a global provider of digital solutions for insurance and investment, recognized by over 250 of the world's leading financial institutions. We offer a comprehensive ecosystem of scalable solutions that help our customers meet all the challenges brought about by this era of digital transformation, thanks to our business needs-driven approach, in-depth industry knowledge, cutting-edge technologies, and multicultural team of experts based in North America, the Caribbean, Latin America, Europe, Africa, Asia and Australia. Why Choose Equisoft? With 950+ employees, we are a stable organization that offers career advancement and fosters a stimulating environment. If that’s not enough, then check out these other perks below: Hiring Location: India You are welcome to work remotely Full-time Permanent Role Benefits available from day 1: medical, dental, term life/personal accident coverage, wellness sessions, telemedicine program, etc. Flexible hours Educational support (LinkedIn Learning, LOMA courses and Equisoft University) Role: The Senior Network Administrator reports to the Manager, Enterprise IT & Resiliency and works closely with four other infrastructure/networking specialists. The incumbent will be responsible for designing, implementing, and maintaining our hybrid network infrastructure while ensuring optimal security, performance, and reliability across on-premises and cloud environments.
Your Day with Equisoft: Networking Design, implement, and maintain enterprise-wide network infrastructure including LAN, WAN, and wireless networks Manage Cisco networking equipment, including switches, routers, and wireless controllers Lead the administration of Palo Alto Networks firewalls, including security policies, VPNs, and threat prevention Implement and maintain SD-WAN solutions across multiple sites Cloud Infrastructure Design and implement hybrid cloud networking solutions using Azure, AWS and OCI Manage virtual network environments in Azure, including VNet peering, ExpressRoute, load balancers and Firewall Configure and maintain networking components, including VPCs, Transit Gateways, VPN and Direct Connect Implement cloud security best practices and maintain compliance requirements Security and Operations Perform regular security assessments and implement remediation strategies Manage network monitoring tools across cloud and on-premises infrastructure Develop and maintain disaster recovery and business continuity plans Create and maintain comprehensive network documentation Lead infrastructure modernization initiatives Mentor junior team members on networking and cloud technologies Projects and Initiatives Assist in corporate initiatives related to infrastructure, including vendor management, service provider quotes and design; coordinate initiative deliverables and deployment, including travel as needed.
Participate in solution design, vendor selection and POCs Requirements: Technical Bachelor’s degree in Computer Engineering or Information Technology, or a college diploma combined with 3 years of relevant experience Minimum of 8 years’ experience in enterprise network administration Strong knowledge and experience with Cisco, Palo Alto firewalls and cloud (Azure) networking Experience in Layer 2/3 protocols, services and advanced routing protocols (OSPF, BGP, EIGRP) Experience in VPN & WAN technologies Experience with load balancing (F5/Nginx) methods and traffic management Experience in Infrastructure as Code (Terraform, CloudFormation) Experience in scripting development (Python, PowerShell, Perl or Bash) Knowledge of network deployment with CI/CD pipelines Experience in Windows and Linux server administration Being available outside of normal working hours when necessary Being available to travel between company locations (10%) Excellent knowledge of English (spoken and written) Soft Skills Strong sense of organization and prioritization Analytical and problem-solving skills Ability to lead technical teams Excellent project management capabilities Ability to communicate, write and synthesize information Ability to multi-task in a rapid-paced environment Team spirit, tact, diplomacy, autonomy, rigor, and discipline Nice to Haves: Familiar with AWS networking, Kubernetes and GitOps workflows CCNA, CCNP or equivalent Palo Alto Networks Certified Network Security Engineer (PCNSE) Azure Administrator/Network Engineer Associate AWS Certified Advanced Networking Equisoft is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
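Several of the cloud networking duties above (VNet peering, VPCs, Transit Gateways) begin with carving an address space into subnets. A small sketch using Python's standard-library `ipaddress` module; the CIDR and subnet names are illustrative, not from the posting:

```python
import ipaddress

def plan_subnets(vnet_cidr: str, new_prefix: int, names: list[str]) -> dict:
    """Carve equally sized subnets out of a VNet/VPC address space.

    Returns a mapping of subnet name -> CIDR, taking subnets in order
    from the start of the parent range.
    """
    vnet = ipaddress.ip_network(vnet_cidr)
    subnets = vnet.subnets(new_prefix=new_prefix)  # generator of child networks
    return {name: str(next(subnets)) for name in names}

# Hypothetical /16 address space split into /24 tiers.
layout = plan_subnets("10.20.0.0/16", 24, ["gateway", "app", "data"])
print(layout)  # → {'gateway': '10.20.0.0/24', 'app': '10.20.1.0/24', 'data': '10.20.2.0/24'}
```

The same arithmetic is what Terraform's `cidrsubnet()` function performs when these layouts are expressed as Infrastructure as Code.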
We thank you for your interest in our company and we guarantee that all submitted applications will be considered. Only those whose applications are selected will be contacted for interview purposes. By submitting your application, you consent to Equisoft collecting, using & storing your personal data in order to apply for a job and for Equisoft to analyze your application. Due to the nature of its products and services, Equisoft will perform thorough background checks prior to confirming one’s employment.

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Job Description: This is an engineering position that will involve working on Oracle Communications’ market-leading Session Border Controller (SBC) product. The SBC is deployed across multiple top-tier operators and numerous large and medium enterprises. This position is to specifically support an initiative to develop a modernized cloud-native SBC product that embraces DevOps automation following an Agile development methodology. Growth opportunities: Become an expert in products that deliver scale and quality of automation Advance to expert developer with exposure to a variety of programming languages Develop expertise in a wide variety of product stacks and cloud solutions Exposure to a variety of OS, engineering systems and infrastructure platforms Preferred Qualifications: Minimum 15 years of hands-on operational, development, DevOps or SRE experience Experience in a technical leadership role with a history of embracing automated processes, cloud-native application design principles and a CI/CD DevOps model. Experience with production operations and best practices for deploying quality code in production and troubleshooting issues when they arise. Experience with operational support of containerized, microservice-based application(s) in a production-level Kubernetes environment for a highly available product or service offering. Experience deploying, configuring, managing and debugging cloud infrastructure and platform software such as OpenStack, Kubernetes, etc. Experience with commercial Kubernetes on-prem products (such as OpenShift, Tanzu, Rancher) or public cloud managed Kubernetes (such as OCI/OKE, AWS/EKS, GCP/GKE, Azure/AKS). Experience with cloud-native administration and monitoring technologies such as Docker, Helm, Prometheus, Grafana, EFK/ELK, Jaeger, or similar technologies. Knowledge of Infrastructure as Code (IaC), Configuration as Code (CaC), GitOps and tools such as Terraform, Argo CD, Flux, etc.
Experience designing and implementing CI/CD pipelines, platforms and components such as Jenkins. Experience and working knowledge in scripting languages like Python, Perl, and/or shell scripting. Knowledge of orchestration tools like Ansible and Chef. Knowledge of version control using Git. Knowledge and understanding of REST architecture and JSON is a plus. Experience with application frameworks such as Spring, Helidon, Micronaut, etc. is a plus. Experience developing or designing telecommunications software is a plus. Experience working in an Agile/Scrum development process is a plus. Experience in Linux/Unix environments Strong troubleshooting capabilities targeting complicated problems in remote systems Strong communication skills required. Strong writing skills required. Ability to multi-task and handle changing priorities. Excellent team skills, can-do attitude, focus on quality. BS or MS in Computer Science, Computer Engineering, or equivalent Responsibilities Develop and support SRE framework and automation Develop metric collection of failure events and analytics Analyze failure events; identify and dissect failures by infrastructure layer, by service stack, and by application components and their inter-relationships Provide recommendations to improve product development Provide support for components going onto cloud infrastructure Provide support on other Dev Test and System Test infrastructure Provide best practice on frameworks, automation, methodologies Be a team player and encourage cross-learning and cross-functional support Qualifications Career Level - IC5 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute.
That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
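Among the SRE responsibilities above is developing metric collection of failure events and analytics. One of the most basic such calculations is availability over a window, derived from recorded outage intervals; the outage data below is invented for illustration:

```python
from datetime import datetime, timedelta

def availability(events, window: timedelta) -> float:
    """Availability = 1 - (total outage time / measurement window).

    `events` is a list of (start, end) datetime pairs for outages
    inside the window; overlapping outages are assumed already merged.
    """
    downtime = sum((end - start for start, end in events), timedelta())
    return 1.0 - downtime / window

# Invented sample: two outages in a 30-day month.
outages = [
    (datetime(2024, 5, 1, 3, 0), datetime(2024, 5, 1, 3, 30)),        # 30 min
    (datetime(2024, 5, 12, 9, 0), datetime(2024, 5, 12, 9, 14, 24)),  # 14 min 24 s
]
month = timedelta(days=30)
print(f"{availability(outages, month):.4%}")
```

The complement of this figure is the consumed error budget, which is typically what alerting and release-gating decisions are based on.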

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


About The Chatterjee Group The Chatterjee Group (TCG) has an enviable track record as a strategic investor, with businesses in many sectors. Founded by Dr. Purnendu Chatterjee in 1989, the Group specializes in the Petrochemicals, Pharmaceuticals, Biotech, Financial Services, Real Estate and Technology sectors in the US, Europe and South Asia. It provides end-to-end product and service capabilities through its investments and companies in these sectors. TCG is one of the biggest PE groups in the country with significant brand presence abroad and in India. About The Company – First Livingspaces First Livingspaces is a fully owned subsidiary of the TCG group. First Livingspaces is an AI-first company which intends to create an ecosystem for daily living needs and necessities. We want to simplify the life of an individual who intends to rent, buy or co-live, and provide all amenities for happy, seamless living. We are building a large universe, powered by technology at the core. About The Role In this senior role, you will lead the development and implementation of MLOps strategies for our AI/ML systems within India's first real estate ecosystem. You will manage the entire ML model lifecycle, ensuring scalability, reliability, and performance from development to production. You will collaborate with data science and engineering teams, define and implement MLOps best practices, and take ownership of the ML infrastructure. Your technical expertise will be crucial in building robust MLOps systems that deliver reliable results at scale. Responsibilities Architect and implement CI/CD pipelines for ML models using PyTorch or TensorFlow. Design and build infrastructure for model versioning, deployment, monitoring, and scaling. Implement and maintain feature stores (Feast, Tecton) and experiment tracking (MLflow, Weights & Biases). Enable continuous training and monitoring with tools like SageMaker Model Monitor, Evidently, etc.
Develop and deploy ML model registries and automated model serving solutions. Collaborate with data science and engineering teams to transition from experimentation to production. Establish infrastructure-as-code practices using Terraform, Pulumi, or CloudFormation. Technical Requirements 5+ years of experience in MLOps, DevOps, or related fields, with at least 2 years in a senior role. MLOps tools: MLflow, Kubeflow. Mastery of CI/CD tools (GitHub Actions, Jenkins, ArgoCD, Tekton). Expertise with infrastructure-as-code (Terraform, Pulumi, CloudFormation) and GitOps practices. Knowledge of containerization (Docker) and orchestration (Kubernetes, EKS, GKE, AKS). Proficiency in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face). Experience with ML deployment frameworks (TensorFlow Serving, TorchServe, KServe, BentoML, NVIDIA Triton). Why Join Us? Work on impactful projects leveraging cutting-edge AI technologies. Your work here won’t sit in a sandbox — see your models deployed at scale, making a tangible difference in the real world. Competitive compensation, perks, and a flexible, inclusive work environment. Inclusive and Equal Workplace First Livingspaces is dedicated to creating an inclusive workplace that promotes equality and fairness. We celebrate diversity and ensure that no one is discriminated against based on gender, caste, race, religion, sexual orientation, protected veteran status, disability, age, or any other characteristic, fostering an environment where everyone can thrive.
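The role above centers on model registries: versioning trained models and promoting the best-performing version to serving. A toy in-memory sketch of the concept (real systems would use MLflow's model registry or similar; the model name and metrics are invented):

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Minimal in-memory model registry: auto-versioned entries per model name."""
    _models: dict = field(default_factory=dict)

    def register(self, name: str, metrics: dict) -> int:
        """Record a new version of `name` with its evaluation metrics."""
        versions = self._models.setdefault(name, [])
        versions.append({"version": len(versions) + 1, "metrics": metrics})
        return len(versions)

    def best(self, name: str, metric: str) -> dict:
        """Pick the registered version with the highest value of `metric`."""
        return max(self._models[name], key=lambda v: v["metrics"][metric])

registry = ModelRegistry()
registry.register("price-model", {"auc": 0.81})
registry.register("price-model", {"auc": 0.86})
print(registry.best("price-model", "auc"))  # → {'version': 2, 'metrics': {'auc': 0.86}}
```

Production registries add stages (staging/production), artifact storage, and lineage on top of this core version-and-select idea.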

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site


Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better. With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds and we collaborate closely to provide transformative software products, innovative thinking and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting-edge of engineering, infrastructure and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers. Job Description Job Summary: The Platform DevOps Engineer supports the build, deployment, and operation of our Azure-based platform infrastructure. You will implement infrastructure as code, CI/CD automation, and container orchestration practices to ensure reliable and scalable platform capabilities that can be consumed by application and product teams. Key Responsibilities Implement and maintain platform automation pipelines using CI/CD tools (e.g., GitHub Actions, Azure DevOps). Deploy and manage Kubernetes-based environments (AKS) and GitOps tools (e.g., Flux). Collaborate with Cloud Architects, SREs, and development teams to ensure consistent platform delivery and environments. Write Infrastructure-as-Code using Terraform or Bicep for reusable cloud resources. 
Manage secrets, configuration, and secure access via Azure Key Vault and RBAC. Troubleshoot platform-related issues and support platform service delivery processes. Contribute to operational documentation, runbooks, and automation playbooks. Skills And Qualifications 3–5 years in a DevOps, Cloud Engineer, or Platform Engineering role. Hands-on experience with Azure cloud infrastructure and platform services. Experience deploying and supporting Kubernetes clusters and GitOps workflows. Familiar with CI/CD automation, including branching strategies and testing integration. Practical experience with IaC (Terraform, Bicep) and scripting languages (PowerShell, Bash). Knowledge of containerization (Docker), monitoring, and logging best practices. Preferred Certifications Microsoft Certified: Azure Administrator or DevOps Engineer Associate. Kubernetes (CKA or CKAD) certification preferred. RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, credit check. Certain positions dealing with sensitive and/or third party personal data may involve additional background check criteria. RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better. With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds and we collaborate closely to provide transformative software products, innovative thinking and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting-edge of engineering, infrastructure and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers. Job Description Job Summary: The Platform DevOps Engineer supports the build, deployment, and operation of our Azure-based platform infrastructure. You will implement infrastructure as code, CI/CD automation, and container orchestration practices to ensure reliable and scalable platform capabilities that can be consumed by application and product teams. Key Responsibilities Implement and maintain platform automation pipelines using CI/CD tools (e.g., GitHub Actions, Azure DevOps). Deploy and manage Kubernetes-based environments (AKS) and GitOps tools (e.g., Flux). Collaborate with Cloud Architects, SREs, and development teams to ensure consistent platform delivery and environments. Write Infrastructure-as-Code using Terraform or Bicep for reusable cloud resources. 
Manage secrets, configuration, and secure access via Azure Key Vault and RBAC. Troubleshoot platform-related issues and support platform service delivery processes. Contribute to operational documentation, runbooks, and automation playbooks. Skills And Qualifications 3–5 years in a DevOps, Cloud Engineer, or Platform Engineering role. Hands-on experience with Azure cloud infrastructure and platform services. Experience deploying and supporting Kubernetes clusters and GitOps workflows. Familiar with CI/CD automation, including branching strategies and testing integration. Practical experience with IaC (Terraform, Bicep) and scripting languages (PowerShell, Bash). Knowledge of containerization (Docker), monitoring, and logging best practices. Preferred Certifications Microsoft Certified: Azure Administrator or DevOps Engineer Associate. Kubernetes (CKA or CKAD) certification preferred. RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, credit check. Certain positions dealing with sensitive and/or third party personal data may involve additional background check criteria. RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Fatepura, Gujarat, India

On-site


Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better. With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds and we collaborate closely to provide transformative software products, innovative thinking and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting-edge of engineering, infrastructure and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers. Job Description Job Summary: The Platform DevOps Engineer supports the build, deployment, and operation of our Azure-based platform infrastructure. You will implement infrastructure as code, CI/CD automation, and container orchestration practices to ensure reliable and scalable platform capabilities that can be consumed by application and product teams. Key Responsibilities Implement and maintain platform automation pipelines using CI/CD tools (e.g., GitHub Actions, Azure DevOps). Deploy and manage Kubernetes-based environments (AKS) and GitOps tools (e.g., Flux). Collaborate with Cloud Architects, SREs, and development teams to ensure consistent platform delivery and environments. Write Infrastructure-as-Code using Terraform or Bicep for reusable cloud resources. 
Manage secrets, configuration, and secure access via Azure Key Vault and RBAC. Troubleshoot platform-related issues and support platform service delivery processes. Contribute to operational documentation, runbooks, and automation playbooks. Skills And Qualifications 3–5 years in a DevOps, Cloud Engineer, or Platform Engineering role. Hands-on experience with Azure cloud infrastructure and platform services. Experience deploying and supporting Kubernetes clusters and GitOps workflows. Familiar with CI/CD automation, including branching strategies and testing integration. Practical experience with IaC (Terraform, Bicep) and scripting languages (PowerShell, Bash). Knowledge of containerization (Docker), monitoring, and logging best practices. Preferred Certifications Microsoft Certified: Azure Administrator or DevOps Engineer Associate. Kubernetes (CKA or CKAD) certification preferred. RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, credit check. Certain positions dealing with sensitive and/or third party personal data may involve additional background check criteria. RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Project Role: Service Delivery Operations Lead Project Role Description: Manage end-to-end operations for client deals and delivery capabilities for a complex offering. Own service quality, cost, and leadership of delivery teams. Contribute to solution development, growth and sales. Must have skills: Agile Project Management Good to have skills: NA Minimum 15 Year(s) of Experience Is Required Educational Qualification: 15 years full time education Engineering Productivity Lead & Developer Experience Location: [Insert Location or "Remote"] Job Summary: We are seeking a visionary and execution-focused leader to spearhead our engineering productivity, developer experience, and platform standardization initiatives across the organization. This leader will be responsible for defining and delivering a centralized Internal Developer Platform (IDP), driving tooling modernization, cloud platform standardization, and improving developer satisfaction at scale. The role will directly impact how our engineering teams build, deploy, and operate software efficiently across the enterprise. Key Responsibilities: Lead the strategy, design, and execution of a centralized Developer IDP to streamline onboarding, development, CI/CD, and observability. Drive tooling standardization and modernization, ensuring best-in-class development environments and reducing cognitive load for developers. Establish and enforce cloud and infrastructure standards that improve security, cost efficiency, and scalability while maintaining developer autonomy. Build strong partnerships with engineering, product, DevOps, security, and compliance teams to drive adoption and alignment of platform capabilities. Define KPIs and measure impact on engineering productivity, operational efficiency, and developer satisfaction. Foster a culture of continuous improvement, automation, and self-service across engineering.
Attract, mentor, and lead a team of platform engineers, developer advocates, and productivity experts. Qualifications: 15+ years of experience in engineering, including 3+ years in leadership roles, with a strong background in developer platforms, DevOps, or infrastructure engineering. Proven success in driving enterprise-wide platform adoption and engineering enablement. Deep understanding of modern software delivery practices (CI/CD, GitOps, IaC, microservices, cloud-native architecture). Experience with cloud platforms (AWS, Azure, GCP) and tooling ecosystems (e.g., Backstage, ArgoCD, Terraform, Kubernetes). Exceptional stakeholder management, communication, and change leadership skills. Preferred: Experience with implementing or scaling Port, Backstage, or similar developer portals. Demonstrated impact on developer satisfaction and productivity metrics (e.g., DORA, SPACE framework). Prior experience in highly regulated or large-scale enterprise environments. 15 years full time education
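The posting above cites DORA metrics for measuring engineering productivity. Two of the four DORA measures, deployment frequency and lead time for changes, can be computed from (commit time, deploy time) pairs; the sample data below is invented for illustration:

```python
from datetime import datetime
from statistics import mean

def dora_metrics(deployments):
    """Deployment frequency (per week) and mean lead time (hours)
    from a list of (commit_time, deploy_time) pairs."""
    lead_hours = [(d - c).total_seconds() / 3600 for c, d in deployments]
    # Measurement span: first commit to last deploy (at least one day).
    span_days = (max(d for _, d in deployments) -
                 min(c for c, _ in deployments)).days or 1
    per_week = len(deployments) / (span_days / 7)
    return {"deploys_per_week": round(per_week, 1),
            "mean_lead_time_h": round(mean(lead_hours), 1)}

# Invented week of deployments: lead times of 6 h, 24 h, and 12 h.
deploys = [
    (datetime(2024, 6, 3, 9), datetime(2024, 6, 3, 15)),
    (datetime(2024, 6, 5, 10), datetime(2024, 6, 6, 10)),
    (datetime(2024, 6, 10, 8), datetime(2024, 6, 10, 20)),
]
print(dora_metrics(deploys))  # → {'deploys_per_week': 3.0, 'mean_lead_time_h': 14.0}
```

The remaining two DORA measures (change failure rate and time to restore) need incident data as well, which is why platform teams typically pull these numbers from CI/CD and incident tooling rather than computing them by hand.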

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies