
954 GitOps Jobs - Page 16


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Overview
At SingleStore we're not just building a database company; we are defining the future of data management. As a Software Engineer on the infrastructure platform team, you will contribute to the internal applications that power engineering productivity at SingleStore. Your expertise in building distributed applications will drive technical decisions critical to both team and company success.

Role and Responsibilities
- Build observable, microservice-oriented applications that empower our engineers to build, test, and monitor our applications in a multi-cloud environment
- Design and implement novel technical solutions that raise engineering productivity throughout the company
- Relentlessly improve CI/CD throughput, debugging/triage operations, and workload execution runtimes for a global engineering workforce
- Work within a high-performing team to create the most productive development culture that the world has ever seen

Required Skills and Experience
- 3+ years of experience with one or more general-purpose programming languages, including but not limited to Python, C/C++, JavaScript, or Go
- Strong empathy for your fellow developers and a desire to build something great
- Passion for quality engineering and software testing efficiency
- Comprehension of GitOps and managing Infrastructure as Code (IaC)
- Experience with containers and Kubernetes-based applications running in the public cloud or in on-premises data centers (you'll have both available to you)
- Deep interest and experience working on distributed systems, databases, networking, storage, multi-tenant services, and Unix/Linux environments
- B.S. degree in Computer Science or a similar field, or equivalent working experience

Benefits
- Company-wide technology stipend for new employees
- Monthly cell phone and internet stipend
- Health and wellness benefit
- Company and team events
- Flexible time off
- Volunteer time off
- Stock options

As employees are located in many different countries around the world, some benefits may differ from country to country. In all cases, we do our best to provide equitable perks and benefits across our locations.

About SingleStore
SingleStore is one platform for all data, built so you can engage with insight in every moment. Trusted by industry leaders, SingleStore enables enterprises to adapt to change as it happens, embrace diverse data with ease, and accelerate the pace of innovation. SingleStore is venture-backed and headquartered in San Francisco with offices in Portland, Seattle, Boston, Hyderabad, London, Lisbon, and Kyiv. Defining the future starts with The Database of Now™.

Consistent with our commitment to diversity & inclusion, we value individuals with the ability to work on diverse teams and with a diverse range of people.

To all recruitment agencies: SingleStore does not accept agency resumes. Please do not forward resumes to SingleStore employees. SingleStore is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Adani Digital Labs is the digital arm of the Adani Group, focused on driving digital transformation across all Adani businesses. The lab leverages cutting-edge technologies such as AI, ML, data analytics, cloud computing, and automation to enhance customer experience, streamline operations, and enable innovation. With a strong emphasis on building scalable digital platforms and customer-centric solutions, Adani Digital Labs plays a pivotal role in shaping the group's digital strategy across sectors like energy, infrastructure, logistics, airports, and more.

Position: DevOps Engineer
Location: Ahmedabad (on-site at office)
Experience: 3 to 7 years of relevant experience

Purpose: We are looking for a highly skilled DevOps professional with 3 to 7 years of experience to work with us in Ahmedabad, Gujarat. The candidate will bring expertise in the GCP platform, containerization & orchestration, SDLC, operating systems, version control, languages, scripting, CI/CD, infrastructure as code, and databases. Experience with the Azure platform, in addition to GCP, is necessary.

Experience:
- 3-7 years of experience in DevOps.
- Proven experience in implementing DevOps best practices and driving automation.
- Demonstrated ability to manage and optimize cloud-based infrastructure.

Roles and Responsibilities: The DevOps professional will be responsible for:
- Implementing and managing the GCP platform, including Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
- Leading efforts in containerization and orchestration using Docker and Kubernetes.
- Optimizing and managing the Software Development Lifecycle (SDLC).
- Administering Linux and Windows Server environments proficiently.
- Managing version control using Git (Bitbucket) and GitOps (preferred).
- Automating and configuring tasks using YAML and Python.

Posted 4 weeks ago

Apply

0 years

3 - 4 Lacs

Bengaluru, Karnataka, India

On-site

Skillfyme is hiring a Full Stack Developer Intern who excels in backend development and has the potential to work on large-scale, production-grade applications. You'll be building and scaling platforms like LMS (Learning Management Systems) and other high-traffic web apps. This is not a basic internship; we're seeking someone with strong technical depth who's ready to work in a startup-like, high-impact environment and grow fast.

Eligibility Criteria
- Minimum CGPA: 8.8 and above
- Degree: B.Tech / M.Tech / MCA or equivalent
- Final-year students or recent graduates (2022/2023/2024/2025 batch)

Technical Skills Required
- Backend expertise: strong in Node.js, Python (FastAPI/Django/Flask), or any backend framework
- Frontend basics: React.js / Next.js or equivalent (bonus if strong in frontend too)
- Database: MySQL, MongoDB, or PostgreSQL
- Cloud: AWS / OCI (mandatory)
- DevOps: Docker, CI/CD, GitOps; basic Kubernetes knowledge preferred
- System design: should know how to build and scale large applications (e.g., LMS, e-commerce, SaaS platforms)

What We Offer
- Opportunity to convert into a full-time role
- Direct mentorship from industry experts
- Work on live product features, not dummy projects
- High ownership, startup energy, and real-world challenges
- Letter of Recommendation + Internship Certificate

Internship Duration: 3 to 6 months (extension or pre-placement offer based on performance)

Note: This is an unpaid internship.

Skills: devops, next.js, gitops, flask, mysql, backend development, system design, oci, ci/cd, django, mongodb, node.js, docker, react.js, kubernetes, aws, python, postgresql, fastapi

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: DevOps Engineer
Location: Ahmedabad (on-site at office)
Experience: 3 to 7 years of relevant experience

Purpose
We are looking for a highly skilled DevOps professional with 3 to 7 years of experience to work with us. The candidate will bring expertise in the GCP platform, containerization & orchestration, SDLC, operating systems, version control, languages, scripting, CI/CD, infrastructure as code, and databases. Experience with the Azure platform, in addition to GCP, will be highly valued.

Experience:
- 3-7 years of experience in DevOps.
- Proven experience in implementing DevOps best practices and driving automation.
- Demonstrated ability to manage and optimize cloud-based infrastructure.

Roles and Responsibilities: The DevOps professional will be responsible for:
- Implementing and managing the GCP platform, including Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
- Leading efforts in containerization and orchestration using Docker and Kubernetes.
- Optimizing and managing the Software Development Lifecycle (SDLC).
- Administering Linux and Windows Server environments proficiently.
- Managing version control using Git (Bitbucket) and GitOps (preferred).
- Automating and configuring tasks using YAML and Python.
- Developing and maintaining Bash and PowerShell scripts.
- Designing and developing CI/CD pipelines using Jenkins and optionally Cloud Build.
- Implementing infrastructure as code through Terraform to optimize resource management.
- Managing Cloud SQL and MySQL databases for reliable performance.

Education Qualification
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Master's degree in a relevant field (preferred).

Certifications Preferred
- Professional certifications in GCP, Kubernetes, Docker, and DevOps methodologies.
- Additional certifications in CI/CD tools and infrastructure as code (preferred).

Behavioural Skills
- Strong problem-solving abilities and keen attention to detail.
- Excellent communication and collaboration skills.
- Ability to adapt to a fast-paced and dynamic work environment.
- Strong leadership and team management capabilities.

Technical Skills
- Proficiency in Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
- Expertise in Docker and Kubernetes for containerization and orchestration.
- Deep understanding of the Software Development Lifecycle (SDLC).
- Proficiency in administering Linux and Windows Server environments.
- Experience with Git (Bitbucket) and GitOps (preferred).
- Proficiency in YAML and Python for automation and configuration.
- Skills in Bash and PowerShell scripting.
- Strong ability to design and manage CI/CD pipelines using Jenkins and optionally Cloud Build.
- Experience with Terraform for infrastructure as code.
- Management of Cloud SQL and MySQL databases.

Non-Negotiable Skills
- GCP platform: familiarity with Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
- Experience with Azure.
- Containerization & orchestration: expertise in Docker and Kubernetes.
- SDLC: deep understanding of the Software Development Lifecycle.

Posted 4 weeks ago

Apply

7.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: DevOps Specialist
Total Experience: 7-8 years
Budget: 15-22 LPA
Location: Hyderabad
Work Mode: WFO (mandatory work from office, 24x5 support)
Shift: 24x5 (rotational shifts)
Must-have Skills: Docker, Kubernetes, Microservices
Secondary Skills: SDLC, CI/CD
Notice Period: Immediate joiners only

Primary Skills:
- Strong experience with Docker and Kubernetes for container orchestration.
- Configure and maintain Kubernetes deployments, services, ingresses, and other resources using YAML manifests or GitOps workflows.
- Experience in microservices-based architecture design.
- Understanding of the SDLC, including CI and CD pipeline architecture.
- Experience with configuration management (Ansible).
- Experience with infrastructure as code (Terraform/Pulumi/CloudFormation).
- Experience with Git and version control systems.

Secondary Skills:
- Experience with CI/CD pipelines using Jenkins, AWS CodePipeline, or GitHub Actions.
- Experience with building and maintaining Dev, Staging, and Production environments.
- Familiarity with scripting languages (e.g., Python, Bash) for automation.
- Monitoring and logging tools like Prometheus and Grafana.
- Knowledge of Agile and DevOps methodologies.
- Incident management and root cause analysis.
- Excellent problem-solving and analytical skills.
- Excellent communication skills.
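As an illustration of the YAML-manifest workflow this role describes, here is a minimal Kubernetes Deployment sketch; the name, namespace, image, and resource limits are placeholder assumptions, not details from the listing:

```yaml
# Hypothetical example: a minimal Deployment managed declaratively,
# e.g. committed to Git and reconciled by a GitOps tool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api        # placeholder name
  namespace: demo          # placeholder namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

In a GitOps workflow, a change to this file in Git, rather than a manual `kubectl apply`, is what drives the cluster to the new state.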

Posted 4 weeks ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description

AWS Cloud Infrastructure:
- Design, deploy, and manage scalable, secure, and highly available systems on AWS.
- Optimize cloud costs, enforce tagging, and implement security best practices (IAM, VPC, GuardDuty, etc.).
- Automate infrastructure provisioning using Terraform or AWS CDK.
- Ensure backup, disaster recovery, and high availability (HA) strategies are in place.

Kubernetes (EKS preferred):
- Manage and scale Kubernetes clusters (preferably Amazon EKS).
- Implement CI/CD pipelines with GitOps tools (e.g., Argo CD or Flux) or traditional tools (e.g., Jenkins, GitLab).
- Enforce RBAC policies, namespace isolation, and pod security policies.
- Monitor cluster health and optimize pod scheduling, autoscaling, and resource limits/requests.

Monitoring and Observability (Datadog):
- Build and maintain Datadog dashboards for real-time visibility across systems and services.
- Set up alerting policies, SLOs, SLIs, and incident response workflows.
- Integrate Datadog with AWS, Kubernetes, and applications for full-stack observability.
- Conduct post-incident reviews using Datadog analytics to reduce MTTR.

Automation and DevOps:
- Automate manual processes (e.g., server setup, patching, scaling) using Python, Bash, or Ansible.
- Maintain and improve CI/CD pipelines (Jenkins) for faster and more reliable deployments.
- Drive Infrastructure-as-Code (IaC) practices using Terraform to manage cloud resources.
- Promote GitOps and version-controlled deployments.

Linux Systems Administration:
- Administer Linux servers (Ubuntu, RHEL, Amazon Linux) for stability and performance.
- Harden OS security, configure SELinux and firewalls, and ensure timely patching.
- Troubleshoot system-level issues: disk, memory, network, and processes.
- Optimize system performance using tools like top, htop, iotop, and netstat.

(ref:hirist.tech)
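To illustrate the Terraform-based provisioning and cost-tagging practices this listing calls for, a minimal HCL sketch might look like the following; the region, bucket name, and tag values are placeholder assumptions:

```hcl
# Hypothetical sketch: provisioning a tagged S3 bucket with Terraform.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"  # placeholder region
}

resource "aws_s3_bucket" "backups" {
  bucket = "example-backups-bucket"  # placeholder; S3 names must be globally unique

  tags = {
    Environment = "production"  # cost-allocation tagging, as the role requires
    ManagedBy   = "terraform"
  }
}
```

After `terraform init`, a `terraform plan` previews the change and `terraform apply` creates the bucket, so the Git history of this file becomes the audit trail for the infrastructure.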

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Greater Lucknow Area

On-site

Kyndryl Software Engineering
Chennai, Tamil Nadu, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurugram, Haryana, India; Pune, Maharashtra, India; Greater Noida, Uttar Pradesh, India
Posted on Jul 4, 2025

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Within our Networking DevOps engineering team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations.

Responsibilities Include
- Requirement gathering and analysis: Collaborate with stakeholders to gather automation requirements, understanding business objectives and network infrastructure needs. Analyse existing network configurations and processes to identify areas for automation and optimization. Analyse existing automation and identify opportunities to reuse or redeploy it with the required modifications.
- End-to-end automation development: Design, develop and implement automation solutions for network provisioning, configuration management, monitoring and troubleshooting. Utilize tools and languages such as Ansible, Terraform, Python, and PHP to automate network tasks and workflows. Ensure scalability, reliability, and security of automation solutions across diverse network environments.
- Testing and bug fixing: Develop comprehensive test plans and procedures to validate the functionality and performance of automation scripts and frameworks. Identify and troubleshoot issues, conduct root cause analysis and implement corrective actions to resolve bugs and enhance automation stability.
- Collaborative development: Work closely with cross-functional teams, including network engineers, software developers, and DevOps teams, to collaborate on automation projects and share best practices.
- Reverse engineering and framework design: Reverse engineer existing Ansible playbooks, Python scripts and automation frameworks to understand functionality and optimize performance. Design and redesign automation frameworks, ensuring modularity, scalability, and maintainability for future enhancements and updates.
- Network design and lab deployment: Provide expertise in network design, architecting interconnected network topologies, and optimizing network performance. Set up and maintain network labs for testing and development purposes, deploying lab environments on demand and ensuring their proper maintenance and functionality.
- Documentation and knowledge sharing: Create comprehensive documentation, including design documents, technical specifications, and user guides, to facilitate knowledge sharing and ensure continuity of operations.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless: naturally inclusive in how you work with others.

Required Technical and Professional Experience
Minimum 5+ years of relevant experience as a Network DevOps SME / Automation Engineer, with hands-on experience in the technologies below.
- Data network: Strong experience in configuring, managing, and troubleshooting Cisco, Juniper, HP, and Nokia routers and switches. Hands-on experience with SD-WAN & SDN technologies (e.g., Cisco Viptela, Versa, VMware NSX, Cisco ACI, DNAC, etc.).
- Network security: Experience in configuring, managing, and troubleshooting firewalls (Palo Alto, Check Point, Cisco ASA/FTD, Juniper SRX) and load balancers (F5 LTM/GTM, Citrix NetScaler, A10). Deep understanding of network security principles, firewall policies, NAT, VPN (IPsec/SSL), and IDS/IPS.
- Programming & automation: Proficiency in Ansible development and testing for network automation. Strong Python or shell scripting skills for automation. Experience with REST APIs, JSON, YAML, Jinja2 templates, and GitHub for version control.
- Cloud & Linux skills: Hands-on experience with Linux server administration (RHEL, CentOS, Ubuntu). Experience working with cloud platforms such as Azure, AWS, or GCP.
- DevOps: Basic understanding of CI/CD pipelines, GitOps, and automation tools. Familiarity with Docker, Kubernetes, Jenkins, and Terraform in a DevOps environment. Experience working with Infrastructure as Code (IaC) and configuration management tools such as Ansible.
- Architecture & design: Ability to design, deploy, and recommend network setups or labs independently. Strong problem-solving skills in troubleshooting complex network and security issues.

Certifications Required: CCNP Security / CCNP Enterprise (Routing & Switching)

Preferred Technical and Professional Experience
- Bachelor's degree and above.
- Terraform experience is a plus (for infrastructure as code).
- Zabbix template development experience is a plus.
- Certifications preferred: CCIE-level working experience (Enterprise, Security, or Data Center), PCNSE (Palo Alto), CCSA (Check Point), Automation & Cloud, Python, Ansible, Terraform.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 4 weeks ago

Apply

3.0 years

20 - 25 Lacs

Faridabad, Haryana, India

On-site

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.

Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.

What We Are Looking For
Experience: 3-6 years
Education: BTech / BE / MCA / MSc Computer Science

About the Role
Primary Skills: DevOps with strong CircleCI, Argo CD, GitHub, Terraform, Helm, Kubernetes, and Google Cloud experience.

Required Skills and Experience
- 3+ years of experience in DevOps, infrastructure automation, or related fields.
- Strong proficiency with CircleCI for building and managing CI/CD pipelines.
- Advanced expertise in Terraform for infrastructure as code.
- Solid experience with Helm for managing Kubernetes applications.
- Hands-on knowledge of Argo CD for GitOps-based deployment strategies.
- Proficiency with GitHub for version control, repository management, and workflows.
- Extensive experience with Kubernetes for container orchestration and management.
- In-depth understanding of Google Cloud Platform (GCP) services and architecture.
- Strong scripting and automation skills (e.g., Python, Bash, or equivalent).
- Familiarity with monitoring and logging tools like Prometheus, Grafana, and the ELK stack.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities in agile development environments.

Note: Kindly share your LinkedIn profile when applying.

Skills: Google Cloud Platform (GCP), DevOps, Kubernetes, Grafana, Docker
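For context on the GitOps-based deployment strategies Argo CD enables, here is a minimal Application manifest sketch; the application name, repository URL, chart path, and target namespace are placeholder assumptions:

```yaml
# Hypothetical sketch: an Argo CD Application that continuously syncs
# a Helm chart from a Git repository into a cluster namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app          # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/deploy-config.git  # placeholder repo
    targetRevision: main
    path: charts/example-app   # placeholder chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With automated sync enabled, merging a change to the repository is the deployment: Argo CD reconciles the cluster until it matches what Git declares.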

Posted 4 weeks ago

Apply

3.0 - 5.0 years

14 - 15 Lacs

Mumbai, Pune

Work from Office

We're hiring a Backend Developer (MongoDB, Node.js) for a fast-growing Tech Logistics start-up shaping the future of logistics. You'll build next-gen solutions using modern technologies and support customers across multiple geographies.

Posted 4 weeks ago

Apply

0 years

0 Lacs

India

On-site

Core Stack/Keywords
- GitHub: source control, GitHub Actions (CI/CD)
- GKE (Google Kubernetes Engine): Kubernetes cluster management
- Terraform: Infrastructure as Code (IaC)
- GCP: cloud platform for compute, networking, IAM, etc.
- Networking: VPC, load balancers, firewall rules, peering
- Security: IAM, secrets management, workload identity, policies
- Argo CD: GitOps-based deployment to Kubernetes

Typical Responsibilities
- CI/CD pipelines: create and maintain GitHub Actions workflows; integrate Argo CD for GitOps-style continuous delivery.
- Infrastructure automation: write and maintain Terraform code for GCP infrastructure; modularize Terraform for reusable networking, compute, and GKE clusters.
- Kubernetes operations (GKE): deploy and manage workloads; Helm or Kustomize usage; auto-scaling, pod disruption budgets, affinity rules.
- Cloud networking: configure VPCs, subnets, Cloud NAT, and Private Google Access; manage internal/external load balancers; hybrid connectivity (if needed): VPN, Interconnect.
- Cloud security: enforce least privilege using IAM; manage secrets with Secret Manager or Vault; use workload identity federation for Kubernetes-to-GCP auth; policy-as-code with OPA or GCP Org Policies.
- Monitoring & logging: set up an observability stack (Cloud Monitoring, Prometheus/Grafana); logging with Cloud Logging or Loki; alerting and SLOs.

Thanks & Regards,
Prashant Awasthi
Vastika Technologies PVT LTD
9711189829
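As an illustration of the GitHub Actions workflows this stack centers on, a minimal CI sketch might look like this; the registry and image name are placeholder assumptions, and in a GitOps setup the deploy step would typically be a manifest commit that Argo CD picks up rather than a direct push to the cluster:

```yaml
# Hypothetical sketch: build and push a container image on every push to main.
name: ci
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          # placeholder image name derived from the repository
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
```

The image tag includes the commit SHA, so a follow-up automation step (or a human) can update the GitOps repository to point Argo CD at the exact build that passed CI.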

Posted 4 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About the Job
Be the expert customers turn to when they need to build strategic, scalable systems. Red Hat Services is looking for a well-rounded Architect to join our team in Mumbai, covering Asia Pacific. In this role, you will design and implement modern platforms, onboard and build cloud-native applications, and lead architecture engagements using the latest open source technologies. You'll be part of a team of consultants who are leaders in open hybrid cloud, platform modernisation, automation, and emerging practices, including foundational AI integration. Working in agile teams alongside our customers, you'll build, test, and iterate on innovative prototypes that drive real business outcomes. This role is ideal for architects who can work across application, infrastructure, and modern AI-enabling platforms like Red Hat OpenShift AI. If you're passionate about open source, building solutions that scale, and shaping the future of how enterprises innovate, this is your opportunity.

What Will You Do
- Design and implement modern platform architectures with a strong understanding of Red Hat OpenShift, container orchestration, and automation at scale.
- Manage "Day-2" operations of Kubernetes container platforms, collaborating with infrastructure teams to define practices for platform deployment, platform hardening, platform observability, monitoring and alerting, capacity management, scalability, resiliency, and security operations.
- Lead the discovery, architecture, and delivery of modern platforms and cloud-native applications, using technologies such as containers, APIs, microservices, and DevSecOps patterns.
- Collaborate with customer teams to co-create AI-ready platforms, enabling future use cases with foundational knowledge of AI/ML workloads.
- Remain hands-on with development and implementation, especially in prototyping, MVP creation, and agile iterative delivery.
- Present strategic roadmaps and architectural visions to customer stakeholders, from engineers to executives.
- Support technical presales efforts, workshops, and proofs of concept, bringing in business context and value-first thinking.
- Create reusable reference architectures, best practices, and delivery models, and mentor others in applying them.
- Contribute to the development of standard consulting offerings, frameworks, and capability playbooks.

What Will You Bring
- Strong experience with Kubernetes, Docker, and Red Hat OpenShift or equivalent platforms.
- In-depth expertise in managing multiple Kubernetes clusters across multi-cloud environments.
- Proven expertise in operationalisation of Kubernetes container platforms through the adoption of Service Mesh, GitOps principles, and Serverless frameworks.
- Migrating from XKS to OpenShift.
- Proven leadership of modern software and platform transformation projects.
- Hands-on coding experience in multiple languages (e.g., Java, Python, Go).
- Experience with infrastructure as code, automation tools, and CI/CD pipelines.
- Practical understanding of microservices, API design, and DevOps practices.
- Applied experience with agile, scrum, and cross-functional team collaboration.
- Ability to advise customers on platform and application modernisation, with awareness of how platforms support emerging AI use cases.
- Excellent communication and facilitation skills with both technical and business audiences.
- Willingness to travel up to 40% of the time.

Nice To Have
- Experience with Red Hat OpenShift AI, Open Data Hub, or similar MLOps platforms.
- Foundational understanding of AI/ML, including containerized AI workloads, model deployment, and open source AI frameworks.
- Familiarity with AI architectures (e.g., RAG, model inference, GPU-aware scheduling).
- Engagement in open source communities or a contributor background.

About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Key Responsibilities
Maintain and evolve Terraform modules across core services.
Enhance GitHub Actions and GitLab CI pipelines with policy-as-code integrations.
Automate Kubernetes secret management and migrate from shared init containers to native methods.
Review and deploy Helm charts for service releases. Own rollback reliability.
Track and resolve environment drift. Automate consistency checks between environments.
Drive incident response tooling (Datadog + PagerDuty) and participate in post-incident reviews.
Assist in cost-optimization efforts through resource sizing reviews.
Implement and monitor standardized SLA/SLO targets for key services.

Requirements
Minimum of 5 years of hands-on experience in DevOps or Platform Engineering roles.
Deep technical knowledge of Terraform, Terraform Cloud, and infrastructure module design.
Production experience managing Kubernetes clusters (preferably on GKE).
Demonstrable expertise in CI/CD automation (GitHub Actions, ArgoCD, Helm-based deployments).
Proficiency in securing cloud-native environments and integrating with secret management solutions such as Google Secret Manager or HashiCorp Vault.
Hands-on experience with observability tooling, especially Datadog.
Strong grasp of GCP networking, service configurations, and container workload security.
Proven ability to lead engineering initiatives, work cross-functionally, and manage infrastructure roadmaps.
Desirable Experience
Background in implementing GitOps and automated infrastructure policy enforcement. Familiarity with service mesh, workload identity, and multi-cluster deployments. Prior experience establishing DevOps functions or maturing legacy environments.

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, “Orion,” “we,” or “us”) are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) (“Notice”) explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
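Roles like this one frame reliability work in terms of SLA/SLO targets. As a hedged illustration of what "implement and monitor standardized SLA/SLO targets" can mean in practice (the 99.9% target and request counts below are hypothetical examples, not values from the posting), a request-based error budget can be computed like this:

```python
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget left for a request-based SLO.

    A 99.9% SLO over N requests allows N * 0.1% failures; each failure
    consumes part of that budget.
    """
    allowed_failures = total_requests * (1 - slo_target)
    if allowed_failures == 0:
        return 0.0  # a 100% target leaves no budget at all
    return max(0.0, 1 - failed_requests / allowed_failures)

# Example: 99.9% availability SLO over 1,000,000 requests with 400 failures.
# The budget is 1,000 failed requests, so 60% of the budget remains.
remaining = error_budget_remaining(0.999, 1_000_000, 400)
```

A remaining budget near zero is the usual trigger for freezing risky releases until reliability recovers.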

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

About company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, with gross revenue of ₹222.1 billion and a global workforce of 234,054, is listed on NASDAQ, and operates in over 60 countries, serving clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Major delivery centers in India include Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

· Job Title: Fullstack DevOps
· Location: Hyderabad (Hybrid)
· Experience: 4+ yrs
· Job Type: Contract to hire
· Notice Period: Immediate joiners

Preferred Qualifications: AWS Certified (Solutions Architect / SysOps / DevOps). Experience in 24x7 production support. Exposure to IaC (Terraform, CloudFormation, AWS CDK with TypeScript). Strong communication and coordination skills.

Key Responsibilities: Monitor production systems and respond to alerts and incidents. Ensure 24/7 availability of applications, including the e-commerce production environment. Facilitate blameless post-mortems and drive incident resolution. Manage AWS infrastructure (EKS, RDS, S3, EC2, etc.). Partner with other SRE and Cloud engineering functions to continuously improve the SRE ecosystem through automation, toil reduction, service improvements, observability improvements, etc. Assess operational opportunities to increase service quality and efficiency with an eye on optimizing total operational cost. Ensure compliance with recommendations from the Information Security and Risk team. Define SLA, SLO, and SLI parameters to establish measurement criteria.
Once onboarded to the SRE ecosystem, ensure SLA adherence through timely resolution of incidents, and measure and report the SLA adequately to application teams to reinforce application quality and adherence to SRE best practices. Maintain system documentation and standard operating procedures.

Required Skills: Solid understanding of and hands-on experience in Kubernetes workload management, including cluster administration, workload scaling, and debugging application performance and availability issues. Solid understanding of AWS services, mainly EKS (Kubernetes), and Infrastructure as Code using AWS CDK with TypeScript. Good understanding of / experience with the GitOps model using tools like ArgoCD. Experience with monitoring/logging tools like Grafana, Prometheus, ELK, or Dynatrace. Understanding of microservice architecture using container orchestration services like Kubernetes. Moderate scripting skills in languages including Shell and Python. Familiarity with CI/CD tools (GitHub Actions, CodeBuild, CodePipeline, ArgoCD). Proven ability to work remotely with teams of various sizes in the same or different time zones, from anywhere, and remain highly motivated, productive, and organized. Good written, oral, and presentation skills. Strong interpersonal and stakeholder management skills. Understanding of ITIL practices.
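The GitOps model called out in the required skills reduces to continuously reconciling live state against the desired state declared in Git. A toy sketch of that reconciliation diff, using hypothetical manifest fields rather than real ArgoCD output, might look like:

```python
def diff_state(desired: dict, live: dict) -> dict:
    """Return the fields that must change to bring live state in line with Git.

    Real GitOps controllers (e.g., ArgoCD) do this over full Kubernetes
    manifests; here the states are flat dicts purely for illustration.
    """
    changes = {}
    for key, want in desired.items():
        if live.get(key) != want:
            changes[key] = {"from": live.get(key), "to": want}
    return changes

# Hypothetical example: Git declares 3 replicas, the cluster runs 2.
desired = {"replicas": 3, "image": "shop-api:1.4.2"}
live = {"replicas": 2, "image": "shop-api:1.4.2"}
drift = diff_state(desired, live)  # only 'replicas' has drifted
```

In a real setup the controller would then apply the change (or flag it), which is what makes Git the single source of truth.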

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Kenvue is currently recruiting for a: Senior Software Engineer

What we do
At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we’re the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S® and BAND-AID® - that you already know and love. Science is our passion; care is our talent.

Who We Are
Our global team is ~22,000 brilliant people with a workplace culture where every voice matters, and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage – and have brilliant opportunities waiting for you! Join us in shaping our future–and yours.

Role reports to: Senior Manager
Location: Asia Pacific, India, Karnataka, Bangalore
Work Location: Hybrid

What you will do
At Kenvue, part of the Johnson & Johnson Family of Companies, we believe there is extraordinary power in everyday care. Built on over a century of heritage and propelled forward by science, our iconic brands—including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON’S® and BAND-AID®—are category leaders trusted by millions of consumers who use our products to improve their daily lives. Our employees share a digital-first mindset, an approach to innovation grounded in deep human insights, and a commitment to continually earning a place for our products in consumers’ hearts and homes.

The Senior Engineer, Kubernetes is a hands-on engineer responsible for designing, implementing, and managing the Cloud Native Kubernetes-based platform ecosystem and solutions for the organization.
This includes developing and implementing containerization strategies and developer workflows, designing and deploying the Kubernetes platform, and ensuring high availability and scalability of Kubernetes infrastructure aligned with modern GitOps practices.

Key Responsibilities:
Implement platform capabilities and the containerization plan using Kubernetes, Docker, service mesh, and other modern containerization tools and technologies.
Design and collaborate with other engineering stakeholders in developing architecture patterns and templates for the application runtime platform, such as K8s cluster topology, traffic shaping, API, CI/CD, and observability, aligned with DevSecOps principles.
Automate Kubernetes infrastructure deployment and management using tools such as Terraform, Jenkins, and Crossplane to develop self-service platform workflows.
Serve as a member of the microservices platform team, working closely with the Security and Compliance organization to define controls.
Develop self-service platform capabilities focused on developer workflows such as API, service mesh, external DNS, cert management, and K8s lifecycle management in general.
Participate in a cross-functional IT Architecture group that reviews designs from an enterprise cloud platform perspective.
Optimize the Kubernetes platform infrastructure for high availability and scalability.

What we are looking for
Qualifications
Bachelor’s degree required, preferably in a STEM field.
5+ years of progressive experience in a combination of development and design in areas of cloud computing.
3+ years of experience in developing cloud native platform capabilities based on Kubernetes (preferably EKS and/or AKS).
Strong Infrastructure as Code (IaC) experience on public cloud (AWS and/or Azure).
Experience working on large-scale, highly available, cloud native, multi-tenant infrastructure platforms on public cloud, preferably in a consumer business.
Expertise in building platforms using tools like Kubernetes, Istio, OpenShift, Linux, Helm, Terraform, and CI/CD.
Experience working on high-scale, critically important products running across public clouds (AWS, Azure) and private data centers is a plus.
Strong hands-on development experience with one or more of the following languages: Go, Scala, Java, Ruby, Python.
Prior experience on a team involved in re-architecting and migrating monolith applications to microservices is a plus.
Prior experience with observability tools such as Prometheus, Elasticsearch, Grafana, DataDog, or Zipkin is a plus.
Must have a solid understanding of Continuous Development and Deployment in AWS and/or Azure.
Understanding of the basic Linux kernel and Windows Server operating systems.
Experience working with Bash and PowerShell scripting.
Must be results-driven, a quick learner, and a self-starter.
Cloud engineering experience is a plus.

If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.
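For the scalability responsibilities described above, Kubernetes' Horizontal Pod Autoscaler documents a simple core formula: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that calculation (the metric values below are hypothetical):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Core HPA formula: scale the replica count proportionally to metric pressure."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target: the ratio 90/60 = 1.5
# pushes the deployment from 4 to 6 replicas.
n = desired_replicas(4, 90.0, 60.0)
```

The real controller adds tolerances, stabilization windows, and min/max bounds on top of this ratio, but the proportional core is the part worth internalizing.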

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

OpenShift Administrator
Locations: Pan India (Multiple Locations Available)
Experience Levels: Analyst | Consultant | Senior Consultant | Manager
Employment Type: Full-time - Hybrid Mode

About The Opportunity
A reputed IT Services & Consulting firm is looking to onboard experienced OpenShift Administrators at various experience levels across India. This role is ideal for professionals skilled in OpenShift platform administration who can support enterprise containerized application environments and drive platform adoption.

Role Overview
You will be responsible for designing, deploying, managing, and scaling OpenShift environments. You'll also play a key role in ensuring platform performance, enforcing security standards, supporting developers, and implementing automation across the infrastructure.

Key Responsibilities
Design and implement robust, secure, and scalable OpenShift (OCP) clusters based on business needs.
Architect hybrid and cloud-native solutions including ROSA, ARO, or Anthos.
Handle full cluster lifecycle management including deployment, upgrades, patching, and scaling.
Administer core OpenShift components, networking, and service routing.
Monitor performance, implement logging (Prometheus, EFK), and manage alerts.
Apply platform governance using RBAC and SCCs, and integrate authentication systems (LDAP, SAML, OAuth).
Configure persistent storage and manage cluster networking policies.
Provide guidance and operational support to application teams working on OpenShift.
Conduct onboarding sessions and contribute to documentation and best practices.

Preferred Skills & Experience
5+ years of hands-on experience with OpenShift administration.
Strong working knowledge of Kubernetes, Docker, and cloud environments (AWS, Azure, GCP).
Scripting ability (e.g., Bash, Python) for automation.
Familiarity with CI/CD pipelines and GitOps practices.
Exposure to OpenShift add-ons like Pipelines, GitOps, Service Mesh, and Quay.
Red Hat certifications (e.g., RHCE, OpenShift Specialist) are a plus.
Solid understanding of performance tuning, audit logging, security, and cost management.

Note: Candidates with an immediate or up to 60-day notice period will be given preference. (ref:hirist.tech)

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

Remote

About The Role We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence. Key Responsibilities Infrastructure Management : Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP). CI/CD Pipelines : Design and optimize CI/CD pipelines, to improve development velocity and deployment quality. Automation : Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible. Monitoring & Logging : Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog. Security : Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes. Collaboration : Work closely with development, QA, and IT teams to align DevOps strategies with project goals. Mentorship : Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise. Incident Management : Oversee production system reliability, including root cause analysis and performance tuning. Required Skills & Qualifications Technical Expertise : Strong proficiency in cloud platforms like AWS, Azure, or GCP. Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes). Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi. 
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI. Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills
Excellent communication and leadership skills. Strong analytical and problem-solving abilities. Proven ability to manage and lead a team effectively.

Experience
4+ years of experience in DevOps or Site Reliability Engineering (SRE). 4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration. Strong understanding of microservices, APIs, and serverless architectures.

Nice To Have
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar. Experience with GitOps tools such as ArgoCD or Flux. Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).

Perks & Benefits
Competitive salary and performance bonuses. Comprehensive health insurance for you and your family. Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise. Flexible working hours and remote work options. Collaborative and inclusive work culture. (ref:hirist.tech)
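Incident management responsibilities like the ones in this role are commonly quantified with the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). A small sketch with hypothetical numbers:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical example: one failure every 720 hours (about a month),
# with a 2-hour mean repair time, gives roughly 99.72% availability.
a = availability(720, 2)
```

The formula makes the leverage of incident response concrete: halving MTTR improves availability as directly as doubling the time between failures.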

Posted 4 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position: Senior Executive - DevOps Engineer
Location: Vikhroli East, Mumbai
Business: Godrej Living

Role & responsibilities:
Infrastructure Management & Provisioning: Design, implement, and manage cloud/on-prem infrastructure (Azure). Automate infrastructure provisioning. Optimize infrastructure for high availability, scalability, and cost-effectiveness. Manage containerized environments.
CI/CD & Deployment Automation: Develop and maintain CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or similar tools. Automate software releases, rollbacks, and rollouts with zero downtime. Ensure smooth deployment and rollback strategies for production and staging environments.
Monitoring & Logging: Set up and manage monitoring tools. Implement centralized logging solutions. Proactively identify and resolve performance bottlenecks and system failures.
Access Management & Security: Implement and manage IAM policies, role-based access control (RBAC), and least-privilege access principles. Maintain secrets management. Conduct regular security audits and ensure compliance with best practices.
Incident Response & Disaster Recovery: Design and implement backup and disaster recovery strategies. Establish incident response processes and ensure quick resolution of system failures. Maintain high system availability and reliability through redundancy and failover mechanisms.
Collaboration & Documentation: Work closely with development and security teams to ensure seamless integration of DevOps practices. Document infrastructure setup, CI/CD workflows, and incident response playbooks. Provide training and support to developers on DevOps best practices.

Who are we looking for?
Education: Bachelor's/Master's degree in Engineering with Computer Science / IT / Software Engineering or a related field.
Experience: 3-5 years of experience in DevSecOps. Certifications (AWS Certified DevOps Engineer, Azure certification, CKA, Terraform Associate, etc.). Experience with GitOps (ArgoCD, FluxCD).
Experience in a high-scale, production-grade environment.
Skills: Experience in managing cloud infrastructure (Azure preferred) and on-prem setups. Expertise in containerization (Docker, Kubernetes) and infrastructure-as-code (Terraform, Ansible). Proficiency in CI/CD tools like Jenkins, GitLab CI/CD, or GitHub Actions. Strong knowledge of monitoring/logging tools (Prometheus, Grafana, ELK Stack, CloudWatch). Familiarity with security best practices, IAM, and access control mechanisms. Experience in scripting languages (Bash, Python, Go, etc.). Ability to troubleshoot and optimize system performance. Good understanding of networking, DNS, VPN, and firewalls. Familiarity with compliance standards (SOC2, ISO 27001, GDPR, etc.) is a plus.
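The backup and disaster recovery duties described above typically revolve around a Recovery Point Objective (RPO): how old the most recent backup may be at failover time. A minimal compliance check, with hypothetical timestamps:

```python
from datetime import datetime, timedelta

def rpo_breached(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """A backup older than the RPO means a failover right now would lose
    more data than the objective allows."""
    return now - last_backup > rpo

# Hypothetical example: last backup at 07:00, a 4-hour RPO, checked at 12:00.
# The backup is 5 hours old, so the RPO is breached and an alert should fire.
now = datetime(2024, 1, 1, 12, 0)
breached = rpo_breached(datetime(2024, 1, 1, 7, 0), now, timedelta(hours=4))
```

A monitoring system would run a check like this on a schedule and page when it returns True, which is how "design backup and DR strategies" becomes an enforceable control rather than a document.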

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderābād

On-site

As a Database Administrator at Epicor, you will be part of a team that handles database management, optimization, and cloud-based solutions. You will participate in strategic database administration decisions, ensure sustainable system performance, and interact with support and development teams. Your expertise in database technologies and cloud environments will be instrumental to your success.

What you will be doing:
Providing timely and clear estimates, working with developers, and completing tasks to maintain quality standards. Contributing to cloud and database strategy, participating in best practices for performance, scalability, and sustainability. Contributing to functional database requirements and reviewing solutions to ensure they align with organizational needs. Proactively resolving database-related issues. Utilizing software development methodologies (GitOps and similar) tailored for database management and optimization. Keeping up to date with training and expertise in modern database technologies.

What you will likely bring:
Strong technical acumen with hands-on experience in MS SQL Server, PostgreSQL, and MySQL database management. Experience working with cloud platforms such as Azure, AWS, or GCP, with Azure preferred. Strong understanding of database administration principles, including performance tuning and sustainability best practices. Knowledge of PowerShell scripting and automation. Strong problem-solving skills and the ability to foresee challenges within complex database systems.

What could set you apart:
Passion for product development and innovation, particularly in database-driven applications. Ability to work with team members, ensuring high performance and knowledge growth. Prior experience as a Database Administrator Manager or in a similar leadership role focused on database optimization. Proven leadership experience in managing database-focused development teams.
Experience designing and implementing scalable, cloud-based database solutions for enterprise applications. Strong decision-making skills with the ability to drive database product development initiatives effectively. Experience with adoption of GitOps from traditional operations models #HYBRID #LI-VV1 About Epicor At Epicor, we’re truly a team. Join 5,000 talented professionals in creating a world of better business through data, AI, and cognitive ERP. We help businesses stay future-ready by connecting people, processes, and technology. From software engineers who command the latest AI technology to business development reps who help us seize new opportunities, the work we do matters. Together, Epicor employees are creating a more resilient global supply chain. We’re Proactive, Proud, Partners . Whatever your career journey, we’ll help you find the right path. Through our training courses, mentorship, and continuous support, you’ll get everything you need to thrive. At Epicor, your success is our success. And that success really matters, because we’re the essential partners for the world’s most essential businesses—the hardworking companies who make, move, and sell the things the world needs. Competitive Pay & Benefits Health and Wellness: Comprehensive health and wellness benefits designed to support your overall well-being. Internal Mobility: Opportunities for mentorship, continuing education, and focused career goal setting, with 25% of positions filled internally. Career Development: Free LinkedIn Learning licenses for everyone, along with our Mentoring Program to boost your personal development. Education Support: Geographically specific programs to balance the cost of education with the benefits of continued learning and personal development. Inclusive Workplace: Collaborate with a diverse team in an inclusive, global workplace that fosters innovation and celebrates partnership. 
Work-Life Balance: Policies built on mutual trust and support, encouraging time off to rest, recharge, and reconnect. Global Mobility: Comprehensive support for international relocations and permanent residency processes. Equal Opportunities and Accommodations Statement Epicor is committed to creating a workplace and global community where inclusion is valued; where you bring the whole and real you— that’s who we’re interested in. If you have interest in this or any role- but your experience doesn’t match every qualification of the job description, that’s okay- consider applying regardless. We are an equal-opportunity employer. Recruiter: Vidya Vardhni

Posted 1 month ago

Apply

2.0 years

3 - 9 Lacs

Cochin

On-site

We are looking for skilled trainers (full-time/part-time) who can deliver high-quality training sessions either offline or online in any of the following domains: Full Stack Development: Java / Spring Boot / Angular / React Python / Django / Flask JavaScript / Node.js / Express Data Science & Analytics: Python, NumPy, Pandas, Scikit-learn Machine Learning, Deep Learning Power BI / Tableau Data Engineering: Big Data (Hadoop, Spark) ETL, SQL, NoSQL Cloud Data Platforms (AWS/GCP/Azure) Generative AI & AI Tools: OpenAI, LangChain, Prompt Engineering LLM fine-tuning, RAG, Vector DB Cloud Computing: AWS, Azure, GCP Cloud Architecture, IAM, DevOps basics DevOps: Docker, Kubernetes, Jenkins CI/CD, GitOps, Infrastructure as Code (Terraform, Ansible) Who Can Apply: Working professionals looking to share knowledge Trainers with 2+ years of teaching experience Freelancers and part-time educators Candidates with strong industry expertise and good communication skills Key Responsibilities: Design and deliver engaging, hands-on training sessions Prepare real-time projects and assignments for students Evaluate and support learners through assessments Provide mentoring and guidance when needed Why Join Us? Competitive compensation Flexibility in schedule (weekdays/weekends) Work with a passionate and expert team Job Types: Full-time, Part-time Pay: ₹300,000.00 - ₹900,000.00 per year Application Question(s): How many years of experience do you have in the IT industry and/or as a trainer? (Please specify both, if applicable) Are you applying for a full-time or part-time trainer position? (Full-time / Part-time / Freelance / Weekend-only) Which tech stack or domain(s) are you interested in providing training for? Work Location: In person

Posted 1 month ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

DevOps Engineer

Key Responsibilities: Design, implement, and manage scalable, secure, and highly available infrastructure on GCP. Automate infrastructure provisioning using tools like Terraform or Deployment Manager. Build and manage CI/CD pipelines using Jenkins, GitLab CI, or similar tools. Manage containerized applications using Kubernetes (GKE) and Docker. Monitor system performance and troubleshoot infrastructure issues using tools like Stackdriver, Prometheus, or Grafana. Implement security best practices across cloud infrastructure and deployments. Collaborate with development and operations teams to streamline release processes. Ensure high availability, disaster recovery, and backup strategies are in place. Participate in performance tuning and cost optimization of GCP resources.

Required Skills: Strong hands-on experience with Google Cloud Platform (GCP) services. Experience with Harness is an optional skill. Proficiency in Infrastructure as Code tools like Terraform or Google Deployment Manager. Experience with Kubernetes (especially GKE) and Docker. Knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI, or CircleCI. Familiarity with scripting languages (e.g., Bash, Python). Experience with logging and monitoring tools (e.g., Stackdriver, Prometheus, ELK, Grafana). Understanding of networking, security, and IAM in a cloud environment. Strong problem-solving and communication skills. Experience in Agile environments and DevOps culture. GCP Associate or Professional Cloud DevOps Engineer certification. Experience with Helm, ArgoCD, or other GitOps tools. Familiarity with other cloud platforms (AWS, Azure) is a plus. Knowledge of application performance tuning and cost management on GCP.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally who care about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 month ago

Apply

11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company: Product Software is a leader in full-stack automation for mission-critical business processes, helping enterprises streamline their IT and business operations. Their SaaS-based automation platform is designed for ERP systems. We’re excited to announce an amazing opportunity for a Platform Engineering Manager with one of our esteemed clients, based in Hyderabad!

About the Role:
Experience Required: 11 to 18 years
Job location: Hyderabad
Notice Period: 60 Days Max

Required Skills:
Programming languages mandatory – Python/.NET/Java/etc.
GitHub + GitHub Actions (CI/CD)
SDLC
Container tools – Docker / K8s
80% managing and 20% technical experience
IaC

Responsibilities:
● Lead and manage a team of platform engineers, fostering a collaborative and high-performance environment. (Mandatory development experience in Java, Python, .NET, etc.)
● Define and execute the platform roadmap in alignment with the overall technology strategy and the needs of product development teams.
● Design, build, and maintain the core components of the internal developer platform, including infrastructure provisioning, CI/CD pipelines, monitoring and logging solutions, and security controls.
● Drive the adoption of self-service capabilities to empower development teams and reduce operational overhead.
● Implement and promote DevOps best practices, including infrastructure as code, continuous integration and continuous delivery, and automated testing.
● Collaborate closely with product development teams to understand their requirements and provide them with the necessary platform tools and support.
● Ensure the platform is secure, reliable, scalable, and cost-effective.
● Troubleshoot and resolve platform-related issues, working with the team to identify root causes and implement effective solutions.
● Stay up-to-date on the latest platform engineering technologies and trends, and evaluate their potential benefits for the organization.
● Build and mature the platform engineering team, including hiring, mentoring, and performance management.
● Create and maintain comprehensive documentation for the platform and its components.
● Strong experience with cloud platforms (e.g., AWS, Azure, GCP).
● Hands-on experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
● Proficiency in infrastructure as code tools (e.g., Terraform, CloudFormation).
● Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI/CD, CircleCI).
● Experience with monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack).
● Deep understanding of DevOps principles and practices.
● Strong leadership and management skills.
● Excellent communication, collaboration, and problem-solving skills.

Nice to Have:
● Experience with GitOps methodologies.

If interested, kindly send your resume to mansi.sh@peoplefy.com and share the details below:
Total exp.:
Relevant exp. in Java/Python/.NET:
Relevant exp. in DevOps:
Working as a Manager (Yes/No):
Team handling (Yes/No):
Current location:
Current company product-based (Yes/No):
Current CTC:
Expected CTC:
Notice Period:

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We’re looking for a Backend Engineer (.NET) who expects more from their career. This role offers an opportunity to build scalable, high-performance, and multi-tenant solutions within the platform engineering team. You'll play a key role in enhancing code quality and engineering excellence, contributing to critical architecture decisions, and enabling multi-tenant capabilities across distributed systems in a data-driven environment.

What we expect from you:
• 6+ years of hands-on experience in backend development, focusing on performance, scalability, security, multi-tenancy, and maintainability.
• Strong proficiency in C# and .NET Core, with expertise in developing RESTful APIs and microservices.
• Drive code quality, ensuring adherence to best practices, design patterns, and SOLID principles.
• Experience with cloud platforms (Google Cloud Platform & Azure), implementing cloud-native and multi-tenant best practices for scalability, security, and resilience.
• Hands-on experience with containerization (Docker) and orchestration (Kubernetes, Helm).
• Strong focus on non-functional requirements (NFRs), especially in multi-tenant environments, including tenant isolation, security boundaries, performance optimization under variable load, scalability across tenants, and comprehensive observability (monitoring/logging/alerting) for tenant-specific insights.
• Experience implementing unit testing, integration testing, and automated testing frameworks.
• Proficiency in CI/CD automation, with experience in GitOps workflows and Infrastructure-as-Code (Terraform, Helm, or similar).
• Experience working in Agile methodologies (Scrum, Kanban) and DevOps best practices.
• Identify dependencies, risks, and bottlenecks early, working proactively with engineering leads to resolve them.
• Stay updated with emerging technologies and industry best practices to drive continuous improvement.

Key Technical Skills:
• Strong proficiency in C#, .NET Core, and RESTful API development.
• Experience with asynchronous programming, concurrency control, and event-driven architecture (Pub/Sub, Kafka, etc.).
• Deep understanding of object-oriented programming, data structures, and algorithms.
• Experience with unit testing frameworks and a TDD approach to development.
• Hands-on experience with Docker and Kubernetes (K8s) for containerized applications.
• Strong knowledge of performance tuning, security best practices, and observability (monitoring/logging/alerting).
• Experience with CI/CD pipelines, GitOps workflows, and infrastructure-as-code (Terraform, Helm, or similar).
• Exposure to multi-tenant architectures with a strong understanding of NFRs, including tenant isolation strategies, secure resource partitioning, performance profiling across tenants, shared vs. isolated resource models, and scalable, resilient design patterns for onboarding and operating multiple tenants concurrently.
• Proficiency in relational databases (PostgreSQL preferred) and exposure to NoSQL solutions.

Preferred Skills:
• Exposure and experience in working with front-end technologies such as React.js
• Knowledge of gRPC, GraphQL, event-driven or other modern API technologies
• Familiarity with feature flagging, blue-green deployments, and canary releases

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bengaluru, Karnataka
Work Type: Full Time

About Uniqode:
Uniqode is on a mission to connect the physical and digital worlds seamlessly through technology. Over the years, we’ve become the trusted platform for over 50,000 businesses worldwide, enabling proximity marketing and driving digital engagement at scale. With 200+ million QR Code scans globally and a steep growth rate, Uniqode is at the forefront of innovation in how businesses and consumers interact. As pioneers in digital business cards, we’re redefining how professionals and companies share contact details, offering modern, paperless solutions that are efficient and environmentally friendly. Backed by leading investors like Accel and Telescope, we’re building not just a product but a global ecosystem. With offices in New York and Bangalore and a team spread across India and the USA, Uniqode combines the best of creativity, collaboration, and cutting-edge technology to deliver exceptional results.

About the Role:
We are looking for a strategic and technically strong Engineering Manager to lead our Platform Engineering team. This role is foundational to scaling Uniqode’s platform and will oversee core infrastructure, internal and external analytics, and shared platform capabilities. As an Engineering Manager, you’ll lead a team of backend engineers, data engineers, and DevOps specialists. You’ll be responsible for managing infrastructure and security, guiding customer-facing and internal analytics platforms, and driving the development of shared services used across Uniqode’s product lines, including Authentication, Authorization, Orchestration, Payments, and AI Platform components. This is a high-impact leadership role at the intersection of infrastructure, data, and platform services.

Key Focus Areas:
• Lead, mentor, and grow a high-performing team of Platform, DevOps, and Data Engineers.
• Manage and scale infrastructure (primarily on AWS), including CI/CD, Kubernetes (EKS), IaC, monitoring, and security frameworks.
• Drive cloud cost optimization and performance tuning across services.
• Ensure strong security practices, including IAM, vulnerability scanning, and compliance automation.
• Provide technical leadership and mentorship to data engineers managing customer-facing and internal analytics.
• Oversee the development, reliability, and performance of shared platform services like Authentication, Authorization, Payments, Subscription Management, and Orchestration engines.
• Collaborate across product teams to build scalable and reusable components supporting multiple Uniqode offerings.
• Drive standardization, reliability, and observability across environments and services.
• Ensure platform services are documented, discoverable, and supportable by downstream teams.
• Partner with stakeholders to align technical direction with business goals and delivery timelines.
• Promote a culture of ownership, continuous improvement, and operational excellence across the platform engineering team.

About You:
• 8+ years of experience in backend/platform/data engineering roles, with 2+ years of engineering management experience.
• Strong background in cloud-native infrastructure (preferably AWS), Kubernetes, and DevOps practices.
• Experience leading data engineering teams and guiding architecture and delivery of analytics solutions (e.g., Apache Pinot, Redshift).
• Deep understanding of shared platform services like AuthN/AuthZ, Orchestrators, Payment systems, and API platforms.
• Proficiency in CI/CD pipelines, Infrastructure as Code (e.g., Terraform), and GitOps methodologies.
• Hands-on experience with observability tools (e.g., Datadog, Prometheus, ELK, Grafana).
• Familiarity with security best practices and compliance standards (SOC2, GDPR, HIPAA).
• Strong stakeholder management, technical program management, and team coaching abilities.
• Track record of delivering scalable, reliable systems and leading cross-functional initiatives.
• Excellent communication, documentation, and decision-making skills.

Bonus Skills (Nice-to-Have):
• Experience with FinOps or infrastructure cost management at scale.
• Exposure to metadata platforms or AI-enabled observability systems.
• Contributions to open-source platform infrastructure or data tools.
• Experience building unified developer portals or internal platforms.

What’s in it for You?
• A well-deserved compensation package that recognizes your leadership and impact.
• Opportunity to shape the engineering backbone of a high-growth, global SaaS product.
• Equity ownership to share in the company’s success.
• Work with a talented and driven team passionate about building world-class products.
• Hybrid work flexibility across our offices in New York and Bangalore.
• Comprehensive health insurance for you and your family.
• Dedicated mental health support in a nurturing and inclusive environment.

About Our Culture:
At Uniqode, we’re shaping a workplace where bold ideas, deep ownership, and relentless curiosity thrive. We move fast, aim high, and obsess over delivering real value to our customers. Transparency and collaboration define how we work, while continuous learning and adaptability fuel our growth. If you’re ready to make an impact and grow alongside a passionate, driven team, Uniqode is the place for you.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Title: API DevOps Engineer
Skills: IBM API Connect (APIC), Jenkins, Azure DevOps (ADO), GitHub, Terraform, Ansible, Kubernetes

Responsibilities:
● Deploy, configure, and manage IBM API Connect (APIC) environments in line with architectural and security standards.
● Design and maintain CI/CD pipelines using Jenkins, Azure DevOps (ADO), and GitHub.
● Implement infrastructure-as-code using Terraform and Ansible for repeatable, automated provisioning.
● Manage containerized workloads using Kubernetes (K8s), preferably on Azure Kubernetes Service (AKS).
● Support GitOps practices for continuous delivery and platform automation.
● Monitor, troubleshoot, and ensure high availability (HA) and disaster recovery (DR) readiness of the API platform.
● Collaborate with development, security, and platform teams to drive performance, compliance, and operational excellence.

Skills & Experience:
● 5+ years of DevOps experience, ideally in API platform or integration domains.
● Strong hands-on experience with IBM API Connect v10.
● Proven experience with CI/CD automation using Jenkins, ADO, and GitHub.
● Proficiency in Ansible, Terraform, and scripting (Shell/Python).
● Experience with Azure and Kubernetes (AKS) deployments.
● Working knowledge of GitOps methodologies.
● Strong understanding of API gateway policies, security (OAuth, TLS), and operational readiness.

Qualifications:
● Experience with IBM App Connect and API-led integrations.
● Familiarity with enterprise security, compliance frameworks, and agile DevSecOps practices.
● Exposure to hybrid and multi-cloud environments.

Soft Skills:
● Strong communication and collaboration skills across global teams.
● Proactive mindset with a focus on automation, reliability, and scalability.
● Ability to work under pressure in a fast-paced environment with multiple stakeholders.
(ref:hirist.tech)
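Several responsibilities here, notably monitoring and HA readiness combined with Shell/Python scripting, boil down to probing endpoints with retry and backoff. A small Python sketch under those assumptions (the probe callable and its parameters are illustrative, not part of any APIC tooling):

```python
import time

def probe_until_healthy(check, attempts: int = 5, base_delay: float = 0.0) -> bool:
    """Call `check()` with exponential backoff; return True once it reports healthy."""
    for attempt in range(attempts):
        if check():
            return True
        # Wait base_delay, 2*base_delay, 4*base_delay, ... before retrying.
        time.sleep(base_delay * (2 ** attempt))
    return False

# Simulate an endpoint that recovers on its third probe.
responses = iter([False, False, True])
print(probe_until_healthy(lambda: next(responses)))  # → True
```

In a real health check, `check` would wrap an HTTP call to the gateway's status endpoint and `base_delay` would be on the order of seconds.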

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

As a Database Administrator at Epicor, you will be part of a team responsible for database management, optimization, and cloud-based solutions. You will participate in strategic database administration decisions, ensure sustainable system performance, and interact with support and development teams. Your expertise in database technologies and cloud environments will be instrumental to your success.

What You Will Be Doing
• Providing timely and clear estimates, working with developers, and completing tasks to maintain quality standards.
• Contributing to cloud and database strategy, participating in best practices for performance, scalability, and sustainability.
• Contributing to functional database requirements and reviewing solutions to ensure they align with organizational needs.
• Proactively resolving database-related issues.
• Utilizing software development methodologies (GitOps and similar) tailored for database management and optimization.
• Keeping up to date with training and expertise in modern database technologies.

What You Will Likely Bring
• Strong technical acumen with hands-on experience in MS SQL Server, PostgreSQL, and MySQL database management.
• Experience working with cloud platforms such as Azure, AWS, or GCP, with Azure preferred.
• Strong understanding of database administration principles, including performance tuning and sustainability best practices.
• Knowledge of PowerShell scripting and automation.
• Strong problem-solving skills and the ability to foresee challenges within complex database systems.

What Could Set You Apart
• Passion for product development and innovation, particularly in database-driven applications.
• Ability to work with team members, ensuring high performance and knowledge growth.
• Prior experience as a Database Administrator Manager or in a similar leadership role focused on database optimization.
• Proven leadership experience in managing database-focused development teams.
• Experience designing and implementing scalable, cloud-based database solutions for enterprise applications.
• Strong decision-making skills with the ability to drive database product development initiatives effectively.
• Experience with adoption of GitOps from traditional operations models.

#HYBRID

About Epicor
At Epicor, we’re truly a team. Join 5,000 talented professionals in creating a world of better business through data, AI, and cognitive ERP. We help businesses stay future-ready by connecting people, processes, and technology. From software engineers who command the latest AI technology to business development reps who help us seize new opportunities, the work we do matters. Together, Epicor employees are creating a more resilient global supply chain. We’re Proactive, Proud, Partners.

Whatever your career journey, we’ll help you find the right path. Through our training courses, mentorship, and continuous support, you’ll get everything you need to thrive. At Epicor, your success is our success. And that success really matters, because we’re the essential partners for the world’s most essential businesses: the hardworking companies who make, move, and sell the things the world needs.

Competitive Pay & Benefits
• Health and Wellness: Comprehensive health and wellness benefits designed to support your overall well-being.
• Internal Mobility: Opportunities for mentorship, continuing education, and focused career goal setting, with 25% of positions filled internally.
• Career Development: Free LinkedIn Learning licenses for everyone, along with our Mentoring Program to boost your personal development.
• Education Support: Geographically specific programs to balance the cost of education with the benefits of continued learning and personal development.
• Inclusive Workplace: Collaborate with a diverse team in an inclusive, global workplace that fosters innovation and celebrates partnership.
• Work-Life Balance: Policies built on mutual trust and support, encouraging time off to rest, recharge, and reconnect.
• Global Mobility: Comprehensive support for international relocations and permanent residency processes.

Equal Opportunities and Accommodations Statement
Epicor is committed to creating a workplace and global community where inclusion is valued, where you bring the whole and real you; that’s who we’re interested in. If you have interest in this or any role, but your experience doesn’t match every qualification of the job description, that’s okay: consider applying regardless. We are an equal-opportunity employer.

Recruiter: Vidya Vardhni
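The move from traditional operations to GitOps that this posting mentions typically means treating the schema declared in version control as the source of truth and flagging drift in the live database. A minimal, hedged sketch using Python's sqlite3 as a stand-in for MS SQL Server or PostgreSQL (the table and helper names are illustrative):

```python
import sqlite3

def live_columns(conn: sqlite3.Connection, table: str) -> set[str]:
    """Read the actual column set from the running database."""
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
    return {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}

def schema_drift(declared: set[str], live: set[str]) -> dict[str, set[str]]:
    """Diff the schema declared in Git against what the database actually has."""
    return {"missing": declared - live, "unexpected": live - declared}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, legacy_flag INTEGER)")

declared = {"id", "email", "created_at"}  # column set as declared in version control
print(schema_drift(declared, live_columns(conn, "users")))
# → {'missing': {'created_at'}, 'unexpected': {'legacy_flag'}}
```

Against MS SQL Server or PostgreSQL the same check would query `information_schema.columns` instead of the SQLite pragma, and a CI job would fail the pipeline on non-empty drift.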

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies