Jobs
Interviews

2846 Helm Jobs - Page 30

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

15.0 years

0 Lacs

India

Remote

About Us: MyRemoteTeam, Inc. is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years
Location: PAN India (any location) - Hybrid work model

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role: We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
✅ Cloud-Native Architecture & Leadership: Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless). Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk). Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.
✅ Full-Stack Development: Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication). Build modern frontends with ReactJS (state management, performance optimization). Design REST/gRPC APIs and event-driven systems (Kafka, SQS).
✅ DevOps & Observability: Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines. Implement Infrastructure as Code (IaC) using Terraform/CloudFormation. Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.
✅ Database & Performance Engineering: Optimize PostgreSQL for high throughput, replication, and low-latency queries. Troubleshoot database bottlenecks, caching (Redis), and connection pooling. Design data migration strategies (on-premise → cloud).
✅ Mentorship & Innovation: Mentor junior engineers and conduct architecture reviews. Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations). Collaborate with CTO/Architects on long-term technical roadmaps.
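The posting above centers on event-driven design with Kafka. As a rough illustration of that pattern only (not part of the listing, and in Python rather than the role's Java/Spring stack), the sketch below publishes a JSON event with the kafka-python client; the broker address, topic name, and payload are placeholders.

```python
import json
from kafka import KafkaProducer

# Assumed local broker; a real deployment would point at the cluster's bootstrap servers.
producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",   # wait for full in-sync-replica acknowledgement
    retries=3,
)

# Publish a hypothetical "order created" event and block until the broker confirms it.
future = producer.send("orders.created", key=b"42", value={"order_id": 42, "status": "CREATED"})
metadata = future.get(timeout=10)
print(f"written to {metadata.topic} partition {metadata.partition} offset {metadata.offset}")
producer.flush()
```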

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Ignition Application Administrator Position: We are seeking a highly motivated Ignition Application Administrator to join the Enterprise Services – Data team. Working very closely with peer platform administrators, developers, Product/Project Seniors and Customers, you will play an active role in administering the existing analytics platforms. You will join a team of platform administrators who are specialized in one tool, but cross-trained on other tools. While you will focus on Ignition, administration knowledge of these other platforms is beneficial – Qlik Sense, Tableau, PowerBI, SAP Business Objects, Matillion, Snowflake, Informatica (EDC, IDQ, Axon), Alteryx, HVR or Databricks. This role requires a willingness to dive into complex problems to help the team find elegant solutions. How you communicate and approach problems is important to us. We are looking for team players, who are willing to bring people across the disciplines together. This position will provide the unique opportunity to operate in a start-up-like environment within a Fortune 50 company. Our digital focus is geared towards releasing the insights inherent to our best-in-class products and services. Together we aim to achieve new levels of productivity by changing the way we work and identifying new sources of growth for our customers. Responsibilities include, but are not limited to, the following: Install and configure Ignition. Monitor the Ignition platform, including integration with observability and alerting solutions, and recommend platform improvements. Troubleshoot and resolve Ignition platform issues. Configure data source connections and manage asset libraries. Identify and raise system capacity related issues (storage, licenses, performance threshold). Define best practices for Ignition deployment. Integrate Ignition with other ES Data platforms and Business Unit installations of Ignition. Participate in overall data platform architecture and strategy. Research and recommend alternative actions for problem resolution based on best practices and application functionality with minimal direction. Knowledge and Skills: 3+ years working in customer success or in a customer-facing engineering capacity is required. Large scale implementation experience with complex solutions environment. Experience in customer-facing positions, preferably industry experience in technology-based solutions. Experience being able to navigate, escalate and lead efforts on complex customer/partner requests or projects. Experience with Linux command line. An aptitude for both analysing technical concepts and translating them into business terms, as well as for mapping business requirements into technical features. Knowledge of the software development process and of software design methodologies helpful 3+ years’ experience in a cloud ops / Kubernetes application deployment and management role, working with an enterprise software or data product. Experience with Attribute-based Access Control (ABAC), Virtual Director Services (VDS), PING Federate or Azure Active Directory (AAD) helpful. Cloud platform architecture, administration and programming experience desired. 
Experience with Helm, Argo CD, Docker, and cloud networking. Excellent communication skills: interpersonal, written, and verbal.

Education and Work Experience: This position requires a minimum of a BA/BS degree (or equivalent) in technology, computing, or a related field of study. Experience in lieu of education may be considered if the individual has ten (10) or more years of relevant experience.

Hours: Normal work schedule hours may vary, Monday through Friday. May be required to work flexible hours and/or weekends, as needed, to meet deadlines or to fulfil application administration obligations.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
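Since the role above asks for Kubernetes application deployment and management experience alongside Helm and Argo CD, here is a minimal, hypothetical sketch of the kind of readiness check such an administrator might script with the official Kubernetes Python client. The deployment name and namespace are invented for illustration.

```python
from kubernetes import client, config

config.load_kube_config()          # use config.load_incluster_config() when running inside a pod
apps = client.AppsV1Api()

# Hypothetical deployment/namespace; compare desired replicas with those reporting ready.
dep = apps.read_namespaced_deployment(name="ignition-gateway", namespace="es-data")
ready = dep.status.ready_replicas or 0
desired = dep.spec.replicas or 0
print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")
if ready < desired:
    raise SystemExit(1)            # non-zero exit so a cron job or pipeline can alert on it
```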

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Ignition Application Administrator Position: We are seeking a highly motivated Ignition Application Administrator to join the Enterprise Services – Data team. Working very closely with peer platform administrators, developers, Product/Project Seniors and Customers, you will play an active role in administering the existing analytics platforms. You will join a team of platform administrators who are specialized in one tool, but cross-trained on other tools. While you will focus on Ignition, administration knowledge of these other platforms is beneficial – Qlik Sense, Tableau, PowerBI, SAP Business Objects, Matillion, Snowflake, Informatica (EDC, IDQ, Axon), Alteryx, HVR or Databricks. This role requires a willingness to dive into complex problems to help the team find elegant solutions. How you communicate and approach problems is important to us. We are looking for team players, who are willing to bring people across the disciplines together. This position will provide the unique opportunity to operate in a start-up-like environment within a Fortune 50 company. Our digital focus is geared towards releasing the insights inherent to our best-in-class products and services. Together we aim to achieve new levels of productivity by changing the way we work and identifying new sources of growth for our customers. Responsibilities include, but are not limited to, the following: Install and configure Ignition. Monitor the Ignition platform, including integration with observability and alerting solutions, and recommend platform improvements. Troubleshoot and resolve Ignition platform issues. Configure data source connections and manage asset libraries. Identify and raise system capacity related issues (storage, licenses, performance threshold). Define best practices for Ignition deployment. Integrate Ignition with other ES Data platforms and Business Unit installations of Ignition. Participate in overall data platform architecture and strategy. Research and recommend alternative actions for problem resolution based on best practices and application functionality with minimal direction. Knowledge and Skills: 3+ years working in customer success or in a customer-facing engineering capacity is required. Large scale implementation experience with complex solutions environment. Experience in customer-facing positions, preferably industry experience in technology-based solutions. Experience being able to navigate, escalate and lead efforts on complex customer/partner requests or projects. Experience with Linux command line. An aptitude for both analysing technical concepts and translating them into business terms, as well as for mapping business requirements into technical features. Knowledge of the software development process and of software design methodologies helpful 3+ years’ experience in a cloud ops / Kubernetes application deployment and management role, working with an enterprise software or data product. Experience with Attribute-based Access Control (ABAC), Virtual Director Services (VDS), PING Federate or Azure Active Directory (AAD) helpful. Cloud platform architecture, administration and programming experience desired. 
Experience with Helm, Argo CD, Docker, and cloud networking. Excellent communication skills: interpersonal, written, and verbal.

Education and Work Experience: This position requires a minimum of a BA/BS degree (or equivalent) in technology, computing, or a related field of study. Experience in lieu of education may be considered if the individual has ten (10) or more years of relevant experience.

Hours: Normal work schedule hours may vary, Monday through Friday. May be required to work flexible hours and/or weekends, as needed, to meet deadlines or to fulfil application administration obligations.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Surat, Gujarat, India

On-site

About Us We are a premium menswear brand redefining modern Indian menswear with timeless sophistication and tailored craftsmanship. Primarily driven through e-commerce, our growing retail presence also plays a key role in delivering the brand experience to our discerning customers. Position Overview We are seeking an experienced and visionary Vice President – E-commerce & Retail Operations to lead and streamline our multi-channel operations. This is a strategic leadership role responsible for end-to-end operational execution across both digital and offline sales channels. The VP will oversee Six key departments — from customer care and logistics to social media and offline retail — ensuring a unified brand experience and operational excellence. Key Responsibilities 1. Strategic Leadership Drive operational strategy across online and offline channels to support company growth. Lead 6 department heads across: Customer Care, Shipping, NDR (Non-Delivery Report), Warehouse, Social Media, Photo Editing. Translate company vision into actionable operational plans and department goals. 2. E-commerce Operations Ensure seamless functioning of all online order workflows — from order placement to last-mile delivery. Optimize NDR management and logistics coordination for improved fulfilment rates and cost efficiency. Oversee product imagery and content workflow through the Photo Editing team for timely listings and brand-aligned visuals. Ensure high-performing customer service operations with a focus on customer satisfaction (CSAT), first response time, and resolution efficiency. 3. Offline Sales & Retail Operations Manage and scale offline sales strategy across company-owned or partnered retail stores and SISs. Ensure strong alignment between retail store operations and brand standards — including staff training, inventory flow, and customer service. Coordinate in-store product assortment and marketing efforts in sync with online launches. 4. Brand & Digital Engagement Supervise the Social Media team to drive consistent storytelling and digital engagement across platforms. Ensure timely coordination between social campaigns, product drops, and stock readiness. 5. Inventory, Fulfilment & Logistics Oversee warehouse operations to maintain accuracy, efficiency, and scalability. Work closely with logistics partners to improve speed, reduce cost, and minimize delivery issues. Optimize inventory allocation between online and offline channels. 6. Data, Reporting & Team Development Define and track KPIs for each function; use dashboards to monitor operational health. Lead weekly reviews, strategic planning sessions, and performance evaluations. Mentor department heads and build high-performance teams across functions. Qualifications & Skills Bachelor’s degree in Business, Supply Chain, Retail Management, or related field; MBA preferred. 5–7 years of experience in Ecommerce, including 5+ years in a leadership role. Strong expertise in e-commerce and omnichannel operations. Proven ability to lead diverse teams and scale operations in a fast-paced environment. Analytical thinker with strong decision-making, communication, and problem-solving skills. Passion for fashion, detail orientation, and a customer-first approach. CTC for this role is between ₹4.20 – ₹5.40 LPA , depending on experience and qualifications. Why Join Us? Be at the operational helm of one of India’s rising premium menswear brands. Opportunity to scale both online and offline presence with full ownership of operational strategy. 
A culture of innovation, speed, and quality with strong growth prospects.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Company Description: ACC is an AWS Advanced Partner with the AWS Mobility Competency. Awarded Best BFSI Industry Consulting Partner for 2019, ACC has several successful cloud migration and application development projects to its credit. Our business offerings include Digitalization, Cloud Services, Product Engineering, Big Data and Analytics, and Cloud Security. ACC has also developed several products, including Ottohm – Enterprise Video and OTT Platform, Atlas API – API Management and Development Platform, Atlas CLM – Cloud Life Cycle Management, Atlas HCM – HR Digital Onboarding and Employee Management, Atlas ITSM – Vendor Onboarding and Service Management, and Smart Contracts – Contract Automation and Management. Website: http://www.appliedcloudcomputing.com/

Job Description
Experience: 5+ years overall in Cloud Operations, including a minimum of 5 years of hands-on experience with Google Cloud Platform (GCP) and a minimum of 3 years of experience in Kubernetes administration.
Certifications: ✅ GCP Certified Professional – Mandatory
Work Hours: 24x7 support coverage, rotational shifts (including night and weekend shifts)

Key Responsibilities: Manage and monitor GCP infrastructure resources, ensuring optimal performance, availability, and security. Administer Kubernetes clusters: deployment, scaling, upgrades, patching, and troubleshooting. Implement and maintain automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar. Respond to incidents, perform root cause analysis, and drive issue resolution within SLAs. Configure logging, monitoring, and alerting solutions across GCP and Kubernetes environments. Support CI/CD pipelines and integrate Kubernetes deployments with DevOps processes. Maintain detailed documentation of processes, configurations, and runbooks. Work collaboratively with Development, Security, and Architecture teams to ensure compliance and best practices. Participate in an on-call rotation and respond promptly to critical alerts.

Required Skills & Qualifications: GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent). Strong working knowledge of GCP services (Compute Engine, GKE, Cloud Storage, IAM, VPC, Cloud Monitoring, etc.). Solid experience in Kubernetes cluster administration (setup, scaling, upgrades, security hardening). Proficiency with Infrastructure as Code tools (Terraform, Deployment Manager). Knowledge of containerization concepts and tools (Docker). Experience in monitoring and observability (Prometheus, Grafana, Stackdriver). Familiarity with incident management and ITIL processes. Ability to work in 24x7 operations with rotating shifts. Strong troubleshooting and problem-solving skills.

Preferred Skills (Nice to Have): Experience supporting multi-cloud environments. Scripting skills (Python, Bash, Go). Exposure to other cloud platforms (AWS, Azure). Familiarity with security controls and compliance frameworks.
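For the Helm-based automation this role describes, a deployment step often reduces to wrapping the Helm CLI. The sketch below is a generic example under assumed names (release, chart path, namespace, and values file are placeholders), not ACC's actual tooling.

```python
import subprocess

def helm_deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    # --atomic rolls the release back automatically on failure; --wait blocks until resources are ready.
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--values", values_file,
        "--atomic", "--wait", "--timeout", "10m",
    ]
    subprocess.run(cmd, check=True)   # raises CalledProcessError if the deploy fails

# Hypothetical release and chart path for illustration.
helm_deploy("payments-api", "./charts/payments-api", "prod", "values-prod.yaml")
```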

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Ignition Application Administrator Position: We are seeking a highly motivated Ignition Application Administrator to join the Enterprise Services – Data team. Working very closely with peer platform administrators, developers, Product/Project Seniors and Customers, you will play an active role in administering the existing analytics platforms. You will join a team of platform administrators who are specialized in one tool, but cross-trained on other tools. While you will focus on Ignition, administration knowledge of these other platforms is beneficial – Qlik Sense, Tableau, PowerBI, SAP Business Objects, Matillion, Snowflake, Informatica (EDC, IDQ, Axon), Alteryx, HVR or Databricks. This role requires a willingness to dive into complex problems to help the team find elegant solutions. How you communicate and approach problems is important to us. We are looking for team players, who are willing to bring people across the disciplines together. This position will provide the unique opportunity to operate in a start-up-like environment within a Fortune 50 company. Our digital focus is geared towards releasing the insights inherent to our best-in-class products and services. Together we aim to achieve new levels of productivity by changing the way we work and identifying new sources of growth for our customers. Responsibilities include, but are not limited to, the following: Install and configure Ignition. Monitor the Ignition platform, including integration with observability and alerting solutions, and recommend platform improvements. Troubleshoot and resolve Ignition platform issues. Configure data source connections and manage asset libraries. Identify and raise system capacity related issues (storage, licenses, performance threshold). Define best practices for Ignition deployment. Integrate Ignition with other ES Data platforms and Business Unit installations of Ignition. Participate in overall data platform architecture and strategy. Research and recommend alternative actions for problem resolution based on best practices and application functionality with minimal direction. Knowledge and Skills: 3+ years working in customer success or in a customer-facing engineering capacity is required. Large scale implementation experience with complex solutions environment. Experience in customer-facing positions, preferably industry experience in technology-based solutions. Experience being able to navigate, escalate and lead efforts on complex customer/partner requests or projects. Experience with Linux command line. An aptitude for both analysing technical concepts and translating them into business terms, as well as for mapping business requirements into technical features. Knowledge of the software development process and of software design methodologies helpful 3+ years’ experience in a cloud ops / Kubernetes application deployment and management role, working with an enterprise software or data product. Experience with Attribute-based Access Control (ABAC), Virtual Director Services (VDS), PING Federate or Azure Active Directory (AAD) helpful. Cloud platform architecture, administration and programming experience desired. 
Experience with Helm, Argo CD, Docker, and cloud networking. Excellent communication skills: interpersonal, written, and verbal.

Education and Work Experience: This position requires a minimum of a BA/BS degree (or equivalent) in technology, computing, or a related field of study. Experience in lieu of education may be considered if the individual has ten (10) or more years of relevant experience.

Hours: Normal work schedule hours may vary, Monday through Friday. May be required to work flexible hours and/or weekends, as needed, to meet deadlines or to fulfil application administration obligations.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

DevOps Engineer – Bangalore

Job Description: DevOps Engineer, Qilin Lab, Bangalore, India

Role: We are seeking an experienced DevOps Engineer to deliver insights from massive-scale data in real time. Specifically, we're searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for everyone.

Responsibilities of this role: Work with DevOps to run the production environment by monitoring availability and taking a holistic view of system health. Build software and systems to manage our Data Platform infrastructure. Improve reliability, quality, and time-to-market of our Global Data Platform. Measure and optimize system performance and innovate for continual improvement. Provide operational support and engineering for a distributed platform at scale. Define, publish and defend service-level objectives (SLOs). Partner with data engineers to improve services through rigorous testing and release procedures. Participate in system design, platform management and capacity planning. Create sustainable systems and services through automation and automated run-books. Take a proactive approach to identifying problems and seeking areas for improvement. Mentor the team in infrastructure best practices.

Requirements: Bachelor's degree in Computer Science or an IT-related field, or equivalent practical experience with a proven track record. The following hands-on working knowledge and experience is required: Kubernetes, EC2, RDS, ELK Stack, Cloud Platforms (AWS, Azure, GCP) – preferably AWS. Building and operating clusters and related technologies such as Containers, Helm, Kustomize, ArgoCD. Ability to program (structured and OOP) using at least one high-level language such as Python, Java, Go, etc. Agile Methodologies (Scrum, TDD, BDD, etc.). Continuous Integration and Continuous Delivery tools (GitOps). Terraform, Unix/Linux environments. Experience with several of the following tools/technologies is desirable: Big Data platforms (e.g. Apache Hadoop and Apache Spark), Streaming Technologies (Kafka, Kinesis, etc.), ElasticSearch Service, Mesh/Orchestration technologies (e.g. Argo). Knowledge of the following is a plus: Security (OWASP, SIEM, etc.), Infrastructure testing (Chaos, Load, Security), GitHub, Microservices architectures.

Notice period: Immediate to 15 days. Experience: 3 to 5 years. Job Type: Full-time. Schedule: Day shift, Monday to Friday. Work Location: On Site. Job Type: Payroll.

Must Have Skills: Python - 3 Years - Intermediate; DevOps - 3 Years - Intermediate; AWS - 2 Years - Intermediate; Agile Methodology - 3 Years - Intermediate; Kubernetes - 3 Years - Intermediate; ElasticSearch - 3 Years - Intermediate (ref:hirist.tech)
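One responsibility above is defining and defending SLOs. As a hedged illustration only, the snippet below exposes request-count and latency metrics with prometheus_client; an availability SLI can then be derived in Prometheus from the ratio of non-error requests. The metric names and simulated handler are invented, not the employer's services.

```python
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                                 # records duration into the histogram
        time.sleep(random.uniform(0.01, 0.2))            # simulated work
        status = "500" if random.random() < 0.01 else "200"
        REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at :8000/metrics for Prometheus to scrape
    while True:
        handle_request()
```

An availability SLI could then be expressed in PromQL as the rate of non-5xx requests divided by the total request rate over the SLO window.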

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 3 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing data at petabyte scale, helping teams enhance reliability, detect anomalies early, and drive operational excellence.

What You'll Do: Configure and manage observability agents across AWS, Azure & GCP. Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack. Work with different language stacks such as Java, Ruby, Python and Go. Instrument services using OpenTelemetry and integrate telemetry pipelines. Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs. Create dashboards, set up alerts, and track SLIs/SLOs. Enable RCA and incident response using observability data. Secure the observability pipeline.

You Bring: BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering. Strong skills in reading and interpreting logs, metrics, and traces. Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar, e.g. Jaeger, Datadog, Zipkin, InfluxDB. Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru. Knowledge of OpenTelemetry, IaC, and security best practices. Clear documentation of observability processes, logging standards & instrumentation guidelines. Ability to proactively identify, debug, and resolve issues using observability data. Focus on maintaining data quality and integrity across the observability pipeline.
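The posting asks for instrumenting services with OpenTelemetry. A minimal Python sketch of manual instrumentation is shown below; it exports spans to the console, whereas a real pipeline would swap in an OTLP exporter pointed at Tempo or Jaeger. The service and span names are placeholders.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure the SDK: batch spans and print them; production would use an OTLP exporter instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")   # hypothetical service name

def place_order(order_id: str) -> None:
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # downstream call would be made here and appear as a child span

place_order("ord-123")
```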

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Tamil Nadu, India

On-site

Note: We’re hiring for a high-impact early-stage startup. Priority will be given to candidates from Tier-1 colleges and those currently working at fast-growing startups.

Required Skills: 3–5 years of professional full-stack development experience. Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar. Frontend experience with React, Vue.js, Next.js, or similar modern frameworks. Solid knowledge of and experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis, Neon). Strong API design skills (REST mandatory; GraphQL a plus). Containerization expertise with Docker. Container orchestration and management with Kubernetes (including experience with Helm charts).

Nice to have: Experience with TypeScript for full-stack AI SaaS development. Use of modern UI frameworks and tooling like Tailwind CSS. Familiarity with modern AI-first SaaS concepts, viz. vector databases for fast ML data retrieval, prompt engineering for LLM integration, integrating with OpenRouter or similar LLM orchestration frameworks, etc.
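Given the emphasis on backend frameworks such as FastAPI and REST-first API design, here is a small illustrative FastAPI service (not taken from the employer's codebase); the in-memory dictionary stands in for a real PostgreSQL or MongoDB store.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

_items: dict[int, Item] = {}   # placeholder store; swap for a real database layer

@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item) -> Item:
    _items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return _items[item_id]
```

Assuming the file is saved as main.py, it can be served locally with `uvicorn main:app --reload`.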

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Title: Senior DevOps Engineer - GCP
Type: Permanent
Region: India
Location: Noida, UP

About the Role: We are seeking a highly skilled and motivated DevOps GCP expert (AWS experience is an added advantage) to join our team. As a DevOps Engineer, you'll play a central role in our software development and operations. You'll build and maintain systems and processes to help us deliver top-notch software quickly and consistently. Working closely with our development, operations, and QA teams, you'll streamline workflows, automate tasks, and make sure our applications are scalable, secure, and perform well.

Responsibilities: Deploy and manage important applications in a microservices setup. Set up automation, good monitoring, and infrastructure as code. Create and maintain pipelines for continuous integration and deployment in different environments. Work with a diverse engineering team on new technologies. Improve deployment quality and speed by trying out better methods. Share knowledge within the engineering team to keep everyone informed. Help move old systems and find ways to save costs. Share on-call duties with the engineering team.

Required Experience: At least 5 years of experience in DevOps in a Linux environment. Experience managing and deploying reliable systems at scale. Ability to automate tasks using scripting languages like Bash, PHP, Python, or Ruby. Practical knowledge of Kubernetes, including Rancher and GKE. Expertise in working with multiple cloud platforms such as GCP, AWS, and OpenStack. Familiarity with version control systems like Git and GitLab. Experience implementing CI/CD pipelines using tools like GitLab, Jenkins, or Bamboo. Operational maintenance skills including high availability tuning and backups. Monitoring experience with tools like Prometheus, Grafana, and Zabbix. Experience with compliance logging at scale using platforms like BigQuery and ElasticSearch. Understanding of NoSQL databases like ElasticSearch and Redis. Familiarity with configuration management tools such as Rancher, Helm, and Puppet. Proficient communication skills, both verbal and written.

About Ovyo: Ovyo works globally with companies in the TV & media industries, including some of the top household brands. Our people build the platforms that shape the way the world watches video and connects, working on a mix of long-term customer engagements and shorter consulting projects, quickly fast-tracking their experience within the industry and their career. We are a modern, dynamic company with some of the best OTT engineers out there, and we focus on being a great place to work. Most of our technical teams are based in India and South Africa, but we also have people in the UK (where our management office is) and Europe.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do Design, develop, and operate high scale applications across the full engineering stack Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage sole project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity. Ensure high code quality through comprehensive unit, integration, and end-to-end testing, alongside participation in code reviews. What Experience You Need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, Angular UI, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs Design and build robust, user-friendly, and highly responsive web applications using Angular (versions 12+). Implement and manage micro-frontend architectures to foster independent deployments and enhance team autonomy. Collaborate closely with DevOps teams, contributing to CI/CD pipeline automation for seamless integration and deployment processes. Utilize Git and Gitflow workflows for efficient source code management, branching, and merging strategies. What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Deep expertise in designing and developing complex, high-performance web applications using Angular (v12+) including advanced state management, performance optimization, and modular design. Proven experience in implementing and managing micro-frontend solutions, enabling independent team development, scalable deployments, and enhanced application resilience. Strong command of core UI technologies including HTML, JavaScript, and CSS frameworks like Bootstrap, ensuring pixel-perfect and responsive user experiences. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. 
HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and Github) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI
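The "what could set you apart" list above mentions big data processing with Dataflow/Apache Beam. As a rough sketch only — written with Beam's Python SDK rather than the Java stack the posting emphasizes — the pipeline below sums amounts per user; on GCP it would run with the DataflowRunner instead of the default DirectRunner. The sample records are invented.

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:  # DirectRunner locally; pass DataflowRunner options on GCP
    (
        pipeline
        | "ReadEvents" >> beam.Create([
            {"user": "a", "amount": 10},
            {"user": "b", "amount": 25},
            {"user": "a", "amount": 5},
        ])
        | "KeyByUser" >> beam.Map(lambda e: (e["user"], e["amount"]))
        | "SumPerUser" >> beam.CombinePerKey(sum)   # -> ("a", 15), ("b", 25)
        | "Print" >> beam.Map(print)
    )
```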

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Veeam, the #1 global market leader in data resilience, believes businesses should control all their data whenever and wherever they need it. Veeam provides data resilience through data backup, data recovery, data portability, data security, and data intelligence. Based in Seattle, Veeam protects over 550,000 customers worldwide who trust Veeam to keep their businesses running. Join us as we move forward together, growing, learning, and making a real impact for some of the world’s biggest brands. The future of data resilience is here - go fearlessly forward with us. We’re looking for a Platform Engineer to join the Veeam Data Cloud. The mission of the Platform Engineering team is to provide a secure, reliable, and easy to use platform to enable our teams to build, test, deploy, and monitor the VDC product. This is an excellent opportunity for someone with cloud infrastructure and software development experience to build the world’s most successful, modern, data protection platform. Your Tasks Will Include Write and maintain code to automate our public cloud infrastructure, software delivery pipeline, other enablement tools, and internally consumed platform services Document system design, configurations, processes, and decisions to support our async, distributed team culture Collaborate with a team of remote engineers to build the VDC platform Work with a modern technology stack based on containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain On-call rotation for product operations Technologies We Work With Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, etc. What We Expect From You 3+ years of experience in production operations for a SaaS (Software as a Service) or cloud service provider Experience automating infrastructure through code using technologies such as Pulumi or Terraform Experience with GitHub Actions Experience with a breadth and depth of public cloud services Experience building and supporting enterprise SaaS products Understanding of the principles of operational excellence in a SaaS environment. Possessing scripting skills in languages like Bash or Python Understanding and experience implementing secure design principles in the cloud Demonstrated ability to learn new technologies quickly and implement those technologies in a pragmatic manner A strong bias toward action and direct, frequent communication A university degree in a technical field Will Be An Advantage Experience with Azure Experience with high-level programming languages such as Go, Java, C/C++, etc. We Offer Family Medical Insurance Annual flexible spending allowance for health and well-being Life insurance Personal accident insurance Employee Assistance Program A comprehensive leave package, including parental leave Meal Benefit Pass Transportation Allowance Monthly Daycare Allowance Veeam Care Days – additional 24 hours for your volunteering activities Professional training and education, including courses and workshops, internal meetups, and unlimited access to our online learning platforms (Percipio, Athena, O’Reilly) and mentoring through our MentorLab program Please note: If the applicant is permanently located outside India, Veeam reserves the right to decline the application. 
#Hybrid Veeam Software is an equal opportunity employer and does not tolerate discrimination in any form on the basis of race, color, religion, gender, age, national origin, citizenship, disability, veteran status or any other classification protected by federal, state or local law. All your information will be kept confidential. Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice. The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes. By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.
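The Veeam role above involves automating public cloud infrastructure with Terraform or Pulumi. A common building block is a thin wrapper around the Terraform CLI, sketched below under the assumption of a local working directory; the path is a placeholder and this is not Veeam's actual pipeline code.

```python
import subprocess

def terraform_plan(workdir: str) -> None:
    # init downloads providers/modules; plan with -detailed-exitcode returns 2 when changes are pending.
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode", "-out=tfplan"],
        cwd=workdir,
    )
    if result.returncode == 2:
        print("Changes detected; review tfplan before applying")
    elif result.returncode != 0:
        raise RuntimeError("terraform plan failed")

terraform_plan("./infrastructure/aks")   # hypothetical module directory
```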

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Delhi

On-site

As a Lead Engineer specializing in 4G/5G Packet Core & IMS, your primary responsibility will be to deploy, configure, test, integrate, and support 4G/5G Packet Core and IMS product stacks. You will collaborate closely with product engineering and customer teams to ensure the delivery of telecom solutions that comply with industry standards. This role will require you to work in cloud-native environments and utilize your skills to contribute to the success of the Services team. Your key responsibilities will include: 1. Configuration & Deployment: - Deploying and configuring 4G EPC, 5G Core (NSA/SA), and IMS components in OpenStack, Kubernetes, or OpenShift environments. - Automating deployments using tools like Ansible, Helm, Terraform, and CI/CD pipelines. - Ensuring secure, scalable, and resilient deployments across different environments. 2. Testing & Validation: - Developing and executing test plans for functional, performance, and interoperability testing. - Using telecom tools like Spirent for validation and KPI tracking. - Performing regression and acceptance testing in accordance with 3GPP standards. 3. Integration: - Integrating network elements such as MME, SGW, PGW, AMF, SMF, UPF, HSS, PCRF, and CSCF. - Ensuring seamless interworking of legacy and cloud-native network functions. 4. Release Management: - Managing version control, release cycles, rollout strategies, and rollback plans. - Coordinating with cross-functional teams for successful releases. - Maintaining detailed release documentation and logs. 5. Technical Documentation: - Preparing and maintaining High-Level and Low-Level Design documents, MoP & SOPs, deployment guides, technical manuals, and release notes. - Ensuring version-controlled and accessible documentation. 6. Field Support & Troubleshooting: - Providing on-site and remote deployment support. - Troubleshooting live network issues and conducting root cause analysis. - Ensuring service continuity through collaboration with internal and customer teams. 7. Collaboration with Product Development: - Working with development teams to identify, reproduce, and resolve bugs. - Providing logs, diagnostics, and test scenarios to support debugging. - Participating in sprint planning, reviews, and feedback sessions. To excel in this role, you must possess a Bachelor's/Master's degree in Telecommunications, Computer Science, or a related field. Additionally, you should have a deep understanding of 4G/5G Packet Core and IMS architecture and protocols, experience with cloud-native platforms, strong scripting skills, excellent communication abilities, and effective troubleshooting skills. This role presents a unique opportunity to be part of a strategic initiative aimed at building an indigenous telecom technology stack, shaping India's telecom future with cutting-edge LTE/5G core network solutions.,

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Sr DevOps Engineer
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office
Experience: 7+ years

Summary: The Senior DevOps Engineer is responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments.

Experience Required: 5–8 years in DevOps engineering roles with proven expertise in CI/CD, infrastructure automation, and Kubernetes.

Mandatory: • OS: Linux • Cloud: GCP (Compute Engine, Load Balancing, GKE, IAM) • CI/CD: Jenkins, GitHub Actions, Argo CD • Containers: Docker, Kubernetes • IaC: Terraform, Helm • Monitoring: Prometheus, Grafana, ELK • Security: Vault, Trivy, OWASP concepts

Nice to Have: • Service Mesh (Istio), Pub/Sub, API Gateway – Kong • Advanced scripting (Python, Bash, Node.js) • Skywalking, Rancher, Jira, Freshservice

Scope: • Own CI/CD strategy and configuration • Implement DevSecOps practices • Drive an automation-first culture

Roles and Responsibilities: • Design and implement end-to-end CI/CD pipelines using Jenkins, GitHub Actions, and Argo CD for production-grade deployments. • Define branching strategies and workflow templates for development teams. • Automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests across multiple environments. • Implement and maintain container orchestration strategies on GKE, including Helm-based deployments. • Manage the secrets lifecycle using Vault and integrate it with CI/CD for secure deployments. • Integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows. • Collaborate with engineering leads to review deployment readiness and ensure quality gates are met. • Monitor infrastructure health and capacity planning using Prometheus, Grafana, and Datadog; implement alerting rules. • Implement auto-scaling, self-healing, and other resilience strategies in Kubernetes. • Drive process documentation, review peer automation scripts, and mentor junior DevOps engineers.

Notice Period: Immediate to 30 days
Email to: sharmila.m@aptita.com
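For the secrets-lifecycle responsibility above, CI jobs commonly pull credentials from Vault at deploy time. The sketch below uses the hvac client against a KV v2 mount; the secret path, environment variables, and field name are assumptions for illustration, not this employer's configuration.

```python
import os
import hvac

# Vault address and token are expected in the environment (e.g. injected by the CI runner).
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
assert client.is_authenticated()

# Read a secret from the KV v2 engine mounted at the default "secret/" path.
secret = client.secrets.kv.v2.read_secret_version(path="ci/registry-creds")
password = secret["data"]["data"]["password"]   # KV v2 nests payload under data.data
print("registry credential fetched, length:", len(password))
```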

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Software Developer with 4 to 5 years of Java backend development experience using Spring Boot, you will be responsible for designing, developing, and maintaining RESTful backend services. Your expertise in cloud technologies such as AWS/Azure and DevOps practices will be vital in building and deploying applications with security and scalability on cloud platforms. Your role will involve creating and maintaining CI/CD pipelines for automated testing and deployment, as well as containerizing applications using Docker and managing them with Kubernetes. Monitoring system performance and troubleshooting issues across different environments will also be part of your responsibilities. Collaboration with QA, DevOps, and Product teams is essential to ensure the delivery of high-quality features. You will follow Agile practices, participate in daily stand-ups, sprint planning, and retrospectives to contribute effectively to the development process. Key skills required for this role include Java 8/11, Spring Boot, Spring MVC, Spring Data JPA, Spring Security, RESTful API development and integration, and understanding of microservices architecture. Experience with AWS services like EC2, S3, Lambda, RDS, ECS/EKS, or Azure services such as App Service, AKS, along with knowledge of cloud deployment and monitoring strategies is highly desirable. Proficiency in CI/CD tools like Jenkins, GitHub Actions, GitLab CI/CD, containerization with Docker, and orchestration using Kubernetes and Helm will be beneficial in fulfilling your responsibilities effectively. A Bachelor's degree in Engineering or a related field, along with 4 to 5 years of relevant experience, is preferred for this role.,
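Since the role above covers containerizing applications with Docker, here is a hedged example of driving a local build and run from Python with the Docker SDK; the image tag, port mapping, and environment variable are placeholders rather than project specifics.

```python
import docker

client = docker.from_env()

# Build an image from a local Dockerfile and run it, mapping container port 8080 to the host.
image, _ = client.images.build(path=".", tag="orders-service:0.1.0")
container = client.containers.run(
    image.id,
    detach=True,
    ports={"8080/tcp": 8080},
    environment={"SPRING_PROFILES_ACTIVE": "dev"},   # hypothetical Spring profile
)
print(container.logs(tail=20).decode())
container.stop()
```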

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Brief Description We are looking for a seasoned Senior Software Architect to lead and shape the technical direction of complex, scalable, and secure systems. This role demands a deep understanding of modern software architecture patterns and hands-on experience in building robust solutions for banking , financial platforms , and e-commerce systems with high transaction volumes. You will drive architectural decisions, lead engineering discussions, and guide the technical execution of large-scale digital products. Key Responsibilities: • Lead architectural strategy, design, and implementation for enterprise-grade applications. • Collaborate with product, DevOps, and engineering teams to translate business requirements into scalable, secure, and resilient architectures. • Own the architecture lifecycle, including blueprinting, documentation, system evaluation, and refactoring. • Conduct architecture reviews and ensure adherence to best practices, coding standards, and security guidelines. • Evaluate and recommend tools, technologies, and frameworks aligned with architectural vision. • Enable DevOps automation, CI/CD pipelines, infrastructure as code, and observability standards. • Provide mentorship and technical leadership to engineering teams across domains. Preferred Skills • 7+ years of experience in software development, with at least 3 years in a lead or architect role. • Proven experience designing distributed systems, large-scale APIs, and microservices-based applications. • Strong background in event-driven architecture, asynchronous processing, and domain-driven design (DDD). • Experience with REST, GraphQL, and API-first design principles. • Hands-on knowledge of AWS (preferred), Azure, or GCP, with practical experience in services like Lambda, ECS/EKS, S3, API Gateway, DynamoDB, SQS, etc. • Proficiency in Docker, Kubernetes, Helm, and containerized deployments. • Familiarity with Infrastructure as Code tools like Terraform or AWS CloudFormation. • Experience building high-security systems following standards like OWASP, PCI-DSS, and GDPR. • Deep knowledge of authentication, authorization protocols (OAuth2, OIDC, SAML, JWT), and IAM integration. • Experience designing high-performance systems with caching (Redis, Memcached), CDN strategies, and request optimization. • Strong data modeling experience with RDBMS (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra). • Exposure to messaging and streaming systems such as Kafka, RabbitMQ, or Amazon EventBridge. • Experience with software development using one or more of the following: Java, Python, Node.js, TypeScript, .NET, or Go. • Familiarity with CI/CD tools (GitHub Actions, Jenkins, GitLab CI) and version control systems like Git. • Understanding of observability, monitoring, and logging using tools like Prometheus, Grafana, ELK Stack, or Datadog. • Experience in architecting digital platforms in the banking, fintech, or e-commerce domains. • Exposure to Blockchain technologies, smart contracts, and decentralized systems. • Strong analytical, communication, and documentation skills with a leadership mindset. What We Offer: • Leadership role in architecting business-critical digital platforms. • Exposure to cutting-edge technologies and innovation-driven teams. • Competitive compensation and benefits package. • Opportunities for continuous learning and certifications. • A dynamic and flexible work environment. Other Info Requested: Notice period, CTC, and Expected CTC. 
Note: Kindly mention in the email subject "Application for the post of Senior Software Architect" to careers@spericorn.com
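Among the skills listed above is caching with Redis for high-performance systems. A minimal cache-aside sketch with redis-py follows; fetch_product_from_db is a hypothetical stand-in for the real data access layer, and the TTL is arbitrary.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product_from_db(product_id: str) -> dict:
    # Placeholder for the real database lookup.
    return {"id": product_id, "name": "demo product"}

def get_product(product_id: str) -> dict:
    cache_key = f"product:{product_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)                    # cache hit
    product = fetch_product_from_db(product_id)      # cache miss: fall back to the database
    r.setex(cache_key, 300, json.dumps(product))     # cache-aside write with a 5-minute TTL
    return product
```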

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

The Red Hat Developer team is seeking a Senior Software Engineer to join the team in India. In this role, you will be an integral part of the Developer team, contributing to the development of new features and upstream projects. The primary focus will be on maintaining the Helm partner/ecosystem program to ensure a top-notch Helm experience on OpenShift. As a member of a globally distributed team, you will collaborate with various Red Hat engineering teams, product managers, strategic partners, and open-source communities worldwide. To excel in this position, you should possess motivation, curiosity, problem-solving skills, and expertise in Kubernetes, the Go programming language, container technologies, and open-source development. Responsibilities: - Collaborate in a cross-functional team to deliver products - Contribute to open-source projects - Communicate with engineering and management teams globally - Provide code and peer reviews - Document and maintain software functionality - Participate in a scrum team - Ensure proper testing of projects - Propose new processes to enhance the quality, consistency, and automation of releases - Share the team's work through blogs, web postings, and conference presentations - Engage in community outreach and establish partnerships with external communities Requirements: - Proficiency in Kubernetes with hands-on experience as a developer or administrator - Proficiency in Golang and Python - Familiarity with Helm concepts - Knowledge of Open-Source release best practices - Experience in Cloud Native application development - Strong communication skills, accountability, and eagerness to learn and teach - Ability to define, follow, and enforce processes - Excellent written and verbal communication skills in English Preferred Skills: - Skillful in building and publishing Helm Charts - Understanding of Networking and Storage requirements for Helm repository - Familiarity with Kubernetes Custom Resource Definition (CRDs) concepts - Experience with CI continuous delivery (CD) systems - Passion for open-source development - Experience in leading or contributing to open-source communities - Bachelor's degree in computer science or related field, or equivalent working experience Red Hat is a global leader in enterprise open-source software solutions, leveraging a community-driven approach to deliver high-performing technologies such as Linux, cloud, containers, and Kubernetes. With associates in over 40 countries, Red Hat fosters a flexible work environment, including in-office, office-flex, and fully remote options. Creativity and passion are valued at Red Hat, regardless of title or tenure, in an open and inclusive culture that encourages the contribution of ideas to solve complex problems and drive impact. Inclusion at Red Hat is grounded in the open-source principles of transparency, collaboration, and inclusion, where diverse ideas and perspectives converge to challenge norms and foster innovation. Red Hat is committed to providing equal opportunity and access for all, celebrating the diversity of voices and experiences that enrich our global community. Red Hat supports individuals with disabilities and offers reasonable accommodations to job applicants. For assistance with completing the online job application, please contact application-assistance@redhat.com. General inquiries about job application status will not receive a response.,
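The preferred skills above include building and publishing Helm charts. As an illustrative sketch of that workflow (the upstream work itself is in Go and Python; this simply drives the Helm CLI), the script lints, packages, and pushes a chart to an OCI registry. The chart path and registry are placeholders, and OCI push requires Helm 3.8 or later.

```python
import os
import subprocess

def publish_chart(chart_dir: str, registry: str) -> None:
    os.makedirs("dist", exist_ok=True)
    subprocess.run(["helm", "lint", chart_dir], check=True)
    result = subprocess.run(
        ["helm", "package", chart_dir, "--destination", "dist"],
        check=True, capture_output=True, text=True,
    )
    # helm package prints "Successfully packaged chart and saved it to: dist/<name>-<version>.tgz"
    tgz = result.stdout.strip().split(": ")[-1]
    subprocess.run(["helm", "push", tgz, f"oci://{registry}"], check=True)

publish_chart("./charts/example-operator", "quay.io/myorg/charts")   # hypothetical names
```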

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Delhi

On-site

You are a highly experienced DevOps Architect and Level 4 DevOps Subject Matter Expert (SME) with over 10 years of relevant experience in the field of DevOps. Your expertise lies in building scalable, secure, and fully automated infrastructure environments, with a focus on delivering robust DevOps solutions, establishing architecture best practices, and driving automation across development and operations teams. Your role involves deep hands-on expertise in continuous integration and continuous delivery (CI/CD) tools like Jenkins, Azure DevOps, Helm, GitOps, and ArgoCD to implement reliable, automated software delivery pipelines. You possess advanced Infrastructure as Code (IaC) experience using tools such as Terraform, Ansible, SaltStack, ARM Templates, and Google Cloud Deployment Manager for scalable and consistent infrastructure provisioning. You are an expert in container platforms, particularly Kubernetes and Docker, for orchestrated, secure, and highly available deployments. Your proficiency extends to Kubernetes operations, including production-grade cluster management, autoscaling, Helm chart development, RBAC configuration, ingress controllers, and network policy enforcement. Furthermore, you have extensive cloud experience across AWS, Azure, and GCP, with deep knowledge of core services, networking, storage, identity, and security implementations. Your scripting and automation capabilities using Bash, Python, or Go enable you to develop robust automation tools and system-level integrations. In addition, you have comprehensive monitoring and observability expertise with Prometheus, Grafana, and the ELK stack for end-to-end visibility, alerting, and performance analysis. You are skilled in designing and implementing secure, scalable, and resilient DevOps architectures aligned with industry best practices for both cloud-native and hybrid environments. Your experience also includes artifact management using JFrog Artifactory or Nexus, performing root cause analysis, developing self-healing scripts, and ensuring high system availability and minimal disruption. You are familiar with DevSecOps and compliance frameworks, mentoring engineering teams in DevOps adoption, tooling, automation strategies, and architectural decision-making. As a recognized DevOps expert and L4 SME, you continuously evaluate and recommend emerging tools, frameworks, and practices to enhance deployment speed, pipeline efficiency, and platform reliability. Your strong communication skills allow you to present and explain architectural strategies and system design decisions to both technical and non-technical stakeholders with clarity and confidence.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

You will be responsible for implementing and managing CI/CD pipelines, container orchestration, and cloud services to enhance our software development lifecycle, collaborating with development and operations teams to streamline processes and improve deployment efficiency.

Responsibilities:
- Implement and manage CI/CD tools such as GitLab CI, Jenkins, or CircleCI.
- Utilize Docker and Kubernetes (K8s) for containerization and orchestration of applications.
- Write and maintain scripts in at least one scripting language (e.g., Python, Bash) to automate tasks (see the sketch below this listing).
- Manage and deploy applications using cloud services (e.g., AWS, Azure, GCP) and their respective management tools.
- Understand and apply network protocols, IP networking, load balancing, and firewalling concepts.
- Implement infrastructure as code (IaC) practices to automate infrastructure provisioning and management.
- Utilize logging and monitoring tools (e.g., ELK stack, OpenSearch, Prometheus, Grafana) to ensure system reliability and performance.
- Apply GitOps practices using tools like Flux or ArgoCD for continuous delivery.
- Work with Helm and Flyte for managing Kubernetes applications and workflows.

Qualifications:
- Bachelor's or master's degree in computer science or a related field.
- Proven experience in a DevOps engineering role.
- Strong background in software development and system administration.
- Experience with CI/CD tools and practices.
- Proficiency in Docker and Kubernetes.
- Familiarity with cloud services and their management tools.
- Understanding of networking concepts and protocols.
- Experience with infrastructure as code (IaC) practices.
- Familiarity with logging and monitoring tools.
- Knowledge of GitOps practices and tools.
- Experience with Helm and Flyte is a plus.

Preferred Qualifications:
- Experience with cloud-native architectures and microservices.
- Knowledge of security best practices in DevOps and cloud environments.
- Understanding of database management and optimization (e.g., SQL, NoSQL).
- Familiarity with Agile methodologies and practices.
- Experience with performance tuning and optimization of applications.
- Knowledge of backup and disaster recovery strategies.
- Familiarity with emerging DevOps tools and technologies.
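To illustrate the kind of automation scripting this listing asks for, a minimal Go sketch follows that polls a service health endpoint until it responds, the sort of check a CI/CD step might run after a deployment. The URL, attempt count, and delay are hypothetical values.

```go
// A minimal sketch of a post-deployment readiness check: poll a health
// endpoint and fail the step if it never becomes healthy. Values are
// hypothetical placeholders.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or attempts are exhausted.
func waitHealthy(url string, attempts int, delay time.Duration) error {
	for i := 1; i <= attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthy after %d attempt(s)\n", i)
				return nil
			}
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("%s not healthy after %d attempts", url, attempts)
}

func main() {
	// Hypothetical readiness endpoint exposed by the deployed application.
	if err := waitHealthy("http://localhost:8080/healthz", 30, 2*time.Second); err != nil {
		log.Fatal(err)
	}
}
```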

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

You are joining an innovation team with a mission to revolutionize how enterprises utilize AI. Operating with the agility of a startup and the focus of an incubator, we are assembling a close-knit group of AI and infrastructure experts fueled by bold ideas and a common objective: to reimagine systems from the ground up and deliver groundbreaking solutions that redefine what's achievable, faster, leaner, and smarter. In our fast-paced, experimentation-rich environment, where new technologies are not just embraced but expected, you will collaborate closely with seasoned engineers, architects, and visionaries to develop iconic products capable of reshaping industries and introducing entirely new operational models for enterprises. If you are invigorated by the prospect of tackling challenging problems, enjoy pushing the boundaries of what is possible, and are eager to contribute to shaping the future of AI infrastructure, we are excited to connect with you.

Cisco is seeking a forward-thinking Architect for AI Infrastructure Software to spearhead the development of its next-generation AI infrastructure platform. This strategic leadership position sits at the intersection of software engineering and AI systems and requires you to define the vision, architecture, and execution of high-performance software that directly influences how enterprises deploy, scale, and optimize AI workloads. Your responsibilities will include mentoring a high-caliber team, delivering robust control and data plane solutions, and operating them as a SaaS service with a relentless focus on uptime, quality, and customer success. Additionally, you will guide strategic decisions on resource usage in generative AI systems and collaborate across functions to align product direction with infrastructure capabilities.

Key Responsibilities:
- Architect and develop a SaaS control plane emphasizing ease of use, scalability, and reliability.
- Design data models that drive APIs, ensuring best practices for usability and operations (a minimal sketch follows this listing).
- Utilize Kubernetes (K8s) to build scalable, resilient, and high-availability (HA) architectures.
- Demonstrate a deep understanding of NVIDIA and AMD metric collection and AI-driven analysis.
- Plan and coordinate engineering work, map tasks to releases, conduct code reviews, and address technical challenges to facilitate releases.
- Generate architecture specifications and develop proof-of-concept (POC) solutions for clarity as necessary.
- Collaborate with product management to understand customer requirements and build architecturally sound solutions, working closely with engineers on implementation to ensure alignment with architectural requirements.
- Manage technical debt with a strong emphasis on upholding product quality.
- Integrate AI tools into everyday engineering practices, including code reviews, early bug detection, and test coverage automation.

Required Skills:
- Deep expertise in Golang, Python, C++, and eBPF.
- Proficiency in Kubernetes (K8s), Helm, Kubebuilder, and the Kubernetes Operator pattern.
- Hands-on experience with CI/CD pipelines and their impact on release quality.
- Demonstrated experience in building and running SaaS services.
- Strong design skills in distributed systems and large-scale data collection.
- Familiarity with SLA/SLO principles and managing application scalability.
- Practical experience with the NVIDIA stack and CUDA development.

Minimum Qualifications:
- Demonstrable experience in Golang development.
- Experience leading CI/CD tools and API-first design practices.
- Experience operating Kubernetes for running SaaS services.
- Experience with AI tools and generative AI applications for engineering.
- Comprehensive understanding of software release processes, including the use of feature flags to ensure predictability.
- Proficiency in utilizing agents during coding, review, CI, and CD processes.
- Bachelor's degree or equivalent with 10+ years of engineering experience.

Preferred Qualifications:
- Proven leadership experience in building and guiding SaaS software teams in high-growth, dynamic environments.
- Master's degree or equivalent.

#WeAreCisco, where every individual brings their unique skills and perspectives together to pursue our purpose of powering an inclusive future for all. Our passion is connection: we celebrate our employees' diverse set of backgrounds and focus on unlocking potential. Cisconians often experience one company, many careers, where learning and development are encouraged and supported at every stage. Our technology, tools, and culture pioneered hybrid work trends, allowing all to not only give their best but be their best. We understand our outstanding opportunity to bring communities together, and at the heart of that is our people. One-third of Cisconians collaborate in our 30 employee resource organizations, called Inclusive Communities, to connect, foster belonging, learn to be informed allies, and make a difference. Dedicated paid time off to volunteer, 80 hours each year, allows us to give back to causes we are passionate about, and nearly 86% do! Our purpose, driven by our people, is what makes us the worldwide leader in technology that powers the internet. Helping our customers reimagine their applications, secure their enterprise, transform their infrastructure, and meet their sustainability goals is what we do best. We ensure that every step we take is a step towards a more inclusive future for all. Take your next step and be you, with us!
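As a small illustration of the "data models drive APIs" responsibility above, here is a minimal Go sketch of a control-plane-style read endpoint for GPU node inventory. The GPUNode type, route, and sample data are hypothetical and not taken from the posting.

```go
// A minimal sketch of a data model exposed through a read-only API, in the
// spirit of a SaaS control plane. All names and figures are hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// GPUNode is a hypothetical control-plane data model for one accelerator node.
type GPUNode struct {
	Name        string  `json:"name"`
	GPUModel    string  `json:"gpuModel"`
	GPUCount    int     `json:"gpuCount"`
	Utilization float64 `json:"utilization"` // 0.0 - 1.0, from metric collection
}

var inventory = []GPUNode{
	{Name: "node-a1", GPUModel: "H100", GPUCount: 8, Utilization: 0.72},
	{Name: "node-a2", GPUModel: "MI300X", GPUCount: 8, Utilization: 0.41},
}

// listNodes serializes the inventory as JSON for API consumers.
func listNodes(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(inventory)
}

func main() {
	http.HandleFunc("/v1/gpu-nodes", listNodes)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Keeping the wire format derived directly from the Go type is one way the "data models drive APIs" principle tends to show up in practice.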

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a Senior DevOps Engineer to join our DevOps team in India. This is an amazing opportunity to work on key products in the Intellectual Property space, leveraging tools like Kubernetes, Terraform, Datadog, Jenkins, and a wide range of AWS services. We have a strong skill set in Kubernetes, AWS, CI/CD automation, and monitoring, and we would love to speak with you if you have experience with Kubernetes, Terraform, and pipelines.

About You – Experience, Education, Skills, and Accomplishments:
- Bachelor's degree in computer science, engineering, or equivalent experience.
- 5+ years of overall experience in development, with 3+ years in a DevOps-focused role.
- 2+ years of AWS experience managing services like RDS, EC2, IAM, Route53, VPC, EKS, Beanstalk, WAF, CloudFront, and Lambda.
- Hands-on experience with Kubernetes, Docker, and Terraform, including writing Dockerfiles and Kubernetes manifests.
- Experience building pipelines using Jenkins, Bamboo, or similar tools.
- Hands-on experience working with monitoring tools.
- Scripting knowledge in Linux/Bash.

It would be great if you also have:
- Experience with Helm and other Kubernetes tools.
- Python programming experience.

What will you be doing in this role?
- Upgrade and enhance Kubernetes clusters.
- Troubleshoot environment-related issues alongside development teams.
- Extend and improve monitoring capabilities.
- Develop and optimize CI/CD pipelines.
- Write infrastructure as code using Terraform.
- Share innovative ideas and contribute to team improvement.

About The Team: Our team supports several products within the Intellectual Property space, built on technologies like Kubernetes (EKS), Jenkins, AWS services (Beanstalk, IAM, EC2, RDS, Route53, Lambda, CloudFront, WAF, S3), Bamboo, and Kong. We work closely with Developers and QA to deploy and improve systems, and bring in-depth knowledge of AWS, networking, monitoring, security, and infrastructure configuration.

Hours of Work: This is a full-time opportunity with Clarivate, 9 hours per day including a 1-hour lunch break. At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
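For illustration of the deployment work described in this listing, a minimal Go sketch follows that applies a Kubernetes manifest and waits for the rollout to complete by shelling out to kubectl. The namespace, manifest path, and deployment name are hypothetical.

```go
// A minimal sketch of one deployment step: apply a manifest, then block
// until the rollout finishes or times out. All names are hypothetical.
package main

import (
	"log"
	"os"
	"os/exec"
)

// kubectl runs a kubectl subcommand, streaming output to the console.
func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	namespace := "ip-services"          // hypothetical namespace
	manifest := "./k8s/deployment.yaml" // hypothetical manifest path

	if err := kubectl("apply", "-f", manifest, "-n", namespace); err != nil {
		log.Fatalf("apply failed: %v", err)
	}
	// Block until the new ReplicaSet is fully rolled out (or time out).
	if err := kubectl("rollout", "status", "deployment/search-api",
		"-n", namespace, "--timeout=180s"); err != nil {
		log.Fatalf("rollout did not complete: %v", err)
	}
}
```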

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

maharashtra

On-site

You have an exciting opportunity to join Ripplehire as a Senior DevOps Team Lead - GCP Specialist. In this role, you will play a crucial part in shaping and executing the cloud infrastructure strategy using Google Cloud Platform (GCP), with a particular focus on GKE, networking, and optimization strategies.

As the Senior DevOps Team Lead, your responsibilities will include designing, implementing, and managing GCP-based infrastructure, optimizing GKE clusters for performance and cost-efficiency, establishing secure VPC architectures and firewall rules, setting up logging and monitoring systems, driving cost optimization initiatives, mentoring team members on GCP best practices, and collaborating with development teams on CI/CD pipelines.

To excel in this role, you must possess extensive experience with GCP, including GKE, networking, logging, monitoring, and cost optimization. Additionally, you should have a strong background in Infrastructure as Code, CI/CD pipeline design, container orchestration, troubleshooting, incident management, and performance optimization. Qualifications for this position include at least 5 years of DevOps experience with a focus on GCP environments, GCP Professional certifications (Cloud Architect, DevOps Engineer preferred), experience leading technical teams, cloud security expertise, and a track record of scaling infrastructure for high-traffic applications.

If you are ready for a new challenge and an opportunity to advance your career in a supportive work environment, don't miss this chance to apply. Click on Apply, complete the Screening Form, upload your resume, and increase your chances of getting shortlisted for an interview. Uplers is committed to making the hiring process reliable, simple, and fast, and we are here to support you throughout your engagement. Apply today and take the next step in your career journey with us!
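As a rough illustration of the GKE cost-optimization analysis mentioned above, here is a minimal Go sketch that compares requested CPU against allocatable CPU per node pool to flag over-provisioned pools. All pool names and figures are hypothetical; a real analysis would pull these numbers from the Kubernetes API or GCP monitoring rather than hard-coding them.

```go
// A purely illustrative sketch of node-pool utilization analysis for cost
// optimization. All names and figures are hypothetical.
package main

import "fmt"

type nodePool struct {
	Name                string
	Nodes               int
	AllocatableMilliCPU int // per node
	RequestedMilliCPU   int // summed across the pool
}

func main() {
	pools := []nodePool{
		{Name: "general", Nodes: 6, AllocatableMilliCPU: 3920, RequestedMilliCPU: 18900},
		{Name: "batch", Nodes: 4, AllocatableMilliCPU: 7860, RequestedMilliCPU: 6100},
	}
	for _, p := range pools {
		capacity := p.Nodes * p.AllocatableMilliCPU
		util := float64(p.RequestedMilliCPU) / float64(capacity) * 100
		fmt.Printf("pool=%s capacity=%dm requested=%dm utilization=%.1f%%\n",
			p.Name, capacity, p.RequestedMilliCPU, util)
		if util < 50 {
			fmt.Println("  -> candidate for downsizing or autoscaler tuning")
		}
	}
}
```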

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

andhra pradesh

On-site

We are seeking a highly skilled Technical Architect with expertise in Java Spring Boot, React.js, IoT system architecture, and a strong foundation in DevOps practices. As the ideal candidate, you will play a pivotal role in designing scalable, secure, and high-performance IoT solutions, leading full-stack teams, and collaborating across product, infrastructure, and data teams.

Your key responsibilities will include designing and implementing scalable and secure IoT platform architecture, defining data flow and event processing pipelines, architecting microservices-based solutions, and integrating them with React-based front-ends. You will also be responsible for defining CI/CD pipelines, managing containerization and orchestration, driving infrastructure automation, ensuring platform monitoring and observability, and enabling auto-scaling and zero-downtime deployments.

In addition, you will collaborate with product managers and business stakeholders to translate requirements into technical specs, mentor and lead a team of developers and engineers, conduct code and architecture reviews, set goals and targets, and provide coaching and professional development to team members. Your role will also involve conducting unit testing, identifying risks, using coding standards and best practices to ensure quality, and maintaining a long-term outlook on the product roadmap and its enabling technologies.

To be successful in this role, you must have hands-on IoT project experience, experience designing and deploying multi-tenant SaaS platforms, strong knowledge of security best practices in IoT and cloud, and excellent problem-solving, communication, and team leadership skills. It would be beneficial if you have experience with edge computing frameworks, AI/ML model integration into IoT pipelines, exposure to industrial protocols, experience with digital twin concepts, and certifications in relevant technologies. Ideally, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
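To illustrate the event-processing-pipeline responsibility in this listing, below is a minimal, broker-free Go sketch that models device telemetry flowing through a filter stage to an alert consumer using channels. In practice this role would use a real broker such as Kafka; the types, threshold, and sample events here are hypothetical.

```go
// An illustrative sketch of an IoT event pipeline: device events flow into a
// processing stage that forwards threshold breaches to an alert consumer.
// All names, values, and the in-memory "broker" are hypothetical.
package main

import (
	"fmt"
	"time"
)

type DeviceEvent struct {
	DeviceID string
	Metric   string
	Value    float64
	At       time.Time
}

// process filters incoming events and forwards threshold breaches downstream.
func process(in <-chan DeviceEvent, alerts chan<- DeviceEvent, threshold float64) {
	for ev := range in {
		if ev.Value > threshold {
			alerts <- ev
		}
	}
	close(alerts)
}

func main() {
	in := make(chan DeviceEvent)
	alerts := make(chan DeviceEvent)

	go process(in, alerts, 75.0) // hypothetical temperature threshold

	go func() {
		in <- DeviceEvent{DeviceID: "sensor-01", Metric: "temp_c", Value: 68.2, At: time.Now()}
		in <- DeviceEvent{DeviceID: "sensor-02", Metric: "temp_c", Value: 81.7, At: time.Now()}
		close(in)
	}()

	for ev := range alerts {
		fmt.Printf("ALERT %s %s=%.1f\n", ev.DeviceID, ev.Metric, ev.Value)
	}
}
```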

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Skills: Kubernetes cluster, Kubernetes, CI/CD, GCP, OpenShift, Red Hat Linux

Role: K8s Expert & Architect
Education: B.E./B.Tech/MCA in Computer Science
Experience: 8-12 years
Location: Mumbai/Bangalore/Gurgaon

Mandatory Skills (Docker and Kubernetes):
- Good understanding of the various components of a Kubernetes cluster.
- Hands-on experience provisioning Kubernetes clusters.
- Expertise in managing and upgrading Kubernetes clusters / the Red Hat OpenShift platform.
- Good experience with container storage.
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins).
- Hands-on experience with Linux operating system administration.
- Understanding of cloud infrastructure, preferably VMware Cloud.
- Good understanding of application lifecycle management on container platforms.
- Basic understanding of cloud networks and container networks.
- Good understanding of Helm and Helm charts (a minimal sketch follows this listing).
- Strong skills in performance optimization of container platforms.
- Good understanding of container monitoring tools such as Prometheus, Grafana, and ELK.
- Ability to handle Severity 1 and Severity 2 incidents.
- Good communication skills and the capability to provide support.
- Analytical and problem-solving capabilities and the ability to work with teams.
- Experience with a 24x7 operations support framework.
- Knowledge of ITIL processes.

Preferred Skills/Knowledge:
- Container platforms: Docker, CRI-O, Kubernetes, and OpenShift.
- Automation platforms: shell scripts, Ansible, Jenkins.
- Cloud platforms: GCP/Azure/OpenStack.
- Operating systems: Linux/CentOS/Ubuntu.
- Container storage and backup.

Desired Skills:
- Certified Red Hat OpenShift Administrator.
- Certification in administration of any cloud platform will be an added advantage.

Soft Skills:
- Good troubleshooting skills.
- Readiness to learn new technologies and acquire new skills.
- Team player.
- Good spoken and written English.
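For context on the Helm operations this listing covers, a minimal Go sketch follows that wraps a helm upgrade --install with the --atomic flag so a failed rollout is rolled back automatically. The release name, chart path, namespace, and values file are hypothetical.

```go
// A minimal sketch of a Helm release operation: install or upgrade a chart
// with --atomic so a failed rollout rolls back automatically. All names and
// paths are hypothetical placeholders.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	args := []string{
		"upgrade", "--install", "payments-api", // hypothetical release name
		"./charts/payments-api", // hypothetical chart path
		"--namespace", "payments",
		"--create-namespace",
		"--values", "values-prod.yaml",
		"--atomic", // roll back automatically if the upgrade fails
		"--timeout", "5m",
	}
	cmd := exec.Command("helm", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("helm upgrade failed: %v", err)
	}
}
```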

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Skills: Kubernetes cluster, Kubernetes, CI/CD, GCP, OpenShift, Red Hat Linux

Role: K8s Expert & Architect
Education: B.E./B.Tech/MCA in Computer Science
Experience: 8-12 years
Location: Mumbai/Bangalore/Gurgaon

Mandatory Skills (Docker and Kubernetes):
- Good understanding of the various components of a Kubernetes cluster.
- Hands-on experience provisioning Kubernetes clusters.
- Expertise in managing and upgrading Kubernetes clusters / the Red Hat OpenShift platform.
- Good experience with container storage.
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins).
- Hands-on experience with Linux operating system administration.
- Understanding of cloud infrastructure, preferably VMware Cloud.
- Good understanding of application lifecycle management on container platforms.
- Basic understanding of cloud networks and container networks.
- Good understanding of Helm and Helm charts.
- Strong skills in performance optimization of container platforms.
- Good understanding of container monitoring tools such as Prometheus, Grafana, and ELK.
- Ability to handle Severity 1 and Severity 2 incidents.
- Good communication skills and the capability to provide support.
- Analytical and problem-solving capabilities and the ability to work with teams.
- Experience with a 24x7 operations support framework.
- Knowledge of ITIL processes.

Preferred Skills/Knowledge:
- Container platforms: Docker, CRI-O, Kubernetes, and OpenShift.
- Automation platforms: shell scripts, Ansible, Jenkins.
- Cloud platforms: GCP/Azure/OpenStack.
- Operating systems: Linux/CentOS/Ubuntu.
- Container storage and backup.

Desired Skills:
- Certified Red Hat OpenShift Administrator.
- Certification in administration of any cloud platform will be an added advantage.

Soft Skills:
- Good troubleshooting skills.
- Readiness to learn new technologies and acquire new skills.
- Team player.
- Good spoken and written English.

Posted 3 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies