3.0 - 7.0 years
3 - 7 Lacs
Mohali
Work from Office
The Cloud Computing Training Expert will be responsible for delivering high-quality training sessions, developing curriculum, and guiding students toward industry certifications and career opportunities.
Key Responsibilities
1. Training Delivery: Design, develop, and deliver high-quality cloud computing training through courses, workshops, boot camps, and webinars. Cover a broad range of cloud topics, including but not limited to: Cloud Fundamentals (AWS, Azure, Google Cloud); Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Serverless Computing; Cloud Security, Identity & Access Management (IAM), and Compliance; DevOps & CI/CD Pipelines (Jenkins, Docker, Kubernetes, Terraform, Ansible); Networking in the Cloud, Virtualization, and Storage Solutions; Multi-cloud Strategies & Cost Optimization.
2. Curriculum Development: Develop and continuously update training materials, hands-on labs, and real-world projects. Align curriculum with cloud certification programs (AWS Certified Solutions Architect, Azure Administrator, Google Cloud Professional, etc.).
3. Training Management: Organize and manage cloud computing training sessions, ensuring smooth delivery and active student engagement. Track student progress and provide guidance, feedback, and additional learning resources.
4. Technical Support & Mentorship: Assist students with technical queries and troubleshooting related to cloud platforms. Provide career guidance, helping students pursue cloud certifications and job placements in cloud computing and DevOps roles.
5. Industry Engagement: Stay updated on emerging cloud technologies, trends, and best practices. Represent ASB at cloud computing conferences, industry events, and tech forums.
6. Assessment & Evaluation: Develop and administer hands-on labs, quizzes, and real-world cloud deployment projects. Evaluate learner performance and provide constructive feedback.
Required Qualifications & Skills
> Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Cloud Computing, or a related field.
> Hands-on Cloud Experience: 3+ years of experience in cloud computing, DevOps, or cloud security roles. Strong expertise in AWS, Azure, and Google Cloud, including cloud architecture, storage, and security. Experience in Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible. Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines.
> Teaching & Communication Skills: 2+ years of experience in training, mentoring, or delivering cloud computing courses. Ability to explain complex cloud concepts in a clear and engaging way.
> Cloud Computing Tools & Platforms: Experience with AWS services (EC2, S3, Lambda, RDS, IAM, CloudWatch, etc.). Hands-on experience with Azure and Google Cloud solutions. Familiarity with DevOps tools (Jenkins, GitHub Actions, Kubernetes, Docker, Prometheus, Grafana, etc.).
> Passion for Education: A strong desire to train and mentor future cloud professionals.
Preferred Qualifications
> Cloud Certifications (AWS, Azure, Google Cloud): AWS Certified Solutions Architect, AWS DevOps Engineer, Azure Administrator, Google Cloud Professional Architect, or a similar certification.
> Experience in Online Teaching: Prior experience in delivering online training (Udemy, Coursera, or LMS platforms).
> Knowledge of Multi-Cloud & Cloud Security: Understanding of multi-cloud strategies, cloud cost optimization, and cloud-native security practices.
> Experience in Hybrid Cloud & Edge Computing: Familiarity with hybrid cloud deployment, cloud automation, and emerging edge computing trends.
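As an illustration of the kind of hands-on lab this role would build for the Cloud Fundamentals module, here is a minimal Python sketch using boto3 to inventory running EC2 instances. This is a hedged sketch, not part of the posting: the region and the Name-tag lookup are placeholder assumptions, and it assumes AWS credentials are already configured in the environment.

    # Minimal lab sketch: list running EC2 instances with boto3 (assumes configured AWS credentials).
    import boto3

    def list_running_instances(region_name="ap-south-1"):  # region is a placeholder assumption
        ec2 = boto3.client("ec2", region_name=region_name)
        paginator = ec2.get_paginator("describe_instances")
        filters = [{"Name": "instance-state-name", "Values": ["running"]}]
        for page in paginator.paginate(Filters=filters):
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    name = next((t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"), "-")
                    print(instance["InstanceId"], instance["InstanceType"], name)

    if __name__ == "__main__":
        list_running_instances()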
Posted 1 month ago
8.0 - 10.0 years
25 - 30 Lacs
Bengaluru, Indiranagar
Work from Office
Years of Experience: 8 to 10 years (PD1)
Project-specific prerequisites: Candidate will work from the customer location, Bangalore (Indiranagar). Number of contractors required: 1.
Detailed JD:
Extensive hands-on experience with OpenShift (Azure Red Hat OpenShift) - installation, upgrades, administration, and troubleshooting.
Strong expertise in Kubernetes, containerization (Docker), and cloud-native development.
Deep knowledge of Terraform for infrastructure automation and ArgoCD for GitOps workflows.
Experience in CI/CD pipelines, automation, and security integration within a DevSecOps framework.
Strong understanding of cybersecurity principles, including vulnerability management, policy enforcement, and access control.
Proficiency in Microsoft Azure and its services related to networking, security, and compute.
Hands-on experience with monitoring and observability tools (Splunk, Prometheus, Grafana, or similar).
Agile mindset, preferably with SAFe Agile experience.
Strong communication skills and ability to work with global teams across time zones.
Experience with Helm charts and Kubernetes operators.
Knowledge of Service Mesh (Istio, Linkerd) for OpenShift (Azure Red Hat OpenShift) environments, preferred.
Hands-on exposure to Terraform Cloud & Enterprise features.
Prior experience in automotive embedded software environments.
Posted 1 month ago
3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Hybrid
Hi all, we are looking for a DevOps Engineer.
Experience: 3 - 6 years
Notice period: Immediate - 15 days
Location: Bengaluru
Description:
Job Title: DevOps Engineer with 4+ years of experience
Job Summary: We're looking for a dynamic DevSecOps Engineer to lead the charge in embedding security into our DevOps lifecycle. This role focuses on implementing secure, scalable, and observable cloud-native systems, leveraging Azure, Kubernetes, GitHub Actions, and security tools like Black Duck, SonarQube, and Snyk.
Key Responsibilities
• Architect, deploy, and manage secure Azure infrastructure using Terraform and Infrastructure as Code (IaC) principles
• Build and maintain CI/CD pipelines in GitHub Actions, integrating tools such as Black Duck, SonarQube, and Snyk
• Operate and optimize Azure Kubernetes Service (AKS) for containerized applications
• Configure robust monitoring and observability stacks using Prometheus, Grafana, and Loki
• Implement incident response automation with PagerDuty
• Manage and support MS SQL databases and perform basic operations on Cosmos DB
• Collaborate with development teams to promote security best practices across the SDLC
• Identify vulnerabilities early and respond to emerging security threats proactively
Required Skills
• Deep knowledge of Azure services, AKS, and Terraform
• Strong proficiency with Git, GitHub Actions, and CI/CD workflow design
• Hands-on experience integrating and managing Black Duck, SonarQube, and Snyk
• Proficiency in setting up monitoring stacks: Prometheus, Grafana, and Loki
• Familiarity with PagerDuty for on-call and incident response workflows
• Experience managing MS SQL and understanding Cosmos DB basics
• Strong scripting ability (Python, Bash, or PowerShell)
• Understanding of DevSecOps principles and secure coding practices
• Familiarity with Helm, Bicep, container scanning, and runtime security solutions
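One common pattern in a setup like the one described above is keeping pipeline credentials out of source control by pulling secrets from Azure Key Vault at runtime. Below is a minimal, hedged Python sketch using the azure-identity and azure-keyvault-secrets libraries; the vault URL and the secret name are placeholder assumptions, and authentication relies on whatever DefaultAzureCredential can find (managed identity, Azure CLI login, or environment variables).

    # Minimal sketch: fetch a secret from Azure Key Vault instead of hardcoding it.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    VAULT_URL = "https://example-vault.vault.azure.net"  # placeholder vault URL (assumption)

    def get_secret(name: str) -> str:
        credential = DefaultAzureCredential()            # managed identity, CLI login, or env vars
        client = SecretClient(vault_url=VAULT_URL, credential=credential)
        return client.get_secret(name).value

    if __name__ == "__main__":
        # "sonar-token" is a hypothetical secret name used only for illustration.
        token = get_secret("sonar-token")
        print("retrieved secret of length", len(token))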
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Skills (Must have):
3+ years of DevOps experience. Expertise in Kubernetes, Docker, and CI/CD tools (Jenkins, GitLab CI). Hands-on with config management tools like Ansible, Puppet, or Chef. Strong knowledge of cloud platforms (AWS, Azure, or GCP). Proficient in scripting (Bash, Python). Good troubleshooting, analytical, and communication skills. Willingness to explore frontend tech (ReactJS, NodeJS, Angular) is a plus.
Skills (Good to have):
Experience with Helm charts and service meshes (Istio, Linkerd). Experience with monitoring and logging solutions (Prometheus, Grafana, ELK). Experience with security best practices for cloud and container environments. Contributions to open-source projects or a strong personal portfolio.
Role & Responsibility:
Manage and optimize Kubernetes clusters, including deployments, scaling, and troubleshooting. Develop and maintain Docker images and containers, ensuring security best practices. Design, implement, and maintain cloud-based infrastructure (AWS, Azure, or GCP) using Infrastructure-as-Code (IaC) principles (e.g., Terraform). Monitor and troubleshoot infrastructure and application performance, proactively identifying and resolving issues. Contribute to the development and maintenance of internal tools and automation scripts.
Qualification:
B.Tech/B.E./M.E./M.Tech in Computer Science or equivalent.
Additional Information:
We offer a competitive salary and excellent benefits that are above industry standard. Do check our impressive growth rate and ratings. Please submit your resume in the standard 1-page or 2-page format. Please hear from our employees. Colleagues interested in internal mobility, please contact your HRBP in confidence.
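The cluster-management duties above (deployments, scaling, troubleshooting) are often supported by small scripts against the Kubernetes API. The following is a purely illustrative Python sketch using the official kubernetes client to flag pods with high restart counts; the namespace and threshold are assumptions, and it relies on a local kubeconfig being available.

    # Minimal sketch: flag pods with high restart counts (assumes a working kubeconfig).
    from kubernetes import client, config

    def flag_restarting_pods(namespace="default", threshold=5):  # namespace/threshold are assumptions
        config.load_kube_config()              # use config.load_incluster_config() inside a cluster
        v1 = client.CoreV1Api()
        for pod in v1.list_namespaced_pod(namespace).items:
            restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
            if restarts >= threshold:
                print(f"{pod.metadata.name}: {restarts} restarts, phase={pod.status.phase}")

    if __name__ == "__main__":
        flag_restarting_pods()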
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Job Title: AI Observability Tools Engineer
Experience: 5-7 years
Location: Hyderabad (work from office)
Shift: Rotational shift
Notice Period: 30 days
Key Responsibilities:
Implement observability tools such as Prometheus, Grafana, Datadog, Splunk, LogicMonitor, and ThousandEyes for AI/ML environments.
Monitor model performance, setting up monitoring thresholds, synthetic test plans, data pipelines, and inference systems.
Ensure visibility across infrastructure, application, and network layers.
Collaborate with SRE, DevOps, and Data Science teams to build proactive alerting and RCA systems.
Drive real-time monitoring and AIOps integration for AI workloads.
Integrate with ITSM solutions like ServiceNow.
Skills Required:
Experience with tools: Datadog, Prometheus, Grafana, Splunk, OpenTelemetry.
Solid understanding of networking concepts (TCP/IP, DNS, load balancers).
Knowledge of AI/ML infrastructure and observability metrics.
Scripting: Python, Bash, or Go.
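As a rough illustration of the model-performance monitoring described above, here is a hedged Python sketch that instruments an inference call with the prometheus_client library so Prometheus can scrape latency and request counts. The metric names, the port, and the predict() stub are assumptions made only for illustration.

    # Minimal sketch: expose inference latency/counters for Prometheus to scrape.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Metric names below are illustrative assumptions, not an established convention.
    INFERENCE_LATENCY = Histogram("model_inference_latency_seconds", "Latency of model inference calls")
    INFERENCE_REQUESTS = Counter("model_inference_requests_total", "Total inference requests", ["status"])

    def predict(payload):
        # Stand-in for a real model call.
        time.sleep(random.uniform(0.01, 0.05))
        return {"score": random.random()}

    @INFERENCE_LATENCY.time()
    def handle_request(payload):
        try:
            result = predict(payload)
            INFERENCE_REQUESTS.labels(status="ok").inc()
            return result
        except Exception:
            INFERENCE_REQUESTS.labels(status="error").inc()
            raise

    if __name__ == "__main__":
        start_http_server(8000)   # metrics served at :8000/metrics (port is an assumption)
        while True:
            handle_request({"features": [1, 2, 3]})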
Posted 1 month ago
7.0 - 12.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
7 to 12+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage. Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for candidates from other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 PM
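The backup-automation item above (RDS snapshots plus S3 lifecycle rules) is the kind of task usually handled by a small scheduled script. Below is a hedged Python sketch using boto3 that creates a manual RDS snapshot and prunes manual snapshots older than a retention window; the instance identifier and retention period are placeholder assumptions, and credentials are expected to come from the environment or an IAM role.

    # Minimal sketch: create an RDS snapshot and prune old manual snapshots.
    from datetime import datetime, timedelta, timezone

    import boto3

    DB_INSTANCE = "example-db"       # placeholder DB instance identifier (assumption)
    RETENTION_DAYS = 7               # retention window is an assumption

    def backup_and_prune():
        rds = boto3.client("rds")
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
        rds.create_db_snapshot(
            DBSnapshotIdentifier=f"{DB_INSTANCE}-manual-{stamp}",
            DBInstanceIdentifier=DB_INSTANCE,
        )
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        snapshots = rds.describe_db_snapshots(
            DBInstanceIdentifier=DB_INSTANCE, SnapshotType="manual"
        )["DBSnapshots"]
        for snap in snapshots:
            created = snap.get("SnapshotCreateTime")
            if created and created < cutoff:
                rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])
                print("deleted", snap["DBSnapshotIdentifier"])

    if __name__ == "__main__":
        backup_and_prune()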
Posted 1 month ago
4.0 - 6.0 years
6 - 9 Lacs
Ahmedabad
Work from Office
Role Overview:
As a DevOps Engineer at ChartIQ, you'll play a critical role not only in building, maintaining, and scaling the infrastructure that supports our development and QA needs, but also in driving new, exciting cloud-based solutions that will add to our offerings. Your work will ensure that the platforms used by our team remain available, responsive, and high-performing. In addition to maintaining the current infrastructure, you will also contribute to the development of new cloud-based solutions, helping us expand and enhance our platform's capabilities to meet the growing needs of our financial services customers. You will also contribute to light JavaScript programming, assist with QA testing, and troubleshoot production issues. Working in a fast-paced, collaborative environment, you'll wear multiple hats and support the infrastructure for a wide range of development teams.
This position is based in Ahmedabad, India, and will require working overlapping hours with teams in the US. The preferred working hours will be until 12 noon EST to ensure effective collaboration across time zones.
Key Responsibilities:
Design, implement, and manage infrastructure using Terraform or other Infrastructure-as-Code (IaC) tools.
Leverage AWS or equivalent cloud platforms to build and maintain scalable, high-performance infrastructure that supports data-heavy applications and JavaScript-based visualizations.
Understand component-based architecture and cloud-native applications.
Implement and maintain site reliability practices, including monitoring and alerting using tools like Datadog, ensuring the platform's availability and responsiveness across all environments.
Design and deploy high-availability architecture to support continuous access to alerting engines.
Support and maintain Configuration Management systems like ServiceNow CMDB.
Manage and optimize CI/CD workflows using GitHub Actions or similar automation tools.
Work with OIDC (OpenID Connect) integrations across Microsoft, AWS, GitHub, and Okta to ensure secure access and authentication.
Contribute to QA testing (both manual and automated) to ensure high-quality releases and stable operation of our data visualization tools and alerting systems.
Participate in light JavaScript programming tasks, including HTML and CSS fixes for our charting library.
Assist with deploying and maintaining mobile applications on the Apple App Store and Google Play Store.
Troubleshoot and manage network issues, ensuring smooth data flow and secure access to all necessary environments.
Collaborate with developers and other engineers to troubleshoot and optimize production issues.
Help with the deployment pipeline, working with various teams to ensure smooth software releases and updates for our library and related services.
Required Qualifications:
Proficiency with Terraform or other Infrastructure-as-Code tools.
Experience with AWS or other cloud services (Azure, Google Cloud, etc.).
Solid understanding of component-based architecture and cloud-native applications.
Experience with site reliability tools like Datadog for monitoring and alerting.
Experience designing and deploying high-availability architecture for web-based applications.
Familiarity with ServiceNow CMDB and other configuration management tools.
Experience with GitHub Actions or other CI/CD platforms to manage automation pipelines.
Strong understanding and practical experience with OIDC integrations across platforms like Microsoft, AWS, GitHub, and Okta.
Solid QA testing experience, including manual and automated testing techniques (beginner/intermediate).
JavaScript, HTML, and CSS skills to assist with troubleshooting and web app development.
Experience with deploying and maintaining mobile apps on the Apple App Store and Google Play Store that utilize web-based charting libraries.
Basic network management skills, including troubleshooting and ensuring smooth network operations for data-heavy applications.
Knowledge of package publishing tools such as Maven, Node, and CocoaPods to ensure seamless dependency management and distribution across platforms.
Additional Skills and Traits for Success in a Startup-Like Environment:
Ability to wear multiple hats: adapt to the ever-changing needs of a startup environment within a global organization.
Self-starter with a proactive attitude, able to work independently and manage your time effectively.
Strong communication skills to work with cross-functional teams, including engineering, QA, and product teams.
Ability to work in a fast-paced, high-energy environment.
Familiarity with agile methodologies and working in small teams with a flexible approach to meeting deadlines.
Basic troubleshooting skills to resolve infrastructure or code-related issues quickly.
Knowledge of containerization tools such as Docker and Kubernetes is a plus.
Understanding of DevSecOps and basic security practices is a plus.
Preferred Qualifications:
Experience with CI/CD pipeline management, automation, and deployment strategies.
Familiarity with serverless architectures and AWS Lambda.
Experience with monitoring and logging frameworks, such as Prometheus, Grafana, or similar.
Experience with Git, version control workflows, and source code management.
Security-focused mindset, experience with vulnerability scanning, and managing secure application environments.
Posted 1 month ago
5.0 - 10.0 years
15 - 20 Lacs
Pune
Hybrid
Team: SRE & Operations
Duration: 12 months
Shift: General shift, 9:00 AM - 5:00 PM
Location: Pune
Interviews: 2 rounds
YOE: 5-7 years (4 relevant)
Notes: Preferred immediate joiner or 15 days notice period.
Top Skills:
Splunk - queries, dashboards, and application creation.
Grafana dashboards, Prometheus, data visualization.
OpenTelemetry, Grafana, Prometheus.
Dynatrace and Datadog tools are good to have.
Some infrastructure knowledge of servers, storage, and web application infrastructure.
Posted 1 month ago
4.0 - 8.0 years
0 - 0 Lacs
Thiruvananthapuram
Work from Office
DevOps/SRE with 5+ yrs exp in Azure, Terraform, Kubernetes (EKS/GKE/AKS), Docker, CI/CD (Jenkins/GitHub), scripting (Python/Bash), Linux admin, monitoring (Prometheus/Grafana/ELK), GitOps, Helm, and strong networking/security skills.
Posted 1 month ago
4.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Job Title: Lead Engineer (Core Wireless Testing)
Location: Bengaluru
Work Employment: Full time
Department: Product Engineering
Domain: Product Validation
Reporting to: Manager
Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. With over 60% of our employees in R&D, we are reshaping telecom networks, one innovation at a time.
Why Join Tejas:
We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you!
Who We Are:
The Product Engineering team is responsible for platform and software validation for the entire product portfolio. The team develops the automation framework for the entire product portfolio and develops and delivers customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks.
What You Work:
As a Lead Engineer you will be responsible for driving technical projects, managing resources effectively, and balancing team workloads.
You will design solutions, oversee testing, and mentor junior engineers to ensure productivity and skill development. You will also manage resources, troubleshoot and debug issues, write and review test cases to ensure code quality, and collaborate with cross-functional teams to deliver high-quality products on time.
Knowledge of software development methodology, build tools, and the product life cycle.
Build a 5G cloud-native test solution in a virtualized environment with end-to-end understanding of 5G network functions (i.e., AMF, SMF, UPF and PCF) and protocols.
Exposure to customer deployment models and configuration of large mobile packet core solutions.
4-12 years of industry experience in mobile packet core technologies with a validation background and solid exposure to automation.
End-to-end or system testing background.
Good knowledge of Kubernetes, Docker and cloud-native solutions.
Experience in bringing up OpenStack and VMware based test setups.
Interest and passion in automation and framework development using Python and Robot Framework.
Exposure to Spirent Landslide, Mobilium DsTest or Ixia simulators.
Exposure to automation frameworks like pyATS and Robot Framework.
Certification in Kubernetes and exposure to Grafana and Prometheus is an added advantage.
Experience in CI/CD tools: Jenkins and Git.
Mandatory skills:
Solid experience in 5G core end-to-end validation.
Kubernetes, Docker, OpenStack.
Working exposure on AMF and UPF.
Experience handling customer escalations.
Spirent Landslide.
Python, shell scripting, Robot Framework.
Desired skills:
Certification in Kubernetes and exposure to Grafana and Prometheus is an added advantage.
Experience in CI/CD tools: Jenkins and Git.
Preferred Qualifications:
Experience: 6 to 10 years of relevant experience.
Education: B.Tech/BE or any other equivalent degree; PG in a communication field.
Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
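The Tejas roles above emphasize Python-based automation around 5G core network functions. As a purely illustrative, hedged sketch (the service names, ports, and /healthz path are hypothetical and not tied to any real Tejas or 3GPP interface), a test harness often begins with a simple readiness probe of each network function before running functional suites:

    # Illustrative sketch only: poll hypothetical NF health endpoints before a test run.
    import sys
    import time

    import requests

    # Hypothetical endpoints for illustration; real deployments expose NF-specific interfaces.
    NF_ENDPOINTS = {
        "amf": "http://amf.core.svc:8080/healthz",
        "smf": "http://smf.core.svc:8080/healthz",
        "upf": "http://upf.core.svc:8080/healthz",
    }

    def wait_until_healthy(timeout_s=120, interval_s=5):
        deadline = time.time() + timeout_s
        pending = dict(NF_ENDPOINTS)
        while pending and time.time() < deadline:
            for name, url in list(pending.items()):
                try:
                    if requests.get(url, timeout=3).status_code == 200:
                        print(f"{name} is ready")
                        del pending[name]
                except requests.RequestException:
                    pass
            if pending:
                time.sleep(interval_s)
        return not pending

    if __name__ == "__main__":
        sys.exit(0 if wait_until_healthy() else 1)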
Posted 1 month ago
10.0 - 15.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Job Title: Staff Engineer (Core Wireless Testing)
Location: Bengaluru
Work Employment: Full time
Department: Product Engineering
Domain: Product Validation
Reporting to: Group Manager
Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. With over 60% of our employees in R&D, we are reshaping telecom networks, one innovation at a time.
Why Join Tejas:
We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you!
Who We Are:
The Product Engineering team is responsible for platform and software validation for the entire product portfolio. The team develops the automation framework for the entire product portfolio and develops and delivers customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks.
What You Work:
As a Staff Engineer you will be responsible for driving technical projects, managing resources effectively, and balancing team workloads.
You will design solutions, oversee testing, and mentor junior engineers to ensure productivity and skill development. You’ll lead technical initiatives, mentor team members and collaborate closely with cross-functional teams to drive innovation and ensure high-quality deliverables. You’ll leverage your expertise to solve challenging problems and contribute to strategic engineering decisions.
Knowledge of software development methodology, build tools, and the product life cycle.
Build a 5G cloud-native test solution in a virtualized environment with end-to-end understanding of 5G network functions (i.e., AMF, SMF, UPF and PCF) and protocols.
Exposure to customer deployment models and configuration of large mobile packet core solutions.
10+ years of industry experience in mobile packet core technologies with a validation background and solid exposure to automation.
End-to-end or system testing background.
Good knowledge of Kubernetes, Docker and cloud-native solutions.
Experience in bringing up OpenStack and VMware based test setups.
Interest and passion in automation and framework development using Python and Robot Framework.
Exposure to Spirent Landslide, Mobilium DsTest or Ixia simulators.
Exposure to automation frameworks like pyATS and Robot Framework.
Certification in Kubernetes and exposure to Grafana and Prometheus is an added advantage.
Experience in CI/CD tools: Jenkins and Git.
Mandatory skills:
Solid experience in 5G core end-to-end validation.
Kubernetes, Docker, OpenStack.
Working exposure on AMF and UPF.
Experience handling customer escalations.
Spirent Landslide.
Python, shell scripting, Robot Framework.
Desired skills:
Certification in Kubernetes and exposure to Grafana and Prometheus is an added advantage.
Experience in CI/CD tools: Jenkins and Git.
Preferred Qualifications:
Experience: 10 to 15 years of relevant experience.
Education: B.Tech/BE or any other equivalent degree; PG in a communication field.
Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
Posted 1 month ago
4.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Title: Senior Engineer (Core Wireless Testing)
Location: Bengaluru
Work Employment: Full time
Department: Product Engineering
Domain: Product Validation
Reporting to: Manager
Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. With over 60% of our employees in R&D, we are reshaping telecom networks, one innovation at a time.
Why Join Tejas:
We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you!
Who We Are:
The Product Engineering team is responsible for platform and software validation for the entire product portfolio. The team develops the automation framework for the entire product portfolio and develops and delivers customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks.
What You Work:
As a Senior Engineer, you will be responsible for:
Knowledge of software development methodology, build tools, and the product life cycle.
Building a 5G cloud-native test solution in a virtualized environment with end-to-end understanding of 5G network functions (i.e., AMF, SMF, UPF and PCF) and protocols.
Exposure to customer deployment models and configuration of large mobile packet core solutions.
4-12 years of industry experience in mobile packet core technologies with a validation background and solid exposure to automation.
End-to-end or system testing background.
Good knowledge of Kubernetes, Docker and cloud-native solutions.
Experience in bringing up OpenStack and VMware based test setups.
Interest and passion in automation and framework development using Python and Robot Framework.
Exposure to Spirent Landslide, Mobilium DsTest or Ixia simulators.
Exposure to automation frameworks like pyATS and Robot Framework.
Certification in Kubernetes and exposure to Grafana and Prometheus is an added advantage.
Experience in CI/CD tools: Jenkins and Git.
Mandatory skills:
Solid experience in 5G core end-to-end validation.
Kubernetes, Docker, OpenStack.
Working exposure on AMF and UPF.
Experience handling customer escalations.
Spirent Landslide.
Python, shell scripting, Robot Framework.
Desired skills:
Certification in Kubernetes and exposure to Grafana and Prometheus is an added advantage.
Experience in CI/CD tools: Jenkins and Git.
Preferred Qualifications:
Experience: 4 to 6 years of relevant experience.
Education: B.Tech/BE or any other equivalent degree; PG in a communication field.
Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
Posted 1 month ago
6.0 - 9.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Job Title: Lead Engineer – CI/CD DevOps
Location: Bengaluru
Work Employment: Full time
Department: Wireline
Domain: Software
Reporting to: Group Engineer
Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. With over 60% of our employees in R&D, we are reshaping telecom networks, one innovation at a time.
Why join Tejas:
We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you!
Who we are:
In the dynamic world of enterprise technology, the shift towards cloud-native solutions is not just a trend but a necessity. As we embark on developing a state-of-the-art Network Management System (NMS) and Reporting tool, our goal is to leverage the latest technologies to create a robust, scalable, and efficient solution. This initiative is crucial for ensuring our network’s optimal performance, security, and reliability while providing insightful analytics through advanced reporting capabilities. Our project aims to design and implement a cloud-native NMS and reporting tool that will revolutionize how we manage and monitor our network infrastructure. By utilizing cutting-edge technologies, we will ensure that our solution is not only future-proof but also capable of adapting to the ever-evolving demands of our enterprise environment.
What you work:
Develop and implement automation strategies for software build, deployment, and infrastructure management.
Design and maintain CI/CD pipelines to enable frequent and reliable software releases.
Collaborate with development, QA, and operations teams to optimize workflows and enhance software quality.
Automate repetitive tasks and processes to improve efficiency and reduce manual intervention.
Monitor and troubleshoot CI/CD pipelines to ensure smooth operation and quick resolution of issues.
Implement and maintain robust monitoring and alerting tools to ensure system reliability.
Work with various tools and technologies such as Git, Jenkins, Docker, Kubernetes, and cloud platforms (e.g., AWS, Azure).
Ensure compliance with security standards and best practices throughout the development lifecycle.
Continuously improve the CI/CD processes by incorporating new tools, techniques, and best practices.
Provide training and guidance to team members on DevOps principles and practices.
You will be responsible for leading a team and guiding them for optimum output.
Mandatory skills:
Strong experience in software development and system administration.
Proficiency in programming languages such as Python, Java, or similar.
Strong understanding of CI/CD concepts and experience with tools like Jenkins, Git, Docker, and Kubernetes.
Experience with cloud platforms such as AWS, Azure, or Google Cloud.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Ability to work in a fast-paced, dynamic environment.
Desired skills:
Experience with infrastructure as code (IaC) tools like Terraform or Ansible.
Knowledge of container orchestration tools like Kubernetes or Rancher.
Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack.
Certification in AWS, Azure, or other relevant technologies.
Preferred Qualifications:
Experience: 6 to 9 years of experience, from a telecommunication or networking background.
Education: B.Tech/BE (CSE/ECE/EEE/IS) or any other equivalent degree. The candidate should have good coding skills in CI/CD and DevOps with Java.
Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
Posted 1 month ago
6.0 - 10.0 years
13 - 17 Lacs
Noida
Work from Office
We are looking for a skilled Azure L3 Architect with 6 to 10 years of experience in designing and implementing scalable, secure, and highly available cloud-based solutions on Microsoft Azure. This position is based in Pune.
Roles and Responsibilities:
Design and implement robust Azure-based infrastructure for critical BFSI applications.
Manage and optimize Kubernetes clusters on Azure, ensuring scalability, security, and high availability.
Develop CI/CD pipelines and automate workflows using tools like Terraform, Helm, and Azure DevOps.
Ensure adherence to BFSI industry standards by implementing advanced security measures.
Analyze and optimize Azure resource usage to minimize costs while maintaining performance and compliance standards.
Collaborate with cross-functional teams to support application deployment, monitoring, troubleshooting, and lifecycle management.
Job Requirements:
Minimum 6 years of hands-on experience in Azure and Kubernetes environments within BFSI or similar industries.
Expertise in AKS, Azure IaaS, PaaS, and security tools like Azure Security Center.
Proficiency in scripting languages such as Python, Bash, or PowerShell.
Strong knowledge of cloud security principles and tools such as Azure Security Center and Azure Key Vault.
Experience with cost management tools such as Azure Cost Management + Billing.
Familiarity with monitoring tools such as Prometheus, Grafana, New Relic, Azure Log Analytics, and ADF.
Understanding of BFSI compliance regulations and standards.
Process improvement experience using frameworks like Lean, Six Sigma, or similar methodologies.
Bachelor's degree in Computer Science, Engineering, or a related field.
Certifications like Azure Solutions Architect, Certified Kubernetes Administrator (CKA), or Certified Azure DevOps Engineer are advantageous.
Posted 1 month ago
10.0 - 12.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Senior Java Developer with strong expertise in Java and the Spring Boot framework. The ideal candidate should have extensive experience with AWS cloud services and deploying applications in a cloud environment. This position is located in Hyderabad and requires 10 to 12 years of experience.
Roles and Responsibilities:
Design, develop, and deploy high-quality software applications using Java and Spring Boot.
Collaborate with cross-functional teams to identify and prioritize project requirements.
Develop and maintain large-scale Java-based systems with scalability and performance in mind.
Troubleshoot and resolve complex technical issues efficiently.
Participate in code reviews and contribute to improving overall code quality.
Stay updated with the latest trends and technologies in Java and related fields.
Job Requirements:
Strong hands-on experience with Apache Kafka (producer/consumer, topics, partitions).
Deep knowledge of PostgreSQL including schema design, indexing, and query optimization.
Experience with JUnit test cases and developing unit/integration test suites.
Familiarity with code coverage tools such as JaCoCo or SonarQube.
Excellent verbal and written communication skills to explain complex technical concepts clearly.
Demonstrated leadership skills with experience managing, mentoring, and motivating technical teams.
Proven experience in stakeholder management, including gathering requirements, setting expectations, and delivering technical solutions aligned with business goals.
Familiarity with microservices architecture and RESTful API design.
Experience with containerization (Docker) and orchestration platforms like Kubernetes (EKS).
Strong understanding of CI/CD pipelines and DevOps practices.
Solid problem-solving skills with the ability to handle complex technical challenges.
Familiarity with monitoring tools like Prometheus and Grafana, and log management.
Experience with version control systems (Git) and Agile/Scrum methodologies.
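The posting above asks for hands-on Kafka producer/consumer experience. As a rough illustration (kept in Python for consistency with the other sketches in this document, even though the role itself is Java-centric), the following uses the kafka-python library to publish JSON events to a topic; the broker address and topic name are placeholder assumptions.

    # Illustrative sketch: publish JSON events to a Kafka topic with kafka-python.
    import json

    from kafka import KafkaProducer

    BROKER = "localhost:9092"   # placeholder broker address (assumption)
    TOPIC = "orders"            # placeholder topic name (assumption)

    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        key_serializer=lambda k: k.encode("utf-8"),
        acks="all",             # wait for full acknowledgement before considering a send complete
    )

    def publish(order_id: str, payload: dict) -> None:
        # Keying by order_id keeps all events for one order on the same partition.
        producer.send(TOPIC, key=order_id, value=payload)

    if __name__ == "__main__":
        publish("order-42", {"status": "CREATED", "amount": 199.0})
        producer.flush()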
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Database Engineer with 5 to 10 years of experience to design, develop, and maintain our database infrastructure. This position is based remotely.
Roles and Responsibilities:
Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
Work with databases of varying scales, including small-scale and big data processing.
Implement data security measures to protect sensitive information and comply with relevant regulations.
Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
Migrate data from spreadsheets or other sources to relational database systems or cloud-based solutions like Google BigQuery and AWS.
Develop import workflows and scripts to automate data import processes.
Ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
Monitor database health and resolve issues, while collaborating with the full-stack web developer to implement efficient data access and retrieval mechanisms.
Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows, exploring third-party technologies as alternatives to legacy approaches for efficient data pipelines.
Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices, and use Python for tasks such as data manipulation, automation, and scripting.
Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, taking accountability for achieving development milestones.
Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities, while also collaborating with fellow members of the Data Research Engineering Team as required.
Perform tasks with precision and build reliable systems, leveraging online resources like StackOverflow, ChatGPT, Bard, etc. effectively, considering their capabilities and limitations.
Job Requirements:
Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles.
Strong familiarity with Python for scripting and data manipulation tasks, with additional knowledge of Python OOP being advantageous.
Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes.
Knowledge of cloud-based databases like AWS RDS and Google BigQuery.
Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting.
Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
Knowledge of SQL and understanding of database design principles, normalization, and indexing.
Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
Knowledge of data security best practices, including access controls, encryption, and compliance standards.
Strong problem-solving and analytical skills with attention to detail.
Creative and critical thinking.
Strong willingness to learn and expand knowledge in data engineering.
Familiarity with Agile development methodologies is a plus.
Experience with version control systems, such as Git, for collaborative development.
Ability to thrive in a fast-paced environment with rapidly changing priorities.
Ability to work collaboratively in a team environment.
Good and effective communication skills.
Comfortable with autonomy and able to work independently.
About the Company:
Marketplace is an experienced team of industry experts dedicated to helping readers make informed decisions and choose the right products with ease. We arm people with trusted advice and guidance, so they can make confident decisions and get back to doing the things they care about most.
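The data-import workflow described in this posting (moving spreadsheet data into PostgreSQL and validating it) can be sketched with pandas and SQLAlchemy as below. This is a hedged sketch only: the connection string, CSV file name, target table, and the "email" data-quality rule are placeholder assumptions, and a real pipeline would add chunking, explicit typing, and constraint checks.

    # Minimal sketch: load a CSV into PostgreSQL and verify the row count afterwards.
    import pandas as pd
    from sqlalchemy import create_engine, text

    DB_URL = "postgresql+psycopg2://user:password@localhost:5432/example_db"  # placeholder DSN
    CSV_PATH = "contacts.csv"        # placeholder input file (assumption)
    TABLE = "contacts_staging"       # placeholder target table (assumption)

    def import_csv():
        engine = create_engine(DB_URL)
        df = pd.read_csv(CSV_PATH)
        df = df.drop_duplicates().dropna(subset=["email"])   # basic data-quality rules (illustrative)
        df.to_sql(TABLE, con=engine, if_exists="append", index=False)

        with engine.connect() as conn:
            loaded = conn.execute(text(f"SELECT COUNT(*) FROM {TABLE}")).scalar()
        print(f"prepared {len(df)} rows; table now holds {loaded} rows")

    if __name__ == "__main__":
        import_csv()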
Posted 1 month ago
4.0 - 6.0 years
10 - 14 Lacs
Mumbai
Work from Office
We are seeking a highly skilled Linux Administrator/Infrastructure Cloud Engineer with experience in managing both physical and virtual Linux servers. The ideal candidate will have a strong background in Linux administration, cloud services, and containerization technologies. This role requires a proactive individual who can optimize system performance, manage critical production incidents, and collaborate effectively with cross-functional teams.
Key Responsibilities:
• Administer and maintain Linux servers (Ubuntu, Debian, Red Hat, CentOS) for both physical and virtual environments, including OS installation, performance monitoring, optimization, kernel tuning, LVM management, file system management, and security management.
• Configure and manage servers for NFS, SAMBA, DNS, and other services.
• Develop and maintain shell scripts for automation and configuration management, preferably using Ansible.
• Manage Linux file systems and implement effective backup strategies.
• Perform OS upgrades and patch management to ensure system security and compliance.
• Respond to critical production incidents, ensuring that SLAs are maintained while troubleshooting and resolving issues.
• Coordinate with database, DevOps, and other related teams to address system issues and ensure seamless operations.
• Configure and manage network settings, including VLANs, switch configurations, gateways, and firewalls.
• Develop automation solutions and documentation for recurring technical issues to improve efficiency.
• Utilize AWS cloud services (e.g., EC2, S3, Lambda, Route53, IAM, SQS, SNS, SFTP) to configure and maintain cloud infrastructure, including virtual machines, storage systems, and network settings.
• Monitor and optimize cloud performance, focusing on resource utilization and cost management.
• Troubleshoot and resolve cloud infrastructure issues, conducting root cause analysis to prevent future incidents.
• Utilize containerization technologies like Docker to manage and deploy applications effectively.
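A small example of the kind of automation this role describes (performance monitoring plus proactive alerting) is a disk-usage check that can run from cron. In the hedged sketch below, the mount points and the 85% threshold are assumptions, and alerting is reduced to a print statement for brevity.

    # Minimal sketch: warn when file system usage crosses a threshold (cron-friendly).
    import shutil
    import sys

    MOUNTS = ["/", "/var", "/home"]   # mount points to check (assumption)
    THRESHOLD_PCT = 85                # warning threshold (assumption)

    def check_disks() -> int:
        exit_code = 0
        for mount in MOUNTS:
            try:
                usage = shutil.disk_usage(mount)
            except FileNotFoundError:
                continue                          # skip mounts that do not exist on this host
            used_pct = usage.used / usage.total * 100
            if used_pct >= THRESHOLD_PCT:
                print(f"WARNING: {mount} at {used_pct:.1f}% used")
                exit_code = 1
            else:
                print(f"OK: {mount} at {used_pct:.1f}% used")
        return exit_code

    if __name__ == "__main__":
        sys.exit(check_disks())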
Posted 1 month ago
3.0 - 5.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary
The NetApp Keystone team is responsible for cutting-edge technologies that enable NetApp’s pay-as-you-go offering. Keystone helps customers manage data on-prem or in the cloud, with invoices charged on a subscription basis. As an engineer in NetApp’s Keystone organization, you will be executing our most challenging and complex projects. You will be responsible for decomposing complex product requirements into simple solutions, understanding system interdependencies and limitations, and applying engineering best practices.
Job Requirements
Strong knowledge of the Go programming language, its paradigms, constructs, and idioms.
Bachelor’s/Master’s degree in computer science, information technology, engineering, or a related field.
Knowledge of various Go frameworks and tools.
1 year of experience working with the Go programming language.
Strong written and communication skills with proven fluency in English.
Familiarity with database technologies such as NoSQL, Prometheus, and MongoDB.
Hands-on experience with version control tools like Git.
Passionate about learning new tools, languages, philosophies, and workflows.
Working with generated code and code generation techniques.
Working with document databases and Golang ORM libraries.
Knowledge of programming methodologies - Object Oriented/Functional/Design Patterns.
Knowledge of software development methodologies - SCRUM/AGILE/LEAN.
Knowledge of software deployment - Docker/Kubernetes.
Knowledge of software team tools - Git, JIRA, CI/CD.
Education
Minimum of 2 to 4 years of experience required, with a B.Tech or M.Tech background.
Posted 1 month ago
1.0 - 4.0 years
11 - 15 Lacs
Gurugram
Work from Office
The Engineer will be responsible for developing and driving an infrastructure platform that delivers secure, reliable, and scalable services. They will leverage their expertise in datacenters and cloud providers, automation, and data analytics to create modern, cloud-based technology solutions. The Engineer will collaborate with stakeholders to ensure effective communication and contribute to continuous improvement efforts. Additionally, they will independently manage routine tasks and proactively identify opportunities for optimization and enhancements of cloud environments.
Key responsibilities
Infrastructure Engineering (60%)
Independently create modules, templates or scripts that can automate new use cases.
Independently create or manually deploy services for uncommon or larger use cases.
Explain information and technology back to stakeholders effectively.
Regular and independent communication and collaboration with leadership and peers on improvements or potential technical debt.
Implement Datacenter/Hybrid technologies with minimal guidance.
Support development teams by identifying the right solutions from our library of cloud templates and automation.
Hybrid/Cloud Operations (40%)
Independently manage routine, well-established issue resolution or requests.
Identify trends to better optimize or secure private, hybrid or public cloud environments.
Communicate effective issue resolution, collaboration or triaging to stakeholders.
Identify and support areas to improve technical documentation or knowledge base articles.
Qualifications
3-5 years of experience post Associate's/Bachelor's degree, or an equivalent combination of education, training and experience.
Experience with at least one public cloud provider (AWS, Azure or GCP) or private datacenter management.
Experience with scripting, infrastructure as code and automation languages and tools (Ansible, Terraform, Helm, GitHub Actions, or PowerShell).
Experience with CI/CD processes, code reviews, code deployments and pipelines.
Experience with logging and monitoring solutions such as Datadog, Grafana and Prometheus.
Familiarity with containers, Docker and Kubernetes.
Familiarity with public cloud landing zone architectures.
Experience with identity solutions such as Active Directory, AzureAD, Okta or LDAP.
Familiarity with Go or Python for cloud platform and infrastructure API automation.
Familiarity with HashiCorp Cloud Platform (Terraform & Vault).
Familiarity with cross-discipline technology used by various stakeholders.
Competent analytical, conceptual, and problem-solving abilities.
Effective communication skills.
Competent written communication skills.
Knowledge of Agile methodologies and processes.
Knowledge of: at least one public cloud provider (AWS, Azure or GCP) or private datacenter management; scripting, infrastructure as code and automation languages and tools (Ansible, Terraform, Helm, GitHub Actions, PowerShell or Python).
Posted 1 month ago
4.0 - 5.0 years
0 - 0 Lacs
Hyderabad
Work from Office
Role & responsibilities
Assist in the deployment, management, and scaling of applications on Kubernetes (see the sketch after this posting).
Monitor and troubleshoot Kubernetes clusters and networking issues.
Collaborate with development teams to ensure CI/CD pipelines are efficient and effective.
Contribute to incident response and root cause analysis for production issues.
Preferred candidate profile
Experience with monitoring tools (Prometheus, Grafana, etc.).
Experience with Terraform or Ansible is a plus.
Familiarity with Git and version control practices.
Understanding of security best practices in a DevOps environment.
Hands-on experience with Docker and Kubernetes.
Understanding of networking concepts.
Experience with cloud platforms (AWS, Azure, or Google Cloud).
Strong problem-solving skills and the ability to work in a team-oriented environment.
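A minimal sketch of the kind of Kubernetes interaction this role involves, using client-go to flag pods that are not Running; the kubeconfig path and the "production" namespace are hypothetical placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig; an in-cluster deployment would use rest.InClusterConfig() instead.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("build config: %v", err)
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("create clientset: %v", err)
	}

	// List pods in a hypothetical "production" namespace and flag anything not Running.
	pods, err := clientset.CoreV1().Pods("production").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list pods: %v", err)
	}

	for _, p := range pods.Items {
		if p.Status.Phase != "Running" {
			fmt.Printf("pod %s is in phase %s\n", p.Name, p.Status.Phase)
		}
	}
}
```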
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, New Relic, etc. Required candidate profile: Must have experience in Linux administration. Must have a working knowledge of scripting (Python/Shell).
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Gurugram
Work from Office
Experience: 8-10 years. Job Title: DevOps Engineer. Location: Gurugram.
Job Summary
We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.
Key Responsibilities
Cloud Infrastructure & Automation:
Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms.
Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning.
Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.
Containerization & Orchestration:
Containerize applications using Docker for seamless development and deployment.
Manage and scale Kubernetes clusters (on-premise or cloud-managed, such as AWS EKS).
Monitor and optimize container environments for performance, scalability, and cost-efficiency.
Security & Compliance:
Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager).
Conduct regular vulnerability assessments and security scans, and implement remediation plans.
Ensure infrastructure compliance with industry standards and manage incident response protocols.
Monitoring & Optimization:
Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic).
Analyze logs and metrics to troubleshoot issues and improve system performance.
Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.
Scripting & Tooling:
Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management (a brief sketch follows this posting).
Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.
Collaboration & Leadership:
Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs.
Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement.
Communicate technical concepts effectively to both technical and non-technical stakeholders.
Education & Certifications
Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience. AWS Certified DevOps Engineer Professional (preferred) or other relevant cloud certifications.
Experience
8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity.
Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents.
Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps.
Advanced knowledge of the Docker and Kubernetes ecosystem.
Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible.
Proficient in scripting (Shell, Python) for automation and tooling.
Experience implementing DevSecOps practices and advanced security configurations.
Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.
Soft Skills
Strong problem-solving abilities and capacity to work under pressure.
Excellent communication and team collaboration.
Organized, with attention to detail and a commitment to quality.
Additional Skills
Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean).
Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog).
(ref:hirist.tech)
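The posting asks for Shell/Python automation scripts; purely to stay consistent with the other sketches in this document, here is a hedged Go equivalent of a small backup-upload task using the AWS SDK for Go v2. The bucket name, object key, and local file path are placeholder assumptions.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Credentials and region come from the default AWS configuration chain.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	client := s3.NewFromConfig(cfg)

	// Hypothetical nightly database dump produced by an earlier pipeline step.
	f, err := os.Open("/var/backups/db-2024-01-01.sql.gz")
	if err != nil {
		log.Fatalf("open backup: %v", err)
	}
	defer f.Close()

	// Bucket and key names are illustrative placeholders.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String("example-backups"),
		Key:    aws.String("db/db-2024-01-01.sql.gz"),
		Body:   f,
	})
	if err != nil {
		log.Fatalf("upload backup: %v", err)
	}
	log.Println("backup uploaded")
}
```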
Posted 1 month ago
10.0 - 13.0 years
27 - 30 Lacs
Hyderabad
Hybrid
Proven PO/TPO experience in cloud/DevOps. Hands-on with Azure DevOps, Terraform, Kubernetes, CI/CD & IaC. Strong in Agile & stakeholder management. .NET/C# & Azure certifications a plus. Drive infra automation & cloud-native initiatives.
Posted 1 month ago
8.0 - 13.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Role Summary
Akrivia HCM is seeking an experienced Site Reliability Engineer to safeguard the performance, scalability, and availability of our global HR tech platform. You will define service-level objectives, automate infrastructure, lead incident response, and partner with engineering squads to deliver reliable releases at high velocity.
Key Responsibilities
Define and track SLIs/SLOs for latency, availability, and error budgets (see the sketch after this posting).
Build and maintain Terraform/Helm/ArgoCD stacks; convert manual toil into code.
Instrument services with Prometheus, Grafana, Datadog, and OpenTelemetry; create actionable alerts & dashboards.
Serve in the on-call rotation, lead rapid mitigation, run blameless post-mortems, and close action items.
Model load growth, tune autoscaling policies, run load tests, and drive cost-optimisation reviews.
Design chaos game-days and fault-injection experiments to validate fail-over and recovery paths.
Review designs/PRs for reliability anti-patterns and coach development teams on SRE best practices.
Must-Have Qualifications
5+ years operating large-scale, user-facing SaaS systems on AWS, GCP, or Azure (Kubernetes/EKS preferred).
Proficiency with Infrastructure-as-Code (Terraform, Helm, Pulumi, or CloudFormation) and GitOps (ArgoCD/Flux).
Hands-on experience building observability stacks (Prometheus, Grafana, Datadog, New Relic, etc.).
Proven track record reducing MTTR and change-failure rate through automation and robust incident processes.
Strong scripting or programming skills in Go, Python, or TypeScript.
Deep debugging skills across Linux, networking, containers, databases, and web/API layers.
Excellent written and verbal communication skills.
Good-to-Have Skills
Exposure to AWS Well-Architected reviews, FinOps, or cost-optimisation initiatives.
Experience with service mesh (Istio/Linkerd), event-driven systems (Kafka/NATS), or serverless (Lambda).
Familiarity with SOC 2 / ISO 27001 controls and secrets management (AWS KMS, Vault).
Chaos engineering tools (ChaosMesh, Gremlin) and performance testing (k6, Gatling).
Certifications such as AWS DevOps Pro, CKA/CKAD, or Google Cloud SRE.
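Since the role centres on SLOs and error budgets, a short sketch of the standard error-budget arithmetic follows (budget = 1 - SLO target; burn = observed error ratio / budget). The SLO target and error ratio used here are illustrative numbers, not figures from the posting.

```go
package main

import "fmt"

// errorBudgetRemaining returns the fraction of the error budget still available
// in a window, given an SLO target (e.g. 0.999) and the observed error ratio.
func errorBudgetRemaining(sloTarget, observedErrorRatio float64) float64 {
	budget := 1 - sloTarget // e.g. 0.001 for a 99.9% availability SLO
	if budget <= 0 {
		return 0
	}
	return 1 - observedErrorRatio/budget
}

func main() {
	// Illustrative numbers: a 99.9% SLO with 0.04% of requests failing so far
	// in the window leaves 60% of the error budget unspent.
	remaining := errorBudgetRemaining(0.999, 0.0004)
	fmt.Printf("error budget remaining: %.0f%%\n", remaining*100)
}
```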
Posted 1 month ago
3.0 - 5.0 years
12 - 20 Lacs
Noida
Remote
Design and manage secure Kubernetes, Terraform IaC, CI/CD, and cloud infrastructure. Ensure availability, security, and automation. 3-5 years of DevOps/SRE experience with strong Kubernetes, Terraform, CI/CD, Linux, Docker, and scripting skills. Remote role with modern tooling, growth opportunities, and certification support.
Posted 1 month ago