3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Hybrid
Hi all, we are looking for a DevOps Engineer. Experience: 3-6 years. Notice period: Immediate to 15 days. Location: Bengaluru.
Description: Job Title: DevOps Engineer with 4+ years of experience.
Job Summary: We're looking for a dynamic DevSecOps Engineer to lead the charge in embedding security into our DevOps lifecycle. This role focuses on implementing secure, scalable, and observable cloud-native systems, leveraging Azure, Kubernetes, GitHub Actions, and security tools like Black Duck, SonarQube, and Snyk.
Key Responsibilities
• Architect, deploy, and manage secure Azure infrastructure using Terraform and Infrastructure as Code (IaC) principles
• Build and maintain CI/CD pipelines in GitHub Actions, integrating tools such as Black Duck, SonarQube, and Snyk
• Operate and optimize Azure Kubernetes Service (AKS) for containerized applications
• Configure robust monitoring and observability stacks using Prometheus, Grafana, and Loki
• Implement incident response automation with PagerDuty
• Manage and support MS SQL databases and perform basic operations on Cosmos DB
• Collaborate with development teams to promote security best practices across the SDLC
• Identify vulnerabilities early and respond proactively to emerging security threats
Required Skills
• Deep knowledge of Azure services, AKS, and Terraform
• Strong proficiency with Git, GitHub Actions, and CI/CD workflow design
• Hands-on experience integrating and managing Black Duck, SonarQube, and Snyk
• Proficiency in setting up monitoring stacks: Prometheus, Grafana, and Loki
• Familiarity with PagerDuty for on-call and incident response workflows
• Experience managing MSSQL and understanding of Cosmos DB basics
• Strong scripting ability (Python, Bash, or PowerShell)
• Understanding of DevSecOps principles and secure coding practices
• Familiarity with Helm, Bicep, container scanning, and runtime security solutions
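The responsibilities above pair Prometheus/Grafana monitoring with PagerDuty incident automation. As a hedged illustration only (not part of the posting), here is a minimal Python sketch that polls a Prometheus instant query and raises a PagerDuty event when a threshold is breached; the Prometheus URL, query, threshold, and routing key are hypothetical placeholders.

```python
"""Minimal sketch: poll a Prometheus query and raise a PagerDuty event on breach.

Assumptions: a reachable Prometheus HTTP API, a PagerDuty Events API v2
routing key, and the `requests` library. All names and thresholds are illustrative.
"""
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"   # hypothetical
PAGERDUTY_ROUTING_KEY = "REPLACE_WITH_ROUTING_KEY"           # hypothetical
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'  # example query
THRESHOLD = 5.0                                              # errors/sec, illustrative


def query_prometheus(expr: str) -> float:
    """Run an instant query and return the first scalar value (0.0 if empty)."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return float(results[0]["value"][1]) if results else 0.0


def trigger_pagerduty(summary: str) -> None:
    """Send a 'trigger' event to the PagerDuty Events API v2."""
    event = {
        "routing_key": PAGERDUTY_ROUTING_KEY,
        "event_action": "trigger",
        "payload": {"summary": summary, "source": "prometheus-watcher", "severity": "critical"},
    }
    requests.post("https://events.pagerduty.com/v2/enqueue", json=event, timeout=10).raise_for_status()


if __name__ == "__main__":
    rate_5xx = query_prometheus(QUERY)
    if rate_5xx > THRESHOLD:
        trigger_pagerduty(f"5xx error rate {rate_5xx:.2f}/s exceeds {THRESHOLD}/s")
```

In practice such a check would typically live in Alertmanager or a PagerDuty integration rather than a standalone script; the sketch only shows the moving parts end to end.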
Posted 1 month ago
2.0 - 7.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Senior Cloud Infra Engineer, 5+ yrs: AWS (EC2, EKS, IAM), Terraform, Jenkins, Linux administration, Grafana, Kibana, DevOps, Kafka, MySQL. Production operations and scaling experience needed. Contract-to-hire (C2H) via TE Infotech (Exotel), convertible to permanent. Location: Bengaluru. Contact: ssankala@toppersedge.com
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Skills (Must have): 3+ years of DevOps experience. Expertise in Kubernetes, Docker, and CI/CD tools (Jenkins, GitLab CI). Hands-on with configuration management tools like Ansible, Puppet, or Chef. Strong knowledge of cloud platforms (AWS, Azure, or GCP). Proficient in scripting (Bash, Python). Good troubleshooting, analytical, and communication skills. Willingness to explore frontend tech (ReactJS, NodeJS, Angular) is a plus.
Skills (Good to have): Experience with Helm charts and service meshes (Istio, Linkerd). Experience with monitoring and logging solutions (Prometheus, Grafana, ELK). Experience with security best practices for cloud and container environments. Contributions to open-source projects or a strong personal portfolio.
Role & Responsibilities: Manage and optimize Kubernetes clusters, including deployments, scaling, and troubleshooting. Develop and maintain Docker images and containers, ensuring security best practices. Design, implement, and maintain cloud-based infrastructure (AWS, Azure, or GCP) using Infrastructure-as-Code (IaC) principles (e.g., Terraform). Monitor and troubleshoot infrastructure and application performance, proactively identifying and resolving issues. Contribute to the development and maintenance of internal tools and automation scripts.
Qualification: B.Tech/B.E./M.E./M.Tech in Computer Science or equivalent.
Additional Information: We offer a competitive salary and excellent benefits that are above industry standard. Please submit your resume in a standard 1-page or 2-page format. Colleagues interested in internal mobility, please contact your HRBP in confidence.
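As a hedged illustration of the Kubernetes troubleshooting work described above, the sketch below lists pods that are not in a healthy phase using the official Python client; the kubeconfig context and namespace are assumptions, not details from the posting.

```python
"""Minimal sketch: flag pods that are not Running/Succeeded in a namespace.

Assumes the `kubernetes` Python client is installed and a local kubeconfig
grants read access; the namespace name is an illustrative placeholder.
"""
from kubernetes import client, config

NAMESPACE = "production"  # hypothetical namespace


def unhealthy_pods(namespace: str) -> list[str]:
    config.load_kube_config()          # or config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace)
    return [
        f"{p.metadata.name}: {p.status.phase}"
        for p in pods.items
        if p.status.phase not in ("Running", "Succeeded")
    ]


if __name__ == "__main__":
    for line in unhealthy_pods(NAMESPACE):
        print(line)
```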
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Job Title: AI Observability Tools Engineer. Experience: 5-7 years. Location: Hyderabad (work from office). Shift: Rotational. Notice Period: 30 days.
Key Responsibilities: Implement observability tools like Prometheus, Grafana, Datadog, Splunk, LogicMonitor, and ThousandEyes for AI/ML environments. Monitor model performance; set up monitoring thresholds, synthetic test plans, data pipelines, and inference systems. Ensure visibility across infrastructure, application, and network layers. Collaborate with SRE, DevOps, and Data Science teams to build proactive alerting and RCA systems. Drive real-time monitoring and AIOps integration for AI workloads. Integrate with ITSM solutions like ServiceNow.
Skills Required: Experience with tools: Datadog, Prometheus, Grafana, Splunk, OpenTelemetry. Solid understanding of networking concepts (TCP/IP, DNS, load balancers). Knowledge of AI/ML infrastructure and observability metrics. Scripting: Python, Bash, or Go.
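To make the model-performance monitoring responsibility more concrete, here is a small hedged Python sketch that exposes an inference-latency histogram for Prometheus to scrape; the metric name, port, and the simulated inference call are illustrative assumptions, not part of the posting.

```python
"""Minimal sketch: expose an inference-latency histogram for Prometheus.

Assumes the `prometheus_client` library; the metric name, port, and the fake
predict() call are placeholders used only to show the pattern.
"""
import random
import time

from prometheus_client import Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "model_inference_latency_seconds",      # hypothetical metric name
    "Latency of model inference calls in seconds",
)


@INFERENCE_LATENCY.time()                    # records each call's duration
def predict(features):
    time.sleep(random.uniform(0.01, 0.1))    # stand-in for a real model call
    return sum(features)


if __name__ == "__main__":
    start_http_server(8000)                  # metrics served at :8000/metrics
    while True:
        predict([1.0, 2.0, 3.0])
```

Alert thresholds on such a metric (for example, on its p95) would then be defined in Prometheus or Grafana rather than in the application code.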
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad, Ahmedabad
Work from Office
Grade Level (for internal use): 10. The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams. The Impact: Enterprise Data Organization is seeking a Software Developer to handle software design, development, and maintenance for data processing applications. This person will be part of a development team that manages and supports the internal and external applications supporting the business portfolio. The role expects the candidate to handle data processing and big data application development. Our teams learn to work effectively together while working with the larger group of developers on our platform.
What's in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team. Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks. Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.
Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation. Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions. Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL). Build data models, achieve performance tuning, and apply data architecture concepts. Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality. Provide operations support to resolve issues proactively and with utmost urgency. Effectively manage time and multiple tasks. Communicate effectively, especially in writing, with the business and other technical groups.
Basic Qualifications: Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development. Proficiency with object-oriented programming. Nice to have: knowledge of Grafana, Kibana, big data, Kafka, GitHub, EMR, Terraform, AI/ML. Advanced SQL programming skills. Highly recommended skill set in Databricks, Spark, and Scala technologies. Understanding of database performance tuning in large datasets. Ability to manage multiple priorities efficiently and effectively within specific timeframes. Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiencies. Knowledge of fundamentals or the financial industry is highly preferred. Experience in conducting application design and code reviews. Proficiency with the following technologies: object-oriented programming, programming languages (C#, .NET Core), cloud computing, database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell).
Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP). Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.
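Since this role centers on building data-intensive Spark/Databricks applications, here is a brief hedged PySpark sketch of the kind of aggregation job it implies; the input path, schema, and column names are purely illustrative assumptions.

```python
"""Minimal sketch: aggregate a financial-style dataset with PySpark.

Assumes a local PySpark installation; the file path and column names are
hypothetical and only illustrate the aggregation pattern.
"""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fundamentals-aggregation").getOrCreate()

# Hypothetical CSV of per-company revenue lines.
df = spark.read.csv("/data/fundamentals.csv", header=True, inferSchema=True)

summary = (
    df.groupBy("company_id", "fiscal_year")
      .agg(F.sum("revenue").alias("total_revenue"),
           F.countDistinct("segment").alias("segment_count"))
      .orderBy("company_id", "fiscal_year")
)

summary.show(20, truncate=False)
spark.stop()
```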
Posted 1 month ago
7.0 - 12.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 12+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Cross-Functional Collaboration: Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage. Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuration and management of databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 PM.
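As a hedged sketch of the backup-automation duty above (RDS snapshots driven from Python/boto3), the snippet below creates a dated manual snapshot; the DB instance identifier and region are hypothetical placeholders rather than details from the posting.

```python
"""Minimal sketch: create a dated manual RDS snapshot with boto3.

Assumes AWS credentials are configured and boto3 is installed; the DB
instance identifier and region are illustrative placeholders.
"""
from datetime import datetime, timezone

import boto3

DB_INSTANCE_ID = "orders-prod-db"   # hypothetical RDS instance
REGION = "ap-south-1"               # hypothetical region


def create_snapshot(instance_id: str) -> str:
    rds = boto3.client("rds", region_name=REGION)
    snapshot_id = f"{instance_id}-{datetime.now(timezone.utc):%Y-%m-%d-%H%M}"
    rds.create_db_snapshot(
        DBInstanceIdentifier=instance_id,
        DBSnapshotIdentifier=snapshot_id,
    )
    return snapshot_id


if __name__ == "__main__":
    print("Requested snapshot:", create_snapshot(DB_INSTANCE_ID))
```

A production setup would more likely schedule this via AWS Backup or an EventBridge rule; the script only illustrates the API call being automated.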
Posted 1 month ago
4.0 - 6.0 years
6 - 9 Lacs
Ahmedabad
Work from Office
Role Overview: As a DevOps Engineer at ChartIQ, you'll play a critical role not only in building, maintaining, and scaling the infrastructure that supports our Development and QA needs, but also in driving new, exciting cloud-based solutions that will add to our offerings. Your work will ensure that the platforms used by our team remain available, responsive, and high-performing. In addition to maintaining the current infrastructure, you will also contribute to the development of new cloud-based solutions, helping us expand and enhance our platform's capabilities to meet the growing needs of our financial services customers. You will also contribute to light JavaScript programming, assist with QA testing, and troubleshoot production issues. Working in a fast-paced, collaborative environment, you'll wear multiple hats and support the infrastructure for a wide range of development teams. This position is based in Ahmedabad, India, and will require working overlapping hours with teams in the US. The preferred working hours will be until 12 noon EST to ensure effective collaboration across time zones.
Key Responsibilities: Design, implement, and manage infrastructure using Terraform or other Infrastructure-as-Code (IaC) tools. Leverage AWS or equivalent cloud platforms to build and maintain scalable, high-performance infrastructure that supports data-heavy applications and JavaScript-based visualizations. Understand component-based architecture and cloud-native applications. Implement and maintain site reliability practices, including monitoring and alerting using tools like Datadog, ensuring the platform's availability and responsiveness across all environments. Design and deploy high-availability architecture to support continuous access to alerting engines. Support and maintain configuration management systems like ServiceNow CMDB. Manage and optimize CI/CD workflows using GitHub Actions or similar automation tools. Work with OIDC (OpenID Connect) integrations across Microsoft, AWS, GitHub, and Okta to ensure secure access and authentication. Contribute to QA testing (both manual and automated) to ensure high-quality releases and stable operation of our data visualization tools and alerting systems. Participate in light JavaScript programming tasks, including HTML and CSS fixes for our charting library. Assist with deploying and maintaining mobile applications on the Apple App Store and Google Play Store. Troubleshoot and manage network issues, ensuring smooth data flow and secure access to all necessary environments. Collaborate with developers and other engineers to troubleshoot and optimize production issues. Help with the deployment pipeline, working with various teams to ensure smooth software releases and updates for our library and related services.
Required Qualifications: Proficiency with Terraform or other Infrastructure-as-Code tools. Experience with AWS or other cloud services (Azure, Google Cloud, etc.). Solid understanding of component-based architecture and cloud-native applications. Experience with site reliability tools like Datadog for monitoring and alerting. Experience designing and deploying high-availability architecture for web-based applications. Familiarity with ServiceNow CMDB and other configuration management tools. Experience with GitHub Actions or other CI/CD platforms to manage automation pipelines. Strong understanding and practical experience with OIDC integrations across platforms like Microsoft, AWS, GitHub, and Okta.
Solid QA testing experience, including manual and automated testing techniques (Beginner/Intermediate). JavaScript, HTML, and CSS skills to assist with troubleshooting and web app development. Experience with deploying and maintaining mobile apps on the Apple App Store and Google Play Store that utilize web-based charting libraries. Basic network management skills, including troubleshooting and ensuring smooth network operations for data-heavy applications. Knowledge of package publishing tools such as Maven, Node, and CocoaPods to ensure seamless dependency management and distribution across platforms.
Additional Skills and Traits for Success in a Startup-Like Environment: Ability to wear multiple hats: adapt to the ever-changing needs of a startup environment within a global organization. Self-starter with a proactive attitude, able to work independently and manage your time effectively. Strong communication skills to work with cross-functional teams, including engineering, QA, and product teams. Ability to work in a fast-paced, high-energy environment. Familiarity with agile methodologies and working in small teams with a flexible approach to meeting deadlines. Basic troubleshooting skills to resolve infrastructure or code-related issues quickly. Knowledge of containerization tools such as Docker and Kubernetes is a plus. Understanding of DevSecOps and basic security practices is a plus.
Preferred Qualifications: Experience with CI/CD pipeline management, automation, and deployment strategies. Familiarity with serverless architectures and AWS Lambda. Experience with monitoring and logging frameworks, such as Prometheus, Grafana, or similar. Experience with Git, version control workflows, and source code management. Security-focused mindset, experience with vulnerability scanning, and managing secure application environments.
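Since this role emphasizes Datadog monitoring and alerting for site reliability, here is a hedged Python sketch that creates a metric-alert monitor through Datadog's HTTP API; the metric query, thresholds, monitor name, and environment variables are illustrative assumptions, not ChartIQ specifics.

```python
"""Minimal sketch: create a Datadog metric-alert monitor via the HTTP API.

Assumes valid Datadog API/application keys in environment variables and the
`requests` library; the query, thresholds, and monitor name are placeholders.
"""
import os

import requests

DD_SITE = "https://api.datadoghq.com"   # adjust for other Datadog sites
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],          # assumed env vars
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

monitor = {
    "name": "Web host CPU too high",                  # hypothetical name
    "type": "metric alert",
    "query": "avg(last_5m):avg:system.cpu.user{host:web-01} > 90",
    "message": "CPU above 90% for 5 minutes. Notify the on-call channel.",
    "options": {"thresholds": {"critical": 90}},
}

resp = requests.post(f"{DD_SITE}/api/v1/monitor", headers=HEADERS, json=monitor, timeout=10)
resp.raise_for_status()
print("Created monitor id:", resp.json()["id"])
```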
Posted 1 month ago
5.0 - 10.0 years
15 - 20 Lacs
Pune
Hybrid
Team: SRE & Operations. Duration: 12 months. Shift: General shift, 9:00 AM - 5:00 PM. Location: Pune. Interviews: 2 rounds. Years of experience: 5-7 (4 relevant). Notes: Immediate joiners or candidates with 15 days' notice period preferred. Top Skills: Splunk (queries, dashboards, and application creation); Grafana dashboards, Prometheus, data visualization; OpenTelemetry, Grafana, Prometheus. Dynatrace and Datadog are good to have. Some infrastructure knowledge of servers, storage, and web application infrastructure.
Posted 1 month ago
4.0 - 8.0 years
0 - 0 Lacs
Thiruvananthapuram
Work from Office
DevOps/SRE with 5+ yrs exp in Azure, Terraform, Kubernetes (EKS/GKE/AKS), Docker, CI/CD (Jenkins/GitHub), scripting (Python/Bash), Linux admin, monitoring (Prometheus/Grafana/ELK), GitOps, Helm, and strong networking/security skills.
Posted 1 month ago
4.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Job Title: Lead Engineer (Core Wireless Testing). Location: Bengaluru. Work Employment: Full time. Department: Product Engineering. Domain: Product Validation. Reporting to: Manager. Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. Over 60% of our employees are in R&D, and we are reshaping telecom networks, one innovation at a time. Why Join Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you! Who We Are: The Product Engineering team is responsible for platform and software validation for the entire product portfolio. The team develops the automation framework for the entire product portfolio and develops and delivers customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks. What You Work On: As a Lead Engineer, you will be responsible for driving technical projects, managing resources effectively, and balancing team workloads.
You will design solutions, oversee testing, and mentor junior engineers to ensure productivity and skill development. You will also manage resources, troubleshoot and debug issues, write and review test cases to ensure code quality, and collaborate with cross-functional teams to deliver high-quality products on time. Knowledge of software development methodology, build tools, and product life cycle. Build a 5G cloud-native test solution in a virtualized environment with end-to-end understanding of 5G network functions (i.e., AMF, SMF, UPF and PCF) and protocols. Exposure to customer deployment models and configuration of large mobile packet core solutions. 4-12 years of industry experience in mobile packet core technologies with a validation background and solid exposure to automation. End-to-end or system testing background. Good knowledge of Kubernetes, Docker, and cloud-native solutions. Experience in bringing up OpenStack and VMware based test setups. Interest and passion in automation and framework development using Python and Robot Framework. Exposure to Spirent Landslide, Mobilium DsTest, or Ixia simulators. Exposure to automation frameworks like pyATS and Robot Framework. Certification in Kubernetes and exposure to Grafana and Prometheus are an added advantage. Experience in CI/CD tools Jenkins and Git. Mandatory skills: Solid experience in 5G core end-to-end validation; Kubernetes, Docker, OpenStack; working exposure on AMF and UPF; experience in a customer escalation role; Spirent Landslide; Python, shell scripting, Robot Framework. Desired skills: Certification in Kubernetes and exposure to Grafana and Prometheus are an added advantage; experience in CI/CD tools Jenkins and Git. Preferred Qualifications: Experience: 6 to 10 years of relevant experience. Education: B.Tech/BE or any other equivalent degree, PG in a communication field. Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
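The Python and Robot Framework automation mentioned here lends itself to a short, hedged illustration: below is a minimal custom keyword library written in Python. The health-endpoint path and network-function URL are hypothetical, since real 5G network functions expose vendor-specific interfaces; the sketch only shows how a Python function becomes a Robot keyword.

```python
"""Minimal sketch: a custom Robot Framework keyword library in Python.

Robot Framework keywords are plain Python methods; this library exposes one
keyword that checks an HTTP health endpoint of a (hypothetical) 5G network
function deployed in a test environment. Requires the `requests` library.
"""
import requests


class NfHealthLibrary:
    """Importable in a .robot file as:  Library    NfHealthLibrary.py"""

    ROBOT_LIBRARY_SCOPE = "SUITE"

    def nf_should_be_healthy(self, base_url: str, timeout: float = 5.0):
        """Fail the test if the NF health endpoint does not return HTTP 200."""
        resp = requests.get(f"{base_url}/healthz", timeout=timeout)  # hypothetical path
        if resp.status_code != 200:
            raise AssertionError(f"NF at {base_url} unhealthy: HTTP {resp.status_code}")
```

In a Robot suite this could then be called as `NF Should Be Healthy    http://amf.test.local:8080` (URL purely illustrative).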
Posted 1 month ago
10.0 - 15.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Job Title: Staff Engineer (Core Wireless Testing). Location: Bengaluru. Work Employment: Full time. Department: Product Engineering. Domain: Product Validation. Reporting to: Group Manager. Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. Over 60% of our employees are in R&D, and we are reshaping telecom networks, one innovation at a time. Why Join Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you! Who We Are: The Product Engineering team is responsible for platform and software validation for the entire product portfolio. The team develops the automation framework for the entire product portfolio and develops and delivers customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks. What You Work On: As a Staff Engineer, you will be responsible for driving technical projects, managing resources effectively, and balancing team workloads.
You will design solutions, oversee testing, and mentor junior engineers to ensure productivity and skill development. You’ll lead technical initiatives, mentor team members, and collaborate closely with cross-functional teams to drive innovation and ensure high-quality deliverables. You’ll leverage your expertise to solve challenging problems and contribute to strategic engineering decisions. Knowledge of software development methodology, build tools, and product life cycle. Build a 5G cloud-native test solution in a virtualized environment with end-to-end understanding of 5G network functions (i.e., AMF, SMF, UPF and PCF) and protocols. Exposure to customer deployment models and configuration of large mobile packet core solutions. 10+ years of industry experience in mobile packet core technologies with a validation background and solid exposure to automation. End-to-end or system testing background. Good knowledge of Kubernetes, Docker, and cloud-native solutions. Experience in bringing up OpenStack and VMware based test setups. Interest and passion in automation and framework development using Python and Robot Framework. Exposure to Spirent Landslide, Mobilium DsTest, or Ixia simulators. Exposure to automation frameworks like pyATS and Robot Framework. Certification in Kubernetes and exposure to Grafana and Prometheus are an added advantage. Experience in CI/CD tools Jenkins and Git. Mandatory skills: Solid experience in 5G core end-to-end validation; Kubernetes, Docker, OpenStack; working exposure on AMF and UPF; experience in a customer escalation role; Spirent Landslide; Python, shell scripting, Robot Framework. Desired skills: Certification in Kubernetes and exposure to Grafana and Prometheus are an added advantage; experience in CI/CD tools Jenkins and Git. Preferred Qualifications: Experience: 10 to 15 years of relevant experience. Education: B.Tech/BE or any other equivalent degree, PG in a communication field. Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
Posted 1 month ago
4.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Title: Senior Engineer (Core Wireless Testing). Location: Bengaluru. Work Employment: Full time. Department: Product Engineering. Domain: Product Validation. Reporting to: Manager. Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. Over 60% of our employees are in R&D, and we are reshaping telecom networks, one innovation at a time. Why Join Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you! Who We Are: The Product Engineering team is responsible for platform and software validation for the entire product portfolio. The team develops the automation framework for the entire product portfolio and develops and delivers customer documentation and training solutions. Compliance with technical certifications such as TL9000 and TSEC is essential for ensuring industry standards and regulatory requirements are met. The team works closely with PLM, HW and SW architects, sales and customer account teams to innovate and develop network deployment strategy for a broad spectrum of networking products and software solutions. As part of this team, you will get an opportunity to validate, demonstrate and influence new technologies to shape future optical, routing, fiber broadband and wireless networks.
What You Work On: As a Senior Engineer, you will be responsible for the following. Knowledge of software development methodology, build tools, and product life cycle. Build a 5G cloud-native test solution in a virtualized environment with end-to-end understanding of 5G network functions (i.e., AMF, SMF, UPF and PCF) and protocols. Exposure to customer deployment models and configuration of large mobile packet core solutions. 4-12 years of industry experience in mobile packet core technologies with a validation background and solid exposure to automation. End-to-end or system testing background. Good knowledge of Kubernetes, Docker, and cloud-native solutions. Experience in bringing up OpenStack and VMware based test setups. Interest and passion in automation and framework development using Python and Robot Framework. Exposure to Spirent Landslide, Mobilium DsTest, or Ixia simulators. Exposure to automation frameworks like pyATS and Robot Framework. Certification in Kubernetes and exposure to Grafana and Prometheus are an added advantage. Experience in CI/CD tools Jenkins and Git. Mandatory skills: Solid experience in 5G core end-to-end validation; Kubernetes, Docker, OpenStack; working exposure on AMF and UPF; experience in a customer escalation role; Spirent Landslide; Python, shell scripting, Robot Framework. Desired skills: Certification in Kubernetes and exposure to Grafana and Prometheus are an added advantage; experience in CI/CD tools Jenkins and Git. Preferred Qualifications: Experience: 4 to 6 years of relevant experience. Education: B.Tech/BE or any other equivalent degree, PG in a communication field. Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
Posted 1 month ago
6.0 - 9.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Job Title: Lead Engineer – CI/CD DevOps. Location: Bengaluru. Work Employment: Full time. Department: Wireline. Domain: Software. Reporting to: Group Engineer. Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization’s long-term success. Over 60% of our employees are in R&D, and we are reshaping telecom networks, one innovation at a time. Why join Tejas: We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas’ fast-paced, autonomous learning environment and see your output and contributions become a part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies, alongside some of the industry’s brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we’re constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning and enjoy challenging the status quo, we want to hear from you! Who we are: In the dynamic world of enterprise technology, the shift towards cloud-native solutions is not just a trend but a necessity. As we embark on developing a state-of-the-art Network Management System (NMS) and Reporting tool, our goal is to leverage the latest technologies to create a robust, scalable, and efficient solution. This initiative is crucial for ensuring our network’s optimal performance, security, and reliability while providing insightful analytics through advanced reporting capabilities. Our project aims to design and implement a cloud-native NMS and reporting tool that will revolutionize how we manage and monitor our network infrastructure. By utilizing cutting-edge technologies, we will ensure that our solution is not only future-proof but also capable of adapting to the ever-evolving demands of our enterprise environment. What You Work On: Develop and implement automation strategies for software build, deployment, and infrastructure management.
Design and maintain CI/CD pipelines to enable frequent and reliable software releases. Collaborate with development, QA, and operations teams to optimize workflows and enhance software quality. Automate repetitive tasks and processes to improve efficiency and reduce manual intervention. Monitor and troubleshoot CI/CD pipelines to ensure smooth operation and quick resolution of issues. Implement and maintain robust monitoring and alerting tools to ensure system reliability. Work with various tools and technologies such as Git, Jenkins, Docker, Kubernetes, and cloud platforms (e.g., AWS, Azure). Ensure compliance with security standards and best practices throughout the development lifecycle. Continuously improve the CI/CD processes by incorporating new tools, techniques, and best practices. Provide training and guidance to team members on DevOps principles and practices. You will be responsible for leading a team and guiding them for optimum output. Mandatory skills: Strong experience in software development and system administration. Proficiency in programming languages such as Python, Java, or similar. Strong understanding of CI/CD concepts and experience with tools like Jenkins, Git, Docker, and Kubernetes. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability to work in a fast-paced, dynamic environment. Desired skills: Experience with Infrastructure-as-Code (IaC) tools like Terraform or Ansible. Knowledge of container orchestration tools like Kubernetes or Rancher. Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack. Certification in AWS, Azure, or other relevant technologies. Preferred Qualifications: Experience: 6 to 9 years of experience with a telecommunications or networking background. Education: B.Tech/BE (CSE/ECE/EEE/IS) or any other equivalent degree. Candidates should have good coding skills in CI/CD and DevOps with Java. Diversity and Inclusion Statement: Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an all-inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
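To give a hedged flavor of the build-and-deploy automation this CI/CD role describes, here is a minimal Python sketch using the Docker SDK to build and push a container image from a pipeline step; the image name, tag, and registry are illustrative placeholders, not Tejas specifics.

```python
"""Minimal sketch: build and push a container image from a CI script.

Assumes the `docker` Python SDK, a local Docker daemon, and registry
credentials already configured; image name, tag, and registry are placeholders.
"""
import docker

IMAGE = "registry.example.com/nms/reporting-service"   # hypothetical image
TAG = "1.0.0"


def build_and_push(context_dir: str = ".") -> None:
    client = docker.from_env()

    # Build the image from the given build context (expects a Dockerfile there).
    image, build_logs = client.images.build(path=context_dir, tag=f"{IMAGE}:{TAG}")
    for chunk in build_logs:
        if "stream" in chunk:
            print(chunk["stream"], end="")

    # Push to the registry; stream=True + decode=True yields progress dicts.
    for line in client.images.push(IMAGE, tag=TAG, stream=True, decode=True):
        if "status" in line:
            print(line["status"])


if __name__ == "__main__":
    build_and_push()
```

In a real pipeline the same steps would usually be expressed directly in Jenkins or GitLab CI stages; the Python form simply makes the sequence explicit.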
Posted 1 month ago
6.0 - 10.0 years
13 - 17 Lacs
Noida
Work from Office
We are looking for a skilled Azure L3 Architect with 6 to 10 years of experience in designing and implementing scalable, secure, and highly available cloud-based solutions on Microsoft Azure. This position is based in Pune. Roles and Responsibilities: Design and implement robust Azure-based infrastructure for critical BFSI applications. Manage and optimize Kubernetes clusters on Azure, ensuring scalability, security, and high availability. Develop CI/CD pipelines and automate workflows using tools like Terraform, Helm, and Azure DevOps. Ensure adherence to BFSI industry standards by implementing advanced security measures. Analyze and optimize Azure resource usage to minimize costs while maintaining performance and compliance standards. Collaborate with cross-functional teams to support application deployment, monitoring, troubleshooting, and lifecycle management. Job requirements: Minimum 6 years of hands-on experience in Azure and Kubernetes environments within BFSI or similar industries. Expertise in AKS, Azure IaaS, PaaS, and security tools like Azure Security Center. Proficiency in scripting languages such as Python, Bash, or PowerShell. Strong knowledge of cloud security principles and tools such as Azure Security Center and Azure Key Vault. Experience with cost management tools such as Azure Cost Management + Billing. Familiarity with monitoring tools such as Prometheus, Grafana, New Relic, Azure Log Analytics, and ADF. Understanding of BFSI compliance regulations and standards. Process improvement experience using frameworks like Lean, Six Sigma, or similar methodologies. Bachelor's degree in Computer Science, Engineering, or a related field. Certifications like Azure Solutions Architect, Certified Kubernetes Administrator (CKA), or Certified Azure DevOps Engineer are advantageous.
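For a hedged flavor of the Azure automation this architect role implies, here is a small Python sketch that lists AKS clusters in a subscription using the Azure SDK. The subscription ID is a placeholder, and the specific SDK packages (azure-identity, azure-mgmt-containerservice) are tooling assumptions, not requirements stated in the posting.

```python
"""Minimal sketch: list AKS clusters in a subscription with the Azure SDK.

Assumes `azure-identity` and `azure-mgmt-containerservice` are installed and
that DefaultAzureCredential can authenticate (CLI login, managed identity, or
environment variables). The subscription ID is a placeholder.
"""
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder


def list_aks_clusters() -> None:
    credential = DefaultAzureCredential()
    client = ContainerServiceClient(credential, SUBSCRIPTION_ID)
    for cluster in client.managed_clusters.list():
        print(cluster.name, cluster.location, cluster.kubernetes_version)


if __name__ == "__main__":
    list_aks_clusters()
```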
Posted 1 month ago
5.0 - 10.0 years
3 - 7 Lacs
Mumbai
Work from Office
We are looking for a skilled Java Backend Developer with 5 to 12 years of experience to develop and maintain backend services using Java Spring and JavaScript. The ideal candidate will have hands-on experience as a backend developer, proficiency in the Java Spring framework and JavaScript, and experience with at least one cloud provider. Roles and Responsibilities: Develop and maintain scalable and efficient backend systems using Java Spring and JavaScript. Design, implement, and optimize cloud-based solutions on AWS, GCP, or Azure. Work with SQL and NoSQL databases such as PostgreSQL, MySQL, and MongoDB for data persistence. Architect and develop Kubernetes-based microservices, caching solutions, and messaging systems like Kafka. Implement monitoring, logging, and alerting using tools like Grafana, CloudWatch, Kibana, and PagerDuty. Participate in on-call rotations, handle incident response, and contribute to operational playbooks. Job requirements: Hands-on experience as a backend developer with a strong understanding of data structures, algorithms, and software design principles. Proficiency in the Java Spring framework and JavaScript, with experience in developing scalable and efficient backend systems. Experience with at least one cloud provider, preferably AWS, GCP, or Azure, and knowledge of cloud-based solutions and containerization. Familiarity with microservice architectures, caching solutions, and event-driven architectures using Kafka. Strong communication skills with an emphasis on technical documentation and the ability to work in a globally distributed environment. Ability to contribute to high-availability services and participate in on-call rotations.
Posted 1 month ago
10.0 - 12.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Senior Java Developer with strong expertise in Java and the Spring Boot framework. The ideal candidate should have extensive experience with AWS cloud services and deploying applications in a cloud environment. This position is located in Hyderabad and requires 10 to 12 years of experience. Roles and Responsibilities: Design, develop, and deploy high-quality software applications using Java and Spring Boot. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale Java-based systems with scalability and performance in mind. Troubleshoot and resolve complex technical issues efficiently. Participate in code reviews and contribute to improving overall code quality. Stay updated with the latest trends and technologies in Java and related fields. Job requirements: Strong hands-on experience with Apache Kafka (producer/consumer, topics, partitions). Deep knowledge of PostgreSQL including schema design, indexing, and query optimization. Experience with JUnit test cases and developing unit/integration test suites. Familiarity with code coverage tools such as JaCoCo or SonarQube. Excellent verbal and written communication skills to explain complex technical concepts clearly. Demonstrated leadership skills with experience managing, mentoring, and motivating technical teams. Proven experience in stakeholder management, including gathering requirements, setting expectations, and delivering technical solutions aligned with business goals. Familiarity with microservices architecture and RESTful API design. Experience with containerization (Docker) and orchestration platforms like Kubernetes (EKS). Strong understanding of CI/CD pipelines and DevOps practices. Solid problem-solving skills with the ability to handle complex technical challenges. Familiarity with monitoring tools like Prometheus and Grafana, and log management. Experience with version control systems (Git) and Agile/Scrum methodologies.
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Database Engineer with 5 to 10 years of experience to design, develop, and maintain our database infrastructure. This position is based remotely. Roles and Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale and big data processing. Implement data security measures to protect sensitive information and comply with relevant regulations. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to relational database systems or cloud-based solutions like Google BigQuery and AWS. Develop import workflows and scripts to automate data import processes. Ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and resolve issues, while collaborating with the full-stack web developer to implement efficient data access and retrieval mechanisms. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows, exploring third-party technologies as alternatives to legacy approaches for efficient data pipelines. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices, and use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, taking accountability for achieving development milestones. Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities, while also collaborating with fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems, leveraging online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, considering their capabilities and limitations. Job requirements: Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles. Strong familiarity with Python for scripting and data manipulation tasks, with additional knowledge of Python OOP being advantageous. Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes. Knowledge of cloud-based databases like AWS RDS and Google BigQuery. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and able to work independently. About the Company: Marketplace is an experienced team of industry experts dedicated to helping readers make informed decisions and choose the right products with ease. We arm people with trusted advice and guidance, so they can make confident decisions and get back to doing the things they care about most.
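Since this posting emphasizes migrating spreadsheet data into relational databases with Python, here is a hedged minimal sketch of such an import workflow using pandas and SQLAlchemy; the file path, table name, required column, and connection string are illustrative assumptions only.

```python
"""Minimal sketch: load a spreadsheet export into PostgreSQL with pandas + SQLAlchemy.

Assumes pandas, SQLAlchemy, and a PostgreSQL driver (psycopg2) are installed;
the CSV path, table name, and DSN are placeholders for illustration only.
"""
import pandas as pd
from sqlalchemy import create_engine

CSV_PATH = "exports/products.csv"                                         # hypothetical file
DSN = "postgresql+psycopg2://etl_user:secret@localhost:5432/marketplace"  # placeholder


def import_csv(csv_path: str, table: str = "products_staging") -> int:
    df = pd.read_csv(csv_path)

    # Basic validation before load: drop fully empty rows, enforce a required column.
    df = df.dropna(how="all")
    if "product_id" not in df.columns:                                    # hypothetical required column
        raise ValueError("missing required column: product_id")

    engine = create_engine(DSN)
    df.to_sql(table, engine, if_exists="append", index=False)
    return len(df)


if __name__ == "__main__":
    print("rows loaded:", import_csv(CSV_PATH))
```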
Posted 1 month ago
3.0 - 8.0 years
10 - 15 Lacs
Hyderabad
Work from Office
We are seeking a proactive and detail-oriented Mobile Release Engineer to manage and streamline the release process for our mobile applications across iOS and Android platforms. The ideal candidate will have hands-on experience with app store deployments, performance monitoring, CI/CD pipelines, and cross-functional collaboration in a federated team environment. Primary Responsibilities: Manage end-to-end mobile app releases on the App Store and Google Play Store, ensuring timely and quality deployments. Monitor app performance using tools like Datadog, Grafana, and other observability platforms. Coordinate with federated teams across geographies, leading release calls and ensuring alignment on timelines and deliverables. Utilize GitHub and GitHub Actions for version control, CI/CD workflows, and cherry-picking commits for hotfixes and patch releases. Handle ServiceNow operations including change requests, issue triaging, assignment, and ensuring adherence to SLOs. Collaborate with QA and development teams to identify and execute production and regression test cases. Work closely with business stakeholders, product managers, and engineering teams to align on release goals and priorities. Contribute to initiatives focused on AI-driven automation and process optimization. (Preferred) Familiarity with AWS and cloud-native architectures. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Experience: 3+ years of experience in mobile app release management or DevOps. Proven experience in developing mobile applications using React Native. Experience working in agile, federated team environments. Solid understanding of CI/CD pipelines and automation tools. Proven excellent communication and coordination skills. Passion for automation, innovation, and continuous improvement.
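As a hedged illustration of coordinating releases around GitHub Actions, the sketch below checks the latest workflow run on a release branch via the GitHub REST API before promoting a build; the repository name, branch, and token handling are assumptions, not details from the posting.

```python
"""Minimal sketch: verify the latest GitHub Actions run on a release branch.

Assumes a personal access token in GITHUB_TOKEN and the `requests` library;
the owner/repo and branch names are illustrative placeholders.
"""
import os

import requests

OWNER_REPO = "example-org/mobile-app"     # hypothetical repository
BRANCH = "release/2.4.0"                  # hypothetical release branch
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def latest_run_conclusion(branch: str):
    url = f"https://api.github.com/repos/{OWNER_REPO}/actions/runs"
    resp = requests.get(url, headers=HEADERS, params={"branch": branch, "per_page": 1}, timeout=10)
    resp.raise_for_status()
    runs = resp.json().get("workflow_runs", [])
    return runs[0]["conclusion"] if runs else None


if __name__ == "__main__":
    conclusion = latest_run_conclusion(BRANCH)
    if conclusion == "success":
        print("CI green: safe to proceed with store submission")
    else:
        print(f"Hold the release: latest run conclusion is {conclusion!r}")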
Posted 1 month ago
4.0 - 6.0 years
10 - 14 Lacs
Mumbai
Work from Office
We are seeking a highly skilled Linux Administrator/Infrastructure Cloud Engineer with experience in managing both physical and virtual Linux servers. The ideal candidate will have a strong background in Linux administration, cloud services, and containerization technologies. This role requires a proactive individual who can optimize system performance, manage critical production incidents, and collaborate effectively with cross-functional teams. Key Responsibilities: • Administer and maintain Linux servers (Ubuntu, Debian, Redhat, CentOS) for both physical and virtual environments, including OS installation, performance monitoring, optimization, kernel tuning, LVM management, file system management, and security management. • Configure and manage servers for NFS, SAMBA, DNS, and other services. • Develop and maintain shell scripts for automation and configuration management, preferably using Ansible. • Manage Linux file systems and implement effective backup strategies. • Perform OS upgrades and patch management to ensure system security and compliance. • Respond to critical production incidents, ensuring that SLAs are maintained while troubleshooting and resolving issues. • Coordinate with database, DevOps, and other related teams to address system issues and ensure seamless operations. • Configure and manage network settings, including VLANs, switch configurations, gateways, and firewalls. • Develop automation solutions and documentation for recurring technical issues to improve efficiency. • Utilize AWS cloud services (e.g., EC2, S3, Lambda, Route53, IAM, SQS, SNS, SFTP) to configure and maintain cloud infrastructure, including virtual machines, storage systems, and network settings. • Monitor and optimize cloud performance, focusing on resource utilization and cost management. • Troubleshoot and resolve cloud infrastructure issues, conducting root cause analysis to prevent future incidents. • Utilize containerization technologies like Docker to manage and deploy applications effectively.
Posted 1 month ago
4.0 - 9.0 years
6 - 10 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd. | Industry: Employment Firms/Recruitment Services Firms | Experience: 4 to 12 years
Job Role: Java Developer | Job Type: FTE | Job Location: Hyderabad
Title: Java (Java 13+) + React Developer (ref: 6566287)
JD:
• Strong expertise in Java (Java 13+) and the Spring Boot framework.
• Extensive experience with AWS cloud services and deploying applications in a cloud environment.
• Proven hands-on experience with Apache Kafka (producer/consumer, topics, partitions).
• Deep knowledge of PostgreSQL including schema design, indexing, and query optimization.
• Solid experience writing JUnit test cases and developing unit/integration test suites.
• Familiarity with code coverage tools (e.g., JaCoCo, SonarQube) and implementing best practices for test coverage.
• Excellent verbal and written communication skills, with the ability to clearly explain complex technical concepts to non-technical stakeholders.
• Demonstrated leadership skills with experience managing, mentoring, and motivating technical teams.
• Proven experience in stakeholder management, including gathering requirements, setting expectations, and delivering technical solutions aligned with business goals.
• Familiarity with microservices architecture and RESTful API design.
• Experience with containerization (Docker) and orchestration platforms like Kubernetes (EKS).
• Strong understanding of CI/CD pipelines and DevOps practices.
• Solid problem-solving skills with the ability to handle complex technical challenges.
• Familiarity with monitoring tools (Prometheus, Grafana) and log management.
• Experience with version control systems (Git) and Agile/Scrum methodologies.
• BPM knowledge: modeler, Groovy scripts, event listeners, and interceptors.
• Camel framework.
• Kafka APIs.
• API security with JWT.
• AWS Glue.
• React JS.
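The role itself is Java/Spring Boot, but the producer/consumer, topic, and partition concepts it asks for are language-agnostic. As a hedged concept illustration only, here is a minimal Python sketch using the confluent-kafka client; the broker address, topic name, and consumer group are placeholders.

```python
"""Minimal sketch: Kafka producer and consumer round-trip (concept illustration).

Assumptions: a broker at localhost:9092 and a pre-created 'orders' topic;
the posting's actual stack is Java/Spring Boot, not Python.
"""
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"  # placeholder broker address
TOPIC = "orders"           # placeholder topic

def produce_one(key: str, value: str) -> None:
    producer = Producer({"bootstrap.servers": BROKER})
    # The key determines which partition the record lands on.
    producer.produce(TOPIC, key=key, value=value)
    producer.flush()  # block until delivery is confirmed

def consume_some(max_messages: int = 5) -> None:
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "demo-group",          # consumer group for offset tracking
        "auto.offset.reset": "earliest",   # start from the beginning if no offset exists
    })
    consumer.subscribe([TOPIC])
    try:
        for _ in range(max_messages):
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            print(f"partition={msg.partition()} key={msg.key()} value={msg.value()}")
    finally:
        consumer.close()

if __name__ == "__main__":
    produce_one("order-42", '{"status": "created"}')
    consume_some()
```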
Posted 1 month ago
4.0 - 5.0 years
0 - 0 Lacs
Hyderabad
Work from Office
Role & responsibilities
• Assist in the deployment, management, and scaling of applications on Kubernetes.
• Monitor and troubleshoot Kubernetes clusters and networking issues.
• Collaborate with development teams to ensure CI/CD pipelines are efficient and effective.
• Contribute to incident response and root cause analysis for production issues.
Preferred candidate profile
• Experience with monitoring tools (Prometheus, Grafana, etc.).
• Experience with Terraform or Ansible is a plus.
• Familiarity with Git and version control practices.
• Understanding of security best practices in a DevOps environment.
• Hands-on experience with Docker and Kubernetes.
• Understanding of networking concepts.
• Experience with cloud platforms (AWS, Azure, or Google Cloud).
• Strong problem-solving skills and the ability to work in a team-oriented environment.
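To give a flavour of the cluster troubleshooting this role involves, here is a minimal sketch using the official Kubernetes Python client to list pods that are not in the Running or Succeeded phase. The namespace name and the use of a local kubeconfig are assumptions.

```python
"""Minimal sketch: flag pods that are not Running/Succeeded in a namespace.

Assumptions: a local kubeconfig with access to the cluster, and a
'production' namespace; both are placeholders, not from the posting.
"""
from kubernetes import client, config

def unhealthy_pods(namespace: str = "production"):
    config.load_kube_config()          # use the current local kubeconfig context
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_namespaced_pod(namespace).items:
        phase = pod.status.phase       # Pending, Running, Succeeded, Failed, Unknown
        if phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for name, phase in unhealthy_pods():
        print(f"{name}: {phase}")
```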
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, or New Relic.
Required Candidate profile: Must have experience in Linux administration and a working knowledge of scripting (Python/Shell).
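For the monitoring portion, here is a hedged sketch that queries the Prometheus HTTP API for instances whose average CPU usage exceeded a threshold over the last five minutes. The Prometheus URL, the node_exporter metric, and the 80% threshold are placeholders.

```python
"""Minimal sketch: query Prometheus for instances with high CPU usage.

Assumptions: a Prometheus server reachable at PROM_URL that scrapes
node_exporter metrics; the URL and threshold are placeholders.
"""
import requests

PROM_URL = "http://prometheus.internal:9090"  # placeholder address

# Average CPU utilisation per instance over the last 5 minutes (node_exporter metric).
QUERY = '100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'

def high_cpu_instances(threshold: float = 80.0):
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    busy = []
    for sample in resp.json()["data"]["result"]:
        usage = float(sample["value"][1])        # value is [timestamp, value-as-string]
        if usage > threshold:
            busy.append((sample["metric"]["instance"], usage))
    return busy

if __name__ == "__main__":
    for instance, usage in high_cpu_instances():
        print(f"{instance}: {usage:.1f}% CPU")
```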
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Gurugram
Work from Office
Experience: 8-10 years. Job Title: DevOps Engineer. Location: Gurugram.
Job Summary
We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.
Key Responsibilities
Cloud Infrastructure & Automation:
• Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms.
• Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning.
• Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.
Containerization & Orchestration:
• Containerize applications using Docker for seamless development and deployment.
• Manage and scale Kubernetes clusters (on-premise or cloud-managed like AWS EKS).
• Monitor and optimize container environments for performance, scalability, and cost-efficiency.
Security & Compliance:
• Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager).
• Conduct regular vulnerability assessments and security scans, and implement remediation plans.
• Ensure infrastructure compliance with industry standards and manage incident response protocols.
Monitoring & Optimization:
• Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic).
• Analyze logs and metrics to troubleshoot issues and improve system performance.
• Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.
Scripting & Tooling:
• Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management.
• Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.
Collaboration & Leadership:
• Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs.
• Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement.
• Communicate technical concepts effectively to both technical and non-technical stakeholders.
Education: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience.
Certifications: AWS Certified DevOps Engineer - Professional (preferred) or other relevant cloud certifications.
Experience:
• 8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity.
• Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents.
• Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps.
• Advanced knowledge of the Docker and Kubernetes ecosystem.
• Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible.
• Proficient in scripting (Shell, Python) for automation and tooling.
• Experience implementing DevSecOps practices and advanced security configurations.
• Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.
Soft Skills
• Strong problem-solving abilities and capacity to work under pressure.
• Excellent communication and team collaboration.
• Organized, with attention to detail and a commitment to quality.
Preferred Skills:
• Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean).
• Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog).
(ref: hirist.tech)
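The posting calls out secure secrets management with AWS Secrets Manager. Below is a minimal, hedged sketch of fetching a secret with boto3 so deployment scripts avoid hard-coded credentials; the secret name, its JSON payload, and the region are placeholders.

```python
"""Minimal sketch: fetch a secret from AWS Secrets Manager at deploy time.

Assumptions: a secret named 'prod/db/credentials' exists, stores a JSON
string, and the caller's IAM role allows secretsmanager:GetSecretValue.
"""
import json
import boto3

def get_secret(name: str = "prod/db/credentials", region: str = "ap-south-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=name)
    # SecretString holds the plaintext payload; binary secrets use SecretBinary instead.
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_secret()
    # Never log the secret values themselves; just confirm the expected keys exist.
    print("secret keys:", sorted(creds.keys()))
```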
Posted 1 month ago
8.0 - 12.0 years
14 - 18 Lacs
Noida
Work from Office
We are looking for a Lead Cloud Operations Engineer to join our growing team supporting key supply-side technology platforms, including Atlas Integration, GMX, Hotel APIs, and related microservices in Azure. This is a high-impact technical leadership role focused on Azure cloud operations, monitoring, performance, security, and incident resolution. You will be responsible for ensuring the availability, scalability, and reliability of cloud-hosted systems, mentoring a small operations team, and collaborating with developers, architects, and business stakeholders to drive continuous improvement.
Role & responsibilities
• Own day-to-day operations and health of production and pre-prod environments hosted in Azure.
• Monitor infrastructure and applications using Azure Monitor, Application Insights, and Grafana.
• Lead the team in proactive incident detection, triage, resolution, and post-incident reviews (RCA, documentation).
• Implement and enhance automation for common operational tasks using PowerShell, Python, Azure CLI, and Terraform/Ansible.
• Act as escalation point for complex issues and high-severity incidents.
• Create, improve, and maintain runbooks, dashboards, alerts, and performance tuning metrics.
• Collaborate with development and DevOps teams to ensure operational readiness, deployment hygiene, and system resilience.
• Maintain strong governance around Azure resources, RBAC, policy enforcement, and tagging strategy.
• Lead disaster recovery planning, testing, and execution across critical systems.
• Drive cost optimization initiatives using Azure Cost Management and FinOps principles.
• Ensure compliance with security policies (ISO 27001, GDPR, SOC2) and assist in audits or security reviews.
• Support team mentoring and training, and promote a strong culture of ownership and accountability.
Requirements
• Azure IaaS: Virtual Machines, Scale Sets, Load Balancer, Disks, Networking (VNETs, NSGs, UDRs, Private Links, Service Endpoints)
• Azure PaaS: App Services, Azure Functions, Logic Apps, Key Vault, Event Grid, Azure SQL, Application Gateway, Azure Front Door, Traffic Manager
• Azure Kubernetes Service (AKS): deployment, scaling, security, and troubleshooting
• Azure Site Recovery (ASR), Azure Backup, and disaster recovery architecture
• Deep understanding of Azure Monitor, Application Insights, and Log Analytics
• Ability to write and optimize KQL queries for diagnostics and dashboards
• Experience with Grafana, Prometheus, and alerting pipelines
• Hands-on experience with Terraform, Ansible, and ARM templates
• Proficiency in scripting with PowerShell, Bash, and/or Python
• Experience with Azure DevOps Pipelines or similar CI/CD tooling is a plus
• RBAC, Managed Identities, Conditional Access, Key Vault integration
• Awareness of ISO 27001, SOC2, and GDPR requirements in cloud environments
• Proven experience leading 2-4 engineers (including juniors/mid-levels)
• Strong verbal and written communication skills; able to interact with technical and non-technical stakeholders
• Experience participating in on-call rotations, owning major incidents, and delivering RCA reports
• Ability to train, mentor, and guide junior engineers
• Collaborative mindset with a strong sense of accountability and urgency
Nice to Have
• Experience with multi-cloud (AWS or GCP) environments and hybrid cloud networking
• Experience working with microservices-based systems and APIs
• Exposure to FinOps practices and cloud cost management tools
• Certifications: AZ-305, AZ-104, AZ-500, AZ-700, AZ-400 preferred
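To illustrate the KQL-based diagnostics this role calls for, here is a hedged sketch that runs a Log Analytics query through the Azure CLI (`az monitor log-analytics query`) and prints the slowest requests over the last hour. The workspace GUID, the AppRequests table (workspace-based Application Insights), and the top-10 cut are assumptions.

```python
"""Minimal sketch: find the slowest requests in the last hour via Log Analytics.

Assumptions: the Azure CLI is installed and logged in, the workspace GUID is a
placeholder, and the application writes to the AppRequests table.
"""
import json
import subprocess

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder workspace GUID

KQL = """
AppRequests
| where TimeGenerated > ago(1h)
| summarize avg_ms = avg(DurationMs) by Name
| top 10 by avg_ms desc
"""

def slowest_requests():
    result = subprocess.run(
        ["az", "monitor", "log-analytics", "query",
         "--workspace", WORKSPACE_ID,
         "--analytics-query", KQL],
        capture_output=True, text=True, check=True,
    )
    # The CLI prints the result rows as a JSON array keyed by column name.
    return json.loads(result.stdout)

if __name__ == "__main__":
    for row in slowest_requests():
        print(row["Name"], round(float(row["avg_ms"]), 1), "ms avg")
```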
Posted 1 month ago
10.0 - 13.0 years
27 - 30 Lacs
Hyderabad
Hybrid
Proven PO/TPO experience in cloud/DevOps. Hands-on with Azure DevOps, Terraform, Kubernetes, CI/CD, and IaC. Strong in Agile and stakeholder management. .NET/C# and Azure certifications a plus. Drive infra automation and cloud-native initiatives.
Posted 1 month ago