3 - 8 years
3 - 7 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives, providing a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Google Kubernetes Engine
Good to have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

About The Role / Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).

Responsibilities: Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations. Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes. Knowledge of or experience with Kubernetes ecosystem tools such as Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus. Proven track record in supporting and deploying various public cloud services. Experience in building or managing self-service platforms to boost developer productivity. Proficiency in using Infrastructure as Code (IaC) tools like Terraform. Skilled in diagnosing and resolving complex issues in automation and cloud environments. Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems. Strong understanding of infrastructure CI/CD pipelines and associated tools. Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions. Experience working in GKE and Edge/GDCE environments. Assist development teams in building and deploying microservices-based applications in public cloud environments.

Technical Skillset: Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions. At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies. Over 3 years of experience with GCP IaC, particularly Terraform, including writing and maintaining Terraform configurations and modules. Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE). Familiarity with CI/CD tools (e.g., GitHub) and processes.

Certifications: GCP ACE certification is mandatory. CKA certification is highly desirable. HashiCorp Terraform certification is a significant plus.
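As a minimal illustrative sketch of the Terraform-based IaC automation this posting asks for, the Python wrapper below drives the standard Terraform CLI workflow; the "infra/gke" directory path is a hypothetical placeholder for an existing GKE module, and Terraform is assumed to be installed locally.

```python
import subprocess

def run(cmd, cwd):
    """Run a Terraform CLI command and fail loudly on error."""
    print(f"$ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def plan_and_apply(workdir: str, auto_approve: bool = False) -> None:
    """Initialise, plan, and optionally apply a Terraform configuration."""
    run(["terraform", "init", "-input=false"], cwd=workdir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=workdir)
    if auto_approve:
        # Applying the saved plan keeps apply consistent with what was reviewed.
        run(["terraform", "apply", "-input=false", "tfplan"], cwd=workdir)

if __name__ == "__main__":
    # "infra/gke" is a placeholder path to a Terraform module for a GKE cluster.
    plan_and_apply("infra/gke", auto_approve=False)
```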
Posted 1 month ago
7 - 12 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives, providing a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Kubernetes
Good to have skills: Google Kubernetes Engine, Google Cloud Compute Services
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

About The Role: We are looking for an experienced Kubernetes Architect to join our growing cloud infrastructure team. This role will be responsible for architecting, designing, and implementing scalable, secure, and highly available cloud-native applications on Kubernetes. You will leverage Kubernetes along with associated technologies like Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus to build resilient systems that meet both business and technical needs. Google Kubernetes Engine (GKE) will be considered an additional skill. As a Kubernetes Architect, you will play a key role in defining best practices, optimizing the infrastructure, and providing architectural guidance to cross-functional teams.

Key Responsibilities: Architect Kubernetes Solutions: Design and implement scalable, secure, and high-performance Kubernetes clusters. Cloud-Native Application Design: Collaborate with development teams to design cloud-native applications, ensuring that microservices are properly architected and optimized for Kubernetes environments. Kafka Management: Architect and manage Apache Kafka clusters using Kubekafka, ensuring reliable, real-time data streaming and event-driven architectures. Database Architecture: Use Kubegres to manage high-availability PostgreSQL clusters in Kubernetes, ensuring data consistency, scaling, and automated failover. Helm Chart Development: Create, maintain, and optimize Helm charts for consistent deployment and management of applications across Kubernetes environments. Ingress & Networking: Architect and configure Ingress controllers (e.g., NGINX, Traefik) for secure and efficient external access to Kubernetes services, including SSL termination, load balancing, and routing. Caching and Performance Optimization: Leverage Redis to design efficient caching and session management solutions, optimizing application performance. Monitoring & Observability: Lead the implementation of Prometheus for metrics collection and Grafana for building real-time monitoring dashboards to visualize the health and performance of infrastructure and applications. CI/CD Integration: Design and implement continuous integration and continuous deployment (CI/CD) pipelines to streamline the deployment of Kubernetes-based applications. Security & Compliance: Ensure Kubernetes clusters follow security best practices, including RBAC, network policies, and the proper configuration of Secrets Management. Automation & Scripting: Develop automation frameworks using tools like Terraform, Helm, and Ansible to ensure repeatable and scalable deployments. Capacity Planning and Cost Optimization: Optimize resource usage within Kubernetes clusters to achieve both performance and cost-efficiency, utilizing cloud tools and services. Leadership & Mentorship: Provide technical leadership to development, operations, and DevOps teams, offering mentorship, architectural guidance, and sharing best practices.
Documentation & Reporting: Produce comprehensive architecture diagrams, design documents, and operational playbooks to ensure knowledge transfer across teams and maintain system reliability.

Required Skills & Experience: 10+ years of experience in cloud infrastructure engineering, with at least 5+ years of hands-on experience with Kubernetes. Strong expertise in Kubernetes for managing containerized applications in the cloud. Experience in deploying and managing container-based systems on both private and public clouds (Google Kubernetes Engine (GKE)). Proven experience with Kubekafka for managing Apache Kafka clusters in Kubernetes environments. Expertise in managing PostgreSQL clusters with Kubegres and implementing high-availability database solutions. In-depth knowledge of Helm for managing Kubernetes applications, including the development of custom Helm charts. Experience with Ingress controllers (e.g., NGINX, Traefik) for managing external traffic in Kubernetes. Hands-on experience with Redis for caching, session management, and as a message broker in Kubernetes environments. Advanced knowledge of Prometheus for monitoring and Grafana for visualization and alerting in cloud-native environments. Experience with CI/CD pipelines for automated deployment and integration using tools like Jenkins, GitLab CI, or CircleCI. Solid understanding of networking, including load balancing, DNS, SSL/TLS, and ingress/egress configurations in Kubernetes. Familiarity with Terraform and Ansible for infrastructure automation. Deep understanding of security best practices in Kubernetes, such as RBAC, Network Policies, and Secrets Management. Knowledge of DevSecOps practices to ensure secure application delivery.

Certifications: Google Cloud Platform (GCP) certification is mandatory. Kubernetes certification (CKA or CKAD) is highly preferred. HashiCorp Terraform certification is a significant plus.
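As an illustrative sketch of the kind of cluster health scripting an architect in this role might rely on, the snippet below uses the official Kubernetes Python client to report deployments whose ready replicas lag the desired count; a working kubeconfig for the target cluster is assumed.

```python
from kubernetes import client, config

def report_unready_deployments() -> None:
    """Print deployments whose ready replica count lags the desired count."""
    config.load_kube_config()  # uses the current kubeconfig context
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
                  f"{ready}/{desired} replicas ready")

if __name__ == "__main__":
    report_unready_deployments()
```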
Posted 1 month ago
5 - 10 years
8 - 13 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives, providing a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Kubernetes
Good to have skills: Google Kubernetes Engine, Google Cloud Compute Services
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Job Summary: We are looking for an experienced Kubernetes Specialist to join our cloud infrastructure team. You will work closely with architects and engineers to design, implement, and optimize cloud-native applications on Google Kubernetes Engine (GKE). This role will focus on providing expertise in Kubernetes, container orchestration, and cloud infrastructure management, ensuring the seamless operation of scalable, secure, and high-performance applications on GKE and other cloud environments.

Responsibilities: Kubernetes Implementation: Design, implement, and manage Kubernetes clusters for containerized applications, ensuring high availability and scalability. Cloud-Native Application Design: Work with teams to deploy, scale, and maintain cloud-native applications on Google Kubernetes Engine (GKE). Kubernetes Tools Expertise: Utilize Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus to build and maintain resilient systems. Infrastructure Automation: Develop and implement automation frameworks using Terraform and other tools to streamline Kubernetes deployments and cloud infrastructure management. CI/CD Implementation: Design and maintain CI/CD pipelines to automate deployment and testing for Kubernetes-based applications. Kubernetes Networking & Security: Ensure secure and efficient Kubernetes cluster networking, including Ingress controllers (e.g., NGINX, Traefik), RBAC, and Secrets Management. Monitoring & Observability: Lead the integration of monitoring solutions using Prometheus for metrics and Grafana for real-time dashboard visualization. Performance Optimization: Optimize resource utilization within GKE clusters, ensuring both performance and cost-efficiency. Collaboration: Collaborate with internal development, operations, and security teams to meet user requirements and implement Kubernetes solutions. Troubleshooting & Issue Resolution: Address complex issues related to containerized applications, Kubernetes clusters, and cloud infrastructure, troubleshooting and resolving them efficiently.

Technical Skillset: GCP & Kubernetes Experience: Minimum of 3+ years of hands-on experience in Google Cloud Platform (GCP) and Kubernetes implementations, including GKE. Container Management: Proficiency with container orchestration engines such as Kubernetes and Docker. Kubernetes Tools Knowledge: Experience with Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus for managing Kubernetes-based applications. Infrastructure as Code (IaC): Strong experience with Terraform for automating infrastructure provisioning and management. CI/CD Pipelines: Hands-on experience in building and managing CI/CD pipelines for Kubernetes applications using tools like Jenkins, GitLab, or CircleCI. Security & Networking: Knowledge of Kubernetes networking (DNS, SSL/TLS), security best practices (RBAC, network policies, and Secrets Management), and the use of Ingress controllers (e.g., NGINX). Cloud & DevOps Tools: Familiarity with cloud services and DevOps tools such as GitHub, Jenkins, and Ansible.
Monitoring Expertise: In-depth experience with Prometheus and Grafana for operational monitoring, alerting, and creating actionable insights.

Certifications: Google Cloud Platform (GCP) Associate Cloud Engineer (ACE) certification is required. Certified Kubernetes Administrator (CKA) is highly preferred.
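As an illustrative sketch of the Prometheus-based monitoring expertise described above, the script below runs an instant PromQL query against a Prometheus server's HTTP API to list pods that restarted in the last hour; the server URL is a placeholder and the metric assumes kube-state-metrics is being scraped.

```python
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder

def query(promql: str) -> list:
    """Run an instant PromQL query via Prometheus' HTTP API and return the result set."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": promql}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    # Pods whose containers restarted in the last hour.
    for series in query("increase(kube_pod_container_status_restarts_total[1h]) > 0"):
        labels = series["metric"]
        value = series["value"][1]
        print(f"{labels.get('namespace')}/{labels.get('pod')}: {value} restarts")
```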
Posted 1 month ago
3 - 6 years
8 - 10 Lacs
Gurugram
Work from Office
Design, implement, and manage CI/CD pipelines using GitLab. Work with Docker and Kubernetes for containerization and orchestration in production environments. Automate infrastructure provisioning and configuration using Ansible and Terraform.

Required Candidate Profile: Proven experience working with GitLab CI/CD, Docker, Kubernetes, Ansible, and Terraform on live production projects. Familiarity with cloud platforms like AWS or Azure is a strong plus.
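As a hedged illustration of the GitLab CI/CD automation described above, the sketch below triggers a pipeline on a project using the python-gitlab library; the server URL, token environment variable, and project path are placeholders.

```python
import os
import gitlab

def trigger_pipeline(project_path: str, ref: str = "main"):
    """Create a pipeline for the given project/branch and return it."""
    gl = gitlab.Gitlab("https://gitlab.example.com",          # placeholder URL
                       private_token=os.environ["GITLAB_TOKEN"])
    project = gl.projects.get(project_path)
    pipeline = project.pipelines.create({"ref": ref})
    print(f"Started pipeline {pipeline.id} ({pipeline.status}) on {ref}")
    return pipeline

if __name__ == "__main__":
    trigger_pipeline("devops/sample-service")  # placeholder project path
```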
Posted 1 month ago
2 - 4 years
8 - 12 Lacs
Bengaluru
Work from Office
Location: India, Bangalore | Time type: Full time | Posted: 2 Days Ago | Job requisition id: JR0035199

Job Title: Site Reliability Engineer

About Trellix: Trellix, the trusted CISO ally, is redefining the future of cybersecurity and soulful work. Our comprehensive, GenAI-powered platform helps organizations confronted by today's most advanced threats gain confidence in the protection and resilience of their operations. Along with an extensive partner ecosystem, we accelerate technology innovation through artificial intelligence, automation, and analytics to empower over 53,000 customers with responsibly architected security solutions. We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at .

Role Overview: The Site Reliability Engineer team is responsible for design, implementation, and end-to-end ownership of the infrastructure platform and services that protect Trellix Security's Consumer business. The services provide continuous protection to our customers with a very strong focus on quality and an extensible services platform for internal partners and product teams. This role is a Site Reliability Engineer for commercial cloud-native solutions, deployed and managed in public cloud environments like AWS and GCP. You will be part of a team that is responsible for Trellix Cloud Services that enable protection at the endpoint products on a continuous basis. Responsibilities of this role include supporting Cloud service measurement, monitoring, and reporting, deployments, and security. You will contribute to improving overall operational quality through common practices and by working with the Engineering, QA, and product DevOps teams. You will also be responsible for supporting efforts that improve Operational Excellence and Availability of Trellix Production environments. You will have access to the latest tools and technology, and an incredible career path with the world's cyber security leader. You will have the opportunity to immerse yourself within complex and demanding deployment architectures and see the big picture, all while helping to drive continuous improvement in all aspects of a dynamic and high-performing engineering organization. If you are passionate about running and continuously improving a world-class Site Reliability Engineering team, we are offering you a unique opportunity to build your career with us and gain experience working with high-performance Cloud systems.

About Role: Being part of a global 24x7x365 team providing operational coverage, including event response and recovery efforts for critical services. Periodic deployment of features, patches, and hotfixes to maintain the security posture of our Cloud Services. Ability to work in shifts on a rotational basis and participate in On-Call duties. Have ownership and responsibility for high availability of Production environments. Input into the monitoring of systems, applications, and supporting data. Report on system uptime and availability. Collaborate with other team members on best practices. Assist with creating and updating runbooks & SOPs. Build a strong relationship with the Cloud DevOps, Dev & QA teams and become a domain expert for the cloud services in your remit. You will be provided the required support for growth and development in this role.
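As an illustrative sketch of the uptime reporting and runbook scripting this role involves, the snippet below probes a set of service health endpoints and returns a non-zero exit code on failure; the endpoint URLs are placeholders.

```python
import sys
import requests

# Placeholder health endpoints for services in this team's remit.
SERVICES = {
    "api-gateway": "https://api.example.internal/healthz",
    "telemetry": "https://telemetry.example.internal/healthz",
}

def check_all() -> int:
    """Probe each service; return a non-zero exit code if any check fails."""
    failures = 0
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=5)
            ok = resp.status_code == 200
        except requests.RequestException as exc:
            ok, resp = False, None
            print(f"{name}: request failed ({exc})")
        if ok:
            print(f"{name}: healthy")
        else:
            failures += 1
            if resp is not None:
                print(f"{name}: unhealthy (HTTP {resp.status_code})")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check_all())
```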
About you: 2 to 4 years of hands-on experience supporting production of large-scale cloud services. Strong production support background and experience with in-depth troubleshooting. Experience working with solutions in both Linux and Windows environments. Experience using modern Monitoring and Alerting tools (Prometheus, Grafana, PagerDuty, etc.). Excellent written and verbal communication skills. Experience with Python or other scripting languages. Proven ability to work independently in deploying, testing, and troubleshooting systems. Experience supporting high availability systems and scalable solutions hosted on AWS or GCP. Familiarity with security tools & practices (Wiz, Tenable). Familiarity with Containerization and associated management tools (Docker, Kubernetes). Significant experience developing and maintaining relationships with a wide range of customers at all levels. Understanding of Incident, Change, Problem and Vulnerability Management processes.

Desired: Awareness of ITIL best practices. AWS Certification and/or Kubernetes Certification. Experience with Snowflake. Automation/CI/CD experience: Jenkins, Ansible, GitHub Actions, Argo CD.

Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees. Retirement Plans; Medical, Dental and Vision Coverage; Paid Time Off; Paid Parental Leave; Support for Community Involvement. We're serious about our commitment to a workplace where everyone can thrive and contribute to our industry-leading products and customer support, which is why we prohibit discrimination and harassment based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
Posted 1 month ago
3 - 5 years
6 - 10 Lacs
Bengaluru
Work from Office
Location: India, Bangalore | Time type: Full time | Posted: 30+ Days Ago | Job requisition id: JR0034909

Job Title: SDET

About Trellix: Trellix, the trusted CISO ally, is redefining the future of cybersecurity and soulful work. Our comprehensive, GenAI-powered platform helps organizations confronted by today's most advanced threats gain confidence in the protection and resilience of their operations. Along with an extensive partner ecosystem, we accelerate technology innovation through artificial intelligence, automation, and analytics to empower over 53,000 customers with responsibly architected security solutions. We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at .

Role Overview: Trellix is looking for SDETs who are self-driven and passionate about working on the Endpoint Detection and Response (EDR) line of products. The team is the ultimate quality gate before shipping to customers. Tasks range from manual and automated testing (including automation development) to non-functional (performance, stress, soak), solution, and security testing, and much more. Work on cutting-edge technology and AI-driven analysis.

About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency. Hands-on experience with automation programming languages such as Python, Java, etc. is advantageous. Execute, monitor, and debug automation runs. Author automation code to improve coverage across the board. Willing to explore and increase understanding of Cloud/On-prem infrastructure.

About you: 3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality and nitty-gritty. Solid fundamentals in any programming language (preferably Python or Java) and OOP concepts. Also, hands-on experience with CI/CD using Jenkins or similar is a must. RESTful API testing using tools such as Postman or similar is desired. Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is required. Exposure to Docker, Helm, Argo CD is an added advantage. Strong foundational knowledge of working on Linux-based systems, including setting up git repos, user management, network configurations, use of package managers, etc. Hands-on experience with non-functional testing, such as performance and load, is desirable; exposure to Locust or JMeter tools will be an added advantage. Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have. Understanding of Endpoint security concepts around Endpoint Detection and Response (EDR) would be advantageous.
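Since the posting calls out Locust for load testing, here is a minimal illustrative Locust user class exercising a REST endpoint; the host and path are placeholders for whatever service is actually under test.

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    """Simulated API client for a simple load test."""
    wait_time = between(1, 3)  # seconds between tasks per simulated user

    @task
    def get_health(self):
        # Placeholder endpoint; replace with a real API route under test.
        self.client.get("/api/v1/health")

# Run with:  locust -f locustfile.py --host https://staging.example.com
```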
Company Benefits and Perks: We work hard to embrace diversity and inclusion and encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees. Retirement Plans; Medical, Dental and Vision Coverage; Paid Time Off; Paid Parental Leave; Support for Community Involvement. We're serious about our commitment to diversity, which is why we prohibit discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
Posted 1 month ago
1 - 6 years
8 - 13 Lacs
Pune
Work from Office
Cloud Observability Administrator

Pune, India | Enterprise IT - 22685

Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

Cloud Observability Administrator: ZS is looking for a Cloud Observability Administrator to join our team in Pune. As a Cloud Observability Administrator, you will work on the configuration of various observability tools and create solutions to address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and utilize past experience to design a flexible and scalable solution, and collaborate with other team members (involved in the requirements gathering, testing, roll-out and operations phases) to ensure seamless transitions.

What You'll Do: Deploying, managing, and operating scalable, highly available, and fault tolerant Splunk architecture. Onboarding various kinds of log sources like Windows/Linux/Firewalls/Network into Splunk. Developing alerts, dashboards and reports in Splunk. Writing complex SPL queries. Managing and administering a distributed Splunk architecture. Very good knowledge of configuration files used in Splunk for data ingestion and field extraction. Performing regular upgrades of Splunk and relevant Apps/add-ons. Possessing a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc. Automation of manual tasks using Shell/PowerShell scripting; knowledge of Python scripting is a plus. Good knowledge of Linux commands to manage administration of servers.

What You'll Bring: 1+ years of experience in Splunk Development & Administration; Bachelor's Degree in CS, EE, or related discipline. Strong analytic, problem solving, and programming ability. 1-1.5 years of relevant consulting-industry experience working on medium-large scale technology solution delivery engagements. Strong verbal, written and team presentation communication skills. Strong verbal and written communication skills with ability to articulate results and issues to internal and client teams. Proven ability to work creatively and analytically in a problem-solving environment. Ability to work within a virtual global team environment and contribute to the overall timely delivery of multiple projects. Knowledge of observability tools such as Cribl, Datadog, PagerDuty is a plus. Knowledge of AWS Prometheus and Grafana is a plus. Knowledge of APM concepts is a plus. Knowledge of Linux/Python scripting is a plus. Splunk Certification is a plus.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority.
While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An on-line application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At
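Relating back to the Splunk administration duties in this posting, here is a minimal illustrative sketch of running a one-shot SPL search through Splunk's REST API with the requests library; the host, credentials, index name, and exact response shape are assumptions to verify against a real deployment.

```python
import requests

SPLUNK_HOST = "https://splunk.example.internal:8089"  # placeholder management endpoint

def oneshot_search(spl: str) -> dict:
    """Run a blocking one-shot search against Splunk's REST API and return JSON results."""
    resp = requests.post(
        f"{SPLUNK_HOST}/services/search/jobs",
        auth=("admin", "changeme"),          # placeholder credentials
        data={"search": spl, "exec_mode": "oneshot", "output_mode": "json"},
        verify=False,                        # lab setting only; use proper certs in production
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Example SPL: error counts per host over the last hour ("app_logs" is a placeholder index).
    results = oneshot_search(
        'search index=app_logs log_level=ERROR earliest=-1h | stats count by host'
    )
    for row in results.get("results", []):
        print(row.get("host"), row.get("count"))
```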
Posted 1 month ago
3 - 6 years
7 - 10 Lacs
Jaipur, Bengaluru
Work from Office
In Time Tec is an award-winning IT & software company. In Time Tec offers progressive software development services, enabling its clients to keep their brightest and most valuable talent focused on innovation. In Time Tec has a leadership team averaging 15 years in software/firmware R&D, and 20 years building onshore/offshore R&D teams. We are looking for rare talent to join us. People having a positive mindset and great organizational skills will be drawn to the position. Your capacity to take initiative and solve problems as they emerge, flexibility, and honesty will be key factors for your success at In Time Tec.

We're looking for an Interactive Backend Engineer – Python & DevOps who will be responsible for managing the release pipeline. This person will not just be involved in scripting but also in development, and will directly support the development and content teams that are creating and publishing content on highly trafficked websites. The ideal candidate is someone who has worked in a build/release role previously, has strong communication skills, and knows how to handle unexpected scenarios.

Roles and Responsibilities: Backend Engineer – Python & DevOps
Skills: Strong programming experience in Python (not just scripting — real development). Experience with CI/CD tools like Jenkins. Proficiency in Git and source control workflows. Experience with Docker, Kubernetes, and Linux environments. Familiarity with scripting languages like Bash, and optionally Groovy or Go. Knowledge of web application servers and deployment processes. Good understanding of DevOps principles, cloud environments, and automation.

Nice to Have: Experience with monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack). Exposure to configuration management tools like Ansible. Experience in performance tuning and scaling backend systems.
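As a hedged illustration of the build/release automation this role describes, the sketch below triggers a parameterized Jenkins job with the python-jenkins library; the server URL, credential environment variables, job name, and parameter are placeholders.

```python
import os
import jenkins  # python-jenkins package

def trigger_release(job_name: str, version: str) -> None:
    """Kick off a parameterized Jenkins release job and report the queue item."""
    server = jenkins.Jenkins(
        "https://jenkins.example.com",            # placeholder URL
        username=os.environ["JENKINS_USER"],
        password=os.environ["JENKINS_TOKEN"],
    )
    # Recent python-jenkins versions return the queue item number from build_job.
    queue_id = server.build_job(job_name, parameters={"VERSION": version})
    print(f"Queued {job_name} (queue item {queue_id}) for version {version}")

if __name__ == "__main__":
    trigger_release("content-site-release", "1.4.2")  # placeholder job and version
```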
Posted 1 month ago
4 - 8 years
15 - 25 Lacs
Noida, Pune, Gurugram
Hybrid
We are looking for passionate DevOps Engineers for our team. The DevOps Engineer will work closely with architects, data engineers and operations to design, build, deploy, manage and operate our development, test and production infrastructure. You will build and maintain tools to ensure our applications meet our stringent SLAs in a fast-paced culture, with a passion to learn and contribute. We are looking for a strong engineer with a can-do attitude.

What You'll Need: 4+ years of industry experience in design, implementation and maintenance of IT infrastructure and DevOps solutions, data centres and greenfield infrastructure projects on both On-Premise and Cloud. Strong experience in Terraform, Oracle Cloud Infrastructure, Ansible, Puppet. Strong experience in server installation, maintenance, monitoring, troubleshooting, data backup, recovery, security and administration of Linux operating systems like Red Hat Enterprise Linux, Ubuntu and CentOS. Strong programming ability in Shell/Perl/Python with automation experience. Experience automating public cloud deployments. Experience using and optimizing monitoring and trending systems (Prometheus, Grafana), log aggregation systems (ELK, Splunk), and their agents. Experience working with container-based technologies like Docker and OpenShift. Using Docker and Ansible to automate the creation of Kubernetes pods. Experience with Ansible for provisioning and configuration of servers. Experience with Jenkins to automate the build and deploy process across all environments. Experience with build tools like Apache Maven and Apache Gradle. Monitoring and troubleshooting of Kubernetes clusters using Grafana and Prometheus. Experience in working closely with the development team to avoid manual intervention and to ensure the timely delivery of deliverables. Experience in virtualization technologies (VMware). Database administration, maintenance, backup and restoration. Experience with various SQL and NoSQL databases like MySQL, Postgres, MongoDB, HBase, Elasticsearch etc. Experience in handling production deployments.

Our perfect candidate is someone that: Is proactive and an independent problem solver. Is a constant learner. We are a fast-growing company. We want you to grow with us! Is a team player and good communicator.

Notice Period: 30 Days or less. Mode of Work: Hybrid (3 days Work from Office).
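As an illustrative sketch of the monitoring and trending work listed above, the snippet below exposes a custom metric with the prometheus_client library so a Prometheus server can scrape it; the metric name, port, and random value are arbitrary placeholders for a real probe.

```python
import random
import time
from prometheus_client import Gauge, start_http_server

# Example metric; in practice this would reflect a real measurement (queue depth, backup lag, etc.).
BACKUP_LAG_SECONDS = Gauge(
    "example_backup_lag_seconds",
    "Seconds since the last successful database backup (illustrative metric)",
)

def collect_once() -> None:
    """Update the gauge; the random value stands in for a real probe."""
    BACKUP_LAG_SECONDS.set(random.uniform(0, 3600))

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at http://localhost:8000/metrics
    while True:
        collect_once()
        time.sleep(30)
```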
Posted 1 month ago
4 - 8 years
13 - 18 Lacs
Hyderabad
Work from Office
About The Role
Senior Cloud Solutions Technologist
Job Location (Short): Hyderabad, India | Workplace Type: Hybrid | Business Unit: ALI | Req Id: 1542

Responsibilities: Design, build, and manage Azure infrastructure, ensuring high availability, performance, and security. Implement DevOps practices using Azure DevOps, CI/CD pipelines, and infrastructure-as-code (IaC) tools. Manage and optimize Azure Kubernetes Service (AKS) clusters, ensuring scalability, security, and efficiency of containerized applications. Configure and maintain Azure-based servers and Citrix Virtual App environments. Optimize performance, security, and disaster recovery strategies across Azure infrastructure, AKS clusters, and Citrix environments. Automate cloud operations using scripting (Python, Bash, PowerShell) and configuration management tools (Puppet and Terraform). Implement monitoring, logging, and alerting strategies for cloud services, applications, and infrastructure. Apply cloud security best practices, ensuring compliance with organizational and regulatory security standards. Collaborate with developers, architects, and infrastructure teams to streamline cloud deployments and ensure operational efficiency. Participate in an On-Call rotation to provide support for critical cloud systems.

Education / Qualifications: Hexagon is seeking a highly motivated and experienced Site Reliability Engineer (SRE) to design, build, and manage our Azure cloud infrastructure. This role will be instrumental in implementing DevOps practices using Azure DevOps, optimizing and managing Azure Kubernetes Service (AKS) clusters for containerized applications, and configuring and maintaining Azure-based servers and Citrix Virtual Apps and Desktops environments. Should have a relevant bachelor's degree in an Engineering stream. Proficiency with monitoring tools (e.g., Datadog, Prometheus, Grafana, LogicMonitor). Strong understanding of IT infrastructure, including servers, networks, and cloud environments across different OS platforms. Experience with virtualization platforms and cloud security strategies. Hands-on experience with container orchestration (e.g., Azure Kubernetes Service (AKS) or equivalent). Proficient in automation tools (e.g., Puppet and Terraform) and scripting languages (Python, Bash, PowerShell). Experience in setting up alerting and monitoring for containerized and microservices environments (Kubernetes, Docker). Familiarity with DevOps best practices, including CI/CD pipeline development. Strong problem-solving and analytical skills, with a focus on proactive identification and resolution of issues. Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.

Preferred Qualifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect, or Azure DevOps Engineer Expert) are highly desirable. Experience with Citrix Virtual Apps and Desktops administration is a plus.
About Hexagon: Hexagon is the global leader in digital reality solutions, combining sensor, software and autonomous technologies. We are putting data to work to boost efficiency, productivity, quality and safety across industrial, manufacturing, infrastructure, public sector, and mobility applications. Our technologies are shaping production and people related ecosystems to become increasingly connected and autonomous – ensuring a scalable, sustainable future. Hexagon (Nasdaq Stockholm: HEXA B) has approximately 24,500 employees in 50 countries and net sales of approximately 5.4bn EUR. Learn more at hexagon.com and follow us @HexagonAB.

Hexagon's R&D Centre in India: Hexagon's R&D Centre in India is the single largest R&D centre for the company globally. More than 2,000 talented engineers and developers create innovation from this centre that powers Hexagon's products and solutions. Hexagon's R&D Centre delivers innovations and creative solutions for all business lines of Hexagon, including Asset Lifecycle Intelligence, Autonomous Solutions, Geosystems, Manufacturing Intelligence, and Safety, Infrastructure & Geospatial. It also hosts dedicated service teams for the global implementation of Hexagon's products. R&D India – MAKES THINGS INTELLIGENT.

Asset Lifecycle Intelligence: Produces insights across the asset lifecycle to design, construct, and operate more profitable, safe, and sustainable industrial facilities.

Everyone is welcome: At Hexagon, we believe that diverse and inclusive teams are critical to the success of our people and our business. Everyone is welcome—as an inclusive workplace, we do not discriminate. In fact, we embrace differences and are fully committed to creating equal opportunities, an inclusive environment, and fairness for all. Respect is the cornerstone of how we operate, so speak up and be yourself. You are valued here.
Posted 1 month ago
3 - 7 years
13 - 18 Lacs
Hyderabad
Work from Office
About The Role
Cloud Solutions Consultant
Job Location (Short): Hyderabad, India | Workplace Type: Hybrid | Business Unit: ALI | Req Id: 1406

Responsibilities: Design, build, and manage Azure infrastructure, ensuring high availability, performance, and security. Implement DevOps practices using Azure DevOps, CI/CD pipelines, and infrastructure-as-code (IaC) tools. Manage and optimize Azure Kubernetes Service (AKS) clusters, ensuring scalability, security, and efficiency of containerized applications. Configure and maintain Azure-based servers and Citrix Virtual App environments. Optimize performance, security, and disaster recovery strategies across Azure infrastructure, AKS clusters, and Citrix environments. Automate cloud operations using scripting (Python, Bash, PowerShell) and configuration management tools (Puppet and Terraform). Implement monitoring, logging, and alerting strategies for cloud services, applications, and infrastructure. Apply cloud security best practices, ensuring compliance with organizational and regulatory security standards. Collaborate with developers, architects, and infrastructure teams to streamline cloud deployments and ensure operational efficiency. Participate in an On-Call rotation to provide support for critical cloud systems.

Education / Qualifications: Hexagon is seeking a highly motivated and experienced Cloud Solutions Consultant to design, build, and manage our Azure cloud infrastructure. This role will be instrumental in implementing DevOps practices using Azure DevOps, optimizing and managing Azure Kubernetes Service (AKS) clusters for containerized applications, and configuring and maintaining Azure-based servers and Citrix Virtual Apps and Desktops environments.

Required Skills & Qualifications: A minimum of 6-10 years of relevant work experience. Should have a bachelor's/master's degree in engineering. Proficiency with monitoring tools (e.g., Datadog, Prometheus, Grafana, LogicMonitor). Strong understanding of IT infrastructure, including servers, networks, and cloud environments across different OS platforms. Experience with virtualization platforms and cloud security strategies. Hands-on experience with container orchestration (e.g., Azure Kubernetes Service (AKS) or equivalent). Proficient in automation tools (e.g., Puppet and Terraform) and scripting languages (Python, Bash, PowerShell). Experience in setting up alerting and monitoring for containerized and microservices environments (Kubernetes, Docker). Familiarity with DevOps best practices, including CI/CD pipeline development. Strong problem-solving and analytical skills, with a focus on proactive identification and resolution of issues. Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.

Preferred Qualifications: Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect, or Azure DevOps Engineer Expert) are highly desirable. Experience with Citrix Virtual Apps and Desktops administration is a plus.
About Hexagon: Hexagon is the global leader in digital reality solutions, combining sensor, software and autonomous technologies. We are putting data to work to boost efficiency, productivity, quality and safety across industrial, manufacturing, infrastructure, public sector, and mobility applications. Our technologies are shaping production and people related ecosystems to become increasingly connected and autonomous – ensuring a scalable, sustainable future. Hexagon (Nasdaq Stockholm: HEXA B) has approximately 24,500 employees in 50 countries and net sales of approximately 5.4bn EUR. Learn more at hexagon.com and follow us @HexagonAB.

Hexagon's R&D Centre in India: Hexagon's R&D Centre in India is the single largest R&D centre for the company globally. More than 2,000 talented engineers and developers create innovation from this centre that powers Hexagon's products and solutions. Hexagon's R&D Centre delivers innovations and creative solutions for all business lines of Hexagon, including Asset Lifecycle Intelligence, Autonomous Solutions, Geosystems, Manufacturing Intelligence, and Safety, Infrastructure & Geospatial. It also hosts dedicated service teams for the global implementation of Hexagon's products. R&D India – MAKES THINGS INTELLIGENT.

Asset Lifecycle Intelligence: Produces insights across the asset lifecycle to design, construct, and operate more profitable, safe, and sustainable industrial facilities.

Everyone is welcome.
Posted 1 month ago
4 - 9 years
0 Lacs
Bengaluru
Remote
This is Rajlaxmi from the HR department of ISoftStone Inc. We are looking for a TechOps Engineer with 5+ years of experience. Please find the JD below; if interested, please drop your CV at "rajlaxmi.chowdhury@isoftstone.com".

Location: Bangalore/Remote | Relevant Experience: 5+ years

Overview: We are seeking a highly motivated and skilled TechOps Engineer to join our team. The ideal candidate will be responsible for ensuring the smooth operation and performance of GTP services, providing technical support, troubleshooting issues, and implementing solutions to optimize efficiency. This is an opportunity to work in a dynamic and innovative environment. We foster a collaborative and inclusive culture that values creativity, initiative and continuous learning. If you are a self-motivated professional with a passion for technology and a drive for excellence, we invite you to apply and be an integral part of our team. Career progression opportunities exist for suitably skilled and motivated individuals in the wider GTP function.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. ITIL v3 or v4 Foundation certification is preferred. Excellent communication skills and ability to articulate technical issues/requirements. Excellent problem-solving and troubleshooting skills.

Preferred Skills: Demonstrated comprehensive understanding of ITIL processes and best practices. Demonstrated comprehensive understanding of various monitoring systems such as Dynatrace, Sentry, Grafana, Prometheus, Azure Monitor, GCP Operations Suite, etc. Proficiency in cloud technologies (e.g., AWS, Azure, GCP). Demonstrated understanding of operating Couchbase Database, MongoDB, as well as PostgreSQL is preferred. Demonstrated understanding of backup and disaster recovery concepts and tools to ensure the availability and recoverability of production systems in the event of a disaster. Certification in relevant technologies (e.g., Microsoft Azure, GCP) is a plus. Familiarity with DevOps practices such as CI/CD workflows, experience with GitHub Actions, and proficiency in using infrastructure automation tools. Knowledge of the software development lifecycle. Knowledge of containerization and orchestration tools such as Kubernetes.
Posted 1 month ago
3 - 8 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: Python (Programming Language)
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Bachelor's degree in Computer Science Engineering or a related field

Summary: This is a hands-on, technical role where the candidate will design and implement a DevOps Maturity Model by integrating multiple DevOps tools and building backend APIs to visualize data on a front-end interface. The candidate will work closely with cross-functional teams to enable DevOps culture, ensure system reliability, and drive continuous improvement.

Roles & Responsibilities: 1. DevOps Maturity Model: Design and develop a model to assess and improve DevOps practices by integrating tools like Jenkins, GitLab, and Azure DevOps. 2. Backend Development: Build scalable and efficient backend APIs using Python and Azure Serverless. 3. Frontend Development: Develop intuitive and responsive front-end interfaces using Angular and Vue.js for data visualization. 4. Monitoring & Automation: Implement monitoring, logging, and alerting solutions. Develop automation scripts for reporting and analysis. 5. Collaboration: Work with cross-functional teams to resolve production-level disruptions and enable DevOps culture. 6. Documentation: Document architecture, design, and implementation details.

Professional & Technical Skills: 1. Backend Development: Python and experience with Azure Serverless. 2. Frontend Development: Angular and Vue.js. 3. Databases: Familiarity with Azure SQL, Cosmos DB, or PostgreSQL. 4. Containerization: Good understanding of Docker and Kubernetes for basic troubleshooting. 5. Networking: Basic understanding of TCP/IP, HTTP, DNS, VPN, and cloud networking. 6. Monitoring & Logging: Experience with monitoring tools like Prometheus, Grafana, or Datadog.

Additional Information: 1. The candidate should have a minimum of 3 years of experience in Python & Angular full stack. 2. This position is based at our Bengaluru office. 3. A 15 years full time education is required (bachelor's degree in Computer Science, Engineering, or a related field).

Qualification: Bachelor's degree in Computer Science Engineering or a related field
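As a hedged, illustrative sketch of the "backend APIs to visualize DevOps data" idea above, the snippet below exposes a small FastAPI endpoint returning a toy maturity score; the endpoint path, team signals, and scoring rule are invented placeholders, and a real service would pull these signals from the Jenkins/GitLab/Azure DevOps APIs.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="DevOps Maturity API (illustrative)")

class MaturityReport(BaseModel):
    team: str
    score: float            # 0-100, toy aggregate
    signals: dict[str, bool]

# Placeholder signals a real service would pull from CI/CD tool APIs.
SAMPLE_SIGNALS = {"ci_pipeline": True, "automated_tests": True, "iac": False}

@app.get("/maturity/{team}", response_model=MaturityReport)
def maturity(team: str) -> MaturityReport:
    """Return a toy maturity score: the share of practices a team has adopted."""
    score = 100 * sum(SAMPLE_SIGNALS.values()) / len(SAMPLE_SIGNALS)
    return MaturityReport(team=team, score=score, signals=SAMPLE_SIGNALS)

# Run locally with:  uvicorn maturity_api:app --reload
```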
Posted 1 month ago
4 - 9 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer
Experience Required: 4-5 Years
Location: 100% Work from Office
Schedule: Monday to Friday | 7:00 AM – 4:00 PM IST (Brisbane Time)

Job Summary: We are hiring a DevOps Engineer with a strong foundation in CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate must have hands-on experience with Docker, Kubernetes, Terraform, and scripting languages.

Mandatory Technical Skills & Tools (Keywords): DevOps; CI/CD Pipelines (e.g., Jenkins, GitLab CI/CD, CircleCI); Docker; Kubernetes; Infrastructure as Code (IaC) – Terraform, CloudFormation; Linux Systems Administration; Automation Scripting – Python, Shell/Bash; Cloud Platforms – AWS, Azure, or GCP; Monitoring & Logging – Prometheus, Grafana, ELK, or similar; Source Control – Git, GitHub, GitLab; Configuration Management – Ansible, Chef, or Puppet.

Key Responsibilities: Design, build, and maintain automated CI/CD pipelines. Manage and scale containerized applications using Docker and Kubernetes. Write and maintain Infrastructure as Code (IaC) using Terraform or similar tools. Develop automation scripts for deployment, monitoring, and infrastructure operations. Ensure system reliability, scalability, and performance. Collaborate with software engineers, QA, and infrastructure teams to ensure seamless deployment cycles. Troubleshoot and resolve infrastructure issues in a proactive manner.

Qualifications: Bachelor's degree in Computer Science, Engineering, or related field. 4–5 years of experience in a DevOps or Site Reliability Engineering role. Proven experience in CI/CD, containers, and cloud infrastructure. Strong analytical and problem-solving skills.

Work Environment: 100% Work from Office, 5 days a week (Monday to Friday). Working hours aligned with Brisbane time: 7:00 AM – 4:00 PM IST.
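As an illustrative example of the cloud automation scripting this posting lists, the sketch below uses boto3 to flag EC2 instances with failing status checks; the region is a placeholder and credentials are assumed to come from the standard AWS environment configuration.

```python
import boto3

def unhealthy_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running EC2 instances whose status checks are not 'ok'."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instance_status")
    bad = []
    for page in paginator.paginate(IncludeAllInstances=False):
        for status in page["InstanceStatuses"]:
            instance_ok = status["InstanceStatus"]["Status"] == "ok"
            system_ok = status["SystemStatus"]["Status"] == "ok"
            if not (instance_ok and system_ok):
                bad.append(status["InstanceId"])
    return bad

if __name__ == "__main__":
    for instance_id in unhealthy_instances():
        print(f"Status check failing: {instance_id}")
```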
Posted 1 month ago
5 - 7 years
10 - 12 Lacs
Bengaluru
Work from Office
Experience in designing and building high-performance, distributed systems. Familiarity with cloud services (AWS, GCP, Azure) and containerization (Docker, Kubernetes). Strong knowledge of asynchronous programming, multithreading, and parallel processing. Experience in integrating external APIs, function calling, and plugin-based architectures. Experience with performance monitoring and logging tools (Prometheus, Grafana, ELK stack). Familiarity with search engines, RAG pipelines, and hybrid search strategies.
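As a brief illustration of the asynchronous programming and external API integration mentioned above, the sketch below fans out concurrent HTTP calls with asyncio and aiohttp; the URLs are placeholders for real external services.

```python
import asyncio
import aiohttp

# Placeholder endpoints standing in for external APIs or plugin backends.
URLS = [
    "https://api.example.com/search?q=alpha",
    "https://api.example.com/search?q=beta",
    "https://api.example.com/search?q=gamma",
]

async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    """GET one URL and decode its JSON body."""
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main() -> None:
    # A single session reuses connections; gather() runs the requests concurrently.
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(
            *(fetch_json(session, url) for url in URLS), return_exceptions=True
        )
    for url, result in zip(URLS, results):
        print(url, "->", type(result).__name__)

if __name__ == "__main__":
    asyncio.run(main())
```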
Posted 1 month ago
10 - 18 years
20 - 27 Lacs
Hyderabad, Ahmedabad
Work from Office
Hi Aspirants, greetings from TechBlocks - IT Software & Services - Hyderabad & Ahmedabad!

About TechBlocks: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities.

Job Title: SRE Manager / SRE Team Leader (Site Reliability Manager)
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: Hybrid (3 Days WFO & 2 Days WFH)

Job Summary: An SRE Manager is responsible for overseeing a team of Site Reliability Engineers (SREs) and ensuring the reliability, performance, and availability of a company's digital infrastructure. They manage the SRE team, drive automation initiatives, and collaborate with other departments to ensure seamless operations and alignment with business objectives.

Experience Required: 10+ years total experience, with 3+ years in a leadership role in SRE (Site Reliability Engineering) or Cloud Operations.

Technical Knowledge and Skills: Mandatory: Deep understanding of Kubernetes, GKE, Terraform, and Grafana/Prometheus/Splunk/Datadog. Cloud: Advanced GCP administration, or any cloud. CI/CD: Jenkins, Argo CD, GitHub Actions. Incident Management: Full lifecycle, tools like Opsgenie. Nice to Have: Knowledge of service mesh and observability stacks. Strong scripting skills (Python, Bash). BigQuery/Dataflow exposure for telemetry.

Scope: Build and lead a team of SREs. Standardize practices for reliability, alerting, and response. Engage with Engineering and Product leaders.

Roles and Responsibilities: Establish and lead the implementation of organizational reliability strategies, aligning SLAs, SLOs, and Error Budgets with business goals and customer expectations. Develop and institutionalize incident response frameworks, including escalation policies, on-call scheduling, service ownership mapping, and RCA process governance. Lead technical reviews for infrastructure reliability design, high-availability architectures, and resiliency patterns across distributed cloud services. Champion an observability and monitoring culture by standardizing tooling, alert definitions, dashboard templates, and telemetry data schemas across all product teams. Drive continuous improvement through operational maturity assessments, toil elimination initiatives, and SRE OKRs aligned with product objectives. Collaborate with cloud engineering and platform teams to introduce self-healing systems, capacity-aware autoscaling, and latency-optimized service mesh patterns. Act as the principal escalation point for reliability-related concerns and ensure incident retrospectives lead to measurable improvements in uptime and MTTR. Own runbook standardization, capacity planning, failure mode analysis, and production readiness reviews for new feature launches. Mentor and develop a high-performing SRE team, fostering a proactive ownership culture, encouraging cross-functional knowledge sharing, and establishing technical career pathways.
Collaborate with leadership, delivery, and customer stakeholders to define reliability goals, track performance, and demonstrate ROI on SRE investments.

Note: Please send your updated resume to kranthikt@tblocks.com or reach me on 8522804902.

Warm Regards,
Kranthi Kumar | kranthikt@tblocks.com | Contact: 8522804902
Senior Talent Acquisition Specialist
Toronto | Ahmedabad | Hyderabad | Pune
www.tblocks.com
Posted 1 month ago
5 - 9 years
4 - 8 Lacs
Kolkata
Work from Office
We are looking for an experienced and motivated DevOps Engineer with 5 to 7 years of hands-on experience designing, implementing, and managing cloud infrastructure, particularly on Google Cloud Platform (GCP). The ideal candidate will have deep expertise in infrastructure as code (IaC), CI/CD pipelines, container orchestration, and cloud-native technologies. This role requires strong analytical skills, attention to detail, and a passion for optimizing cloud infrastructure performance and cost.
Key Responsibilities: Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using Google Cloud Platform (GCP) services, including Compute Engine, Google Kubernetes Engine (GKE), Cloud Functions, Cloud Pub/Sub, BigQuery, and Cloud Storage. Build and manage CI/CD pipelines using GitHub, artifact repositories, and version control systems; enforce GitOps practices across environments. Leverage Docker, Kubernetes, and serverless architectures to support microservices and modern application deployments. Develop and manage Infrastructure as Code (IaC) using Terraform to automate environment provisioning. Implement observability tools like Prometheus, Grafana, and Google Cloud Monitoring for real-time system insights. Ensure best practices in cloud security, including IAM policies, encryption standards, and network security. Integrate and manage service mesh architectures such as Istio or Linkerd for secure and observable microservices communication. Troubleshoot and resolve infrastructure issues, ensuring high availability, disaster recovery, and performance optimization. Drive initiatives for cloud cost management and suggest optimization strategies for resource efficiency. Document technical architectures, processes, and procedures; ensure smooth knowledge transfer and operational readiness. Collaborate with cross-functional teams, including Development, QA, Security, and Architecture, to streamline deployment workflows.
Preferred candidate profile: 5+ years of DevOps/Cloud Engineering experience, with at least 3 years on GCP. Proficiency in Terraform, Docker, Kubernetes, and other DevOps toolchains. Strong experience with CI/CD tools, GitHub/GitLab, and artifact repositories. Deep understanding of cloud networking, VPCs, load balancing, firewalls, and VPNs. Expertise in monitoring and logging frameworks such as Prometheus, Grafana, and Stackdriver (Cloud Monitoring). Strong scripting skills in Python, Bash, or Go for automation tasks. Knowledge of data backup, high-availability systems, and disaster recovery strategies. Familiarity with service mesh technologies and microservices-based architecture. Excellent analytical, troubleshooting, and documentation skills. Effective communication and ability to work in a fast-paced, collaborative environment.
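To illustrate the kind of GKE deployment artifact this role produces and manages, here is a minimal Python sketch that renders a Deployment manifest, assuming PyYAML is installed; the image path, namespace-less metadata, port, and resource figures are hypothetical placeholders, not values from this posting.

```python
# Sketch: render a minimal Kubernetes Deployment manifest from Python.
# Requires PyYAML (pip install pyyaml). All names and numbers are placeholders.
import yaml

def deployment_manifest(name: str, image: str, replicas: int = 3) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 8080}],
                        "resources": {
                            "requests": {"cpu": "100m", "memory": "128Mi"},
                            "limits": {"cpu": "500m", "memory": "512Mi"},
                        },
                    }]
                },
            },
        },
    }

if __name__ == "__main__":
    # Write the manifest so it can be applied with `kubectl apply -f deployment.yaml`
    # or committed to a GitOps repository.
    manifest = deployment_manifest("orders-api", "gcr.io/example-project/orders-api:1.4.2")
    with open("deployment.yaml", "w") as fh:
        yaml.safe_dump(manifest, fh, sort_keys=False)
```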
Posted 1 month ago
6 - 8 years
15 - 20 Lacs
Gurugram
Work from Office
We are looking for a candidate with strong skills in RabbitMQ, Docker and Kubernetes, Jenkins pipelines, Nexus, Nagios/AppDynamics, ELK, Kafka, Redis, and Prometheus.
Posted 1 month ago
8 - 13 years
25 - 30 Lacs
Bengaluru
Work from Office
About The Role: At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on the Digital Platform. To help us build and run the platform for Digital Applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.
Objectives of this Role: Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Working on ways to automate and improve development and release processes. Investigating and resolving technical issues. Developing scripts to automate visualization. Designing procedures for system troubleshooting and maintenance.
Skills and Qualifications: BSc in Computer Science, Engineering, or a relevant field. Minimum 5 years of experience as a DevOps Engineer or in a similar software engineering role. Proficient with Git and Git workflows. Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS. Problem-solving attitude. Collaborative team spirit. Testing and examining code written by others and analyzing results. Identifying technical problems and developing software updates and fixes. Working with software developers and software engineers to ensure that development follows established processes and works as intended. Monitoring the systems and setting up the required tools.
Daily and Monthly Responsibilities: Deploy updates and fixes. Provide Level 3 technical support. Build tools to reduce occurrences of errors and improve customer experience. Develop software to integrate with internal back-end systems. Perform root cause analysis for production errors.
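As a hedged sketch of the deployment automation this role calls for, the snippet below wraps a Helm release upgrade and then waits for the Kubernetes rollout to finish; the release, chart, namespace, and values-file names are hypothetical, and helm and kubectl are assumed to be installed and authenticated against the target cluster.

```python
# Sketch of a deployment helper: run a Helm upgrade, then wait for the rollout.
# All names are hypothetical; helm and kubectl must be on PATH with a valid kubeconfig.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise if the command exits non-zero

def deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    run(["helm", "upgrade", "--install", release, chart,
         "--namespace", namespace, "--create-namespace",
         "-f", values_file, "--wait", "--timeout", "10m"])
    run(["kubectl", "rollout", "status", f"deployment/{release}",
         "--namespace", namespace, "--timeout=300s"])

if __name__ == "__main__":
    try:
        deploy("payments-api", "./charts/payments-api", "payments", "values-prod.yaml")
    except subprocess.CalledProcessError as exc:
        sys.exit(f"deployment failed: {exc}")
```

In a CI/CD setup, a step like this would typically run after image build and test stages, with the values file selected per environment.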
Posted 1 month ago
7 - 11 years
17 - 22 Lacs
Bengaluru
Work from Office
At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation. Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.
Position Summary: F5 Inc. is actively seeking an exceptional Sr Principal Software Engineer (Individual Contributor) to play a pivotal role in our SRE Operations team for the groundbreaking F5XC product. Are you an SRE Operations specialist with automation in your DNA? Do you thrive in fast-paced SaaS environments where cloud meets global infrastructure? We are looking for a top-tier SRE to drive Logs, Metrics, and Alerting, with a deep focus on alerting automation at massive scale.
Why This Role is Unique: Our SaaS is hybrid, running across public cloud and a global network of 50+ PoPs, delivering terabits of capacity. Our infrastructure spans cloud-native services and physical networking gear (routers, switches, firewalls), creating a uniquely challenging and exciting observability landscape. The Analytics & Observability platform will have deep reach across these layers, ensuring reliability, security, and performance at massive scale.
What You'll Do:
Be the Force Behind Observability & Stability: Drive end-to-end observability (logs, metrics, and alerts) across our hybrid SaaS stack, spanning cloud, edge, and physical network devices. Take ownership of alerting strategy, cutting through noise while ensuring actionable, high-fidelity alerts. Implement intelligent automation to reduce operational toil and enhance real-time visibility.
Own & Automate Operations: Design, build, and manage automation for self-healing infrastructure across cloud and global PoPs. Develop automation for Kubernetes, ArgoCD, Helm charts, Golang-based services, AWS, GCP, and Terraform. Improve networking observability, ensuring our routers, switches, and firewalls are monitored at scale. Continuously eliminate manual ops work through automation and platform improvements.
Lead Incident Response & Operational Excellence: Participate in on-call rotations, ensuring rapid incident response across our cloud and edge stack. Drive incident response automation, reducing MTTR and increasing system resilience. Ensure security, compliance, and best practices in observability and automation.
Collaborate & Mentor: Work closely with application teams, network engineers, and SREs to improve reliability and performance. Mentor junior engineers, fostering a culture of automation-first thinking and deep observability.
What Makes You a Great Fit? Deep expertise in logs, metrics, and alerting, with a strong focus on alerting automation. Experience in hybrid SaaS environments spanning cloud-native and global infrastructure. Strong background in Kubernetes, Infrastructure as Code (Terraform), Golang, AWS/GCP, and networking observability. Proven track record of eliminating toil and improving operational efficiency through automation. Passion for deep observability, networking-scale analytics, and automation at the edge. If you love solving reliability challenges at global scale, automating everything, and working in a hybrid cloud and networking environment, we want to talk to you!
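Given the posting's emphasis on alerting automation, here is a minimal sketch of programmatically generating a Prometheus alerting-rule file; the alert name, metric, threshold, and runbook URL are assumptions, while the groups/rules layout follows the standard Prometheus rule-file format. PyYAML is assumed to be installed.

```python
# Sketch: generate a Prometheus alerting-rule file from code, the kind of artifact an
# alerting-automation pipeline might template per service. Names and thresholds are
# assumptions; the groups/rules schema is the standard Prometheus rule-file format.
import yaml

def high_error_rate_rule(service: str, threshold: float = 0.05) -> dict:
    expr = (
        f'sum(rate(http_requests_total{{service="{service}",code=~"5.."}}[5m]))'
        f' / sum(rate(http_requests_total{{service="{service}"}}[5m])) > {threshold}'
    )
    return {
        "alert": f"{service.title()}HighErrorRate",
        "expr": expr,
        "for": "10m",                      # require the condition to hold before firing
        "labels": {"severity": "page", "team": "sre"},
        "annotations": {
            "summary": f"{service} 5xx ratio above {threshold:.0%} for 10 minutes",
            "runbook_url": "https://runbooks.example.internal/high-error-rate",  # placeholder
        },
    }

if __name__ == "__main__":
    rule_file = {"groups": [{"name": "service-availability",
                             "rules": [high_error_rate_rule("checkout")]}]}
    with open("service-availability.rules.yaml", "w") as fh:
        yaml.safe_dump(rule_file, fh, sort_keys=False)
```

Generating rules from templates like this keeps alert definitions reviewable in version control, which is one common way to cut alert noise while keeping definitions consistent across teams.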
This role description is intended to be a general representation of the responsibilities and requirements of the job. However, it may not be all-inclusive, and responsibilities and requirements are subject to change.
Must-Have: Observability & Alerting Expertise: strong experience with logs, metrics, and alerts, with a focus on high-fidelity alerting and automation. Automation & Infrastructure as Code: deep knowledge of Terraform, ArgoCD, Helm, Kubernetes, and Golang for automation. Cloud & Hybrid SaaS Experience: hands-on experience managing cloud-native (AWS/GCP) and edge infrastructure. Incident Response & Reliability Engineering: strong on-call experience, with a track record of reducing MTTR through automation. Kubernetes Mastery: hands-on experience deploying, managing, and troubleshooting Kubernetes in production environments.
Nice-to-Have: Networking & Edge Observability: familiarity with monitoring routers, switches, and firewalls in a global PoP environment. Data & Analytics in Observability: experience with time-series databases and observability tooling (Prometheus, Grafana, OpenTelemetry, etc.). Security & Compliance Awareness: understanding of secure-by-design principles for monitoring and alerting. Mentorship & Collaboration: ability to mentor junior engineers and work cross-functionally with SREs, application teams, and network engineers. High Availability / Disaster Recovery: experience with HA/DR and migration.
Qualifications: Typically requires at least 18 years of related experience with a bachelor's degree, 15 years with a master's degree, or 12 years with a PhD; or equivalent experience. Excellent organizational agility and communication skills throughout the organization.
Environment: Empowered work culture: experience an environment that values autonomy, fostering a culture where creativity and ownership are encouraged. Continuous learning: benefit from the mentorship of experienced professionals with solid backgrounds across diverse domains, supporting your professional growth. Team cohesion: join a collaborative and supportive team where you'll feel at home from day one, contributing to a positive and inspiring workplace.
F5 Networks, Inc. is an equal opportunity employer and strongly supports diversity in the workplace. Please note that F5 only contacts candidates through an F5 email address (ending with @f5.com) or an auto email notification from Workday (ending with f5.com or @myworkday.com).
Equal Employment Opportunity: It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination. F5 offers a variety of reasonable accommodations for candidates. Requesting an accommodation is completely voluntary.
F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. Request by contacting accommodations@f5.com.
Posted 1 month ago
6 - 10 years
15 - 19 Lacs
Hyderabad, Ahmedabad
Hybrid
Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services. Experience Required: 6-10 years of SRE or infrastructure engineering experience in cloud-native environments. Mandatory: Cloud: GCP (GKE, Load Balancing, VPN, IAM). Observability: Prometheus, Grafana, ELK, Datadog. Containers & Orchestration: Kubernetes, Docker. Incident Management: On-call, RCA, SLIs/SLOs. IaC: Terraform, Helm. Incident Tools: PagerDuty, OpsGenie. Nice to Have: GCP Monitoring, SkyWalking, service mesh, API gateway, GCP Spanner, MongoDB (basic).
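Since the role centres on defining SLIs/SLOs with Prometheus in the stack, here is a small sketch that reads an availability SLI over Prometheus's standard HTTP query API (/api/v1/query) and compares it with a target; the Prometheus address, metric names, and the 99.5% target are assumptions, and the requests package is assumed to be available.

```python
# Sketch: read an availability SLI from Prometheus and compare it with an SLO target.
# The Prometheus URL, metric names, and target are placeholders/assumptions.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"   # placeholder address
SLO_TARGET = 0.995

QUERY = (
    'sum(rate(http_requests_total{code!~"5.."}[30d]))'
    ' / sum(rate(http_requests_total[30d]))'
)

def current_sli() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        raise RuntimeError("query returned no samples")
    return float(result[0]["value"][1])   # instant vector sample: [timestamp, value]

if __name__ == "__main__":
    sli = current_sli()
    status = "within SLO" if sli >= SLO_TARGET else "SLO at risk"
    print(f"30-day availability: {sli:.4%} ({status}, target {SLO_TARGET:.2%})")
```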
Posted 1 month ago
5 - 9 years
7 - 17 Lacs
Ahmedabad
Work from Office
We are seeking a highly skilled Senior DevSecOps/DevOps Engineer with extensive experience in cloud infrastructure, automation, and security best practices. The ideal candidate must have 5+ years of overall experience, with at least 3 years of direct, hands-on Kubernetes management experience, and strong expertise in building, managing, and optimizing Jenkins pipelines for CI/CD workflows, with a focus on incorporating DevSecOps practices into the pipeline.
Key Responsibilities: Design, deploy, and maintain Kubernetes clusters in cloud and/or on-premises environments. Build and maintain Jenkins pipelines for CI/CD, ensuring secure, automated, and efficient delivery processes. Integrate security checks (static code analysis, image scanning, etc.) directly into Jenkins pipelines. Manage Infrastructure as Code (IaC) using Terraform, Helm, and similar tools. Develop, maintain, and secure containerized applications using Docker and Kubernetes best practices. Implement monitoring, logging, and alerting using Prometheus, Grafana, and the ELK/EFK stack. Implement Kubernetes security practices including RBAC, network policies, and secrets management. Lead incident response efforts, root cause analysis, and system hardening initiatives. Collaborate with developers and security teams to embed security early in the development lifecycle (shift-left security). Research, recommend, and implement best practices for DevSecOps and Kubernetes operations.
Required Skills and Qualifications: 5+ years of experience in DevOps, Site Reliability Engineering, or Platform Engineering roles. 3+ years of hands-on Kubernetes experience, including cluster provisioning, scaling, and troubleshooting. Strong expertise in creating, optimizing, and managing Jenkins pipelines for end-to-end CI/CD. Experience in containerization and orchestration: Docker and Kubernetes. Solid experience with Terraform, Helm, and other IaC tools. Experience securing Kubernetes clusters, containers, and cloud-native applications. Scripting proficiency (Bash, Python, or Golang preferred). Knowledge of service meshes (Istio, Linkerd) and Kubernetes ingress management. Hands-on experience with security scanning tools (e.g., Trivy, Anchore, Aqua, SonarQube) integrated into Jenkins. Strong understanding of IAM, RBAC, and secret management systems like Vault or AWS Secrets Manager.
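One way to integrate image scanning into a pipeline, as the responsibilities above describe, is a small gate script around Trivy that a Jenkins stage can call; the sketch below is an assumption-laden example (placeholder image reference, Trivy assumed to be installed on the build agent) rather than a prescribed implementation.

```python
# Sketch of a shift-left gate: scan a container image with Trivy and fail the build
# on HIGH/CRITICAL findings. The trivy CLI is assumed to be installed on the agent;
# the image reference below is a placeholder.
import subprocess
import sys

def scan_image(image: str) -> int:
    # --exit-code 1 makes trivy return non-zero when findings at the listed
    # severities exist, which is what lets CI fail the stage.
    cmd = ["trivy", "image", "--severity", "HIGH,CRITICAL",
           "--exit-code", "1", "--no-progress", image]
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "registry.example.internal/payments:1.2.3"
    rc = scan_image(image)
    if rc != 0:
        sys.exit(f"image scan failed or found HIGH/CRITICAL vulnerabilities (exit code {rc})")
    print("image passed the vulnerability gate")
```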
Posted 1 month ago
9 - 14 years
40 - 50 Lacs
Bengaluru
Work from Office
Infrastructure Engineer. Experience: 8-14 years. Employment Type: Full-Time. Joining: Immediate joiner preferred. Work location: Bangalore / Hybrid (3 days per week work from office, in rotational shifts). Note: candidates from the Bangalore location only.
About the role: We are seeking an experienced Infrastructure Engineer to join our team, a leader in blockchain technology and solutions. The ideal candidate will have a strong background in infrastructure management and a deep understanding of blockchain ecosystems. You will be responsible for designing, implementing, and maintaining the foundational infrastructure that supports our blockchain platforms, ensuring high availability, scalability, and security. Your expertise in AWS cloud technologies and database management, particularly with RDS, PostgreSQL, and Aurora, will be essential to our success.
Responsibilities: Design & Deployment: Develop, deploy, and manage the infrastructure for blockchain nodes, databases, and network systems. Automation & Optimization: Automate infrastructure provisioning and maintenance tasks to enhance efficiency and reduce downtime; optimize performance, reliability, and scalability across our blockchain systems. Monitoring & Troubleshooting: Set up monitoring and alerting systems to proactively manage infrastructure health; quickly identify, troubleshoot, and resolve issues in production environments. Security Management: Implement robust security protocols, firewalls, and encryption to protect infrastructure and data from breaches and vulnerabilities; should be well versed in VPC (Virtual Private Cloud). Collaboration: Work closely with development, DevOps, and security teams to ensure seamless integration and support of blockchain applications; support cross-functional teams in achieving network reliability and efficient resource management. Documentation: Maintain comprehensive documentation of infrastructure configurations, processes, and recovery plans. Continuous Improvement: Research and implement new tools and practices to improve infrastructure resiliency, performance, and cost-efficiency; stay updated with blockchain infrastructure trends and industry best practices. Incident Management: Manage the incident dashboard and integrate dashboards using different power tools.
Requirements: Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience: Minimum of 7 years of experience in AWS infrastructure engineering, using Terraform, Terragrunt, and Atlantis, with incident management and resolution using automation (infrastructure as code) and AWS cloud provisioning; should be familiar with VPC (Virtual Private Cloud). Technical Skills: Terraform and automation; AWS CloudWatch; hands-on experience with monitoring tools (e.g., Prometheus, Grafana); DevOps with CI/CD pipelines; incident management, resolution, and reporting; proficiency in cloud platforms (e.g., AWS, GCP, Azure) and container orchestration (e.g., Docker, Kubernetes); strong knowledge of Linux/Unix system administration; understanding of networking protocols, VPNs, and firewalls; participation in on-call rotations to provide 24/7 support for critical systems. Security Knowledge: Strong understanding of security best practices, especially within blockchain environments. Soft Skills: Excellent problem-solving abilities, attention to detail, strong communication skills, and a proactive, team-oriented mindset. Experience working with consensus protocols and node architecture.
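As an illustration of reviewing infrastructure-as-code changes before they reach AWS, the sketch below summarises a Terraform plan exported as JSON; the file names are placeholders, and the command sequence in the comment is one common way to produce the plan file using Terraform's machine-readable plan format.

```python
# Sketch: summarise a Terraform plan before apply. Assumes the plan was exported with
#   terraform plan -out=plan.tfplan && terraform show -json plan.tfplan > plan.json
# The resource_changes/actions layout is Terraform's machine-readable plan format.
import json
from collections import Counter

def summarise_plan(path: str = "plan.json") -> Counter:
    with open(path) as fh:
        plan = json.load(fh)
    actions = Counter()
    for change in plan.get("resource_changes", []):
        for action in change["change"]["actions"]:   # e.g. ["create"], ["delete"], ["no-op"]
            actions[action] += 1
    return actions

if __name__ == "__main__":
    summary = summarise_plan()
    print(dict(summary))
    if summary.get("delete", 0) > 0:
        # Destructive changes deserve an explicit human review before apply.
        print("WARNING: plan contains deletions - require manual approval")
```

A check like this can run as a pipeline step (for example, alongside Atlantis-style plan review) so destructive changes are surfaced before anyone runs apply.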
Posted 1 month ago
6 - 10 years
8 - 12 Lacs
Noida
Work from Office
Job Description: We are looking for a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have 5-7 years of experience in a DevOps role and a proven track record of implementing and maintaining complex systems with a focus on automation, scalability, and security. The Senior DevOps Engineer will work closely with our development, operations, and security teams to ensure that our software is released quickly and reliably, with a focus on continuous integration and delivery.
Requirements: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field. 5-7 years of experience in a DevOps role. Strong understanding of the SDLC and experience working on fully Agile teams. Proven experience in coding and scripting for DevOps: Ant/Maven, Groovy, Terraform, shell scripting, and Helm charts. Working experience with IaC tools like Terraform, CloudFormation, or ARM templates. Strong experience with cloud computing platforms (e.g., Oracle Cloud (OCI), AWS, Azure, Google Cloud). Experience with containerization technologies (e.g., Docker, Kubernetes/EKS/AKS). Experience with continuous integration and delivery tools (e.g., Jenkins, GitLab CI/CD). Kubernetes: experience managing Kubernetes clusters and using kubectl for managing Helm chart deployments, ingress services, and troubleshooting pods. OS services: basic knowledge of managing, configuring, and troubleshooting Linux operating system issues, storage (block and object), and networking (VPCs, proxies, and CDNs). Monitoring and instrumentation: implement metrics in Prometheus, Grafana, Elastic, log management and related systems, and Slack/PagerDuty/Sentry integrations. Strong know-how of modern distributed version control systems (e.g., Git, GitHub, GitLab). Strong troubleshooting and problem-solving skills, and ability to work well under pressure. Excellent communication and collaboration skills, and ability to lead and mentor junior team members. Career Level: IC3.
Responsibilities: Design, implement, and maintain automated build, deployment, and testing systems. Take application code and third-party products and build fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud. Containerize applications, i.e., create Docker containers and push them to an artifact repository for deployment on containerization solutions with OKE (Oracle Container Engine for Kubernetes) using Helm charts.
Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems. Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues. Support and troubleshoot cloud deployment and environment issues. Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD. Continuously improve the scalability and security of our systems, and lead efforts to implement best practices. Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations. Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats. Keep up to date with emerging trends and technologies in DevOps, and make recommendations for improvement. Lead and mentor junior DevOps engineers and collaborate with cross-functional teams to ensure successful delivery of projects. Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications. As a member of the software engineering division, you will analyze and integrate external customer specifications; specify, design, and implement modest changes to existing software architecture; build new products and development tools; build and execute unit tests and unit test plans; review integration and regression test plans created by QA; and communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area. 6+ years of software engineering or related experience.
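To make the containerisation step described above concrete, here is a hedged sketch that builds an application image and pushes it to an artifact registry ahead of a Helm-based deployment on OKE; the registry path and tag are placeholders, and the docker CLI is assumed to be installed and already logged in to the registry.

```python
# Sketch of a CI containerisation step: build an image and push it to a registry.
# The registry path and tag are placeholders; docker must be installed and authenticated.
import subprocess

REGISTRY = "phx.ocir.io/example-tenancy/orders-service"   # placeholder registry path

def build_and_push(tag: str, context: str = ".") -> str:
    image = f"{REGISTRY}:{tag}"
    subprocess.run(["docker", "build", "-t", image, context], check=True)
    subprocess.run(["docker", "push", image], check=True)
    return image

if __name__ == "__main__":
    pushed = build_and_push("1.0.0")
    print(f"pushed {pushed}; reference this tag in the Helm chart values for the OKE deployment")
```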
Posted 1 month ago
Prometheus is a popular monitoring and alerting tool used in the field of DevOps and software development. In India, the demand for professionals with expertise in Prometheus is on the rise. Job seekers looking to build a career in this field have a promising outlook in the Indian job market.
India's major tech hubs are known for their vibrant tech industries and have a high demand for professionals skilled in Prometheus.
The salary range for Prometheus professionals in India varies based on experience levels. Entry-level positions can expect to earn around ₹5-8 lakhs per annum, whereas experienced professionals can earn up to ₹15-20 lakhs per annum.
A typical career path in Prometheus may include roles such as: - Junior Prometheus Engineer - Prometheus Developer - Senior Prometheus Engineer - Prometheus Architect - Prometheus Consultant
As professionals gain experience and expertise, they can progress to higher roles with increased responsibilities.
In addition to Prometheus, professionals in this field are often expected to have knowledge and experience in: - Kubernetes - Docker - Grafana - Time series databases - Linux system administration
Having a strong foundation in these related skills can enhance job prospects in the Prometheus domain.
As you explore opportunities in the Prometheus job market in India, remember to continuously upgrade your skills and stay updated with the latest trends in monitoring and alerting technologies. With dedication and preparation, you can confidently apply for roles in this dynamic field. Good luck!