Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5 - 10 years
25 - 27 Lacs
Coimbatore, Bengaluru
Work from Office
TensorFlow, PyTorch; Multi-GPU/TPU and distributed data pipelines; advanced techniques, RAG systems; AWS, GCP, or Azure; Terraform, Airflow, Kubeflow; Prometheus, Grafana, or similar tools; Version Control & CI/CD: Git, Jenkins, GitHub Actions, etc.
Posted 1 month ago
5 - 10 years
3 - 8 Lacs
Bengaluru
Work from Office
About the Role: We are seeking a highly skilled and self-driven Senior DevOps Engineer with a strong foundation in software engineering and deep hands-on experience in CI/CD pipelines, Python scripting, and observability tools. The ideal candidate will contribute to automation, deployment, monitoring, and the overall reliability of mission-critical platforms.
Key Responsibilities:
- Design, implement, and maintain robust CI/CD pipelines using GitHub and GitLab.
- Develop and maintain automation scripts using Python for operational efficiency and infrastructure management.
- Build, monitor, and optimize observability stacks using Grafana, Kibana, and associated logging and metrics tools.
- Work collaboratively with development teams to integrate infrastructure as code and promote DevOps best practices.
- Enhance system reliability, availability, and scalability across environments (development, staging, and production).
- Proactively identify, troubleshoot, and resolve performance bottlenecks and system issues using observability metrics.
- Ensure secure code deployment and rollback strategies are in place.
- Conduct post-incident reviews, root cause analysis, and process improvements.
Required Skills & Qualifications:
- Programming: strong scripting skills in Python
- Version Control: proficiency with GitHub and GitLab workflows
- Observability: hands-on experience with Grafana, Kibana, and log aggregation tools
- CI/CD: expertise in building, optimizing, and maintaining pipelines
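The rollback strategies this role calls for usually reduce to a simple rule: if the error rate observed after a deployment stays above a threshold for several consecutive samples, roll back. A minimal, stdlib-only Python sketch of that rule follows; the function name, threshold, and sample counts are invented for illustration, not taken from any specific platform.

```python
# Hypothetical rollback-decision helper for a deployment pipeline.
# error_rates: chronological samples (fraction of failed requests) pulled
# from an observability stack such as Grafana/Kibana.
def should_roll_back(error_rates, threshold=0.05, min_samples=3):
    """Return True when the most recent samples all exceed the allowed
    error-rate threshold (e.g. more than 5% of requests failing)."""
    if len(error_rates) < min_samples:
        return False  # not enough evidence yet; keep observing
    recent = error_rates[-min_samples:]
    return all(rate > threshold for rate in recent)
```

Requiring several consecutive bad samples (rather than one) avoids rolling back on a transient spike; a real pipeline would wire this check into its deployment stage.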
Posted 1 month ago
7 - 11 years
9 - 13 Lacs
Bengaluru
Work from Office
Skill required: Delivery - Marketing Analytics and Reporting
Designation: I&F Decision Sci Practitioner Specialist
Qualifications: Any Graduation
Years of Experience: 7 to 11 years
What would you do? Data & AI: Analytical processes and technologies applied to marketing-related data to help businesses understand and deliver relevant experiences for their audiences, understand their competition, measure and optimize marketing campaigns, and optimize their return on investment.
What are we looking for?
- Python (Programming Language)
- Structured Query Language (SQL)
- Machine Learning
- Data Science
- Written and verbal communication
- Ability to manage multiple stakeholders
- Strong analytical skills
- Detail orientation
- Expertise in AWS, Azure, or Google Cloud for ML workflows
- Hands-on experience with Kubernetes, Docker, Jenkins, or GitLab CI/CD
- Familiarity with MLflow, TFX, Kubeflow, or SageMaker
- Knowledge of Prometheus, Grafana, or similar tools for tracking system health and model performance
- Understanding of ETL processes, data pipelines, and big data tools like Spark or Kafka
- Proficiency in Git and model versioning best practices
Roles and Responsibilities:
- Analyze and solve moderately complex problems
- Create new solutions, leveraging and, where needed, adapting existing methods and procedures
- Understand the strategic direction set by senior management as it relates to team goals
- Primary upward interaction is with the direct supervisor; may interact with peers and/or management levels at a client and/or within Accenture
- Guidance is provided when determining methods and procedures on new assignments
- Decisions made will often impact the team in which they reside
- Manage small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture
- Work closely with data scientists, engineers, and DevOps teams to operationalize ML models
- Optimize ML pipelines for performance, cost, and scalability in production
- Automate deployment pipelines for ML models, ensuring fast and reliable transitions from development to production environments
- Set up and manage scalable cloud or on-premise environments for ML workflows
Qualifications: Any Graduation
Posted 1 month ago
7 - 12 years
9 - 14 Lacs
Kolkata
Work from Office
Project Role: Service Management Practitioner
Project Role Description: Support the delivery of programs, projects, or managed services. Coordinate projects through contract management and shared service coordination. Develop and maintain relationships with key stakeholders and sponsors to ensure high levels of commitment and enable the strategic agenda.
Must have skills: Site Reliability Engineering
Good to have skills: Service Integration and Management (SIAM)
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
We are seeking an experienced SRE Observability Engineer to join our team and lead the development, enhancement, and extension of SRE-driven observability and alerting platforms for our global clients. As an SRE Observability Engineer, your role will be to build, enhance, and maintain best-in-class observability platforms that can effectively monitor the full technology stack for cloud and on-prem systems. You will play a pivotal role in shaping the evolving needs of our customers, including instrumentation of Service Level Indicators and Objectives (SLI/SLO) and development/enhancement of SLI/SLO-driven observability dashboards and alerting.
Key Responsibilities:
- Gather and analyze logs, metrics, and traces from operating systems, infrastructure, and network as well as applications to assist in performance tuning and fault finding
- Implement, enhance, and maintain observability and alerting capabilities, especially those built on SLI/SLO/Error Budget
- Analyze an existing observability and alerting platform and identify how it can be further improved
- Help build our unified observability stack using various observability tools
- Improve automation and increase the system's self-healing capability.
- Build monitoring that alerts on symptoms rather than on outages
Qualifications: Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field, or a combination of education and equivalent work experience
Required Experience:
- Overall 5-8 years of working experience
- 3-5 years of experience building observability platforms with tools such as Dynatrace, AppDynamics, New Relic, Prometheus, Splunk, Sensu, Nagios, DataDog, OpenTelemetry, etc.
- Very good understanding and strong working knowledge of log collection and aggregation, custom metric development, and distributed tracing
- Experience building observability dashboards (preferably SLO-driven) in visualization tools like Grafana
- Good understanding of SLIs/SLOs, especially their implementation designs
- Good working understanding of monitoring Cloud Platforms: AWS, Azure, and GCP
Good to have experience:
- Prior experience implementing SLI/SLO/Error Budget-driven observability and alerting
- Strong proficiency with Cloud Platforms
- Experience programming with one or more of the following: Python, Go, Java/Scala, or C
- Experience with J2EE, NoSQL/SQL datastores, Spring Boot, GCP, AWS, Azure, and Docker or K8s in developing multi-tier applications
- Overall good understanding of SRE principles and practices
- Understanding and ability to implement effective observability strategies to improve MTTD/R
- Experience with RESTful APIs and microservices platforms
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Solve complex architecture/design and business problems; work to simplify, optimize, remove bottlenecks, etc.
You may not check every box, or your experience may look a little different from what we've outlined, but if you think you can bring value to Ford Motor Company, we encourage you to apply.
Qualifications: 15 years full time education
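The SLI/SLO/Error Budget concepts this listing emphasizes come down to simple arithmetic: the SLI is the fraction of good events, the error budget is what the SLO allows to fail, and the burn rate compares observed failures to that allowance. A minimal, stdlib-only Python illustration follows; the function name and the numbers are invented for the example.

```python
# Illustrative SLI/SLO arithmetic, not any specific platform's API.
def error_budget(good, total, slo=0.999):
    """Given good and total events over a window, return the SLI and the
    error-budget burn rate relative to the target SLO."""
    sli = good / total                 # fraction of successful events
    budget = 1.0 - slo                 # allowed failure fraction (0.1% here)
    consumed = (total - good) / total  # observed failure fraction
    burn_rate = consumed / budget      # >1 means burning faster than allowed
    return sli, burn_rate

sli, burn = error_budget(good=997_000, total=1_000_000, slo=0.999)
# sli is 0.997; the burn rate is about 3.0, i.e. failures are arriving
# three times faster than a 99.9% SLO permits over this window.
```

Alerting on a high burn rate over a short window (rather than on raw failures) is exactly the "symptoms rather than outages" approach the listing describes.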
Posted 1 month ago
12 - 22 years
17 - 22 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design and implement robust DevOps architectures for cloud-native applications, including IoT solutions, utilizing Azure and AWS services.
- Develop and execute a comprehensive DevOps strategy and roadmap aligned with company objectives.
- Architect and maintain scalable, secure, and highly available cloud infrastructure.
- Lead and mentor a team of DevOps engineers, fostering a culture of collaboration and innovation.
- Oversee the development of CI/CD pipelines using tools such as Jenkins, GitHub, GitLab, and ArgoCD to optimize the software delivery process.
- Implement best practices for code quality, testing, and deployment to facilitate rapid and reliable software delivery.
- Drive automation initiatives with Terraform and Ansible to improve operational efficiency.
- Continuously monitor cloud environments to proactively identify and address performance issues, outages, and security threats.
- Conduct security audits and implement best practices to ensure compliance with regulatory requirements.
- Collaborate with cross-functional teams to troubleshoot and resolve infrastructure issues efficiently.
- Stay abreast of industry trends and advancements to ensure a competitive technology stack.
- Architect and implement Continuous Integration and Continuous Deployment workflows, enhancing automation pipelines.
- Design and implement scalable DevOps architecture for nightly builds, pull requests, zero-downtime production releases, rollbacks, and GitFlow processes.
Requirements:
- 13+ years of experience in software development, system architecture, or IT operations, including at least 5 years in a leadership role.
- Proven expertise in designing and implementing cloud architecture in AWS and Azure.
- Demonstrated experience in implementing end-to-end CI/CD solutions, including SAST and DAST, in public and private cloud platforms.
- Excellent communication and interpersonal skills, with the ability to lead a high-performing team.
- Experience with configuration management tools (e.g., Ansible, Puppet, Chef).
- Strong expertise in Infrastructure as Code (IaC) tools, particularly Terraform.
- Proficiency in containerization and orchestration technologies (e.g., Kubernetes, AKS, EKS, KEDA).
- Experience architecting and implementing automated pipelines for OS installation, software updates, network configuration, packaging, deployments, and version management.
- Familiarity with IoT services such as Azure IoT Hub, AWS IoT Core, and their integration into cloud solutions.
- Proven experience in pre-sales activities for DevOps/CloudOps; develop and present proof-of-concept (PoC) demos configured for both internal and external audiences.
- Ability to thrive in a fast-paced, dynamic work environment.
- Bachelor's degree in Computer Science, Engineering, or a related field.
Posted 1 month ago
12 - 17 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Cloud Platform Engineer
Project Role Description: Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments, creates a proof of architecture to test architecture viability, security, and performance.
Must have skills: Kubernetes
Good to have skills: Google Kubernetes Engine, Google Cloud Compute Services
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
We are seeking a highly motivated and experienced DevOps Infra Engineer to join our team and manage Kubernetes (K8s) infrastructure. You will be responsible for implementing and maintaining Infrastructure as Code (IaC) using Terraform or relevant code and ensuring the smooth deployment and management of our Kubernetes stack in on-prem environments. You will also be instrumental in troubleshooting issues, optimizing infrastructure, and implementing and managing monitoring tools for observability.
Primary Skills: Kubernetes, Kubegres, Kubekafka, Grafana, Redis, Prometheus
Secondary Skills: Keycloak, MetalLB, Ingress, ElasticSearch, Superset, OpenEBS, Istio, Secrets, Helm, NussKnacker, Valero, Druid
Responsibilities:
- Containerization: Working experience with Kubernetes and Docker for containerized application deployments on-prem (GKE/K8s). Knowledge of Helm charts and their application in Kubernetes clusters.
- Collaboration and Communication: Work effectively in a collaborative team environment with developers, operations, and other stakeholders. Communicate technical concepts clearly and concisely.
- CI/CD: Design and implement CI/CD pipelines using Jenkins, including pipelines, stages, and jobs. Utilize Jenkins Pipeline and Groovy scripting for advanced pipeline automation. Integrate Terraform with Jenkins for IaC management and infrastructure provisioning.
- Infrastructure as Code (IaC): Develop and manage infrastructure using Terraform, including writing Terraform tfvars and module code. Set up IaC pipelines using Terraform, Jenkins, and cloud environments like Azure and GCP. Troubleshoot issues in Terraform code and ensure smooth infrastructure deployments.
- Cloud Platforms: Possess a deep understanding of both Google Cloud and Azure cloud platforms. Experience with managing and automating cloud resources in these environments.
- Monitoring & Logging: Configure and manage monitoring tools like Splunk, Grafana, and ELK for application and infrastructure health insights.
- GitOps: Implement GitOps practices for application and infrastructure configuration management.
- Scripting and Automation: Proficient in scripting languages like Python and Bash for automating tasks. Utilize Ansible or Chef for configuration management.
- Configuration Management: Experience with configuration management tools like Ansible and Chef.
Qualifications:
- 4-9 years of experience as a Kubernetes & DevOps Engineer or similar role, with 12+ years of total experience in Cloud and Infra managed services
- Strong understanding of CI/CD principles and practices
- Proven experience with Jenkins or CI/CD, including pipelines, scripting, and plugins
- Expertise in Terraform and IaC principles
- Experience with Kubernetes management on an on-prem platform is preferred
- Exposure to monitoring and logging tools like Splunk, Grafana, or ELK
- Experience with GitOps practices
- Proficiency in scripting languages like Python and Bash
- Experience with configuration management tools like Ansible or Chef
- Hands-on experience with Kubernetes and Docker
- Knowledge of Helm charts and their application in Kubernetes clusters
Must: Flexible to cover a part of US working hours (24/7 business requirement). Excellent communication and collaboration skills. Fluent in English.
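The "writing Terraform tfvars" and "Python for automating tasks" items above often meet in practice: a pipeline stage generates a tfvars file from parameters before `terraform plan` runs. A hedged, stdlib-only Python sketch of that glue step follows; the function, variable names, and values are all invented for illustration.

```python
import json

# Hypothetical pipeline glue: render a dict of variables as tfvars-style
# lines. Strings are quoted; other values use json.dumps, whose output
# (numbers, true/false, lists) happens to match HCL literal syntax.
def render_tfvars(variables):
    lines = []
    for key, value in sorted(variables.items()):
        if isinstance(value, str):
            lines.append(f'{key} = "{value}"')
        else:
            lines.append(f"{key} = {json.dumps(value)}")
    return "\n".join(lines)

tfvars = render_tfvars({"region": "europe-west1", "node_count": 3})
# node_count = 3
# region = "europe-west1"
```

A Jenkins stage would write this string to `terraform.tfvars` before invoking Terraform; sorting the keys keeps the generated file diff-stable across runs.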
Posted 1 month ago
3 - 8 years
3 - 7 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Google Kubernetes Engine
Good to have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).
Responsibilities:
- Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations
- Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes
- Knowledge of or experience with various Kubernetes tools like Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus
- Proven track record in supporting and deploying various public cloud services
- Experience in building or managing self-service platforms to boost developer productivity
- Proficiency in using Infrastructure as Code (IaC) tools like Terraform
- Skilled in diagnosing and resolving complex issues in automation and cloud environments
- Advanced experience in architecting and managing highly available and high-performance multi-zonal or multi-regional systems
- Strong understanding of infrastructure CI/CD pipelines and associated tools
- Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions
- Experience working in GKE and Edge/GDCE environments
- Assist development teams in building and deploying microservices-based applications in public cloud environments
Technical Skillset:
- Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions
- At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies
- Over 3 years of experience with GCP IaC, particularly with Terraform, including writing and maintaining Terraform configurations and modules
- Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE)
- Familiarity with CI/CD tools (e.g., GitHub) and processes
Certifications: GCP ACE certification is mandatory. CKA certification is highly desirable. HashiCorp Terraform certification is a significant plus.
Posted 1 month ago
7 - 12 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Kubernetes
Good to have skills: Google Kubernetes Engine, Google Cloud Compute Services
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
About The Role: We are looking for an experienced Kubernetes Architect to join our growing cloud infrastructure team. This role will be responsible for architecting, designing, and implementing scalable, secure, and highly available cloud-native applications on Kubernetes. You will leverage Kubernetes along with associated technologies like Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus to build resilient systems that meet both business and technical needs. Google Kubernetes Engine (GKE) will be considered an additional skill. As a Kubernetes Architect, you will play a key role in defining best practices, optimizing the infrastructure, and providing architectural guidance to cross-functional teams.
Key Responsibilities:
- Architect Kubernetes Solutions: Design and implement scalable, secure, and high-performance Kubernetes clusters.
- Cloud-Native Application Design: Collaborate with development teams to design cloud-native applications, ensuring that microservices are properly architected and optimized for Kubernetes environments.
- Kafka Management: Architect and manage Apache Kafka clusters using Kubekafka, ensuring reliable, real-time data streaming and event-driven architectures.
- Database Architecture: Use Kubegres to manage high-availability PostgreSQL clusters in Kubernetes, ensuring data consistency, scaling, and automated failover.
- Helm Chart Development: Create, maintain, and optimize Helm charts for consistent deployment and management of applications across Kubernetes environments.
- Ingress & Networking: Architect and configure Ingress controllers (e.g., NGINX, Traefik) for secure and efficient external access to Kubernetes services, including SSL termination, load balancing, and routing.
- Caching and Performance Optimization: Leverage Redis to design efficient caching and session management solutions, optimizing application performance.
- Monitoring & Observability: Lead the implementation of Prometheus for metrics collection and Grafana for building real-time monitoring dashboards to visualize the health and performance of infrastructure and applications.
- CI/CD Integration: Design and implement continuous integration and continuous deployment (CI/CD) pipelines to streamline the deployment of Kubernetes-based applications.
- Security & Compliance: Ensure Kubernetes clusters follow security best practices, including RBAC, network policies, and the proper configuration of Secrets Management.
- Automation & Scripting: Develop automation frameworks using tools like Terraform, Helm, and Ansible to ensure repeatable and scalable deployments.
- Capacity Planning and Cost Optimization: Optimize resource usage within Kubernetes clusters to achieve both performance and cost-efficiency, utilizing cloud tools and services.
- Leadership & Mentorship: Provide technical leadership to development, operations, and DevOps teams, offering mentorship, architectural guidance, and sharing best practices.
- Documentation & Reporting: Produce comprehensive architecture diagrams, design documents, and operational playbooks to ensure knowledge transfer across teams and maintain system reliability.
Required Skills & Experience:
- 10+ years of experience in cloud infrastructure engineering, with at least 5+ years of hands-on experience with Kubernetes
- Strong expertise in Kubernetes for managing containerized applications in the cloud
- Experience in deploying and managing container-based systems on both private and public clouds (Google Kubernetes Engine (GKE))
- Proven experience with Kubekafka for managing Apache Kafka clusters in Kubernetes environments
- Expertise in managing PostgreSQL clusters with Kubegres and implementing high-availability database solutions
- In-depth knowledge of Helm for managing Kubernetes applications, including the development of custom Helm charts
- Experience with Ingress controllers (e.g., NGINX, Traefik) for managing external traffic in Kubernetes
- Hands-on experience with Redis for caching, session management, and as a message broker in Kubernetes environments
- Advanced knowledge of Prometheus for monitoring and Grafana for visualization and alerting in cloud-native environments
- Experience with CI/CD pipelines for automated deployment and integration using tools like Jenkins, GitLab CI, or CircleCI
- Solid understanding of networking, including load balancing, DNS, SSL/TLS, and ingress/egress configurations in Kubernetes
- Familiarity with Terraform and Ansible for infrastructure automation
- Deep understanding of security best practices in Kubernetes, such as RBAC, Network Policies, and Secrets Management
- Knowledge of DevSecOps practices to ensure secure application delivery
Certifications:
- Google Cloud Platform (GCP) certification is mandatory
- Kubernetes certification (CKA or CKAD) is highly preferred
- HashiCorp Terraform certification is a significant plus
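The Prometheus-plus-Grafana monitoring work described in this listing is ultimately about surfacing indicators such as tail latency. In production, Prometheus computes these from histogram buckets via PromQL; the stdlib-only Python sketch below just shows the underlying nearest-rank percentile arithmetic on raw samples, with invented numbers.

```python
import math

# Illustrative nearest-rank percentile, the arithmetic behind a "p95
# latency" panel. Real stacks use histogram buckets and PromQL instead
# of raw samples; this is only the concept.
def percentile(samples, p):
    """Smallest value such that at least p% of samples are at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 12, 500]
p95 = percentile(latencies_ms, 95)  # dominated by the slowest outliers
```

This is why dashboards chart p95/p99 rather than the mean: a handful of slow requests barely moves the average but is exactly what the tail percentiles expose.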
Posted 1 month ago
5 - 10 years
8 - 13 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Kubernetes
Good to have skills: Google Kubernetes Engine, Google Cloud Compute Services
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Job Summary: We are looking for an experienced Kubernetes Specialist to join our cloud infrastructure team. You will work closely with architects and engineers to design, implement, and optimize cloud-native applications on Google Kubernetes Engine (GKE). This role will focus on providing expertise in Kubernetes, container orchestration, and cloud infrastructure management, ensuring the seamless operation of scalable, secure, and high-performance applications on GKE and other cloud environments.
Responsibilities:
- Kubernetes Implementation: Design, implement, and manage Kubernetes clusters for containerized applications, ensuring high availability and scalability.
- Cloud-Native Application Design: Work with teams to deploy, scale, and maintain cloud-native applications on Google Kubernetes Engine (GKE).
- Kubernetes Tools Expertise: Utilize Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus to build and maintain resilient systems.
- Infrastructure Automation: Develop and implement automation frameworks using Terraform and other tools to streamline Kubernetes deployments and cloud infrastructure management.
- CI/CD Implementation: Design and maintain CI/CD pipelines to automate deployment and testing for Kubernetes-based applications.
- Kubernetes Networking & Security: Ensure secure and efficient Kubernetes cluster networking, including Ingress controllers (e.g., NGINX, Traefik), RBAC, and Secrets Management.
- Monitoring & Observability: Lead the integration of monitoring solutions using Prometheus for metrics and Grafana for real-time dashboard visualization.
- Performance Optimization: Optimize resource utilization within GKE clusters, ensuring both performance and cost-efficiency.
- Collaboration: Collaborate with internal development, operations, and security teams to meet user requirements and implement Kubernetes solutions.
- Troubleshooting & Issue Resolution: Address complex issues related to containerized applications, Kubernetes clusters, and cloud infrastructure, troubleshooting and resolving them efficiently.
Technical Skillset:
- GCP & Kubernetes Experience: Minimum of 3 years of hands-on experience in Google Cloud Platform (GCP) and Kubernetes implementations, including GKE.
- Container Management: Proficiency with container orchestration engines such as Kubernetes and Docker.
- Kubernetes Tools Knowledge: Experience with Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus for managing Kubernetes-based applications.
- Infrastructure as Code (IaC): Strong experience with Terraform for automating infrastructure provisioning and management.
- CI/CD Pipelines: Hands-on experience in building and managing CI/CD pipelines for Kubernetes applications using tools like Jenkins, GitLab, or CircleCI.
- Security & Networking: Knowledge of Kubernetes networking (DNS, SSL/TLS), security best practices (RBAC, network policies, and Secrets Management), and the use of Ingress controllers (e.g., NGINX).
- Cloud & DevOps Tools: Familiarity with cloud services and DevOps tools such as GitHub, Jenkins, and Ansible.
- Monitoring Expertise: In-depth experience with Prometheus and Grafana for operational monitoring, alerting, and creating actionable insights.
Certifications: Google Cloud Platform (GCP) Associate Cloud Engineer (ACE) certification is required. Certified Kubernetes Administrator (CKA) is highly preferred.
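The cost-efficiency work mentioned above typically starts with comparing each workload's CPU request against its observed usage: containers that request far more than they use waste cluster capacity. A minimal Python sketch of that check follows; the threshold and workload names are invented, and real usage data would come from a metrics stack rather than hard-coded tuples.

```python
# Hypothetical rightsizing check for Kubernetes capacity planning.
# workloads: list of (name, requested_millicores, used_millicores) tuples,
# as might be derived from resource requests and Prometheus usage metrics.
def rightsizing_report(workloads, waste_threshold=0.5):
    """Return names of workloads whose observed CPU usage is below
    waste_threshold of their request (candidates for smaller requests)."""
    flagged = []
    for name, requested, used in workloads:
        if requested > 0 and used / requested < waste_threshold:
            flagged.append(name)
    return flagged

report = rightsizing_report([
    ("api-server", 1000, 300),  # uses 30% of its request: flagged
    ("worker", 500, 400),       # uses 80% of its request: fine
])
# report == ["api-server"]
```

Shrinking flagged requests lets the scheduler pack more pods per node, which is where most of the cost savings in a GKE cluster come from.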
Posted 1 month ago
5 - 10 years
1 - 5 Lacs
Gurugram
Work from Office
Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate) troubleshooting.
Must have skills: Grafana
Good to have skills: Microsoft SQL Server, Microsoft System Center Operations Manager (SCOM), Microsoft Azure DevOps
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Infra Tech Support Practitioner, you will provide ongoing technical support and maintenance of production and development systems and software products, both remote and onsite. You will work on configured services running on various platforms within defined operating models and processes, including hardware/software support and technology implementation at the operating-system level.
Must have skills: Grafana
Good to have skills: Microsoft Azure DevOps, Microsoft System Center Operations Manager (SCOM), Microsoft SQL Server
Key Responsibilities:
- Grafana knowledge and hands-on experience
- Troubleshooting experience
- Good written and verbal communication skills
- Excellent understanding of SQL; able to understand existing SQL queries; good understanding of database structures (MS SQL preferred)
- Basic understanding of networking, firewalls, and secure network communication using SSL
- Good to have: experience with Azure DevOps and operating Azure Pipelines
Professional & Technical Experience:
- Good Git source control knowledge
- Good to have: LDAP knowledge around groups and permissions
- Good to have: knowledge of or experience with Splunk and understanding Splunk queries
- Good to have: knowledge of PowerShell, specifically using PowerShell to run automations
- Good to have: knowledge of Azure, Azure Monitor, and Azure Monitor Logs
- Good to have: application support experience
- Good to have: basic knowledge of Kubernetes deployments
- The ability to learn new processes and technologies
- Effective communication skills within the team and with clients
- Maturity and a professional attitude
- Dependability and a strong work ethic
Educational Qualification: B.Tech or equivalent
Posted 1 month ago
3 - 6 years
8 - 10 Lacs
Gurugram
Work from Office
Design, implement, and manage CI/CD pipelines using GitLab. Work with Docker & Kubernetes for containerization and orchestration in production environments. Automate infrastructure provisioning & configuration using Ansible & Terraform.
Required Candidate profile: Proven experience working with GitLab CI/CD, Docker, Kubernetes, Ansible, & Terraform on live production projects. Familiarity with cloud platforms like AWS or Azure is a strong plus.
Posted 1 month ago
2 - 4 years
8 - 12 Lacs
Bengaluru
Work from Office
Location: India, Bangalore | Full time | Posted 2 days ago | Job requisition ID: JR0035199
Job Title: Site Reliability Engineer
About Trellix: Trellix, the trusted CISO ally, is redefining the future of cybersecurity and soulful work. Our comprehensive, GenAI-powered platform helps organizations confronted by today's most advanced threats gain confidence in the protection and resilience of their operations. Along with an extensive partner ecosystem, we accelerate technology innovation through artificial intelligence, automation, and analytics to empower over 53,000 customers with responsibly architected security solutions. We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at .
Role Overview: The Site Reliability Engineering team is responsible for design, implementation, and end-to-end ownership of the infrastructure platform and services that protect Trellix Security's consumer customers. The services provide continuous protection to our customers with a very strong focus on quality and an extendible services platform for internal partners & product teams. This role is a Site Reliability Engineer for commercial cloud-native solutions, deployed and managed in public cloud environments like AWS and GCP. You will be part of a team that is responsible for Trellix Cloud Services that enable protection at the endpoint products on a continuous basis. Responsibilities of this role include supporting Cloud service measurement, monitoring, and reporting, deployments, and security. You will provide input into improving overall operational quality through common practices and by working with the Engineering, QA, and product DevOps teams. You will also be responsible for supporting efforts that improve Operational Excellence and Availability of Trellix Production environments.
You will have access to the latest tools and technology and an incredible career path with the world's cybersecurity leader. You will have the opportunity to immerse yourself in complex and demanding deployment architectures and see the big picture, all while helping to drive continuous improvement in a dynamic, high-performing engineering organization. If you are passionate about running and continuously improving a world-class Site Reliability Engineering team, this is a unique opportunity to build your career with us and gain experience working with high-performance cloud systems.

About the Role:
Be part of a global 24x7x365 team providing operational coverage, including event response and recovery for critical services.
Periodically deploy features, patches, and hotfixes to maintain the security posture of our cloud services.
Work in shifts on a rotational basis and participate in on-call duties.
Own and be responsible for the high availability of production environments.
Contribute to the monitoring of systems, applications, and supporting data.
Report on system uptime and availability.
Collaborate with other team members on best practices.
Assist with creating and updating runbooks and SOPs.
Build strong relationships with the Cloud DevOps, Dev, and QA teams and become a domain expert for the cloud services in your remit.
Receive the support required for growth and development in this role.

About you:
2 to 4 years of hands-on experience supporting production for large-scale cloud services.
Strong production support background and in-depth troubleshooting experience.
Experience working with solutions in both Linux and Windows environments.
Experience using modern monitoring and alerting tools (Prometheus, Grafana, PagerDuty, etc.).
Excellent written and verbal communication skills.
Experience with Python or other scripting languages.
Proven ability to work independently in deploying, testing, and troubleshooting systems.
Experience supporting highly available, scalable solutions hosted on AWS or GCP.
Familiarity with security tools and practices (Wiz, Tenable).
Familiarity with containerization and associated management tools (Docker, Kubernetes).
Significant experience developing and maintaining relationships with a wide range of customers at all levels.
Understanding of Incident, Change, Problem, and Vulnerability Management processes.

Desired:
Awareness of ITIL best practices.
AWS certification and/or Kubernetes certification.
Experience with Snowflake.
Automation/CI/CD experience: Jenkins, Ansible, GitHub Actions, Argo CD.

Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees: retirement plans; medical, dental, and vision coverage; paid time off; paid parental leave; and support for community involvement. We're serious about our commitment to a workplace where everyone can thrive and contribute to our industry-leading products and customer support, which is why we prohibit discrimination and harassment based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation, or any other legally protected status.
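As a flavor of the monitoring-and-alerting work this SRE role describes, here is a minimal Python sketch of Prometheus/Grafana-style threshold logic. All names (`evaluate_alert`, the sample data, the 5% threshold) are illustrative assumptions, not part of any real API.

```python
# Minimal sketch of alert-rule logic: fire only when the error rate stays
# above a threshold across consecutive samples, mimicking a Prometheus
# rule with a 'for:' duration clause. Names and numbers are illustrative.

def error_rate(errors: int, total: int) -> float:
    """Fraction of failed requests; 0.0 when there is no traffic."""
    return errors / total if total else 0.0

def evaluate_alert(samples: list, threshold: float = 0.05) -> bool:
    """Fire when every recent (errors, total) sample breaches the threshold."""
    return bool(samples) and all(
        error_rate(e, t) > threshold for e, t in samples
    )

# A sustained 10% error rate over three scrape intervals fires the alert;
# a single spike bracketed by healthy samples does not.
assert evaluate_alert([(10, 100), (12, 100), (11, 100)]) is True
assert evaluate_alert([(10, 100), (1, 100), (11, 100)]) is False
```

The "sustained breach" pattern is what keeps paging tools such as PagerDuty from waking someone for a transient blip.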
Posted 1 month ago
3 - 5 years
6 - 10 Lacs
Bengaluru
Work from Office
Location: India, Bangalore | Full time | Posted 30+ Days Ago | Job requisition ID: JR0034909

Job Title: SDET

About Trellix: Trellix, the trusted CISO ally, is redefining the future of cybersecurity and soulful work. Our comprehensive, GenAI-powered platform helps organizations confronted by today's most advanced threats gain confidence in the protection and resilience of their operations. Along with an extensive partner ecosystem, we accelerate technology innovation through artificial intelligence, automation, and analytics to empower over 53,000 customers with responsibly architected security solutions. We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at .

Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. The team is the ultimate quality gate before shipping to customers. Tasks range from manual and automated testing (including automation development) to non-functional (performance, stress, soak), solution, and security testing, and much more. Work on cutting-edge technology and AI-driven analysis.

About the role:
Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas.
Execute new feature and regression cases manually, as needed for a product release.
Identify critical issues and communicate them effectively in a timely manner.
Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful.
Filing defects effectively, i.e., noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixing, is an essential trait for this job.
Identify cases that are automatable, and within this scope separate high-ROI cases from low-impact areas to improve testing efficiency.
Hands-on experience with automation programming languages such as Python, Java, etc. is advantageous.
Execute, monitor, and debug automation runs.
Author automation code to improve coverage across the board.
Willingness to explore and deepen understanding of cloud/on-prem infrastructure.

About you:
3-5 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
Ability to quickly learn a product or concept: its feature set, capabilities, functionality, and nitty-gritty.
Solid fundamentals in any programming language (preferably Python or Java) and OOP concepts. Hands-on experience with CI/CD using Jenkins or similar is a must.
RESTful API testing using tools such as Postman or similar is desired.
Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is required. Exposure to Docker, Helm, and Argo CD is an added advantage.
Strong foundational knowledge of working on Linux-based systems, including setting up Git repos, user management, network configuration, and use of package managers.
Hands-on experience with non-functional testing, such as performance and load, is desirable. Exposure to Locust or JMeter will be an added advantage.
Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have.
Understanding of endpoint security concepts around Endpoint Detection and Response (EDR) would be advantageous.

Company Benefits and Perks: We work hard to embrace diversity and inclusion and encourage everyone to bring their authentic selves to work every day.
We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees: retirement plans; medical, dental, and vision coverage; paid time off; paid parental leave; and support for community involvement. We're serious about our commitment to diversity, which is why we prohibit discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation, or any other legally protected status.
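The RESTful API testing skill this SDET posting asks for often comes down to asserting on a response body's shape. A hedged Python sketch, with an entirely invented payload schema (`id`, `status`, `tags`) standing in for a real endpoint:

```python
# Illustrative response-body validation, the kind of check a Postman test
# or pytest assertion performs. The schema here is hypothetical.

def validate_response(payload: dict) -> list:
    """Return a list of schema violations; an empty list means it passes."""
    problems = []
    if not isinstance(payload.get("id"), int):
        problems.append("id must be an integer")
    if payload.get("status") not in {"active", "inactive"}:
        problems.append("status must be 'active' or 'inactive'")
    if not isinstance(payload.get("tags"), list):
        problems.append("tags must be a list")
    return problems

# In a real suite the payload would come from the API client's .json() call.
good = {"id": 7, "status": "active", "tags": ["edr"]}
bad = {"id": "7", "status": "unknown", "tags": ["edr"]}
assert validate_response(good) == []
assert len(validate_response(bad)) == 2
```

Returning a list of violations rather than failing on the first check gives richer defect reports, which matches the posting's emphasis on filing defects with all relevant details.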
Posted 1 month ago
6 - 11 years
6 - 11 Lacs
Gurugram
Work from Office
We are looking for talented, creative, and proactive individuals who are passionate about solving complex business problems and contributing to the next generation of modern applications. Our goal is to help our customers understand the connections between application performance, user experience, and business outcomes, thereby creating exceptional customer experiences. Join us in shaping the future of Observability Engineering within our Intelligent Operations team with innovative data and integration solutions.

Experience:
Minimum 6+ years of hands-on experience with Application Performance Management tools such as Datadog, New Relic, AppDynamics, Dynatrace, Splunk ITSI, Honeycomb, Chronosphere, Riverbed Aternity/Alluvio, ExtraHop, and LogicMonitor.
Hands-on experience with cloud-native, open-source solutions such as Prometheus, Grafana, the ELK stack, and OpenTelemetry (OTEL).
Experience with public cloud solutions such as AWS CloudWatch, Azure App Insights, etc.
Strong understanding of network and system management solutions, distributed systems, networking, and database technologies.
Operational background and familiarity with ITIL, ITSM, SRE, or DevOps best practices and principles.
Excellent problem-solving, organizational, project management, and communication skills.
Eagerness to collaborate, contribute to team success, and a continuous learning mindset.
Experience with containerization and orchestration technologies like Docker and Kubernetes.
Broad background in software engineering with, at a minimum, generalist-level expertise in programming languages such as Python, Java, Go, .NET, NodeJS, Ruby, and PHP.
Familiarity with microservices architecture, service mesh technologies, and end-user technologies (iOS, Android, JavaScript, HTML5).
Knowledge of configuration management tools such as Terraform and Ansible.

Roles and Responsibilities:
Implement and maintain cutting-edge observability solutions utilizing tools like New Relic, Datadog, AppDynamics, or Dynatrace for our large-scale enterprise customers.
Develop and maintain systems for effective monitoring, logging, and tracing, ensuring scalability and reliability.
Collaborate with cross-functional teams, including software engineers, product managers, and data scientists, to build resilient systems.
Integrate observability practices into engineering workflows and lead the adoption, optimization, and integration of products within the customer's business infrastructure.
Create custom dashboards, set up alerts, and develop AIOps rules, ensuring effective tracking against goals/KPIs.
Provide technical support in post-sales processes, including installation, deployment, training, technical check-ups, and escalation management.
Identify performance bottlenecks and anomalous system behavior and resolve root causes of service issues.
Stay updated with the latest trends in observability, logging, monitoring, and cloud technologies, and introduce innovative solutions and best practices.
Participate in strategic technology planning, focusing on scalability, cost-effectiveness, and risk management in observability infrastructure.
Document observability systems and processes comprehensively and prepare reports for management on system performance and reliability.
Utilize Infrastructure as Code (IaC) principles for efficient infrastructure provisioning and management.
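To make the dashboard-and-KPI work above concrete: Prometheus exposes metrics as plain text (`name{labels} value` lines). This is a simplified Python sketch of parsing such a scrape; real exposition data has more edge cases (TYPE lines, timestamps, escaping) that this deliberately ignores.

```python
# Simplified parser for Prometheus-style text exposition, enough to feed
# a custom dashboard or KPI check. Handles only 'name{labels} value'
# lines; HELP/TYPE comments are skipped.
import re

SAMPLE = """\
# HELP http_requests_total Total HTTP requests.
http_requests_total{code="200"} 1027
http_requests_total{code="500"} 3
"""

LINE = re.compile(r'^(\w+)(\{[^}]*\})?\s+([0-9.eE+-]+)$')

def parse_metrics(text: str) -> dict:
    out = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comment and blank lines
        m = LINE.match(line)
        if m:
            name, labels, value = m.groups()
            out[name + (labels or "")] = float(value)
    return out

metrics = parse_metrics(SAMPLE)
assert metrics['http_requests_total{code="200"}'] == 1027.0
assert metrics['http_requests_total{code="500"}'] == 3.0
```

From a dict like this, an error-rate KPI is one division away, which is exactly the kind of derived signal a Grafana panel or AIOps rule tracks.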
Posted 1 month ago
1 - 6 years
8 - 13 Lacs
Pune
Work from Office
Cloud Observability Administrator

Pune, India | Enterprise IT - 22685

Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS is looking for a Cloud Observability Administrator to join our team in Pune. As a Cloud Observability Administrator, you will work on the configuration of various observability tools and create solutions to address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and your past experience to design flexible and scalable solutions, and collaborate with other team members (involved in the requirements-gathering, testing, roll-out, and operations phases) to ensure seamless transitions.

What You'll Do:
Deploy, manage, and operate a scalable, highly available, and fault-tolerant Splunk architecture.
Onboard various kinds of log sources (Windows/Linux/firewalls/network) into Splunk.
Develop alerts, dashboards, and reports in Splunk.
Write complex SPL queries.
Manage and administer a distributed Splunk architecture.
Very good knowledge of the configuration files used in Splunk for data ingestion and field extraction.
Perform regular upgrades of Splunk and relevant apps/add-ons.
Possess a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc.
Automate manual tasks using Shell/PowerShell scripting; knowledge of Python scripting is a plus.
Good knowledge of Linux commands for server administration.
What You'll Bring:
1+ years of experience in Splunk development and administration, and a Bachelor's degree in CS, EE, or a related discipline.
Strong analytic, problem-solving, and programming ability.
1-1.5 years of relevant consulting-industry experience working on medium-to-large-scale technology solution delivery engagements.
Strong verbal, written, and team presentation skills, with the ability to articulate results and issues to internal and client teams.
Proven ability to work creatively and analytically in a problem-solving environment.
Ability to work within a virtual global team environment and contribute to the timely delivery of multiple projects.
Knowledge of observability tools such as Cribl, Datadog, and PagerDuty is a plus.
Knowledge of AWS Prometheus and Grafana is a plus.
Knowledge of APM concepts is a plus.
Knowledge of Linux/Python scripting is a plus.
Splunk certification is a plus.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed.
Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law. To complete your application: candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At
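The field-extraction duty in this Splunk posting amounts to pulling structured fields out of raw log lines. This regex-based Python sketch illustrates the concept only; it is not Splunk's implementation, and the log format (`timestamp host LEVEL message`) is invented for the example.

```python
# Conceptual analogue of a Splunk field extraction: named regex groups
# turn a raw event into searchable fields. The log layout is hypothetical.
import re

EXTRACT = re.compile(
    r'(?P<ts>\S+) (?P<host>\S+) (?P<level>[A-Z]+) (?P<msg>.*)'
)

def extract_fields(raw: str) -> dict:
    """Return the extracted fields, or an empty dict if the line doesn't match."""
    m = EXTRACT.match(raw)
    return m.groupdict() if m else {}

event = extract_fields("2024-05-01T10:00:00Z web-01 ERROR disk full")
assert event["host"] == "web-01"
assert event["level"] == "ERROR"
assert event["msg"] == "disk full"
```

In Splunk itself this kind of mapping lives in props.conf/transforms.conf; the sketch just shows the extraction idea a candidate is expected to understand.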
Posted 1 month ago
4 - 8 years
15 - 25 Lacs
Noida, Pune, Gurugram
Hybrid
We are looking for passionate DevOps Engineers for our team. The DevOps Engineer will work closely with architects, data engineers, and operations to design, build, deploy, manage, and operate our development, test, and production infrastructure. You will build and maintain tools to ensure our applications meet our stringent SLAs in a fast-paced culture, with a passion to learn and contribute. We are looking for a strong engineer with a can-do attitude.

What You'll Need:
4+ years of industry experience in the design, implementation, and maintenance of IT infrastructure and DevOps solutions, data centres, and greenfield infrastructure projects, both on-premise and in the cloud.
Strong experience with Terraform, Oracle Cloud Infrastructure, Ansible, and Puppet.
Strong experience in server installation, maintenance, monitoring, troubleshooting, data backup, recovery, security, and administration of Linux operating systems such as Red Hat Enterprise Linux, Ubuntu, and CentOS.
Strong programming ability in Shell/Perl/Python with automation experience.
Experience automating public cloud deployments.
Experience using and optimizing monitoring and trending systems (Prometheus, Grafana), log aggregation systems (ELK, Splunk), and their agents.
Experience with container-based technologies like Docker and OpenShift, including using Docker and Ansible to automate the creation of Kubernetes pods.
Experience with Ansible for provisioning and configuration of servers.
Experience with Jenkins to automate the build and deploy process across all environments.
Experience with build tools like Apache Maven and Gradle.
Monitoring and troubleshooting of Kubernetes clusters using Grafana and Prometheus.
Experience working closely with the development team to avoid manual intervention and ensure timely delivery.
Experience with virtualization technologies (VMware).
Database administration, maintenance, backup, and restoration.
Experience with various SQL and NoSQL databases such as MySQL, Postgres, MongoDB, HBase, and Elasticsearch.
Experience handling production deployments.

Our perfect candidate is someone who:
Is proactive and an independent problem solver.
Is a constant learner. We are a fast-growing company, and we want you to grow with us!
Is a team player and a good communicator.

Notice Period: 30 days or less. Mode of Work: Hybrid (3 days work from office).
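The production-deployment experience this posting asks for usually includes a post-deploy health gate: probe the new release a few times before declaring success or rolling back. A hedged Python sketch; `check_fn` is a stand-in for a real probe (an HTTP ping, a `kubectl rollout status` call, etc.), and a real version would also back off between attempts.

```python
# Post-deploy health gate: succeed as soon as one probe passes, give up
# (i.e. signal rollback) after a fixed number of attempts. The probe
# itself is injected, so this sketch stays self-contained.

def deployment_healthy(check_fn, attempts: int = 3) -> bool:
    """Return True if any of `attempts` probes passes; False means roll back."""
    for _ in range(attempts):
        if check_fn():
            return True
        # a real pipeline stage would time.sleep() with backoff here
    return False

# Simulated probe that passes on the second attempt (pods still warming up).
results = iter([False, True, True])
assert deployment_healthy(lambda: next(results)) is True

# A probe that never passes: the pipeline should trigger rollback.
assert deployment_healthy(lambda: False) is False
```

Injecting the probe also makes the gate trivially unit-testable, which is why CI scripts are often structured this way.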
Posted 1 month ago
4 - 8 years
13 - 18 Lacs
Hyderabad
Work from Office
About The Role

Senior Cloud Solutions Technologist
Job Location: Hyderabad, India | Workplace Type: Hybrid | Business Unit: ALI | Req Id: 1542

Responsibilities:
Design, build, and manage Azure infrastructure, ensuring high availability, performance, and security.
Implement DevOps practices using Azure DevOps, CI/CD pipelines, and infrastructure-as-code (IaC) tools.
Manage and optimize Azure Kubernetes Service (AKS) clusters, ensuring scalability, security, and efficiency of containerized applications.
Configure and maintain Azure-based servers and Citrix Virtual App environments.
Optimize performance, security, and disaster recovery strategies across Azure infrastructure, AKS clusters, and Citrix environments.
Automate cloud operations using scripting (Python, Bash, PowerShell) and configuration management tools (Puppet and Terraform).
Implement monitoring, logging, and alerting strategies for cloud services, applications, and infrastructure.
Apply cloud security best practices, ensuring compliance with organizational and regulatory security standards.
Collaborate with developers, architects, and infrastructure teams to streamline cloud deployments and ensure operational efficiency.
Participate in an on-call rotation to provide support for critical cloud systems.

Education / Qualifications: Hexagon is seeking a highly motivated and experienced Site Reliability Engineer (SRE) to design, build, and manage our Azure cloud infrastructure.
This role will be instrumental in implementing DevOps practices using Azure DevOps, optimizing and managing Azure Kubernetes Service (AKS) clusters for containerized applications, and configuring and maintaining Azure-based servers and Citrix Virtual Apps and Desktops environments.
A relevant bachelor's degree in an engineering stream is required.
Proficiency with monitoring tools (e.g., Datadog, Prometheus, Grafana, LogicMonitor).
Strong understanding of IT infrastructure, including servers, networks, and cloud environments across different OS platforms.
Experience with virtualization platforms and cloud security strategies.
Hands-on experience with container orchestration (e.g., Azure Kubernetes Service (AKS) or equivalent).
Proficiency in automation tools (e.g., Puppet and Terraform) and scripting languages (Python, Bash, PowerShell).
Experience setting up alerting and monitoring for containerized and microservices environments (Kubernetes, Docker).
Familiarity with DevOps best practices, including CI/CD pipeline development.
Strong problem-solving and analytical skills, with a focus on proactive identification and resolution of issues.
Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.

Preferred Qualifications:
Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect, or Azure DevOps Engineer Expert) are highly desirable.
Experience with Citrix Virtual Apps and Desktops administration is a plus.

About Hexagon: Hexagon is the global leader in digital reality solutions, combining sensor, software, and autonomous technologies. We put data to work to boost efficiency, productivity, quality, and safety across industrial, manufacturing, infrastructure, public sector, and mobility applications. Our technologies are shaping production and people-related ecosystems to become increasingly connected and autonomous, ensuring a scalable, sustainable future.
Hexagon (Nasdaq Stockholm: HEXA B) has approximately 24,500 employees in 50 countries and net sales of approximately 5.4bn EUR. Learn more at hexagon.com and follow us @HexagonAB.

Hexagon's R&D Centre in India: Hexagon's R&D Centre in India is the company's single largest R&D centre globally. More than 2,000 talented engineers and developers create innovation from this centre that powers Hexagon's products and solutions. Hexagon's R&D Centre delivers innovations and creative solutions for all business lines of Hexagon, including Asset Lifecycle Intelligence, Autonomous Solutions, Geosystems, Manufacturing Intelligence, and Safety, Infrastructure & Geospatial. It also hosts dedicated service teams for the global implementation of Hexagon's products. R&D India makes things intelligent.

Asset Lifecycle Intelligence: Produces insights across the asset lifecycle to design, construct, and operate more profitable, safe, and sustainable industrial facilities.

Everyone is welcome: At Hexagon, we believe that diverse and inclusive teams are critical to the success of our people and our business. Everyone is welcome; as an inclusive workplace, we do not discriminate. In fact, we embrace differences and are fully committed to creating equal opportunities, an inclusive environment, and fairness for all. Respect is the cornerstone of how we operate, so speak up and be yourself. You are valued here.
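The configuration-management duties in this posting (Puppet/Terraform-style automation) reduce to reconciling desired state against actual state. A minimal Python sketch of computing that drift; the setting names and values are hypothetical examples, not a real AKS schema.

```python
# Desired-vs-actual state reconciliation, the core idea behind
# configuration management and IaC plan/apply cycles. Keys are invented.

def diff_state(desired: dict, actual: dict) -> dict:
    """Return the settings that must change to reach the desired state."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

desired = {"node_count": 3, "sku": "Standard_D4s_v5", "rbac": True}
actual = {"node_count": 2, "sku": "Standard_D4s_v5"}

# node_count drifted and rbac is missing entirely; sku already matches.
assert diff_state(desired, actual) == {"node_count": 3, "rbac": True}
```

Tools like Terraform present exactly this diff as a "plan" before applying it, which is why reading plans carefully is a core operational skill.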
Posted 1 month ago
3 - 7 years
13 - 18 Lacs
Hyderabad
Work from Office
About The Role

Cloud Solutions Consultant
Job Location: Hyderabad, India | Workplace Type: Hybrid | Business Unit: ALI | Req Id: 1406

Responsibilities:
Design, build, and manage Azure infrastructure, ensuring high availability, performance, and security.
Implement DevOps practices using Azure DevOps, CI/CD pipelines, and infrastructure-as-code (IaC) tools.
Manage and optimize Azure Kubernetes Service (AKS) clusters, ensuring scalability, security, and efficiency of containerized applications.
Configure and maintain Azure-based servers and Citrix Virtual App environments.
Optimize performance, security, and disaster recovery strategies across Azure infrastructure, AKS clusters, and Citrix environments.
Automate cloud operations using scripting (Python, Bash, PowerShell) and configuration management tools (Puppet and Terraform).
Implement monitoring, logging, and alerting strategies for cloud services, applications, and infrastructure.
Apply cloud security best practices, ensuring compliance with organizational and regulatory security standards.
Collaborate with developers, architects, and infrastructure teams to streamline cloud deployments and ensure operational efficiency.
Participate in an on-call rotation to provide support for critical cloud systems.

Education / Qualifications: Hexagon is seeking a highly motivated and experienced Cloud Solutions Consultant to design, build, and manage our Azure cloud infrastructure.
This role will be instrumental in implementing DevOps practices using Azure DevOps, optimizing and managing Azure Kubernetes Service (AKS) clusters for containerized applications, and configuring and maintaining Azure-based servers and Citrix Virtual Apps and Desktops environments.

Required Skills & Qualifications:
A minimum of 6-10 years of relevant work experience and a bachelor's or master's degree in engineering.
Proficiency with monitoring tools (e.g., Datadog, Prometheus, Grafana, LogicMonitor).
Strong understanding of IT infrastructure, including servers, networks, and cloud environments across different OS platforms.
Experience with virtualization platforms and cloud security strategies.
Hands-on experience with container orchestration (e.g., Azure Kubernetes Service (AKS) or equivalent).
Proficiency in automation tools (e.g., Puppet and Terraform) and scripting languages (Python, Bash, PowerShell).
Experience setting up alerting and monitoring for containerized and microservices environments (Kubernetes, Docker).
Familiarity with DevOps best practices, including CI/CD pipeline development.
Strong problem-solving and analytical skills, with a focus on proactive identification and resolution of issues.
Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.

Preferred Qualifications:
Azure certifications (e.g., Azure Administrator Associate, Azure Solutions Architect, or Azure DevOps Engineer Expert) are highly desirable.
Experience with Citrix Virtual Apps and Desktops administration is a plus.

About Hexagon: Hexagon is the global leader in digital reality solutions, combining sensor, software, and autonomous technologies. We put data to work to boost efficiency, productivity, quality, and safety across industrial, manufacturing, infrastructure, public sector, and mobility applications.
Our technologies are shaping production and people-related ecosystems to become increasingly connected and autonomous, ensuring a scalable, sustainable future. Hexagon (Nasdaq Stockholm: HEXA B) has approximately 24,500 employees in 50 countries and net sales of approximately 5.4bn EUR. Learn more at hexagon.com and follow us @HexagonAB.

Hexagon's R&D Centre in India: Hexagon's R&D Centre in India is the company's single largest R&D centre globally. More than 2,000 talented engineers and developers create innovation from this centre that powers Hexagon's products and solutions. Hexagon's R&D Centre delivers innovations and creative solutions for all business lines of Hexagon, including Asset Lifecycle Intelligence, Autonomous Solutions, Geosystems, Manufacturing Intelligence, and Safety, Infrastructure & Geospatial. It also hosts dedicated service teams for the global implementation of Hexagon's products. R&D India makes things intelligent.

Asset Lifecycle Intelligence: Produces insights across the asset lifecycle to design, construct, and operate more profitable, safe, and sustainable industrial facilities.

Everyone is welcome.
Posted 1 month ago
4 - 9 years
0 Lacs
Bengaluru
Remote
This is Rajlaxmi from the HR department of ISoftStone Inc. We are looking for a TechOps Engineer with 5+ years of experience. Please find the JD below; if interested, please send your CV to "rajlaxmi.chowdhury@isoftstone.com".

Location: Bangalore/Remote
Relevant Experience: 5+ years

Overview: We are seeking a highly motivated and skilled TechOps Engineer to join our team. The ideal candidate will be responsible for ensuring the smooth operation and performance of GTP services, providing technical support, troubleshooting issues, and implementing solutions to optimize efficiency. This is an opportunity to work in a dynamic and innovative environment. We foster a collaborative and inclusive culture that values creativity, initiative, and continuous learning. If you are a self-motivated professional with a passion for technology and a drive for excellence, we invite you to apply and be an integral part of our team. Career progression opportunities exist for suitably skilled and motivated individuals in the wider GTP function.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
ITIL v3 or v4 Foundation certification is preferred.
Excellent communication skills and the ability to articulate technical issues and requirements.
Excellent problem-solving and troubleshooting skills.

Preferred Skills:
Comprehensive understanding of ITIL processes and best practices.
Comprehensive understanding of various monitoring systems such as Dynatrace, Sentry, Grafana, Prometheus, Azure Monitor, GCP Operations Suite, etc.
Proficiency in cloud technologies (e.g., AWS, Azure, GCP).
Understanding of operating Couchbase, MongoDB, and PostgreSQL databases is preferred.
Understanding of backup and disaster recovery concepts and tools to ensure the availability and recoverability of production systems in the event of a disaster.
Certification in relevant technologies (e.g., Microsoft Azure, GCP) is a plus.
Familiarity of DevOps practices such as CI/CD workflows, experience with GitHub Actions, and proficiency in using infrastructure automation tools Knowledge of software development lifecycle. Knowledge of containerization and orchestration tools such as Kubernetes Technologies and Tools.
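Monitoring stacks like the Prometheus/Grafana pairing listed above expose query APIs that TechOps scripts commonly consume. As a minimal sketch, assuming a Prometheus server at `localhost:9090` and a hypothetical `up{job="api"}` metric, an instant query can be built and its response parsed using only the Python standard library:

```python
import json
import urllib.parse
import urllib.request


def build_query_url(base: str, promql: str) -> str:
    """Build an instant-query URL against Prometheus's HTTP API."""
    return f"{base}/api/v1/query?{urllib.parse.urlencode({'query': promql})}"


def parse_instant_result(payload: str):
    """Extract (labels, value) pairs from a /api/v1/query JSON response."""
    body = json.loads(payload)
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body}")
    return [
        (sample["metric"], float(sample["value"][1]))  # value is [ts, "str"]
        for sample in body["data"]["result"]
    ]


# Usage against a live server (address and metric are assumptions):
#   url = build_query_url("http://localhost:9090", 'up{job="api"}')
#   with urllib.request.urlopen(url, timeout=5) as resp:
#       print(parse_instant_result(resp.read().decode()))
```

The same parsing applies to any instant vector query; range queries use `/api/v1/query_range` with additional `start`, `end`, and `step` parameters.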
Posted 1 month ago
3 - 8 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Bachelor's degree in Computer Science, Engineering, or a related field.
Summary: This is a hands-on, technical role where the candidate will design and implement a DevOps Maturity Model by integrating multiple DevOps tools and building backend APIs to visualize data on a front-end interface. The candidate will work closely with cross-functional teams to enable DevOps culture, ensure system reliability, and drive continuous improvement.
Roles & Responsibilities:
1. DevOps Maturity Model: Design and develop a model to assess and improve DevOps practices by integrating tools like Jenkins, GitLab, and Azure DevOps.
2. Backend Development: Build scalable and efficient backend APIs using Python and Azure Serverless.
3. Frontend Development: Develop intuitive and responsive front-end interfaces using Angular and Vue.js for data visualization.
4. Monitoring & Automation: Implement monitoring, logging, and alerting solutions. Develop automation scripts for reporting and analysis.
5. Collaboration: Work with cross-functional teams to resolve production-level disruptions and enable DevOps culture.
6. Documentation: Document architecture, design, and implementation details.
Professional & Technical Skills:
1. Backend Development: Python and experience with Azure Serverless.
2. Frontend Development: Angular and Vue.js.
3. Databases: Familiarity with Azure SQL, Cosmos DB, or PostgreSQL.
4. Containerization: Good understanding of Docker and Kubernetes for basic troubleshooting.
5. Networking: Basic understanding of TCP/IP, HTTP, DNS, VPN, and cloud networking.
6. Monitoring & Logging: Experience with monitoring tools like Prometheus, Grafana, or Datadog.
Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python & Angular full stack.
2. This position is based at our Bengaluru office.
3. A full-time education of 15 years is required (Bachelor's degree in Computer Science, Engineering, or a related field).
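A maturity model of the kind described in the responsibilities is often backed by a simple scoring layer before any tool integration is wired in. The sketch below is a hypothetical starting point: the practice areas, weights, and band names are invented for illustration, not taken from the posting.

```python
from dataclasses import dataclass

# Hypothetical practice areas and weights; a real model's dimensions
# would come from the team's own assessment framework.
WEIGHTS = {"ci": 0.3, "cd": 0.3, "monitoring": 0.2, "testing": 0.2}


@dataclass
class Assessment:
    team: str
    scores: dict  # practice area -> maturity level 0..5


def maturity_score(a: Assessment) -> float:
    """Weighted average maturity level (0-5) across practice areas."""
    return sum(WEIGHTS[k] * a.scores.get(k, 0) for k in WEIGHTS)


def maturity_band(score: float) -> str:
    """Map a numeric score onto a coarse, human-readable maturity band."""
    if score >= 4:
        return "optimizing"
    if score >= 3:
        return "measured"
    if score >= 2:
        return "defined"
    return "initial"
```

In practice, the per-area scores would be populated from the integrated tools (e.g., pipeline success rates from Jenkins or GitLab) and exposed through the backend API for the front-end dashboards to visualize.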
Posted 1 month ago
4 - 9 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer
Experience Required: 4–5 Years
Location: 100% Work from Office
Schedule: Monday to Friday | 7:00 AM – 4:00 PM IST (aligned with Brisbane time)
Job Summary: We are hiring a DevOps Engineer with a strong foundation in CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate must have hands-on experience with Docker, Kubernetes, Terraform, and scripting languages.
Mandatory Technical Skills & Tools: DevOps; CI/CD pipelines (e.g., Jenkins, GitLab CI/CD, CircleCI); Docker; Kubernetes; Infrastructure as Code (IaC) – Terraform, CloudFormation; Linux systems administration; automation scripting – Python, Shell/Bash; cloud platforms – AWS, Azure, or GCP; monitoring & logging – Prometheus, Grafana, ELK, or similar; source control – Git, GitHub, GitLab; configuration management – Ansible, Chef, or Puppet.
Key Responsibilities: Design, build, and maintain automated CI/CD pipelines. Manage and scale containerized applications using Docker and Kubernetes. Write and maintain Infrastructure as Code (IaC) using Terraform or similar tools. Develop automation scripts for deployment, monitoring, and infrastructure operations. Ensure system reliability, scalability, and performance. Collaborate with software engineers, QA, and infrastructure teams to ensure seamless deployment cycles. Troubleshoot and resolve infrastructure issues proactively.
Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 4–5 years of experience in a DevOps or Site Reliability Engineering role. Proven experience in CI/CD, containers, and cloud infrastructure. Strong analytical and problem-solving skills.
Work Environment: 100% work from office, 5 days a week (Monday to Friday); working hours aligned with Brisbane time: 7:00 AM – 4:00 PM IST.
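Automation scripts of the kind listed under the responsibilities often reduce to polling a health endpoint after a deploy, backing off between attempts. A minimal stdlib-only sketch follows; the endpoint URL and retry schedule are assumptions, not anything specified in the posting.

```python
import time
import urllib.error
import urllib.request


def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 5):
    """Yield exponential backoff delays: 1, 2, 4, 8, 16 seconds by default."""
    for attempt in range(retries):
        yield base * (factor ** attempt)


def wait_until_healthy(url: str, retries: int = 5) -> bool:
    """Poll a post-deploy health endpoint until it answers 200 OK."""
    for delay in backoff_delays(retries=retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:  # healthy response
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after the backoff delay
        time.sleep(delay)
    return False


# Usage (hypothetical endpoint):
#   ok = wait_until_healthy("http://my-service.internal/healthz")
```

The same pattern slots naturally into a pipeline stage that gates promotion to the next environment on the health check succeeding.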
Posted 1 month ago
5 - 7 years
10 - 12 Lacs
Bengaluru
Work from Office
Experience in designing and building high-performance, distributed systems. Familiarity with cloud services (AWS, GCP, Azure) and containerization (Docker, Kubernetes). Strong knowledge of asynchronous programming, multithreading, and parallel processing. Experience in integrating external APIs, function calling, and plugin-based architectures. Experience with performance monitoring and logging tools (Prometheus, Grafana, ELK stack). Familiarity with search engines, RAG pipelines, and hybrid search strategies.
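The asynchronous-programming requirement above typically means fanning out concurrent API calls rather than issuing them serially, so total latency tracks the slowest call instead of the sum. A small illustrative sketch using `asyncio.gather`; the service names and delays are placeholders for real external APIs:

```python
import asyncio


async def call_api(name: str, delay: float) -> str:
    # Stand-in for an external API call (hypothetical services).
    await asyncio.sleep(delay)
    return f"{name}: ok"


async def fan_out() -> list:
    """Issue several calls concurrently; wall time ~= the slowest call,
    and gather() returns results in the order the awaitables were passed."""
    return await asyncio.gather(
        call_api("search", 0.1),
        call_api("rag", 0.2),
        call_api("rerank", 0.05),
    )


results = asyncio.run(fan_out())
```

In a hybrid-search or RAG pipeline this pattern lets keyword search, vector retrieval, and reranking calls overlap instead of stacking their latencies.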
Posted 1 month ago
3 - 8 years
4 - 9 Lacs
Bangalore Rural, Bengaluru
Work from Office
Skills: Elasticsearch, Talend, Grafana
Responsibilities: Build dashboards, manage clusters, optimize performance
Tech: API, Python, cloud platforms (AWS, Azure, GCP)
Preference: Immediate joiners
Contact: 6383826448 || jensyofficial23@gmail.com
Posted 1 month ago
10 - 18 years
20 - 27 Lacs
Hyderabad, Ahmedabad
Work from Office
Hi Aspirants, greetings from TechBlocks – IT Software & Services – Hyderabad & Ahmedabad!
About TechBlocks: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities.
Job Title: SRE Manager / SRE Team Leader (Site Reliability Manager)
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: Hybrid (3 days WFO & 2 days WFH)
Job Summary: An SRE Manager is responsible for overseeing a team of Site Reliability Engineers (SREs) and ensuring the reliability, performance, and availability of a company's digital infrastructure. They manage the SRE team, drive automation initiatives, and collaborate with other departments to ensure seamless operations and alignment with business objectives.
Experience Required: 10+ years total experience, with 3+ years in a leadership role in SRE (Site Reliability Engineering) or Cloud Operations.
Technical Knowledge and Skills:
Mandatory: Deep understanding of Kubernetes, GKE, Terraform, and Grafana / Prometheus / Splunk / Datadog. Cloud: advanced GCP administration (or any cloud). CI/CD: Jenkins, Argo CD, GitHub Actions. Incident management: full lifecycle, with tools like Opsgenie.
Nice to Have: Knowledge of service mesh and observability stacks. Strong scripting skills (Python, Bash). BigQuery/Dataflow exposure for telemetry.
Scope: Build and lead a team of SREs. Standardize practices for reliability, alerting, and response. Engage with Engineering and Product leaders.
Roles and Responsibilities: Establish and lead the implementation of organizational reliability strategies, aligning SLAs, SLOs, and error budgets with business goals and customer expectations.
Develop and institutionalize incident response frameworks, including escalation policies, on-call scheduling, service ownership mapping, and RCA process governance. Lead technical reviews for infrastructure reliability design, high-availability architectures, and resiliency patterns across distributed cloud services. Champion observability and monitoring culture by standardizing tooling, alert definitions, dashboard templates, and telemetry data schemas across all product teams. Drive continuous improvement through operational maturity assessments, toil elimination initiatives, and SRE OKRs aligned with product objectives. Collaborate with cloud engineering and platform teams to introduce self-healing systems, capacity-aware autoscaling, and latency-optimized service mesh patterns. Act as the principal escalation point for reliability-related concerns and ensure incident retrospectives lead to measurable improvements in uptime and MTTR. Own runbook standardization, capacity planning, failure mode analysis, and production readiness reviews for new feature launches. Mentor and develop a high-performing SRE team, fostering a proactive ownership culture, encouraging cross-functional knowledge sharing, and establishing technical career pathways. Collaborate with leadership, delivery, and customer stakeholders to define reliability goals, track performance, and demonstrate ROI on SRE investments.
Note: Please send your updated resume to kranthikt@tblocks.com or reach me on 8522804902.
Warm Regards, Kranthi Kumar | kranthikt@tblocks.com | 8522804902 | Senior Talent Acquisition Specialist | Toronto | Ahmedabad | Hyderabad | Pune | www.tblocks.com
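The SLO and error-budget alignment mentioned in the responsibilities comes down to simple arithmetic: a given SLO implies a fixed allowance of downtime per rolling window, and spending is tracked against it. A brief sketch of that calculation; the window length and SLO values here are illustrative only:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given SLO over a rolling window.
    e.g. a 99.9% SLO over 30 days permits ~43.2 minutes of downtime."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)


def budget_remaining(slo: float, downtime_min: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means blown)."""
    budget = error_budget_minutes(slo, window_days)
    return 1 - downtime_min / budget
```

Teams commonly alert on burn rate (how fast the budget is being consumed) rather than on the raw remaining fraction, so a fast-moving incident pages sooner than slow background errors.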
Posted 1 month ago
5 - 9 years
4 - 8 Lacs
Kolkata
Work from Office
We are looking for an experienced and motivated DevOps Engineer with 5 to 7 years of hands-on experience designing, implementing, and managing cloud infrastructure, particularly on Google Cloud Platform (GCP). The ideal candidate will have deep expertise in infrastructure as code (IaC), CI/CD pipelines, container orchestration, and cloud-native technologies. This role requires strong analytical skills, attention to detail, and a passion for optimizing cloud infrastructure performance and cost. Key Responsibilities: Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using Google Cloud Platform (GCP) services, including Compute Engine, Google Kubernetes Engine (GKE), Cloud Functions, Cloud Pub/Sub, BigQuery, and Cloud Storage. Build and manage CI/CD pipelines using GitHub, artifact repositories, and version control systems; enforce GitOps practices across environments. Leverage Docker, Kubernetes, and serverless architectures to support microservices and modern application deployments. Develop and manage Infrastructure as Code (IaC) using Terraform to automate environment provisioning. Implement observability tools like Prometheus, Grafana, and Google Cloud Monitoring for real-time system insights. Ensure best practices in cloud security, including IAM policies, encryption standards, and network security. Integrate and manage service mesh architectures such as Istio or Linkerd for secure and observable microservices communication. Troubleshoot and resolve infrastructure issues, ensuring high availability, disaster recovery, and performance optimization. Drive initiatives for cloud cost management and suggest optimization strategies for resource efficiency. Document technical architectures, processes, and procedures; ensure smooth knowledge transfer and operational readiness. Collaborate with cross-functional teams including Development, QA, Security, and Architecture to streamline deployment workflows.
Preferred candidate profile 5+ years of DevOps/Cloud Engineering experience, with at least 3 years on GCP. Proficiency in Terraform, Docker, Kubernetes, and other DevOps toolchains. Strong experience with CI/CD tools, GitHub/GitLab, and artifact repositories. Deep understanding of cloud networking, VPCs, load balancing, firewalls, and VPNs. Expertise in monitoring and logging frameworks such as Prometheus, Grafana, and Stackdriver (Cloud Monitoring). Strong scripting skills in Python, Bash, or Go for automation tasks. Knowledge of data backup, high-availability systems, and disaster recovery strategies. Familiarity with service mesh technologies and microservices-based architecture. Excellent analytical, troubleshooting, and documentation skills. Effective communication and ability to work in a fast-paced, collaborative environment.
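Cost-management initiatives like those in the responsibilities often start with flagging underutilized instances for rightsizing. A hypothetical sketch follows; the instance records and CPU threshold are invented for illustration, and real inputs would come from Cloud Monitoring metrics or a billing export:

```python
# Hypothetical instance records; real data would come from
# Cloud Monitoring / a billing export, not hard-coded literals.
INSTANCES = [
    {"name": "web-1", "cpu_avg": 0.62, "monthly_cost": 120.0},
    {"name": "batch-1", "cpu_avg": 0.04, "monthly_cost": 310.0},
    {"name": "db-1", "cpu_avg": 0.35, "monthly_cost": 480.0},
]


def rightsizing_candidates(instances, cpu_threshold=0.10):
    """Flag instances whose average CPU sits below the threshold,
    sorted by potential monthly savings (most expensive first)."""
    idle = [i for i in instances if i["cpu_avg"] < cpu_threshold]
    return sorted(idle, key=lambda i: i["monthly_cost"], reverse=True)
```

A report like this is usually a first pass: memory, disk, and network utilization would be folded in before any instance is actually resized or scheduled for shutdown.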
Posted 1 month ago
Grafana is a popular tool used for monitoring and visualizing metrics, logs, and other data. In India, the demand for Grafana professionals is on the rise as more companies are adopting this tool for their monitoring and analytics needs.
The average salary range for Grafana professionals in India varies based on experience level: - Entry-level: ₹4-6 lakhs per annum - Mid-level: ₹8-12 lakhs per annum - Experienced: ₹15-20 lakhs per annum
A typical career path in Grafana may include roles such as: 1. Junior Grafana Developer 2. Grafana Developer 3. Senior Grafana Developer 4. Grafana Tech Lead
In addition to Grafana expertise, professionals in this field often benefit from having knowledge or experience in: - Monitoring tools such as Prometheus - Data visualization tools like Tableau - Scripting languages (e.g., Python, Bash) - Understanding of databases (e.g., SQL, NoSQL)
As the demand for Grafana professionals continues to grow in India, it is essential to stay updated with the latest trends and technologies in this field. Prepare thoroughly for interviews and showcase your skills confidently to land your dream job in Grafana. Good luck!