3 - 5 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Description: Proeffective IT Services Private Limited. Job title: Golang Developer. Location: Bengaluru. Terms: Full-Time. Skills: Golang, Kubernetes, OSPF routing, VTI (virtual tunnel interfaces), IPsec VPN, token-based authentication, SNMP traps, Kafka, RabbitMQ, Prometheus, Grafana, Linux kernel, network engineering. Experience: 3 to 5 years. About us: We are committed to empowering businesses with expert IT solutions and consulting tailored to meet unique needs. Our focus on quality and client satisfaction allows us to combine technology and strategy to drive your success. If you like a challenging work environment where everyone is recognized and encouraged to pursue their career goals, then you are in the right place! About the role: We are seeking a skilled Golang Developer to join our team and contribute to building high-performance, scalable, and robust applications. The ideal candidate should have experience in backend development, cloud technologies, and distributed systems. Responsibilities: Develop and maintain a Golang-based backend that manages thousands of networking devices. Implement secure, token-based authentication between the backend and gateway devices (JWT, OAuth2, mTLS). Configure and manage OSPF routing, Virtual Tunnel Interfaces (VTI), and IPsec VPNs via backend APIs. Design and implement a scalable microservices architecture using Kubernetes (K8s). Handle SNMP trap processing for real-time device monitoring and alerts. Utilize Kafka/RabbitMQ for handling high-throughput device events asynchronously. Ensure high availability, fault tolerance, and scalability in backend services. Optimize network traffic handling, logging, and monitoring using Prometheus, Grafana, or ELK. Implement automated provisioning and configuration management. Candidate Requirements: 3-5 years of experience in backend development with Golang. Strong understanding of networking protocols (TCP/IP, OSPF, IPsec, VTI, SNMP). Experience with secure communication channels (mTLS, OAuth2, JWT). Hands-on experience with Kubernetes (K8s) and containerized microservices. Experience with Kafka/RabbitMQ for event-driven architecture. Good understanding of Linux internals and system performance tuning. Experience with monitoring and logging tools (Prometheus, Grafana, ELK). Knowledge of infrastructure automation (Terraform, Ansible, Helm). Familiarity with CI/CD pipelines for automated deployments. Nice to have: Experience in low-latency, high-performance distributed systems. Exposure to edge computing and IoT device management. Knowledge of firewall configurations and VPN setups.
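The token-based authentication described above (short-lived tokens that gateway devices present to the backend) is language-agnostic even though the role itself is Golang-centric. Purely as an illustrative sketch in Python with PyJWT, and with a hypothetical shared secret, issuer, and claim set, the issue/verify pair might look like this:

```python
# Illustrative only: token-based device authentication sketched with PyJWT.
# The secret, issuer, and claim names are hypothetical, not from the listing.
import time
import jwt  # pip install PyJWT

SECRET = "replace-with-a-real-secret"   # hypothetical shared secret
ISSUER = "backend.example.internal"     # hypothetical issuer

def issue_device_token(device_id: str, ttl_seconds: int = 900) -> str:
    """Issue a short-lived token a gateway device presents on each API call."""
    now = int(time.time())
    claims = {"sub": device_id, "iss": ISSUER, "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_device_token(token: str) -> dict:
    """Reject expired or tampered tokens; return the claims otherwise."""
    return jwt.decode(token, SECRET, algorithms=["HS256"], issuer=ISSUER)

if __name__ == "__main__":
    token = issue_device_token("gw-0042")
    print(verify_device_token(token))
```

In a production setup, the mTLS and OAuth2 mentioned in the listing would replace or wrap this shared-secret scheme; the sketch only shows the token lifecycle.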
Posted 3 months ago
12 - 16 years
40 - 45 Lacs
Hyderabad
Work from Office
The Identity Access Management (IAM) Engineer will be the senior technical SME in the IAM organization, leading the technical delivery of Customer Identity and Access Management (CIAM). Lead CIAM implementation projects hands-on from initiation to completion, including requirements gathering, solution design, implementation, testing, and deployment. Develop and implement the CIAM strategy and roadmaps considering industry security trends and regulatory requirements. Improve the maturity of the CIAM products and services, showing increased adoption and speed to market. Must have strong development and customization experience. This role is based out of Hyderabad, India and requires coming into the office. Responsibilities Provides subject matter expertise in designing, solutioning, and implementing access management requirements. Solution and implement customer identity access management solutions with hands-on experience in leading CIAM platforms such as Okta, Auth0, or ForgeRock. Provide the required knowledge and expertise to assist with the technical approach for the shared operational capabilities of CIAM, including user registration, self-service, authentication, authorization, administration, audit, and reporting. Drive the adoption of and migration to the enterprise CIAM capabilities. Provides advanced engineering expertise to automate and administer identity and compliance requirements. Work with Cybersecurity and API teams to document best practices, authentication patterns, and decision criteria for authentication and authorization. Hands-on execution of identity management roadmaps and technology enhancements. Support program goals and objectives leveraging expert Okta experience and skills. Performs highly specialized and technical tasks associated with the most current and cutting-edge technologies. Creates and maintains standards surrounding documentation related to identity processes and infrastructure. Provide level 3 production support to help diagnose and troubleshoot production issues. Define best practices and develop troubleshooting processes, methodologies, standards, alerts, and reporting from CIAM platform(s) to be leveraged for operational monitoring. Participate in incident response and security incident investigations related to IAM systems. Adapt the architecture to evolving security conditions and support security guidelines. Evaluate and recommend IAM technologies, tools, and vendors to support our organization's evolving security and business needs. Develop and deliver applicable documentation, training, and knowledge transfer to both internal and external stakeholders. Provide technical leadership in designing, configuring, and troubleshooting IAM solutions. Evaluate and hands-on implement automation capabilities to simplify processes and deliver value/cost savings to the business. Foster the Agile DevOps culture through the latest toolset to improve customer satisfaction through rapid, continuous delivery. Qualifications Minimum Qualifications: 12+ years of overall IT experience. 9+ years of hands-on experience in authentication architecture, solutioning, and design roles. 9+ years of hands-on experience with Okta and/or relevant access management tools. 6+ years of scripting (PowerShell, Python) and development (Java, J2EE, JavaScript, React, REST API) experience. 4+ years working with Agile and DevOps tools and methodologies. Minimum Okta Certified Administrator; Okta Certified Consultant and/or Okta Certified Developer preferred.
BS/BA degree or equivalent experience. CISSP / CIAM certification is a plus. Experience with the CIC/Auth0 platform is a plus. Preferred Qualifications: Extensive experience in solutioning, designing, and implementing authentication services. Experience leading CIAM implementation projects from initiation to completion, including requirements gathering, solution design, implementation, testing, and deployment. Proven track record of understanding B2B and B2C customer needs and delivering solutions that enhance user experience while maintaining security and compliance standards. Thorough understanding of security best practices, privacy regulations (such as GDPR, CCPA), and compliance requirements related to customer data protection. Broader IAM domain experience with a focus on information security. Deep technical expertise in solutioning and integrating B2B and B2C applications with CIAM. Strong expertise in designing solutions with standard IAM platforms like Okta and PingFederate, enabling single sign-on services for both cloud and on-prem applications. Hands-on experience in building SSO solutions with various protocols like SAML, OAuth, OIDC, and header-based applications and platforms, preferably Azure AD, Ping, and SiteMinder. Strong hands-on experience in designing and architecting consumer identity and access management solutions. Strong understanding of the latest security principles, like zero trust and passwordless authentication, to implement new standards in the authentication model. Must have working knowledge of Okta Lifecycle Management and Administrative APIs. Experience with solutions like CyberArk, BeyondTrust, RSA, or comparable products. Excellent understanding of REST integration concepts. Experience with directory services like Oracle LDAP and AD. Experience working with cloud-based authentication solutions (e.g., AWS Cognito, Azure AD, Okta). Strong hands-on development experience: Java, Node.js, React, Spring Boot, REST APIs, and JavaScript. Hands-on experience with JavaScript, Python, Ruby, PowerShell, or other scripting languages preferred. Experience building CI/CD pipelines in Azure or AWS, and automating application deployment with CI/CD pipelines using Ansible and Terraform. Experience with monitoring tools like Splunk, ELK, Prometheus, or similar. Experience with container technologies such as Docker and Kubernetes. Experience with Linux and Windows platforms, middleware, Apache, and load balancers. Experience developing workflows and custom connectors, and troubleshooting complex issues. Experience with Agile and DevOps tools and methodologies. Experience with SiteMinder is preferred. Non-Technical Skills: Exceptional communication and interpersonal skills with the ability to influence and collaborate with diverse stakeholders. Deliver outcomes with little supervision; must be a self-starter and self-motivated. Strong analytical, problem-solving, and decision-making skills, with the ability to manage complex and competing priorities. Strong project management and organizational skills, with the ability to deliver high-quality results. Ability to think strategically and suggest creative solutions. Ability to synthesize complex requirements into simple business practices. Flexible and able to adapt to changing priorities.
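Since the role leans heavily on Okta and OAuth2, a minimal sketch of the client-credentials flow against an Okta org is shown below using Python's requests library. The org URL, the use of the "default" authorization server, and the client ID, secret, and scope are placeholder assumptions rather than details from the listing.

```python
# Hedged sketch: OAuth2 client-credentials grant against an Okta org.
# Org URL, authorization server, credentials, and scope are placeholders.
import requests

OKTA_ORG = "https://dev-000000.okta.com"           # hypothetical Okta org
TOKEN_URL = f"{OKTA_ORG}/oauth2/default/v1/token"  # assumes the 'default' auth server

def get_service_token(client_id: str, client_secret: str, scope: str = "api.read") -> str:
    resp = requests.post(
        TOKEN_URL,
        auth=(client_id, client_secret),  # HTTP Basic client authentication
        data={"grant_type": "client_credentials", "scope": scope},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

if __name__ == "__main__":
    print(get_service_token("my-client-id", "my-client-secret")[:20], "...")
```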
Posted 3 months ago
7 - 12 years
8 - 12 Lacs
Mumbai
Work from Office
iSource Services is hiring for one of their clients for the position of DevOps Engineer. Skills: DevOps (AWS, Jenkins, K8s, Prometheus, Splunk, Grafana), PHP framework, OOP. Responsibilities: Security patches, QCR/compliance, bug fixes. L3 escalations, including PDP drop-in (FE) during BF/holiday. Special-events preparation, testing (e2e, performance), release validation. Deployment on the pre-prod environment. Monitoring and alerting changes; monitoring AWS resources and K8s clusters. On-call duties.
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Chennai
Work from Office
Description: Job Title: DevOps Engineer (AWS & on-premise). 6+ years of experience in DevOps engineering with a focus on IaC (Terraform, Ansible). Job Summary: We are seeking an experienced DevOps Engineer with a strong background in automating resource provisioning via IaC tools like Terraform and Ansible in on-premise and AWS environments. The ideal candidate will be responsible for designing, implementing, and maintaining on-premise and AWS cloud environment deployments and configuration via infrastructure as code (IaC), while ensuring seamless integration between on-premise and cloud environments. Qualifications: • Education: o Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience). • Experience: o 6+ years of experience in DevOps engineering with a focus on AWS. o 3+ years of experience managing on-premise Kubernetes clusters. • Must Have Skills: o Strong knowledge of and experience with infrastructure as code (IaC) tools like Terraform, Ansible, CloudFormation. o Proficient in managing Kubernetes clusters, including networking, storage, and security. o Proficient in managing Docker Swarm clusters, including networking, storage, and security. o Strong knowledge of AWS architecture, services, and best practices. o Proficient in AWS services like VPC, EC2, S3, ELB, EBS, RDS, IAM, Route 53, CloudWatch, CloudFront, CloudTrail, Backup, DataSync, Systems Manager. o Experienced in creating multiple VPCs with public and private subnets as required and distributing them across the availability zones of each VPC. o Experience with monitoring and logging tools like Prometheus, Grafana, ELK Stack. o Experience with scripting languages (e.g., Python, Bash) and automation tools. o Familiarity with Git and version control systems. o Strong problem-solving skills and attention to detail. Preferred Skills: • AWS Certified DevOps Engineer or equivalent certification. • Experience with hybrid cloud environments (on-premise and cloud). • Familiarity with service mesh (e.g., Istio) and Kubernetes operators. • Knowledge of microservices architecture and serverless computing. Job Responsibilities: Key Responsibilities: • Kubernetes Management: o Design, deploy, and maintain on-premise Kubernetes clusters. o Implement monitoring, logging, and alerting solutions for Kubernetes environments. o Manage container orchestration, including scaling, security, and upgrades. • AWS Cloud Management: o Design, deploy, and manage AWS infrastructure using services such as EC2, S3, RDS, VPC, IAM, Lambda, and others. o Implement infrastructure as code (IaC) using tools like Terraform, AWS CloudFormation, or Ansible. o Manage AWS networking components like VPCs, security groups, and load balancers. • CI/CD Pipeline Development: o Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS CodePipeline. o Automate testing, deployment, and monitoring processes to ensure high-quality releases. o Collaborate with development teams to integrate CI/CD processes into their workflows. • Security and Compliance: o Implement security best practices in both AWS and on-premise environments. o Ensure compliance with industry standards and regulations. o Manage secrets and credentials using tools like AWS Secrets Manager or HashiCorp Vault. • Monitoring and Optimization: o Implement and manage monitoring tools (e.g., Prometheus, Grafana, CloudWatch) to track performance and availability. o Optimize infrastructure for cost, performance, and reliability. o Troubleshoot and resolve issues in both cloud and on-premise environments. • Collaboration and Communication: o Work closely with development, operations, and security teams to deliver high-quality software. o Document processes, configurations, and best practices. o Provide training and support to internal teams as needed. What We Offer: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skills training.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and organise corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can drink coffee or tea with your colleagues over a game of table tennis, and we offer discounts for popular stores and restaurants!
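As a loose illustration of the AWS-side work the listing above describes (VPCs with public and private subnets spread across availability zones), the sketch below inventories that layout with boto3. The region is an assumption, and the code only reads existing resources; actual provisioning would normally live in Terraform or CloudFormation, as the listing itself says.

```python
# Hedged sketch: list each VPC's subnets grouped by availability zone with boto3.
# Region is a placeholder; credentials come from the usual AWS config/env chain.
from collections import defaultdict
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # hypothetical region

def subnets_by_az():
    """Map VPC ID -> availability zone -> list of subnet IDs."""
    layout = defaultdict(lambda: defaultdict(list))
    for subnet in ec2.describe_subnets()["Subnets"]:
        layout[subnet["VpcId"]][subnet["AvailabilityZone"]].append(subnet["SubnetId"])
    return layout

if __name__ == "__main__":
    layout = subnets_by_az()
    for vpc in ec2.describe_vpcs()["Vpcs"]:
        print(vpc["VpcId"], vpc.get("CidrBlock"))
        for az, subnet_ids in layout.get(vpc["VpcId"], {}).items():
            print("  ", az, subnet_ids)
```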
Posted 3 months ago
3 - 5 years
5 - 7 Lacs
Pune
Work from Office
Infrastructure as Code (IaC): Design, implement, and manage infrastructure using IaC tools (e.g., Terraform, CloudFormation, Ansible). Automate infrastructure provisioning and configuration. Ensure infrastructure consistency and reproducibility. Continuous Integration/Continuous Deployment (CI/CD): Design and implement CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or Azure DevOps. Automate build, test, and deployment processes. Optimize CI/CD pipelines for speed and reliability. Containerization and Orchestration: Implement and manage containerization using Docker. Orchestrate containers using Kubernetes or similar platforms. Ensure container security and scalability. Monitoring and Logging: Implement and manage monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack). Proactively identify and resolve performance and availability issues. Set up alerts and notifications for critical system events. Cloud Infrastructure Management: Manage and optimize cloud infrastructure on platforms like AWS, Azure, or GCP. Implement cloud security best practices. Optimize cloud resource utilization and cost. Automation and Scripting: Develop and maintain automation scripts using languages like Python, Bash, or PowerShell. Automate repetitive tasks and processes. Collaboration and Communication: Collaborate with development and operations teams to improve SDLC processes. Communicate effectively with stakeholders. Participate in on-call rotations and incident response. Security: Implement security best practices throughout the SDLC. Perform security audits and vulnerability assessments. Ensure compliance with security policies and regulations.
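For the monitoring and logging duties above, one small, concrete building block is querying Prometheus's HTTP API to see which scrape targets are down. The sketch below is illustrative only; the Prometheus URL is a placeholder, while the `up` metric is the standard per-target health series.

```python
# Minimal sketch: find scrape targets reporting up == 0 via Prometheus's HTTP API.
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def down_targets():
    """Return the label sets of targets currently reporting up == 0."""
    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query", params={"query": "up == 0"}, timeout=10
    )
    resp.raise_for_status()
    payload = resp.json()
    if payload["status"] != "success":
        raise RuntimeError(payload)
    return [sample["metric"] for sample in payload["data"]["result"]]

if __name__ == "__main__":
    for labels in down_targets():
        print("DOWN:", labels.get("job"), labels.get("instance"))
```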
Posted 3 months ago
3 - 8 years
14 - 24 Lacs
Pune, Bengaluru
Hybrid
Site Reliability Engineer. We are seeking a talented and experienced Senior Site Reliability Engineer (SRE) specializing in Operating Systems (OS), Applications, Databases, and Middleware to join Maersk. This role requires deep expertise in implementing SRE practices and driving engagements, with a strong focus on observability, automation, and performance through open-source monitoring tools and problem-solving techniques. The ideal candidate will have substantial experience with Azure Cloud, DC components like compute, storage, and network, Java/JVM, Docker, and Kubernetes, alongside good coding skills. Knowledge of Automation & AIOps will be an added advantage. Job Purpose/Summary: Lead the establishment and implementation of SRE practices across multiple platforms within Maersk to ensure the reliability, availability, and performance of critical systems and services. Define and enforce Service Level Objectives (SLOs), Service Level Indicators (SLIs), and Service Level Agreements (SLAs). Oversee the administration, maintenance, and optimization of operating systems, applications, databases, and middleware. Collaborate closely with development and operations teams to integrate and optimize applications for performance and reliability. Manage and optimize Azure Cloud infrastructure for scalability, performance, and cost-efficiency. Deploy, manage, and monitor Kubernetes clusters to ensure high availability and resilience. Implement and manage observability solutions using open-source tools such as Prometheus and Grafana. Develop and maintain robust monitoring, logging, and alerting systems to proactively identify and resolve issues. Utilize Application Performance Management (APM) tools and practices to continuously monitor and enhance application performance. Drive AIOps initiatives to streamline operations and enhance efficiency through automation and machine learning. Develop automation scripts and tools to reduce manual intervention and ensure consistent deployment and management practices. Lead cross-functional engagements with development, operations, and business teams to foster a culture of reliability and continuous improvement. Provide technical leadership, guidance, and mentorship to team members and stakeholders. What we are looking for: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience. Proven experience as a Site Reliability Engineer or in a similar role. Deep knowledge of operating systems (Linux/Windows), application management, databases (SQL/NoSQL), and middleware. Strong expertise in Azure Cloud services and Kubernetes orchestration. Hands-on experience with observability tools like Prometheus, Grafana, and APM solutions. Proficiency in coding/scripting languages such as Python, Go, Shell, etc. Familiarity with AIOps practices and tools. Excellent problem-solving skills and a proactive approach to addressing issues. Strong communication and collaboration skills. Preferred skills: Certification in Azure or Kubernetes. Experience with other cloud platforms (AWS, Google Cloud). Familiarity with CI/CD pipelines and DevOps practices. Experience with Infrastructure as Code (IaC) tools like Terraform or Ansible. Knowledge of security best practices and compliance standards. We Offer: As an organization with a global presence, joining Maersk is a wonderful and exciting opportunity for you to work with people of diverse talents and backgrounds.
We offer a fast-paced, challenging and truly international atmosphere with activities spread around the globe in Copenhagen, India, London, The Hague and Charlotte. The environment is dynamic with a focus on high performance, results, and respect for our employees. There will be the possibility of continuous professional and personal development and of gaining a professional and social network. As a company, we are committed to growing our people. We will provide you with opportunities that broaden your knowledge and strengthen your professional & technical skills. We operate in a fast-paced environment utilizing modern technologies and a bias toward action. We value customer outcomes and are passionate about using technology to solve problems. We are a diverse team with colleagues from different backgrounds and cultures. We offer the freedom, and responsibility, to shape the setup and the processes we use in our community. We support continuous learning, including through conferences, workshops and meetups. Ideally, you will have (*): Natural bar-raiser: curious and passionate, with a desire to continuously learn more. Bias to action, being familiar with the methods and approaches needed to get things done in a collaborative, lean and fast-moving environment. Respond effectively to complex and ambiguous problems and situations. Simplify: clearly and succinctly convey complex information and ideas to individuals at all levels of the organization. Motivated by goal achievement and continuous improvement, with the enthusiasm and drive to motivate your team and the wider organization.
Posted 3 months ago
6 - 8 years
6 - 10 Lacs
Bengaluru
Work from Office
Roles & Responsibilities: Responsible for developing the DevOps strategy and roadmap for monitoring and maintaining cloud platforms. Work closely with project teams and architects to understand current DevOps maturity and prepare a transformation strategy. Define and standardize DevOps tools and technologies. Promote DevOps best practices and increase DevOps adoption across teams. Monitor infrastructure, including proactive capacity management and replication strategies. Build and operate CI/CD tools and processes. Build and operate a secure cloud and networking environment; responsible for end-to-end platform security and security audits. Operational tasks for maintenance of the platform. 1. IoT Cloud Infrastructure Management: Maintain and scale our AWS-based IoT cloud solution. Configure and manage DynamoDB, DynamoDB Streams, PostgreSQL, IoT Core, and Lambda services to ensure seamless operation. 2. Automation and Deployment: Develop and implement automation scripts and tools for infrastructure provisioning, configuration management, and deployment processes. Build and maintain CI/CD pipelines to facilitate efficient software releases. 3. Monitoring and Optimization: Implement robust monitoring and alerting systems for IoT cloud resources. Analyze system performance metrics and optimize resource utilization for cost-efficiency and performance improvements. 4. Security and Compliance: Ensure the security of infrastructure through best practices and by deploying the proper tools. Stay updated on AWS security best practices and compliance requirements. 5. Troubleshooting and Incident Response: Rapidly diagnose and resolve infrastructure and service-related issues to minimize downtime. Maintain incident response plans for critical scenarios. Skills & Experience: B.E. or B.Tech. with 6-8 years of total experience as a DevOps engineer. Experience with one or more cloud providers like AWS, Azure, Google Cloud, etc. Good knowledge of system monitoring methodologies and tools such as Prometheus, Grafana, etc. Good knowledge of container management technologies like Docker, Kubernetes, etc. In-depth Kubernetes knowledge: creating secure, highly available, fail-safe Kubernetes clusters on premises; knowledge of Ansible or similar tools for this purpose; knowledge of MicroK8s open-source Kubernetes clusters. Knowledge of open-source technologies and production-grade on-prem deployment. Hands-on experience with DevOps CI/CD tools: GitLab CI, Jenkins, GitHub, TFS, Maven, Ansible, Chef, Puppet, SonarQube, Artifactory, PowerShell, etc. Good knowledge of hosting and managing microservices on AWS, with clarity on IaaS and PaaS concepts; know-how on security architecture with VPCs, subnets, and IAM. Experience of working in an agile, distributed environment. Exposure to working in a web-solution environment (with microservice architecture). Understands architectural concepts for distributed systems and cloud technologies. Proven experience in AWS cloud services, particularly DynamoDB, PostgreSQL, IoT Core, and Lambda. Proficient in writing and organizing infrastructure as code (IaC) with Terraform. Good understanding of scripting languages like Python and Node.js. Experience with CI/CD in general and specific knowledge of GitLab. Strong understanding of DevOps principles and practices. Strong problem-solving skills and the ability to work well in a team. Nice to Have: AWS certifications (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect) are a plus. Familiarity with serverless microservices is a plus.
Understanding of security best practices and compliance standards in AWS.
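The DynamoDB Streams plus Lambda combination mentioned in the listing usually boils down to a handler that walks the stream records. The sketch below is a hedged illustration, not this team's actual code; the `deviceId` attribute is invented.

```python
# Hedged sketch: AWS Lambda handler consuming DynamoDB Streams records.
# Attribute names are hypothetical; the event shape is the standard stream record format.
import json

def handler(event, context):
    """Process inserts/updates pushed by a DynamoDB Stream event source mapping."""
    records = event.get("Records", [])
    for record in records:
        event_name = record["eventName"]  # INSERT | MODIFY | REMOVE
        if event_name in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # Attributes arrive in DynamoDB's typed JSON form, e.g. {"S": "..."}.
            device_id = new_image.get("deviceId", {}).get("S")  # hypothetical attribute
            print(json.dumps({"event": event_name, "deviceId": device_id}))
        else:
            print(json.dumps({"event": event_name}))
    return {"processed": len(records)}
```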
Posted 3 months ago
5 - 8 years
10 - 20 Lacs
Bengaluru
Work from Office
Key Responsibilities: Develop and manage deployment processes for ML models in production. Implement CI/CD pipelines for ML workflows. Monitor model performance, reliability, and accuracy. Optimize infrastructure for ML model training and deployment. Ensure security and compliance of ML workflows. Required Experience & Skills: 5+ years in software engineering/DevOps, with 2-3 years in MLOps. Proficiency in Python, CI/CD tools, Docker, Kubernetes, and cloud platforms (AWS/Azure/GCP). Hands-on experience with TensorFlow, PyTorch, or scikit-learn. Strong knowledge of monitoring tools (Prometheus, Grafana, ELK stack).
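One concrete way to cover the "monitor model performance" responsibility is to instrument the serving path with the Prometheus Python client. This is only a sketch with assumed metric names and a dummy model; a real deployment would attach the metrics endpoint to the actual inference service.

```python
# Illustrative sketch: expose prediction count and latency metrics with prometheus_client.
# Metric names, port, and the "model" are placeholders.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Predictions served", ["model_version"])
LATENCY = Histogram("model_inference_seconds", "Inference latency in seconds")

@LATENCY.time()
def predict(features):
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real inference work
    PREDICTIONS.labels(model_version="v1").inc()
    return sum(features)                    # dummy "model"

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics for Prometheus to scrape
    while True:
        predict([random.random() for _ in range(4)])
```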
Posted 3 months ago
10 - 20 years
10 - 20 Lacs
Pune, Trivandrum
Hybrid
Proven experience in an SRE, DevOps, or infrastructure engineering role with a focus on monitoring, automation, and orchestration. Strong knowledge of the networking and security domain, with the ability to critically analyse network designs and propose innovative improvements to enhance performance, reliability, stability, and security. Expertise in monitoring tools (Prometheus, ELK) with the ability to optimize monitoring systems and integrate ML/AI models to improve visibility, anomaly detection, and proactive issue resolution. Extensive hands-on experience with automation tools such as Terraform, Ansible, and Jenkins, along with proficiency in CI/CD pipelines, to efficiently streamline and optimize network operations and workflows. Proficiency in scripting languages (Bash, Python, Go). Proficiency with containerization and orchestration (Docker, Kubernetes). Understanding of cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with microservices architecture and distributed systems. Work closely with developers, QA, and operations teams to foster a DevOps culture focused on security, reliability, and automation. Monitoring & Alerting: • Design, implement, and manage comprehensive monitoring solutions using tools like Prometheus, Grafana, the ELK stack, etc. • Develop and maintain alerting systems that proactively provide insights into system health and performance. • Integrate ML/Gen AI models for anomaly detection, trend analysis, and proactive alerts to enhance observability. • Identify and implement innovative features to improve visibility into system performance and reliability. • Define and track SLIs, SLOs, and SLAs for critical services and ensure continuous compliance. Automation & Infrastructure Management: • Automate infrastructure provisioning and management using tools such as Ansible or Terraform to eliminate manual intervention. • Build and maintain CI/CD pipelines (GitLab CI) to streamline deployments and ensure system consistency. • Implement automated testing and validation processes for infrastructure and applications. Orchestration & Infrastructure as Code: • Leverage containerization and orchestration technologies (Docker, Kubernetes) to manage scalable, resilient, and fault-tolerant services. • Use Infrastructure as Code (IaC) to automate and standardize environment provisioning and configuration management. Networking & Security: • Review network designs and propose enhancements using emerging technologies and industry best practices for efficiency and innovation. • Ensure the security and compliance of infrastructure by implementing best practices in network security, including encryption, firewall management, access controls, and intrusion detection. • Perform regular security audits and vulnerability assessments to identify and mitigate risks. • Monitor network traffic and optimize performance through network tuning and troubleshooting.
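As a toy stand-in for the anomaly-detection bullet (a plain z-score check rather than the ML/Gen AI models the listing refers to), the sketch below pulls a metric's recent history from Prometheus's range API and flags the latest sample if it deviates sharply from the rest. The URL and PromQL query are placeholders.

```python
# Hedged sketch: flag a metric's latest sample as anomalous using a simple z-score.
# Prometheus URL and the PromQL expression are hypothetical.
import statistics
import time
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"
QUERY = 'rate(http_requests_total{job="api"}[5m])'  # hypothetical metric

def recent_values(minutes: int = 60, step: str = "60s"):
    end = time.time()
    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query_range",
        params={"query": QUERY, "start": end - minutes * 60, "end": end, "step": step},
        timeout=10,
    )
    resp.raise_for_status()
    series = resp.json()["data"]["result"]
    return [float(v) for _, v in series[0]["values"]] if series else []

def is_anomalous(values, threshold: float = 3.0) -> bool:
    if len(values) < 10:
        return False
    baseline = values[:-1]
    mean, stdev = statistics.mean(baseline), statistics.pstdev(baseline)
    return stdev > 0 and abs(values[-1] - mean) / stdev > threshold

if __name__ == "__main__":
    print("anomaly" if is_anomalous(recent_values()) else "normal")
```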
Posted 3 months ago
11 - 21 years
30 - 45 Lacs
Mumbai Suburbs, Navi Mumbai, Mumbai (All Areas)
Work from Office
Minimum 11 to 20 years of experience with tools like Azure DevOps, Jenkins, GitLab, GitHub, Docker, Kubernetes, Terraform, and Ansible. Experience with Dockerfiles and pipeline code. Experience automating tasks using Shell/Bash, PowerShell, and YAML. Exposure to .NET, Java, Pro*C, PL/SQL, Oracle/SQL, and Redis. Required Candidate profile: Experience building a DevOps platform from the ground up using such tools on at least 2 projects; implementing the platform for requirements tracking, code management, and release management. Experience with tools such as AppDynamics, Prometheus, Grafana, and the ELK Stack. Perks and benefits: Additional 40% variable pay + mediclaim.
Posted 3 months ago
3 - 8 years
5 - 14 Lacs
Bengaluru
Work from Office
Job Title: Machine Learning Engineer. Responsibilities: As part of this role, you’ll need to comprehend various ML algorithms, their strengths, weaknesses, and how they impact deployment. Algorithm Development: Design, develop, and implement machine learning algorithms to address specific business challenges. Collaborate with cross-functional teams to understand requirements and deliver solutions that meet business objectives. Data Analysis and Modeling: Perform exploratory data analysis to gain insights and identify patterns in large datasets. Build, validate, and deploy machine learning models for predictive and prescriptive analytics. Feature Engineering: Extract and engineer relevant features from diverse datasets to enhance model performance. Optimize and fine-tune models for improved accuracy and efficiency. Model Evaluation and Deployment: Conduct thorough evaluation of machine learning models using appropriate metrics. Deploy models into production environments, ensuring scalability, reliability, and performance. Communicate complex technical concepts to non-technical stakeholders effectively. Technical and Professional Requirements: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. 5-6 years of hands-on experience in developing and deploying machine learning models. Proficiency in programming languages such as Python, R, or Java. Experience with data preprocessing, feature engineering, and model evaluation techniques. Understanding how to set up scalable and reliable environments for ML models is crucial. Mastery of CI/CD and automation tools: continuous integration/continuous deployment knowledge of tools like Azure ML DevOps, Jenkins, GitLab CI/CD, and Kubernetes to automate workflows and ensure smooth deployments. Knowledge of monitoring and logging systems: Azure Monitor, Prometheus, Grafana, and the ELK stack for monitoring and logging. Strong communication and collaboration abilities: as a team lead, the candidate will work closely with data scientists, engineers, and stakeholders. Preferred Skills: Technology -> Machine learning -> Data science. Additional Responsibilities: Understanding of forecasting & revenue ERP environments (e.g., Salesforce & SAP ECC). Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn). Deep understanding of machine learning models. Proficiency in cloud and on-premises infrastructure. Excellent communication skills for aligning goals, resolving conflicts, and driving successful ML projects. Continuous Learning: Stay abreast of the latest developments in machine learning, data science, and related fields. Educational Requirements: Bachelor of Engineering. Service Line: Data & Analytics Unit. * Location of posting is subject to business requirements.
Posted 3 months ago
5 - 9 years
10 - 15 Lacs
Chennai, Bengaluru, Hyderabad
Hybrid
Role & responsibilities: Skills: Azure DevOps CI/CD, Ansible, Linux, Kibana, Grafana, Prometheus. Experience: 5 to 9 years. Location: Chennai/Bangalore. Mode of interview: Face-to-face (F2F).
Posted 3 months ago
11 - 15 years
15 - 20 Lacs
Mumbai
Work from Office
Overview: Accountabilities: As a Platform Reliability Engineer, you will be responsible for the evaluation, selection, and deployment of monitoring and observability technologies. You will manage and maintain monitoring infrastructure, ensuring it aligns with industry best practices. You will collaborate with DevOps, CriticalOps, and IT leadership teams to understand system requirements and design effective monitoring strategies. You will also develop and implement monitoring solutions for infrastructure, applications, and services. Essential Skills/Experience: Degree-level education in computer science, information technology, or a related field. Proven experience as a monitoring and observability engineer or in a similar role. Proficient in developing monitoring capabilities and configuring integration with tools such as Prometheus, Grafana, Splunk, SumoLogic, DataDog, DynaTrace, etc. Strong scripting skills (e.g., Python) for automation in data environments. Familiarity with logging, tracing, and APM (Application Performance Monitoring) solutions. Ability to interpret and communicate technical information into business language. Working knowledge of Agile software development techniques and methodologies. Familiarity with CI/CD pipelines and continuous deployment practices as part of an Agile team. Proficient in all aspects of Agile and SAFe (can lead, teach, and run). Excellent problem-solving skills. Customer engagement experience. Knowledge of data processing frameworks (e.g., Apache Spark) and data storage solutions (e.g., data lakes, warehouses). Experience with data orchestration tools (e.g., Apache Airflow). Understanding of data lineage and metadata management. Good commercial awareness and understanding of the external market. Demonstrated initiative, strong customer orientation, and cross-cultural working. Excellent communication and interpersonal skills.
Posted 3 months ago
4 - 6 years
6 - 11 Lacs
Bengaluru
Work from Office
Job Purpose: You will support streamlining and automating the deployment, monitoring, and management of applications in cloud environments, ensuring scalability, reliability, and efficiency. You will act as the bridge between development and operations by supporting the implementation of continuous integration and continuous deployment (CI/CD) pipelines, optimizing cloud infrastructure, and enhancing system performance and security, towards achieving larger organizational objectives: facilitating seamless collaboration between development and operations teams to enhance the speed and quality of software delivery and its operations. Reporting Manager: Service Delivery Manager. This is an Individual Contributor role. Roles & Responsibilities: Infrastructure Management: o Support design, deployment, and management of scalable, reliable cloud infrastructure. o Understand and utilize Infrastructure as Code (IaC) tools such as Terraform, Ansible, ARM, Bicep, etc. to automate provisioning. o Implement and maintain automated testing frameworks to ensure code quality and application reliability. Continuous Integration and Continuous Deployment (CI/CD): o Support development and maintenance of CI/CD pipelines to automate code testing, integration, and deployment. o Help ensure smooth and fast delivery of applications and updates. Incident Management: o Respond to and help resolve incidents, ensuring minimal downtime and impact on users. o Conduct root cause analysis and implement preventive measures for recurring issues. o Participate in on-call support for critical incidents. Stakeholder Management: o Collaborate with the development team, Cloud Security team, and operations teams to support project requirements, deploying and managing applications and resolving issues. Research and Development: o Stay updated with the latest cloud technologies, tools, and best practices. o Continuously explore and evaluate new solutions to enhance the cloud infrastructure and DevOps processes. Education & Work Experience: Bachelor's degree in Computer Science, Engineering, or a related field. 4-6 years of experience as a DevOps Engineer. In-depth knowledge of cloud infrastructure and services, specifically Azure, AWS, or another cloud platform. Hands-on experience with tools like Git, Jenkins, Docker, Kubernetes, or similar technologies. Strong scripting skills using Bash, Python, PowerShell, or similar languages. Strong knowledge of infrastructure as code (IaC) tools such as Terraform, Ansible, or CloudFormation. Strong knowledge of monitoring and logging tools such as Prometheus, Grafana, or similar technologies. Proficient in Linux and typical Unix tools. Excellent problem-solving skills, analytical skills, strong abstraction capabilities, and the ability to troubleshoot complex issues in a production environment. Knowledge of best practices in disaster recovery planning and execution. Independent and autonomous approach to work. A strong focus on customers and results. Excellent communication and collaboration skills, ability to work in a team, and a professional attitude. Willing to provide on-call support as and when needed.
Posted 3 months ago
4 - 9 years
5 - 11 Lacs
Pune
Work from Office
Job Title: OpenShift Administrator. Job Summary: We are seeking a skilled OpenShift Administrator to manage, maintain, and optimize our OpenShift Container Platform environments. This role involves configuring clusters, managing deployments, and supporting application teams in delivering highly available and scalable containerized applications. The ideal candidate will have hands-on experience with OpenShift and Kubernetes, strong problem-solving skills, and a proactive approach to security and automation. Key Responsibilities: Cluster Management and Administration: Install, configure, and maintain OpenShift Container Platform clusters across various environments (on-premises). Manage nodes, load balancers, networking, and storage within the OpenShift environment. Perform regular upgrades and patches for OpenShift, Kubernetes, and underlying infrastructure components to maintain platform stability and security. Deployment of services on different environments (DEV, UAT, PROD, DR). Knowledge and implementation of Disaster Recovery. Creation of SRs in the portal, follow-up, and closure. Environment Monitoring and Optimization: Monitor cluster performance, resource usage, and system health, ensuring high availability and stability. Set up and configure monitoring and alerting tools like Prometheus and Grafana. Optimize infrastructure and application resources for cost-efficiency and performance in collaboration with development teams. Automation and Scripting: Automate repetitive tasks such as deployments, backups, and maintenance using Bash or other scripting languages. Integrate CI/CD pipelines with OpenShift for streamlined application delivery and continuous deployment (Tekton / ArgoCD). Security and Compliance: Configure Role-Based Access Control (RBAC) to enforce secure access to OpenShift resources. Implement network policies, security context constraints, and other security configurations to protect applications and data. Ensure OpenShift clusters and workloads comply with organizational security policies and industry standards. Troubleshooting and Issue Resolution: Diagnose and resolve platform and application issues related to OpenShift, Kubernetes, containers, networking, and storage. Perform root cause analysis on issues and create action plans to prevent recurrence. Collaborate with cross-functional teams to provide support and guidance on resolving application-level issues. Documentation and Knowledge Sharing: Document environment configurations, standard operating procedures, troubleshooting steps, and best practices. Provide knowledge transfer and training sessions for team members and other stakeholders to facilitate efficient support and development. Collaboration and Support: Work closely with DevOps, development, and infrastructure teams to support their application deployments and containerization efforts. Provide guidance on container best practices, application optimization, and troubleshooting within the OpenShift environment. Act as a point of contact for OpenShift platform issues, working to ensure effective and timely resolutions. Qualifications: Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience). Experience: 1.5+ years of experience in managing OpenShift or Kubernetes platforms. Skills: Proficiency in OpenShift administration, Kubernetes concepts, and containerization using Docker. Strong knowledge of Linux systems administration and networking fundamentals.
Familiarity with monitoring tools (Prometheus, Grafana) and CI/CD tools (Tekton / ArgoCD, GitLab). Security and compliance experience, including RBAC, SELinux, network policies, and security contexts. Working knowledge of image registries (Quay / Nexus). Certifications (Preferred): Red Hat Certified in OpenShift Administration (DO180, DO280). Certified Kubernetes Administrator (CKA). Soft Skills: Excellent problem-solving and analytical abilities. Strong communication and collaboration skills. Ability to work independently and manage multiple projects in a fast-paced environment. Experience: 5 years. Employment: On the payroll of Vinsys; the client will be Saraswat Bank. Location Flexibility: Must visit Vashi, Mumbai as per business requirements. Required Skills: DO322: Installing OpenShift on cloud, virtual, or physical infrastructure. DO180: Red Hat OpenShift Administration I – Operating a Production Cluster. DO280: Red Hat OpenShift Administration II – Operating a Production Kubernetes Cluster.
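Much of the day-to-day troubleshooting described in this listing starts with finding pods that are not healthy. OpenShift serves the standard Kubernetes API, so the official Python client works against it given a valid kubeconfig; the sketch below is a minimal illustration, and in practice `oc`/`kubectl` or Prometheus alerts cover the same ground.

```python
# Hedged sketch: list pods stuck outside the Running/Succeeded phases.
# Uses the official Kubernetes Python client; a valid kubeconfig context is assumed.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            yield pod.metadata.namespace, pod.metadata.name, phase

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```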
Posted 3 months ago
10 - 15 years
15 - 25 Lacs
Bengaluru
Work from Office
10-15 years of experience in backend architecture, API security, and cloud-based microservices. Experience in Spring Boot, API Gateway, OAuth, and Kubernetes (EKS) orchestration. Hands-on experience in CI/CD pipeline automation, DevSecOps best practices, and performance tuning. Required Candidate profile: Strong knowledge of AWS networking, IAM policies, and security compliance. Proven ability to mentor backend developers, optimize system performance, and scale cloud-based architectures.
Posted 3 months ago
8 - 13 years
25 - 30 Lacs
Chennai, Pune, Delhi
Work from Office
Responsibilities: This position supports and transforms existing and new mission-critical and highly visible operational website(s) and applications spanning multiple technology stacks through all phases of the SDLC, while working collaboratively across IT, business, and third-party suppliers from around the globe in a 24x7, fast-paced, Agile-based environment. Skills: Must have 5+ years of experience in DevOps. Familiarity with build processes and any CI/CD tool like GoCD. Continuous monitoring. Centralized logging.
Posted 3 months ago
1 - 6 years
3 - 8 Lacs
Pune
Work from Office
You will be part of the Storage Development business of the Infrastructure organization, with the following key responsibilities: Responsibilities: You will handle the most highly escalated cases from our support/L2 teams to ensure they receive top-level help, providing a better customer experience on their most impactful issues. You will be responsible for providing help to L2 support engineers. You will be part of the Ceph development teams, where you must fix customer-reported issues to ensure our customers and partners receive an enterprise-class product. You will work to exceed customer expectations by providing outstanding sustaining service and ensuring that regular updates are provided to L2 teams. You will need to understand our customers' and partners' needs and work with product management teams on driving these features and fixes directly into the product. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 2+ years of experience working as an L3, sustaining, or development engineer, or directly related experience. Senior-level Linux storage system administration experience, including system installation, configuration, maintenance, scripting via Bash, and utilizing Linux tooling for advanced log debugging. Advanced troubleshooting and debugging skills, with a passion for problem-solving and investigation. Must be able to work and collaborate with a global team and strive to share knowledge with peers. 1+ years of working with Ceph/OpenShift/Kubernetes technologies. Strong scripting (Python, Bash, etc.) and programming skills (C/C++). Able to send upstream patches to fix customer-reported issues. In-depth knowledge of Ceph storage architecture, components, and deployment. Hands-on experience with configuring and tuning Ceph clusters. Understanding of RADOS, CephFS, and RBD (RADOS Block Device). Preferred technical and professional experience: Knowledge of open source development and working experience in open source projects. Certifications related to Ceph storage and performance testing are a plus. Familiarity with cloud platforms (AWS, Azure, GCP) and their storage services. Experience with container orchestration tools such as Kubernetes. Knowledge of monitoring tools (Prometheus, Grafana) and logging frameworks. Ability to work effectively in a collaborative, cross-functional team environment. Knowledge of AI/ML, exposure to Gen AI.
Posted 3 months ago
2 - 7 years
5 - 10 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
a) Containerization - K8s/Docker/OpenShift. b) Maintainability & observability of products using CNI. c) Working knowledge of message queues - RabbitMQ/Kafka/Redis Streams. d) Good experience in scripting languages - any one mandatory - JavaScript/Node.js/Python, etc. e) Understanding of performance engineering, with past experience of scaling a product tenfold. f) Performance engineering fundamentals - APM/context switches/bandwidth, etc. g) Distributed tracing. h) Good knowledge of relational/NoSQL databases, from Oracle/PostgreSQL to Redis/Mongo/Elastic, etc. i) CI/CD pipelines using Jenkins. j) Good knowledge of Ansible. k) Security - SAST/DAST/SCA and its associated tools. l) Containerization security. m) Automation via a shift-left approach. n) Working experience in deployments - canary/blue-green. o) Working experience in cloud environments for both VM and containerized workloads.
Posted 3 months ago
6 - 10 years
18 - 20 Lacs
Chennai, Noida
Hybrid
For the Observability role that we are looking for, you can use the below details as a starting point to find the right resource. The skillsets that we are looking for are as below: 1. Experience in AWS environments. 2. Experience in Kubernetes environments as an administrator. 3. Experience with Linux operating systems. 4. Experience in Python & shell scripting is a must. 5. Experience in Jenkins pipelines. 6. Strong knowledge of DevOps principles. 7. Preferably experience with open-source monitoring tools like Telegraf, Prometheus, Grafana, Loki. 8. Experience in developing dashboards in Grafana using various data sources like Loki, Prometheus, AWS CloudWatch. 9. Experience in using Git / Bitbucket. 10. Knowledge of Agile methodologies. Keywords: DevOps, Docker, AWS, Azure, Kubernetes, pipelines, deployment, Python/Java/any language, Bash, Linux, Jenkins, Jira, Bitbucket.
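Dashboard development in Grafana is usually done in the UI or provisioned from JSON kept in Git, but the HTTP API is handy for automation. Purely as a hedged sketch (placeholder URL and service-account token, and an empty panel list to stay version-agnostic), creating a dashboard programmatically might look like this:

```python
# Rough sketch: create a minimal dashboard via Grafana's /api/dashboards/db endpoint.
# URL, token, and dashboard contents are placeholders.
import requests

GRAFANA = "https://grafana.example.internal"  # hypothetical
API_TOKEN = "glsa_replace_me"                 # hypothetical service-account token

dashboard = {
    "dashboard": {
        "id": None,
        "uid": None,
        "title": "Service health (sketch)",
        "panels": [],  # panels omitted; real ones would target Prometheus/Loki/CloudWatch
    },
    "folderId": 0,
    "overwrite": True,
}

resp = requests.post(
    f"{GRAFANA}/api/dashboards/db",
    json=dashboard,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("url"))
```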
Posted 3 months ago
5 - 8 years
18 - 20 Lacs
Pune
Work from Office
5+ years of experience in a technical support role on a data-based software product, at least at L3 level. Respond to customer inquiries and provide in-depth technical support. Candidate to work during the EMEA time zone (2 PM to 10 PM shift).
Posted 3 months ago
5 - 8 years
25 - 35 Lacs
Chennai
Work from Office
Job Description: Toast is a technology company that specializes in providing a comprehensive all-in-one SaaS product and financial technology solutions tailored for the restaurant industry. Toast offers a suite of tools to help restaurants manage their operations, including point of sale, payment processing, supplier management, digital ordering and delivery, marketing and loyalty, and employee scheduling and team management. The platform is designed to streamline operations, enhance customer experiences, and improve overall efficiency for restaurants. Bready* to make a change? At Toast, we empower the restaurant community to thrive through our innovative SaaS and financial technology solutions. As a Senior Backend Software Engineer, you’ll play a critical role in shaping the backend systems that power our platform. If you’re passionate about building scalable, reliable, and secure backend services and want to make a real impact in the restaurant industry, we’d love to hear from you. Let’s bring your expertise to the table and create solutions that serve millions of customers. About This Roll* (Responsibilities): As a Senior Backend Software Engineer at Toast, you will: Build new products and evolve Toast’s existing product suite to meet global market needs. Lead projects from discovery and development through to rollout. Design, develop, and maintain robust, scalable, and secure backend services and APIs. Collaborate closely with cross-functional teams, including product managers, frontend engineers, and data teams, to deliver impactful solutions. Lead efforts to enhance the performance and reliability of our systems, ensuring high availability and low latency. Contribute to architectural decisions and mentor junior team members. Implement best practices in software development, including testing, code reviews, and CI/CD processes. Address complex technical challenges, such as third-party integrations, data synchronization, and large-scale distributed systems. Stay updated on emerging backend technologies and advocate for their adoption where appropriate. Do you have the right ingredients*? (Requirements): To be successful in this role, you should have: 8+ years of experience in backend development with a strong focus on scalable, distributed systems. Proficiency in one or more backend programming languages, such as Java or Kotlin. Deep understanding of RESTful APIs, microservices architecture, and database design (SQL and NoSQL). Experience with cloud platforms (preferably AWS) and containerization technologies like Docker and Kubernetes. Strong problem-solving skills and a passion for tackling complex technical challenges. Excellent communication skills with a collaborative and team-oriented mindset. Special Sauce* (Nonessential Skills/Nice to Haves): While not required, these skills will make you stand out: Experience with restaurant or fintech systems, including payment processing or POS integrations. Familiarity with observability tools such as Datadog, Splunk, or Prometheus. Knowledge of event-driven architectures and message brokers like Kafka or RabbitMQ. Contributions to open-source projects or active involvement in technical communities. Experience with mentoring and leading engineering initiatives. Work mode: Hybrid (2 days a week in office). Minimum educational qualifications: Any UG.
Posted 3 months ago
8 - 10 years
10 - 15 Lacs
Hyderabad
Work from Office
Applies technical knowledge and problem-solving methodologies to projects of moderate scope, with a focus on improving the data and systems running at scale, and ensures end-to-end monitoring of applications. Resolves most nuances and determines the appropriate escalation path. Build, support, monitor, and automate the web product on Private Cloud infrastructure. Drive initiatives to improve the reliability and stability of web hosting platforms, using data-driven analytics to improve service levels. Collaborates with team members to identify comprehensive service level indicators, and with stakeholders to establish reasonable service level objectives and error budgets with customers. Strong knowledge of one or more infrastructure disciplines such as hardware, networking terminology, databases, storage engineering, deployment practices, integration, automation, scaling, resilience, and performance assessments. Experience with multiple cloud technologies, with the ability to operate in and migrate across public and private clouds (private and public cloud exposure). Working experience with, and understanding of, resiliency, scalability, observability, and monitoring. Understanding of data objects and structures, and ability to write SQL queries based on client tickets, as needed. Experience as an SRE in complex and mission-critical applications involving a multitude of components of varying technical generations. Deep proficiency in reliability, scalability, performance, security, enterprise system architecture, toil reduction, and other site reliability best practices, with the ability to implement these practices within an application or platform. Strong knowledge and experience in observability, monitoring, alerting, and telemetry collection using tools such as CloudWatch, Grafana, Dynatrace, Prometheus, Splunk, etc. Fluency in at least one programming or automation language or framework (e.g., Python, Terraform, Ansible, Java Spring Boot, shell scripting, .NET). Demonstrates a high level of technical expertise within one or more technical domains and proactively identifies and solves technology-related bottlenecks in your areas of expertise. Collaborates with technical experts, key stakeholders, and team members to resolve complex problems. Required Qualifications, Capabilities, and Skills: Formal training or certification on engineering infrastructure disciplines concepts and 6+ years of applied experience.
Posted 3 months ago
6 - 8 years
10 - 12 Lacs
Bengaluru
Work from Office
Experience with infrastructure as code, and with Kubernetes and Kubernetes satellite technologies. Coding experience beyond simple scripts, preferably in Golang. Exposure to infrastructure-level aspects of high-scale, event-based distributed systems, including Kafka, Cassandra & Postgres, including load balancing. Experience with building custom command-line tooling for automating infrastructure decommissioning activities. Experience with CI/CD as code in a GitHub environment. Experience with containerization. Previous success in technical engineering. Experience with monitoring large-scale distributed systems, with OpenTelemetry, Splunk, Prometheus and Grafana skills. Key skills: Kubernetes; Golang or Java; Kafka, Cassandra, Postgres; automation scripts for decommissioning; CI/CD, Git.
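The observability requirement above centres on OpenTelemetry. The listing leans Go/Java, so the Python SDK below is shown purely to sketch the span and exporter wiring; the exporter just prints to stdout, whereas a real setup would export via OTLP to a collector, and the instrumentation name and attributes are invented.

```python
# Illustrative sketch: emit a trace span with the OpenTelemetry Python SDK.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("decomm-tooling")  # hypothetical instrumentation name

def drain_node(node_name: str) -> None:
    with tracer.start_as_current_span("drain-node") as span:
        span.set_attribute("k8s.node.name", node_name)
        # ... cordon the node, evict pods, verify workloads rescheduled ...

if __name__ == "__main__":
    drain_node("worker-01")
```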
Posted 3 months ago
8 - 13 years
25 - 30 Lacs
Chennai, Hyderabad
Work from Office
Primary Skills: Airflow, Autosys, Python. Good to have: Finance domain experience. JD: 1. Design, implement, and maintain Airflow DAGs (Directed Acyclic Graphs) to automate ETL (Extract, Transform, Load) processes and other data workflows. 2. Configure and manage task dependencies, scheduling, retries, and error handling in Airflow. 3. Integrate Apache Airflow with different data sources, processing systems, and storage solutions (e.g., databases, cloud services, data lakes). Responsibilities: 7+ years of experience in managing workflow automation and job scheduling, with at least 3 years focused on Apache Airflow. Strong background in Autosys (preferably 3+ years), including job scheduling, dependency management, and error handling. Experience migrating jobs from Autosys to Airflow and automating complex workflows. Familiarity with Airflow's components, such as operators, sensors, and hooks. Strong knowledge of Python for writing Airflow tasks and custom operators. Experience working with cloud environments (AWS, GCP, Azure). Solid understanding of SQL, databases, and data engineering concepts. Familiarity with containerization technologies (Docker, Kubernetes) is a plus. Experience with monitoring, alerting, and logging systems (e.g., Prometheus, Grafana).
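The first JD item (DAGs with task dependencies, scheduling, retries, and error handling) maps onto a fairly standard pattern. Below is a minimal sketch assuming Airflow 2.x; the dag_id, task bodies, and retry settings are placeholders rather than anything specified in the listing.

```python
# Minimal Airflow 2.x sketch: three Python tasks, a daily schedule, retries,
# and linear dependencies. All names and settings are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling from source systems")   # placeholder

def transform():
    print("applying business rules")       # placeholder

def load():
    print("writing to the warehouse")      # placeholder

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                     # use schedule_interval on older 2.x releases
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```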
Posted 3 months ago
Prometheus is a popular monitoring and alerting tool used in the field of DevOps and software development. In India, the demand for professionals with expertise in Prometheus is on the rise. Job seekers looking to build a career in this field have a promising outlook in the Indian job market.
Cities such as Bengaluru, Hyderabad, Chennai, Pune, and Mumbai are known for their vibrant tech industries and have a high demand for professionals skilled in Prometheus.
The salary range for Prometheus professionals in India varies based on experience levels. Entry-level positions can expect to earn around ₹5-8 lakhs per annum, whereas experienced professionals can earn up to ₹15-20 lakhs per annum.
A typical career path in Prometheus may include roles such as: - Junior Prometheus Engineer - Prometheus Developer - Senior Prometheus Engineer - Prometheus Architect - Prometheus Consultant
As professionals gain experience and expertise, they can progress to higher roles with increased responsibilities.
In addition to Prometheus, professionals in this field are often expected to have knowledge and experience in: - Kubernetes - Docker - Grafana - Time series databases - Linux system administration
Having a strong foundation in these related skills can enhance job prospects in the Prometheus domain.
As you explore opportunities in the Prometheus job market in India, remember to continuously upgrade your skills and stay updated with the latest trends in monitoring and alerting technologies. With dedication and preparation, you can confidently apply for roles in this dynamic field. Good luck!