
1633 Grafana Jobs - Page 46


14.0 - 20.0 years

50 - 70 Lacs

Bengaluru

Hybrid

Overview: As an SRE manager, you are responsible for the availability and reliability of Calix's cloud. At Calix, Site Reliability Engineering combines software and systems engineering to build and run large-scale, distributed, fault-tolerant systems. You will lead a team of Site Reliability Engineers and oversee the reliability, scalability, and maintainability of Calix's critical infrastructure: building and maintaining automation tools, managing on-call rotations, collaborating with development teams, and ensuring systems meet service level objectives (SLOs), all with a focus on continuous improvement and on infrastructure health and stability within the Calix platform, leveraging tools like Terraform, observability frameworks from the Grafana Labs ecosystem, and Google Cloud Platform.

Qualifications:
- Strong experience as an SRE manager with a proven track record of managing large-scale, highly available systems.
- Expertise in cloud computing platforms (preferably Google Cloud Platform).
- Knowledge of core operating system principles, networking fundamentals, and systems management.
- Programming skills in languages like Python and Go.
- Proven experience building and leading SRE teams, including hiring, coaching, and performance management.
- Deep understanding and expertise in building and maintaining scalable open-source monitoring tools and backend storage.
- Experience with incident management processes and best practices.
- Excellent communication and collaboration skills to work with cross-functional teams.
- Knowledge of SRE principles, including error budgets, fault analysis, and reliability engineering concepts.

Education:
- B.S. or M.S. in Computer Science or an equivalent field.

Role & responsibilities
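The error-budget concept this posting lists among its SRE principles can be illustrated with a small arithmetic sketch. The function name, SLO value, and request counts below are invented for illustration only:

```python
# Hypothetical illustration of error-budget arithmetic behind an SLO.
# All numbers and names are made up for this sketch.

def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent.

    slo: availability target, e.g. 0.999 for "three nines".
    """
    allowed_failures = (1.0 - slo) * total_requests  # budget, in requests
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leaves 75% of the budget.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
print(f"{remaining:.0%}")  # → 75%
```

Teams typically alert on the *burn rate* of this budget rather than the raw failure count, which is what makes the error budget useful as a release gate.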

Posted 1 month ago

Apply

4.0 - 6.0 years

27 - 42 Lacs

Chennai

Work from Office

Skills: AKS, Istio service mesh, CI/CD
Shift timing: Afternoon shift
Location: Chennai, Kolkata, Bangalore

- Excellent AKS, GKE, or Kubernetes administration experience.
- Good troubleshooting experience with Istio service mesh and connectivity issues.
- Experience with GitHub Actions or a similar CI/CD tool to build pipelines.
- Working experience on any cloud, preferably Azure or Google, with good networking knowledge.
- Experience with Python or shell scripting.
- Experience building dashboards and configuring alerts using Prometheus and Grafana.
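As a rough illustration of the dashboard and alerting work this role describes, the following sketch mimics what a Prometheus-style `rate()` over a counter does, including handling one counter reset. The sample data is invented; in practice this is a PromQL query against a Prometheus server, not hand-written code:

```python
# Toy version of Prometheus rate(): per-second increase of a monotonic
# counter, treating a drop in value as a process restart (counter reset).
# Sample data is invented for illustration.

def counter_rate(samples: list[tuple[float, float]]) -> float:
    """Per-second rate over (timestamp, counter_value) samples."""
    increase = 0.0
    for (_, v0), (_, v1) in zip(samples, samples[1:]):
        # On a reset the counter restarts from zero, so the whole new
        # value counts as increase, mirroring Prometheus's behaviour.
        increase += (v1 - v0) if v1 >= v0 else v1
    window = samples[-1][0] - samples[0][0]
    return increase / window

# Counter rose 100 -> 150, reset, then rose 10 -> 40 over a 60s window:
samples = [(0, 100), (30, 150), (45, 10), (60, 40)]
print(counter_rate(samples))  # 50 + 10 + 30 = 90 increase over 60s → 1.5
```

A Grafana alert would then compare such a rate (e.g. of `http_requests_total{status="5xx"}`) against a threshold.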

Posted 1 month ago

Apply

0.0 - 1.0 years

0 Lacs

Ahmedabad

Work from Office

Job Title: DevOps Intern
Location: Ahmedabad (Work from Office)
Duration: 3 to 6 months
Start Date: Immediate or as per availability
Company: FX31 Labs

Role Overview: We are looking for a motivated and detail-oriented DevOps Intern to join our engineering team. As a DevOps Intern, you will assist in designing, implementing, and maintaining CI/CD pipelines, automating workflows, and supporting infrastructure deployments across development and production environments.

Key Responsibilities:
- Assist in building and maintaining CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI.
- Help provision and manage cloud infrastructure (AWS, Azure, or GCP).
- Collaborate with developers to automate software deployment processes.
- Monitor and optimize system performance, availability, and reliability.
- Write basic scripts to automate repetitive DevOps tasks.
- Document internal processes, tools, and workflows.
- Support containerization (Docker) and orchestration (Kubernetes) initiatives.

Required Skills:
- Basic understanding of Linux/Unix systems and shell scripting.
- Familiarity with version control systems like Git.
- Knowledge of DevOps concepts like CI/CD, Infrastructure as Code (IaC), and automation.
- Exposure to tools like Docker, Jenkins, and Kubernetes (even a theoretical understanding is a plus).
- Awareness of at least one cloud platform (AWS, Azure, or GCP).
- Strong problem-solving attitude and willingness to learn.

Good to Have:
- Hands-on project or academic experience related to DevOps.
- Knowledge of Infrastructure as Code tools like Terraform or Ansible.
- Familiarity with monitoring tools (Grafana, Prometheus) or logging tools (ELK, Fluentd).

Eligibility Criteria:
- Pursuing or recently completed a degree in Computer Science, IT, or a related field.
- Available to work full-time from the Ahmedabad office for the duration of the internship.

Perks:
- Certificate of Internship & Letter of Recommendation (on successful completion).
- Opportunity to work on real-time projects with mentorship.
- PPO opportunity for high-performing candidates.
- Hands-on exposure to industry-level DevOps tools and cloud platforms.

About FX31 Labs: FX31 Labs is a fast-growing tech company focused on building innovative solutions in AI, data engineering, and product development. We foster a learning-rich environment and aim to empower individuals through hands-on experience in real-world projects.

Posted 1 month ago

Apply

7.0 - 9.0 years

10 - 12 Lacs

Noida, Chennai, Bengaluru

Work from Office

Notice Period: Immediate joiners preferred

Primary Skills:
- AWS Cloud Architecture & Services
- Microservices Development & Architecture
- IoT Platforms and Protocols (MQTT, CoAP, etc.)
- Mobile Application Support (iOS, Android)
- Web Application Support (ReactJS, Angular, VueJS, etc.)
- RESTful API Design and Development
- Containerization (Docker, Kubernetes)
- CI/CD Pipelines
- Monitoring & Logging (CloudWatch, ELK Stack, Grafana)
- Troubleshooting & Performance Tuning

We are seeking an experienced AWS Microservices / IoT / Mobile & Web App Support Engineer to join our growing team in Noida. The ideal candidate will have a strong background in designing and supporting scalable cloud-native solutions and IoT systems, along with hands-on experience providing ongoing support for web and mobile applications.

Key Responsibilities:
- Design, develop, and support microservices-based architecture on AWS
- Provide end-to-end support and maintenance for IoT solutions
- Troubleshoot and optimize performance of mobile & web applications
- Collaborate with DevOps to implement CI/CD pipelines
- Ensure high availability, scalability, and security of applications
- Manage containerized applications (Docker, Kubernetes)
- Proactively monitor system performance and implement improvements
- Work closely with cross-functional teams to resolve incidents and deploy enhancements
- Maintain detailed documentation for architectures and processes

Preferred Qualifications:
- AWS Certification (Solutions Architect, Developer, or SysOps) is a plus
- Experience with AWS IoT Core and Greengrass
- Familiarity with serverless architectures (Lambda, API Gateway)
- Strong understanding of cloud security and IAM policies
- Excellent communication and teamwork skills

Please share your profile with the following details: Current CTC, Expected CTC, Notice Period, Total Experience, Relevant AWS / Microservices / IoT / Mobile & Web App Support Experience, Preferred Location.

Posted 1 month ago

Apply

3.0 - 5.0 years

2 - 6 Lacs

Chennai

Work from Office

Job Description: We are seeking a highly motivated and skilled DevOps Engineer to join our Infrastructure team. The ideal candidate will have a solid background in DevOps practices, automation, CI/CD, and cloud technologies, along with hands-on experience managing both on-premises and cloud environments. This role is crucial to supporting and optimizing our infrastructure, monitoring, and deployment processes across development and production systems.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools such as Jenkins or GitLab CI.
- Develop automation scripts using Python, Bash, or similar scripting languages.
- Manage infrastructure as code using tools like Terraform and Ansible.
- Deploy, monitor, and maintain on-premise DevOps solutions, including Zabbix, log management tools, and internal services.
- Ensure uptime and performance of systems through proactive monitoring and incident management practices.
- Administer and support containerization platforms like Docker, OpenShift, and Kubernetes.
- Collaborate with development and QA teams to streamline deployment and release processes.
- Maintain and monitor cloud environments (AWS, Azure, GCP) and ensure cost-effective, secure, and scalable infrastructure.
- Participate in root cause analysis, implement corrective actions, and document incidents and fixes.
- Ensure systems comply with internal security standards and external regulations.

Required Qualifications:
- Education: Bachelor's or Master's degree (Bac+5 or higher) in Computer Science, Information Technology, or a related field.
- Experience: 3 to 4 years of hands-on experience in DevOps engineering and infrastructure automation.

Technical Skills:
- Strong scripting experience in Python, Bash, or similar.
- Proficiency with CI/CD tools: Jenkins, GitLab CI, etc.
- Solid experience with Terraform, Ansible, and infrastructure-as-code best practices.
- In-depth understanding of cloud platforms (AWS, Azure, GCP).
- Good knowledge of containerization and orchestration tools (Docker, Kubernetes).
- Experience with monitoring and logging tools: Zabbix, Prometheus, Grafana, ELK/EFK stacks, etc.
- Familiarity with incident management workflows and log aggregation tools.
- Strong troubleshooting and problem-solving skills related to network, server, and application issues.

Preferred (Good to Have):
- Cloud certifications (AWS Certified Solutions Architect, Azure Administrator, GCP Associate Engineer, etc.)
- Understanding of network security and compliance frameworks.
- Experience working in Agile environments and participating in sprint ceremonies.
- Proficiency in technical English for documentation and collaboration.

Interested candidates can reach us at careers.tag@techaffinity.com
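As a sketch of the small automation scripts this role calls for (Python or Bash, proactive monitoring), here is a hypothetical retry-with-exponential-backoff health check. The function names, delays, and stubbed check are invented for illustration:

```python
# Hypothetical health-check retry loop with exponential backoff,
# the kind of small automation a DevOps engineer scripts in Python.
import time

def retry_with_backoff(check, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `check` until it returns True, doubling the delay after
    each failure. Returns the number of attempts used."""
    delay = base_delay
    for attempt in range(attempts):
        if check():
            return attempt + 1
        sleep(delay)
        delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
    raise TimeoutError("service did not become healthy")

# With a check that succeeds on the third try (sleeps stubbed out):
results = iter([False, False, True])
print(retry_with_backoff(lambda: next(results), sleep=lambda d: None))  # → 3
```

Injecting `sleep` as a parameter keeps the script testable without actually waiting, a common pattern in automation code.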

Posted 1 month ago

Apply

3.0 - 8.0 years

8 - 16 Lacs

Bengaluru

Work from Office

About KPMG in India: KPMG entities in India are professional services firm(s) affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms and are conversant with local laws, regulations, markets, and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused, and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Job Description:
- Collaborate with stakeholders to understand performance requirements for applications.
- Design and develop a comprehensive performance testing strategy aligned with project goals for implementing transformation solutions.
- Propose and implement transformation solutions covering performance engineering and AI-driven testing.
- Walk through prototypes/demos with the client and get alignment from the client on the ideas.
- Provide technical guidance and mentorship to the PT team on technical challenges and issues.
- Track the status of each transformation theme as part of the roadmap across all applications, identify potential issues upfront, and ensure schedule and quality are not impacted.
- Stay up to date with the latest technology trends and advancements.
- Evaluate and recommend technology stacks, frameworks, and tools.
- Collaborate with product owners, business analysts, and development teams to understand requirements, and assess and provide transformation solutions in the performance engineering area.
- Proficient in Grafana, Prometheus, JMeter, LoadRunner, Dynatrace, Python, InfluxDB, Azure, AKS.
- Excellent communication, collaboration, and problem-solving skills.
- Ability to lead and influence technical discussions.

Equal employment opportunity information: KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability, or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 12 Lacs

Bengaluru

Work from Office

IBM Cloud Computing is a one-stop shop which provides all the cloud solutions and cloud tools industries need. The IBM Cloud portfolio includes infrastructure as a service (IaaS), software as a service (SaaS), and platform as a service (PaaS) offered through public, private, and hybrid cloud delivery models, in addition to the components that make up those clouds. IBM Cloud ensures seamless integration into public and private cloud environments. The infrastructure is secure, scalable, and flexible, providing customized enterprise solutions that have made IBM Cloud the hybrid cloud market leader with our market-leading IaaS and PaaS platforms. The IBM Cloud platform is the public cloud offering from IBM providing services to global enterprises. IBM Cloud is the Cloud for Smarter Business, built on open technology with developer tools, and supports solutions by industry. We run the services and workloads from Watson, Blockchain, Services, Security, and IoT.

Ready to help drive IBM's success in the cloud market? This is your chance to research and learn new cloud-related technology products and services, as well as to design and implement quick cloud-based prototypes while advancing your career in leading-edge technology. As a Site Reliability Engineer you will help build a meaningful engineering discipline, combining software and systems to develop creative engineering solutions to operations problems. Much of the support and development will focus on existing systems, building infrastructure, and reducing work through automation. You'll join a team of curious problem solvers with a diverse set of perspectives who are thinking big and taking risks. In this environment you'll take on relevant projects, supported by an organization that provides the support and mentorship you need to learn and grow.

Required education: Bachelor's Degree
Preferred education: None

Required technical and professional expertise:
- Proficiency in at least one programming language (e.g., Python, Go) with respect to designing, coding, testing, and software delivery.
- Understanding of the software delivery lifecycle using Agile practices.
- Expertise in application, data, and infrastructure disciplines.
- Advanced knowledge of one or more infrastructure components (e.g., networking, storage, compute systems).
- Capable of managing service-level changes to a system or service.
- Hands-on experience with deployment, monitoring, automation, and ops analysis tools such as Prometheus, Elasticsearch, Grafana, Kibana, Splunk, Dynatrace Managed, UiPath, etc.
- Understanding of incident workflow and management, and of ensuring the availability, latency, performance, and scalability of services.
- Collaborates with development teams to define and measure Service Level Objectives (SLOs) and Service Level Indicators (SLIs).
- Software engineering experience using one or more object-oriented programming and/or scripting languages.
- Proficiency in one or more technology domains; may be a cross-domain expert able to solve complex and mission-critical problems within a business or across the firm.
- Excellent communication skills, both verbal and written.
- Able to work with little to no supervision in the technical realm.
- High level of self-motivation.
- Works well in teams of technical and non-technical people.
- Positive attitude toward learning new things, technical and non-technical.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

- Strong Linux or Kubernetes experience
- JFrog Artifactory experience
- Task automation experience in any programming language
- Experience with an observability stack such as Prometheus and Grafana

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Noida

Work from Office

Responsibilities:
- Ensure platform reliability and performance: Monitor, troubleshoot, and optimize production systems running on Kubernetes (EKS, GKE, AKS).
- Automate operations: Develop and maintain automation for infrastructure provisioning, scaling, and incident response.
- Incident response & on-call support: Participate in on-call rotations to quickly detect, mitigate, and resolve production incidents.
- Kubernetes upgrades & management: Own and drive Kubernetes version upgrades, node pool scaling, and security patches.
- Observability & monitoring: Implement and refine observability tools (Datadog, Prometheus, Splunk, etc.) for proactive monitoring and alerting.
- Infrastructure as Code (IaC): Manage infrastructure using Terraform, Terragrunt, Helm, and Kubernetes manifests.
- Cross-functional collaboration: Work closely with developers, DBPEs (Database Production Engineers), SREs, and other teams to improve platform stability.
- Performance tuning: Analyze and optimize cloud and containerized workloads for cost efficiency and high availability.
- Security & compliance: Ensure platform security best practices, incident response, and compliance adherence.

Required education: None
Preferred education: Bachelor's Degree

Required technical and professional expertise:
- Strong expertise in Kubernetes (EKS, GKE, AKS) and container orchestration.
- Experience with AWS, GCP, or Azure, particularly in managing large-scale cloud infrastructure.
- Proficiency in Terraform, Helm, and Infrastructure as Code (IaC).
- Strong understanding of Linux systems, networking, and security best practices.
- Experience with monitoring and logging tools (Datadog, Splunk, Prometheus, Grafana, ELK, etc.).
- Hands-on experience with automation and scripting (Python, Bash, or Go).

Preferred technical and professional experience:
- Experience in incident management and debugging complex distributed systems.
- Familiarity with CI/CD pipelines and release automation.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Pune

Hybrid

We have a one-day process to close the below role at the Qualys office on Sat, 21st June '25, for shortlisted candidates only.

Must-Have Skills: Java, Spring Boot, RESTful APIs, Kubernetes, Grafana, SQL databases

Role & responsibilities: We are seeking a Senior Software Engineer (Java) with 5-7 years of hands-on experience to design, build, and scale backend services and integrations. This role requires a strong foundation in Java, deep knowledge of Spring Boot, and a solid understanding of core programming concepts like threading, locks, and generics. Familiarity with system design, clean code principles, and infrastructure tools like Kubernetes, Prometheus, and Grafana is also important. A background in building integrations or connectors between systems is a strong plus.

Key Responsibilities:
- Develop robust backend applications using Java and Spring Boot.
- Design and implement RESTful APIs, microservices, and backend components.
- Apply a strong understanding of Java core concepts: threading, locks, generics, collections.
- Build and maintain integrations/connectors between internal systems or third-party platforms.
- Contribute to system design and architecture decisions, including scalability, caching, load balancing, and resiliency.
- Follow and advocate for clean code, SOLID, and DRY principles.
- Collaborate with frontend engineers, DevOps, QA, and product managers to ship reliable features.
- Participate in code reviews and provide technical mentorship to peers.
- Monitor application health and performance using tools like Prometheus, Grafana, or the ELK stack.
- Deploy and manage services in containerized environments, with exposure to Kubernetes.

Required Skills & Qualifications:
- 5-7 years of backend development experience using Java (8+).
- Expertise in Spring Boot and building scalable, production-grade systems.
- Strong knowledge of concurrency and multithreading, locks and synchronization mechanisms, and Java generics and data structures.
- Practical experience with REST API design, versioning, and security.
- Familiarity with system design concepts like caching, rate limiting, circuit breakers, and fault tolerance.
- Solid understanding of clean code practices and software design principles (SOLID, DRY).
- Experience with SQL databases (e.g., PostgreSQL, MySQL) and query optimization.
- Comfortable working with Git, Agile workflows, CI/CD, and code review processes.

Nice to Have:
- Experience building integrations/connectors with third-party systems.
- Familiarity with Kubernetes and container orchestration.
- Exposure to monitoring and alerting tools like Prometheus, Grafana, ELK.
- Knowledge of messaging systems (e.g., Kafka, RabbitMQ).
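The circuit-breaker pattern named in the system-design requirements above can be sketched minimally as follows. Class name, thresholds, and behavior details are illustrative, not from any specific library:

```python
# Minimal circuit-breaker sketch: after enough consecutive failures the
# circuit "opens" and calls fail fast until a timeout elapses, at which
# point one trial call is allowed ("half-open"). Names are invented.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Failing fast while a downstream dependency is unhealthy is what keeps one slow service from exhausting threads across the whole system, which is why the pattern appears alongside rate limiting and fault tolerance in the list above.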

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune, Gurugram

Work from Office

In one sentence: We are seeking an experienced Kafka Administrator to manage and maintain our Apache Kafka infrastructure, with a strong focus on deployments within OpenShift and Cloudera environments. The ideal candidate will have hands-on experience with Kafka clusters, container orchestration, and big data platforms, ensuring high availability, performance, and security.

What will your job look like?
- Install, configure, and manage Kafka clusters in production and non-production environments.
- Deploy and manage Kafka on OpenShift using Confluent for Kubernetes (CFK) or similar tools.
- Integrate Kafka with Cloudera Data Platform (CDP), including services like NiFi, HBase, and Solr.
- Monitor Kafka performance and implement tuning strategies for optimal throughput and latency.
- Implement and manage Kafka security using SASL_SSL, Kerberos, and RBAC.
- Perform upgrades, patching, and backup/recovery of Kafka environments.
- Collaborate with DevOps and development teams to support CI/CD pipelines and application integration.
- Troubleshoot and resolve Kafka-related issues in a timely manner.
- Maintain documentation and provide knowledge transfer to team members.

All you need is...
- 5+ years of experience as a Kafka Administrator.
- 2+ years of experience deploying Kafka on OpenShift or Kubernetes.
- Strong experience with the Cloudera ecosystem and its integration with Kafka.
- Proficiency in Kafka security protocols (SASL_SSL, Kerberos).
- Experience with monitoring tools like Prometheus, Grafana, or Confluent Control Center.
- Solid understanding of Linux systems and shell scripting.
- Familiarity with CI/CD tools (Jenkins, GitLab CI, etc.).
- Excellent problem-solving and communication skills.
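To illustrate the partition balancing a Kafka administrator reasons about when sizing topics and consumer groups, here is a toy round-robin assignment of partitions to consumers. Names are invented, and real assignment is performed by Kafka's group coordinator using a configurable assignor:

```python
# Toy round-robin assignment of topic partitions to a consumer group.
# In real Kafka this is done by the group coordinator; this sketch only
# shows why partition count bounds consumer-group parallelism.

def assign_round_robin(partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    assignment = {c: [] for c in consumers}
    for p in range(partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

# 6 partitions across 3 consumers -> 2 partitions each:
print(assign_round_robin(6, ["c0", "c1", "c2"]))
# {'c0': [0, 3], 'c1': [1, 4], 'c2': [2, 5]}
```

With more consumers than partitions, the extras sit idle, which is one reason partition count is a key tuning decision for throughput.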

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune, Gurugram

Work from Office

In one sentence: We are seeking a highly skilled and adaptable Senior Python Developer to join our fast-paced and dynamic team. The ideal candidate is a hands-on technologist with deep expertise in Python and a strong background in data engineering, cloud platforms, and modern development practices. You will play a key role in building scalable, high-performance applications and data pipelines that power critical business functions. You will be instrumental in designing and developing high-performance data pipelines from relational to graph databases, and in leveraging Agentic AI for orchestration. You'll also define APIs using AWS Lambda and containerised services on AWS ECS. Join us on an exciting journey where you'll work with cutting-edge technologies including Generative AI, Agentic AI, and modern cloud-native architectures, while continuously learning and growing alongside a passionate team.

What will your job look like?

Key Attributes: Adaptability & Agility
- Thrive in a fast-paced, ever-evolving environment with shifting priorities.
- Demonstrated ability to quickly learn and integrate new technologies and frameworks.
- Strong problem-solving mindset with the ability to juggle multiple priorities effectively.

Core Responsibilities:
- Design, develop, test, and maintain robust Python applications and data pipelines using Python/PySpark.
- Define and implement smart data pipelines from RDBMS to graph databases.
- Build and expose APIs using AWS Lambda and ECS-based microservices.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, efficient, and scalable code following best practices.
- Troubleshoot, debug, and optimise applications for performance and reliability.
- Contribute to the setup and maintenance of CI/CD pipelines and deployment workflows if required.
- Ensure security, compliance, and observability across all development activities.

All you need is...

Required Skills & Experience:
- Expert-level proficiency in Python with a strong grasp of object-oriented and functional programming.
- Solid experience with SQL and graph databases (e.g., Neo4j, Amazon Neptune).
- Hands-on experience with cloud platforms; AWS and/or Azure is a must.
- Proficiency in PySpark or similar data ingestion and processing frameworks.
- Familiarity with DevOps tools such as Docker, Kubernetes, Jenkins, and Git.
- Strong understanding of CI/CD, version control, and agile development practices.
- Excellent communication and collaboration skills.

Desirable Skills:
- Experience with Agentic AI, machine learning, or LLM-based systems.
- Familiarity with Apache Iceberg or similar modern data lakehouse formats.
- Knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible.
- Understanding of microservices architecture and distributed systems.
- Exposure to observability tools (e.g., Prometheus, Grafana, ELK stack).
- Experience working in Agile/Scrum environments.

Minimum Qualifications:
- 6 to 8 years of hands-on experience in Python development and data engineering.
- Demonstrated success in delivering production-grade software and scalable data solutions.
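The relational-to-graph pipelines this role centers on can be sketched at their simplest: primary-key rows become nodes and foreign keys become edges. The table names, column layout, and output shape below are invented for illustration; a real pipeline would emit Cypher or Gremlin against Neo4j or Neptune rather than plain dicts:

```python
# Hypothetical sketch of an RDBMS-to-graph mapping: each row becomes a
# node, each foreign-key relationship becomes an edge. All names are
# illustrative; real pipelines target Neo4j/Neptune query languages.

def rows_to_graph(customers, orders):
    """customers: [(id, name)]; orders: [(id, customer_id)]."""
    nodes = [{"label": "Customer", "id": cid, "name": name}
             for cid, name in customers]
    nodes += [{"label": "Order", "id": oid} for oid, _ in orders]
    # The orders.customer_id foreign key becomes a PLACED edge.
    edges = [{"type": "PLACED", "from": cust_id, "to": oid}
             for oid, cust_id in orders]
    return nodes, edges

nodes, edges = rows_to_graph([(1, "Acme")], [(10, 1)])
print(len(nodes), len(edges))  # → 2 1
```

The same shape scales to a PySpark job: one pass per table to emit nodes, one pass per foreign key to emit edges.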

Posted 1 month ago

Apply

7.0 - 12.0 years

5 - 13 Lacs

Pune

Hybrid

So, what’s the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It's widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
- Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 8 to 12 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines (advantage).
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components like Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version control applications.
- Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) (advantage).
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support (advantage).
- NICE certification and knowledge of RTI/RTS/APA products (advantage).
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.

Shift: 24x7 rotational shift (including night shift)

Other Required Skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor

Posted 1 month ago

Apply

6.0 - 9.0 years

4 - 9 Lacs

Pune

Hybrid

So, what’s the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, AI and Machine Learning solutions as Neva Discover NICE APA is more than just RPA, it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and Application Server (e.g., Tomcat, Authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms How will you make an impact? Interfacing with various R&D groups, Customer Support teams, Business Partners and Customers Globally to address and resolve product issues. Maintain quality and on-going internal and external communication throughout your investigation. Provide high level of support and minimize R&D escalations. 
Prioritize daily missions/cases and manage critical issues and situations. Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers. Be willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders. Have you got what it takes? Minimum of 5 to 7 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines is an advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML. Knowledge of Windows/Linux hardening such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache HTTP Server, and ActiveMQ. Working and troubleshooting knowledge of SVN/version-control applications. Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
Knowledge of terminal servers (Citrix) is an advantage. Basic understanding of AWS Cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support is an advantage. NICE certification and knowledge of RTI/RTS/APA products is an advantage. Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data. Shift: 24x7 rotational shifts (including night shifts). Other required skills: Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player with the ability to work well in a team-oriented, collaborative environment. Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7556. Reporting into: Tech Manager. Role Type: Individual Contributor
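Several of the duties above reduce to log triage: collecting logs from servers and applications and spotting which component is failing. A minimal sketch follows; the log format and component names are made up for illustration.

```python
from collections import Counter

def error_hotspots(log_lines, threshold=2):
    """Count ERROR entries per component and return components that
    meet or exceed the threshold. Expects lines shaped like
    'TIMESTAMP LEVEL component message' (illustrative format)."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(None, 3)
        if len(parts) >= 3 and parts[1] == "ERROR":
            counts[parts[2]] += 1
    return {comp: n for comp, n in counts.items() if n >= threshold}

logs = [
    "2024-05-01T10:00:01 INFO  tomcat Server started",
    "2024-05-01T10:00:05 ERROR activemq Broker connection refused",
    "2024-05-01T10:00:09 ERROR activemq Broker connection refused",
    "2024-05-01T10:00:12 WARN  tomcat High heap usage",
]
print(error_hotspots(logs))  # {'activemq': 2}
```

In practice the same aggregation is usually done by an ELK or Grafana stack; a script like this is useful for one-off triage on a box without those tools.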

Posted 1 month ago

Apply

5.0 - 7.0 years

8 - 18 Lacs

Pune

Work from Office

Key Responsibilities: Understand the business needs; write and drive roles and responsibilities for the team members, effectively groom them, and make the team self-running. Be highly collaborative with the team and other stakeholders. Understand the compliance and regulatory terms/requirements of the business, learn and adapt to them, ensure the team follows them, and give immediate attention and resolution to system errors that deviate from them. Be flexible about sharing extended duties when the need arises to deliver consistent results. Take initiative on continuous improvement and quality of operations, finding gaps and fixing them to decrease turnaround times. Provide technical support for our incoming tickets from our users, including extensive troubleshooting and root cause assessment. Develop tools to aid operations and maintenance. Bring industry practices for documenting all processes of the team. Should have managed a team for at least 2 years. Should have driven technical projects. Document and maintain system configuration documents. Document and maintain process documents. Desirable Skills: Basic knowledge of Java. Very good knowledge of code repositories (Git, Bitbucket, SVN). Strong skills in MongoDB and MySQL/SQL. Good knowledge of the Linux environment and shell scripting. Experience implementing application monitoring for web services with ELK, Grafana, or Zabbix. Experience partnering with development to enhance release, change, and configuration management orchestration capabilities, employing Puppet, Ansible, Jenkins, and Git. Experience deploying and supporting multiple virtualization environments, including AWS, VMware/vSphere, Vagrant, Docker, VirtualBox, and Kubernetes. Experience with project management tools like JIRA. Excellent written and oral communication skills. Knowledge of incident and escalation practices. Exposure to cloud services like AWS is a big plus.
Monitor system performance related to virtual memory, swap space, disk utilization, CPU utilization, and network-related configuration. Manage and configure FTP, web server (Apache), Samba, and SSH servers. Configure DNS server and client. Knowledge of configuring Dynamic Host Configuration Protocol (DHCP) in a Linux environment. Manage swap configuration. Configure Access Control Lists (ACLs). Install and configure Kernel-based Virtual Machines as per company policy. Experience troubleshooting Linux servers, resolving boot issues, and maintaining servers using rescue mode and single-user mode. Experience: 5-7 years
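The system-performance duties above (virtual memory, swap, disk, CPU) often start with reading /proc. A minimal sketch that parses a /proc/meminfo-style snapshot into utilization percentages; the figures are made up for illustration.

```python
def mem_utilization(meminfo_text):
    """Parse /proc/meminfo-style 'Key: value kB' lines and return
    memory and swap utilization as percentages."""
    kb = {}
    for line in meminfo_text.strip().splitlines():
        key, _, rest = line.partition(":")
        kb[key.strip()] = int(rest.split()[0])
    mem_used = kb["MemTotal"] - kb["MemAvailable"]
    swap_used = kb["SwapTotal"] - kb["SwapFree"]
    return {
        "mem_pct": round(100 * mem_used / kb["MemTotal"], 1),
        "swap_pct": round(100 * swap_used / kb["SwapTotal"], 1),
    }

# Illustrative snapshot; on a real host, read open('/proc/meminfo').read().
sample = """\
MemTotal:       8000000 kB
MemAvailable:   2000000 kB
SwapTotal:      4000000 kB
SwapFree:       3000000 kB
"""
print(mem_utilization(sample))  # {'mem_pct': 75.0, 'swap_pct': 25.0}
```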

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 14 Lacs

Hyderabad

Work from Office

Title: .NET Developer (.NET + OpenShift or Kubernetes) | 4 to 12 years | Bengaluru & Hyderabad. Assess and understand the application implementation while working with architects and business experts. Analyse business and technology challenges and suggest solutions to meet strategic objectives. Build cloud-native applications meeting 12/15-factor principles on OpenShift or Kubernetes. Migrate .NET Core and/or Framework web/API/batch components deployed in PCF Cloud to OpenShift, working independently. Analyse and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues. Design and implement unit test scripts and automation for the same using NUnit to achieve 80% code coverage. Perform back-end code reviews and ensure compliance with Sonar scans, Checkmarx, and Black Duck to maintain code quality. Write functional automation test cases for system integration using Selenium. Coordinate with architects and business experts across the application to translate key requirements. Required Qualifications: 4+ years of experience in .NET Core (3.1 and above) and/or Framework (4.0 and above) development (coding, unit testing, functional automation), implementing microservices, REST APIs, batch/web components, reusable libraries, etc. Proficiency in C# with a good knowledge of VB.NET. Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies, with at least 3 years in OpenShift. Familiarity with cloud-native patterns, microservices, and application modernization strategies. Experience with monitoring and logging tools like Splunk, Log4j, Prometheus, Grafana, ELK Stack, AppDynamics, etc. Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, UDeploy).
Proficiency in databases like MS SQL Server, Oracle 11g/12c, Mongo, and DB2. Experience in integrating front ends with back-end services. Experience working with code-versioning methodology as followed with Git and GitHub. Familiarity with job scheduling through Autosys and PCF batch jobs. Familiarity with scripting languages like shell, and with Helm chart modules. Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise in his/her software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Posted 1 month ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Mysuru

Work from Office

The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Overall 12+ years of experience required. Good exposure to operational aspects (monitoring, automation, remediation), including monitoring tools like New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc. Troubleshooting and documenting root cause analysis and automating incident response. Understands the architecture, the SRE mindset, and the data model. Platform architecture and engineering: the ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks, and design a cloud platform. Preferred technical and professional experience: Effectively communicate with business and technical team members. Creative problem-solving skills and superb communication skills. Telecom domain experience is an added plus.
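The SLA/availability work described above maps directly onto SRE error-budget arithmetic: an availability SLO implies a fixed amount of tolerable downtime per period. A minimal sketch; the SLO figures are illustrative.

```python
def error_budget_minutes(slo_pct, period_days=30):
    """Downtime allowed by an availability SLO over the period, in minutes."""
    total = period_days * 24 * 60
    return round(total * (100 - slo_pct) / 100, 2)

def budget_remaining(slo_pct, downtime_minutes, period_days=30):
    """Fraction of the error budget still unspent (negative = overspent)."""
    budget = error_budget_minutes(slo_pct, period_days)
    return round(1 - downtime_minutes / budget, 3)

# A 99.9% monthly SLO allows 43.2 minutes of downtime.
print(error_budget_minutes(99.9))    # 43.2
print(budget_remaining(99.9, 21.6))  # 0.5
```

Teams typically gate risky changes on the remaining budget: when it approaches zero, reliability work takes priority over feature releases.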

Posted 1 month ago

Apply

3.0 - 8.0 years

5 Lacs

Hyderabad

Work from Office

Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: AWS Operations. Good-to-have skills: NA. Minimum 3 years of experience is required. Educational Qualification: 15 years of full-time education. Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in troubleshooting and optimizing applications to enhance performance and user experience, while adhering to best practices in software development. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Assist in the documentation of application processes and workflows. Engage in continuous learning to stay updated with the latest technologies and methodologies. Quickly identify, troubleshoot, and fix failures to minimize downtime. Ensure the SLAs and OLAs are met within the timelines so that operational excellence is achieved.
Professional & Technical Skills: Must-have skills: Proficiency in AWS Operations. Strong understanding of cloud architecture and services. Experience with application development frameworks and tools. Familiarity with DevOps practices and CI/CD pipelines. Ability to troubleshoot and resolve application issues efficiently. Strong understanding of cloud networking concepts, including VPC design, subnets, routing, and security groups, and implementing scalable solutions using AWS Elastic Load Balancer (ALB/NLB). Practical experience in setting up and maintaining observability tools such as Prometheus, Grafana, CloudWatch, and the ELK stack for proactive system monitoring and alerting. Hands-on expertise in containerizing applications using Docker and deploying/managing them in orchestrated environments such as Kubernetes or ECS. Proven experience designing, deploying, and managing cloud infrastructure using Terraform, including writing reusable modules and managing state across environments. Good problem-solving skills: the ability to quickly identify, analyze, and resolve issues is vital. Effective communication: strong communication skills are necessary for collaborating with cross-functional teams and documenting processes and changes. Time management: efficiently managing time and prioritizing tasks is vital in operations support. The candidate should have a minimum of 3 years of experience in AWS Operations. Additional Information: This position is based at our Hyderabad office. 15 years of full-time education is required. Qualification: 15 years of full-time education

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Contribute to backend feature development in a microservices-based application using Java or GoLang. Develop and integrate RESTful APIs, connecting backend systems to frontend or external services. Collaborate with senior engineers to understand technical requirements and implement maintainable solutions. Participate in code reviews, write unit/integration tests, and support debugging efforts. Gain hands-on experience with CI/CD pipelines and containerized deployments (Docker, basic Kubernetes exposure). Support backend operations including basic monitoring, logging, and troubleshooting under guidance. Engage in Agile development practices, including daily stand-ups, sprint planning, and retrospectives. Demonstrate a growth mindset by learning cloud technologies, tools, and coding best practices from senior team members. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 3+ years of backend development experience using Java, J2EE, and/or GoLang. Hands-on experience building or supporting RESTful APIs and integrating backend services. Foundational understanding of Postgres or other relational databases, including basic query writing and data access patterns. Exposure to microservices principles and containerization using Docker. Basic experience with CI/CD pipelines using tools like Git, GitHub Actions, or Jenkins. Familiarity with backend monitoring/logging tools such as the ELK Stack or Grafana is a plus. Exposure to cloud platforms like AWS or Azure, and the ability to deploy/test services in cloud environments under guidance. Knowledge of writing unit tests and basic use of testing tools like JUnit or RestAssured. Exposure to Agile software development processes like Scrum or Kanban. Good communication skills, strong problem-solving skills, and a willingness to collaborate with team members and learn from senior developers.
Preferred technical and professional experience Exposure to microservices architecture and understanding of modular backend service design. Basic understanding of secure coding practices and awareness of common vulnerabilities (e.g., OWASP Top 10). Familiarity with API security concepts like OAuth2, JWT, or simple authentication mechanisms. Awareness of DevSecOps principles, including interest in integrating security into CI/CD workflows. Introductory knowledge of cryptographic concepts (e.g., TLS, basic encryption) and how they're applied in backend systems. Willingness to learn and work with Java security libraries and compliance-aware coding practices. Exposure to scripting with Shell, Python, or Node.js for backend automation or tooling is a plus. Enthusiasm for working on scalable systems, learning cloud-native patterns, and improving backend reliability.
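The preferred skills above mention JWT-based API security. A minimal sketch of inspecting a token's claims (decoding only, deliberately without signature verification; the claim names and demo token are illustrative):

```python
import base64
import json

def jwt_claims(token):
    """Decode the claims segment of a JWT for inspection.

    This does NOT verify the signature -- never trust unverified claims."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def b64url(obj):
    """Base64url-encode a JSON object without padding, as JWTs do."""
    raw = base64.urlsafe_b64encode(json.dumps(obj).encode()).decode()
    return raw.rstrip("=")

# Build a demo token (header.payload.signature); the signature is a placeholder.
demo = ".".join([b64url({"alg": "HS256", "typ": "JWT"}),
                 b64url({"sub": "svc-backend", "scope": "read"}),
                 "sig-placeholder"])
print(jwt_claims(demo))  # {'sub': 'svc-backend', 'scope': 'read'}
```

In a real service, verification (signature, `exp`, `aud`) would be delegated to a maintained JWT library rather than hand-rolled.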

Posted 1 month ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do In this vital role you will be responsible for assisting the senior Software Engineer in the team with designing, developing, and maintaining software applications and solutions that meet business needs and ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions and automating operations, monitoring system health, and responding to incidents to minimize downtime. Roles & Responsibilities: Assist Senior Engineers to support complex software projects from conception to deployment. Assist in managing software delivery scope, risk, and timeline. Contribute to both front-end and back-end development using cloud technology. Provide suggestions to develop innovative solutions using generative AI technologies. Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Assist development team to identify and resolve technical challenges effectively. Stay updated with the latest trends and advancements. Work closely with product team, business team, and other stakeholders. Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements. Analyze and understand the functional and technical requirements of applications, solutions, and systems and translate them into software architecture and design specifications. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Identify and resolve software bugs and performance issues. Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time. Work on integrating with other systems and platforms to ensure seamless data flow and functionality. 
Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Bachelor's degree and 0 to 3 years of experience in Computer Science, IT, or a related field, OR a diploma and 4 to 7 years of experience in Computer Science, IT, or a related field. Must-Have Skills: Understanding of the pros and cons of various cloud services within well-architected cloud design principles. Hands-on experience with full-stack software development. Proficient in programming languages such as Python (preferred), JavaScript, and SQL/NoSQL. Problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills. Knowledge of API integration, serverless, and microservices architectures. Knowledge of SQL/NoSQL databases. Knowledge of website development and an understanding of website localization processes, which involve adapting content to fit cultural and linguistic contexts. Preferred Qualifications: Good-to-Have Skills: Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk). Experience with data processing tools like Hadoop, Spark, or similar. Professional Certifications: Relevant certifications such as CISSP, CompTIA Network+, or MCSE (preferred). Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Shift Information: This position requires you to work a later shift and may be assigned a second- or third-shift schedule.
Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 15 Lacs

Hyderabad, Secunderabad

Work from Office

Hands-on experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Knowledge of Terraform, CloudFormation, or other infrastructure automation tools. Experience with Docker, and basic knowledge of Kubernetes. Familiarity with monitoring/logging tools such as CloudWatch, Prometheus, Grafana, ELK.

Posted 1 month ago

Apply

6.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 8+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD. Collaborate cross-functionally. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Implement monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuration and management of databases such as MySQL and Mongo. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Have the technical skills to review, verify, and validate the software code developed in the project. Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2-4 pm
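The backup duties above pair RDS snapshots with S3 lifecycle rules. The dict below sketches a lifecycle configuration in the shape boto3's `put_bucket_lifecycle_configuration` accepts; the rule ID and prefix are illustrative, and no AWS call is made here.

```python
# Shape mirrors the S3 lifecycle configuration API; names are illustrative.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-db-backups",
            "Filter": {"Prefix": "backups/rds/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

def transition_schedule(config):
    """Flatten rules into (rule id, day, storage class) rows for review."""
    rows = []
    for rule in config["Rules"]:
        for t in rule.get("Transitions", []):
            rows.append((rule["ID"], t["Days"], t["StorageClass"]))
    return rows

print(transition_schedule(lifecycle))
# [('expire-db-backups', 30, 'STANDARD_IA'), ('expire-db-backups', 90, 'GLACIER')]
```

With boto3 installed and credentials configured, the same dict would be passed as the `LifecycleConfiguration` argument; keeping it as reviewable data is a common pattern before applying it.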

Posted 1 month ago

Apply

3.0 - 5.0 years

6 - 16 Lacs

Pune

Work from Office

Primary Job Responsibilities: Collaborate with team members to maintain, monitor, and improve data ingestion pipelines on the Data & AI platform. Attend the office 3 times a week for collaborative sessions and team alignment. Drive innovation in ingestion and analytics domains to enhance performance and scalability. Work closely with the domain architect to implement and evolve data engineering strategies. Required Skills: Minimum 5 years of experience in Python development focused on Data Engineering. Hands-on experience with Databricks and Delta Lake format. Strong proficiency in SQL, data structures, and robust coding practices. Solid understanding of scalable data pipelines and performance optimization. Preferred / Nice to Have: Familiarity with monitoring tools like Prometheus and Grafana. Experience using Copilot or AI-based tools for code enhancement and efficiency.
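The ingestion-pipeline work described above commonly includes a validate-and-dedupe step that drops malformed records and keeps only the latest record per key. A minimal, framework-free sketch (in Databricks this would typically be expressed over a DataFrame; field names here are illustrative):

```python
def validate_and_dedupe(records, required=("id", "ts")):
    """Drop records missing required fields; keep the latest record per id.

    Returns (clean_records, rejected_count)."""
    latest = {}
    rejected = 0
    for rec in records:
        if any(rec.get(f) is None for f in required):
            rejected += 1
            continue
        cur = latest.get(rec["id"])
        if cur is None or rec["ts"] > cur["ts"]:
            latest[rec["id"]] = rec
    return list(latest.values()), rejected

rows = [
    {"id": "a", "ts": 1, "v": 10},
    {"id": "a", "ts": 3, "v": 30},   # newer version of 'a' wins
    {"id": "b", "ts": 2, "v": 20},
    {"id": None, "ts": 4, "v": 40},  # rejected: missing id
]
clean, bad = validate_and_dedupe(rows)
print(clean, bad)
```

On Delta Lake the same "latest record per key" semantics is usually achieved with a `MERGE INTO` upsert keyed on the id column.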

Posted 1 month ago

Apply

1.0 - 6.0 years

6 - 13 Lacs

Bengaluru

Work from Office

Position Summary: We are seeking an experienced and highly skilled Lead LogicMonitor Administrator to architect, deploy, and manage scalable observability solutions across hybrid IT environments. This role demands deep expertise in LogicMonitor and a strong understanding of modern IT infrastructure and application ecosystems, including on-premises, cloud-native, and hybrid environments. The ideal candidate will play a critical role in designing real-time service availability dashboards, optimizing performance visibility, and ensuring comprehensive monitoring coverage for business-critical services. Role & Responsibilities: Monitoring Architecture & Implementation: Serve as the subject matter expert (SME) for LogicMonitor, overseeing design, implementation, and continuous optimization. Lead the development and deployment of monitoring solutions that integrate on-premises infrastructure, public cloud (AWS, Azure, GCP), and hybrid environments. Develop and maintain monitoring templates, escalation chains, and alerting policies that align with business service SLAs. Ensure monitoring solutions adhere to industry standards and compliance requirements. Real-Time Dashboards & Visualization: Design and build real-time service availability dashboards to provide actionable insights for operations and leadership teams. Leverage LogicMonitor's APIs and data sources to develop custom visualizations, ensuring a single-pane-of-glass view for multi-layered service components. Collaborate with application and service owners to define KPIs, thresholds, and health metrics. Proficient in interpreting monitoring data and metrics related to uptime and performance. Automation & Integration: Automate onboarding/offboarding of monitored resources using LogicMonitor's REST API, Groovy scripts, and configuration modules. Integrate LogicMonitor with ITSM tools (e.g., ServiceNow, Jira), collaboration platforms (e.g., Slack, Teams), and CI/CD pipelines.
Enable proactive monitoring through synthetic transactions and anomaly detection capabilities. Streamline processes through automation and integrate monitoring with DevOps practices. Operations & Optimization: Perform ongoing health checks, capacity planning, tool version upgrades, and tuning of monitoring thresholds to reduce alert fatigue. Establish and enforce monitoring standards, best practices, and governance models across the organization. Lead incident response investigations, root cause analysis, and post-mortem reviews from a monitoring perspective. Optimize monitoring strategies for effective resource utilization and cost efficiency. Qualification: Minimum Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. Required Skills & Qualifications: 8+ years of total experience. 5+ years of hands-on experience with LogicMonitor, including custom DataSources, PropertySources, dashboards, and alert tuning. Proven expertise in IT infrastructure monitoring: networks, servers, storage, virtualization (VMware, Nutanix), and containerization (Kubernetes, Docker). Strong understanding of cloud platforms (AWS, Azure, GCP) and their native monitoring tools (e.g., CloudWatch, Azure Monitor). Experience in scripting and automation (e.g., Python, PowerShell, Groovy, Bash). Familiarity with observability stacks such as ELK and Grafana is a strong plus. Proficient with ITSM and incident management processes, including integrations with ServiceNow. Excellent problem-solving, communication, and documentation skills. Ability to work collaboratively in cross-functional teams and lead initiatives. Preferred Qualifications: LogicMonitor Certified Professional (LMCA/LMCP) or similar certification. Experience with APM tools (e.g., SolarWinds, AppDynamics, Dynatrace, Datadog), log analytics platforms, and LogicMonitor observability. Knowledge of DevOps practices and CI/CD pipelines.
Exposure to regulatory/compliance monitoring (e.g., HIPAA, PCI, SOC 2). Experience with machine learning or AI-based monitoring solutions. Additional Information Intuitive is an Equal Employment Opportunity Employer. We provide equal employment opportunities to all qualified applicants and employees, and prohibit discrimination and harassment of any type, without regard to race, sex, pregnancy, sexual orientation, gender identity, national origin, color, age, religion, protected veteran or disability status, genetic information or any other status protected under federal, state, or local applicable laws. We will consider for employment qualified applicants with arrest and conviction records in accordance with fair chance laws.
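The automation duties in this posting call for onboarding resources via LogicMonitor's REST API. The sketch below builds an LMv1 Authorization header following LogicMonitor's published scheme (HMAC-SHA256 over verb + epoch + body + resource path); the credentials are dummies, and the details should be checked against current LogicMonitor API documentation before use.

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, verb, resource_path,
                     data="", epoch_ms=None):
    """Build an LMv1 Authorization header for the LogicMonitor REST API.

    Signature = base64(hex(HMAC-SHA256(key, verb + epoch + body + path)))."""
    epoch = str(epoch_ms if epoch_ms is not None else int(time.time() * 1000))
    msg = verb + epoch + data + resource_path
    digest = hmac.new(access_key.encode(), msg.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch}"

# Dummy credentials, fixed epoch for a reproducible example.
header = lmv1_auth_header("AKID", "secret", "GET", "/device/devices",
                          epoch_ms=1700000000000)
print(header.startswith("LMv1 AKID:"))
```

The returned string goes into the `Authorization` header of each request; onboarding a device would then POST to the devices resource with this header attached.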

Posted 1 month ago

Apply

9.0 - 14.0 years

15 - 30 Lacs

Navi Mumbai, Thiruvananthapuram, Mumbai (All Areas)

Work from Office

Dear Candidate, greetings for the day! Brief about Aurionpro: Aurionpro Solutions Limited is a global leader in providing advanced technology solutions with a focus on Banking, Payments, Transit, Data Center Services, and Government sectors, leveraging Enterprise AI to create comprehensive technology for our clients worldwide. Formed in 1997 and headquartered in Mumbai, Aurionpro prides itself on its deep domain knowledge, interconnected IP, global footprint, and a flexible, passionate approach to business. Our client base of over 300 institutions that trust Aurionpro with their mission-critical technology needs is backed by our team of 2500 professionals, making Aurionpro one of the deepest pools of fintech, deep-tech, and AI talent in the industry. It is underpinned by a flexible approach, an emphasis on passion and working across boundaries, and a mentoring, learning culture. This translates into high-growth success: Aurionpro grew more than 30% last FY to cross the $100m barrier. Aurionpro caters to a host of clients across the BFSI, Telecom, Digital Solutions for Government, and Logistics industries. To know more about the organization, you may go through the company website (www.aurionpro.com) or feel free to speak to me. Job Summary: We are seeking an experienced and highly skilled Senior PostgreSQL Database Administrator (DBA) to join our technology team. The ideal candidate will be responsible for the installation, configuration, performance tuning, and ongoing administration of PostgreSQL databases in high-availability, mission-critical environments. This role requires strong analytical skills, a proactive mindset, and the ability to lead database initiatives independently. Key Responsibilities: Administer, maintain, and tune PostgreSQL databases for performance, availability, and scalability. Implement and manage database backups, disaster recovery strategies, and high-availability configurations.
Monitor database performance and proactively address issues related to query optimization, indexing, and resource utilization. Collaborate with development teams to design schemas, write efficient SQL queries, and optimize data models. Ensure database security, user access controls, and compliance with internal policies and industry standards. Conduct capacity planning and anticipate future database needs based on application growth. Automate routine tasks and develop scripts for database maintenance. Required Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience as a PostgreSQL DBA in production environments. Strong proficiency in SQL, PL/pgSQL, and database architecture. Experience with replication, partitioning, and failover mechanisms (e.g., streaming replication, Patroni). Familiarity with PostgreSQL performance tuning tools and query plan analysis. Solid understanding of Linux/Unix operating systems and shell scripting. Experience with monitoring tools like Prometheus, Grafana, or pgBadger. Experience working in cloud environments (AWS, Azure, GCP) is a plus. Knowledge of automation tools (Ansible, Terraform) is a plus. Preferred Skills: Experience with other databases (MySQL, MongoDB, etc.) is a plus. Familiarity with CI/CD pipelines and DevOps practices. Knowledge of containerization (Docker, Kubernetes) is an advantage. Note: Interested candidates can share their resume at the email ID below. Priyanka.jadhav@aurionpro.com
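Monitoring the streaming replication called for above often reduces to comparing WAL positions. A pure-Python sketch of computing lag in bytes from pg_lsn values such as those returned by `pg_current_wal_lsn()` on the primary and `pg_last_wal_replay_lsn()` on a replica (the LSN values below are illustrative):

```python
def lsn_to_bytes(lsn):
    """Convert a pg_lsn value like '16/B374D848' to an absolute byte position.

    The part before '/' is the high 32 bits, the part after is the low 32 bits."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def replication_lag_bytes(primary_lsn, replica_lsn):
    """Bytes of WAL the replica still has to replay."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_lsn)

print(replication_lag_bytes("16/B374D848", "16/B3740000"))  # 55368
```

On modern PostgreSQL the server can do this arithmetic itself via `pg_wal_lsn_diff()`, but the byte layout above is what that function computes.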

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies