
906 Grafana Jobs - Page 32

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

11 - 21 years

30 - 45 Lacs

Mumbai Suburbs, Navi Mumbai, Mumbai (All Areas)

Work from Office

Naukri logo

11 to 20 years' experience with tools such as Azure DevOps, Jenkins, GitLab, GitHub, Docker, Kubernetes, Terraform, and Ansible. Experience writing Dockerfiles and pipeline code, and automating tasks using Shell, Bash, PowerShell, and YAML. Exposure to .NET, Java, Pro*C, PL/SQL, Oracle/SQL, and Redis. Required candidate profile: experience building a DevOps platform from the ground up, using these tools on at least two projects; implementing requirement tracking, code management, and release management on the platform; experience with tools such as AppDynamics, Prometheus, Grafana, and the ELK Stack. Perks and benefits: additional 40% variable pay + mediclaim.

Posted 3 months ago

Apply

10 - 15 years

6 - 10 Lacs

Mumbai

Work from Office


10-15 years of experience in Oracle administration. Experience with database replication technologies such as Data Guard, Always On, and mirroring. Competent in tuning PL/SQL scripts. Proven ability to navigate Linux operating systems and use command-line tools proficiently. Proven, effective, and efficient troubleshooting skills, and the ability to cope well under pressure. Strong organizational skills and practical sense; quick and eager to learn and explore both technical and semi-technical work. An engineering mindset. Preferred skills (an added advantage, but not mandatory): experience with MSSQL, MySQL, or Sybase; experience in infrastructure automation development; experience with monitoring systems and log management/reporting tools (e.g., Loki, Grafana, Splunk); proficiency in Python and/or other shell programming.

Posted 3 months ago

Apply

3 - 8 years

5 - 14 Lacs

Bengaluru

Work from Office


Job Title: Machine Learning Engineer
Responsibilities: As part of this role, you'll need to understand various ML algorithms, their strengths and weaknesses, and how they impact deployment.
Algorithm Development: Design, develop, and implement machine learning algorithms to address specific business challenges. Collaborate with cross-functional teams to understand requirements and deliver solutions that meet business objectives.
Data Analysis and Modeling: Perform exploratory data analysis to gain insights and identify patterns in large datasets. Build, validate, and deploy machine learning models for predictive and prescriptive analytics.
Feature Engineering: Extract and engineer relevant features from diverse datasets to enhance model performance. Optimize and fine-tune models for improved accuracy and efficiency.
Model Evaluation and Deployment: Conduct thorough evaluation of machine learning models using appropriate metrics. Deploy models into production environments, ensuring scalability, reliability, and performance. Communicate complex technical concepts to non-technical stakeholders effectively.
Technical and Professional Requirements: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. 5-6 years of hands-on experience developing and deploying machine learning models. Proficiency in programming languages such as Python, R, or Java. Experience with data preprocessing, feature engineering, and model evaluation techniques. Understanding of how to set up scalable and reliable environments for ML models. Mastery of CI/CD and automation tools: knowledge of tools like Azure ML DevOps, Jenkins, GitLab CI/CD, and Kubernetes to automate workflows and ensure smooth deployments. Knowledge of monitoring and logging systems: Azure Monitor, Prometheus, Grafana, and the ELK stack.
Strong Communication and Collaboration Abilities: As a team lead, the candidate will work closely with data scientists, engineers, and stakeholders.
Preferred Skills: Technology -> Machine learning -> Data science.
Additional Responsibilities: Understanding of forecasting and revenue ERP environments (e.g., Salesforce and SAP ECC). Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn). Deep understanding of machine learning models. Proficiency in cloud and on-premises infrastructure. Excellent communication skills for aligning goals, resolving conflicts, and driving successful ML projects. Continuous Learning: Stay abreast of the latest developments in machine learning, data science, and related fields.
Educational Requirements: Bachelor of Engineering. Service Line: Data & Analytics unit. *Location of posting is subject to business requirements.

Posted 3 months ago

Apply

8 - 12 years

25 - 35 Lacs

Hyderabad

Work from Office


Job Summary: HighRadius is looking for a dynamic Java professional to join our Engineering Team. The role's responsibilities include participating in software development activities, writing clean and efficient code for various applications, and running tests to improve system functionality; writing code that is easily maintainable and highly reliable and that demonstrates knowledge of common programming best practices; and mentoring junior members of the team in delivering sprint stories and tasks.
Key Responsibilities: Demonstrate excellent leadership and hands-on technical, design, and architecture skills, and lead the team to arrive at optimal solutions for business challenges. Review requirements and specifications, and create technical design documents. Estimate tasks and meet milestones and deadlines appropriately. Provide technical guidance and support during the development phase of a project and ensure project delivery. Follow industry best practices for delivering high-quality software on time and to specification. Identify risks or opportunities associated with current or new technology use; plan and execute PoCs as necessary. Strive for continuous improvement of the development process and standards. Good interpersonal communication and organizational skills to contribute as a leading member of global, distributed teams focused on delivering quality, performant, and scalable solutions. Demonstrated ability to rapidly learn new and emerging technologies and to develop a vision of their suitability. Effectively communicate with team members, project managers, clients, and other stakeholders as required.
Skills & Experience Needed: Experience in the payments domain is a must-have. Bachelor's degree required. Experience range: 8+ years. Technology stack: Core Java, Java 8, collections, exception handling, Hibernate, Spring, SQL. Good to have: Ext JS or any UI framework experience, Elasticsearch, cloud architecture (AWS, GCP, Azure). Knowledge of design patterns, Jenkins, Git, Grafana, ELT, JUnit. Deep understanding and experience of architecting and deploying end-to-end scalable web applications. Deep understanding of Scrum/Agile processes, project tracking, monitoring, and risk management.
What you get: Competitive salary. Fun-filled work culture (https://www.highradius.com/culture/). Equal employment opportunities. Opportunity to build with a pre-IPO global SaaS centaur.

Posted 3 months ago

Apply

5 - 9 years

10 - 15 Lacs

Chennai, Bengaluru, Hyderabad

Hybrid


Role & responsibilities
Skills: Azure DevOps CI/CD, Ansible, Linux, Kibana, Grafana, Prometheus
Experience: 5 to 9 years
Location: Chennai/Bangalore
Mode of interview: Face-to-face

Posted 3 months ago

Apply

11 - 15 years

15 - 20 Lacs

Mumbai

Work from Office


Overview and Accountabilities: As a Platform Reliability Engineer, you will be responsible for the evaluation, selection, and deployment of monitoring and observability technologies. You will manage and maintain monitoring infrastructure, ensuring it aligns with industry best practices. You will collaborate with DevOps, CriticalOps, and IT leadership teams to understand system requirements and design effective monitoring strategies. You will also develop and implement monitoring solutions for infrastructure, applications, and services.
Essential Skills/Experience: Degree-level education in computer science, information technology, or a related field. Proven experience as a monitoring and observability engineer or in a similar role. Proficient in developing monitoring capabilities and configuring integrations with tools such as Prometheus, Grafana, Splunk, SumoLogic, DataDog, and DynaTrace. Strong scripting skills (e.g., Python) for automation in data environments. Familiarity with logging, tracing, and APM (Application Performance Monitoring) solutions. Ability to interpret and communicate technical information in business language. Working knowledge of Agile software development techniques and methodologies. Familiarity with CI/CD pipelines and continuous deployment practices as part of an Agile team. Proficient in all aspects of Agile and SAFe (can lead, teach, and run). Excellent problem-solving skills. Customer engagement experience. Knowledge of data processing frameworks (e.g., Apache Spark) and data storage solutions (e.g., data lakes, warehouses). Experience with data orchestration tools (e.g., Apache Airflow). Understanding of data lineage and metadata management. Good commercial awareness and understanding of the external market. Demonstrated initiative, strong customer orientation, and cross-cultural working. Excellent communication and interpersonal skills.

Posted 3 months ago

Apply

3 - 7 years

25 - 40 Lacs

Bengaluru

Work from Office


Help shape the future of mobility. Would you like to join our exciting journey and change the automotive industry? Aptiv is one of the leading automotive suppliers, at the forefront of solving mobility's toughest challenges. As a large technology company, we are looking for new talent for one of our leading tech centers for Artificial Intelligence in Bangalore, India. We offer the chance to work in a challenging technical environment where science is transferred into real products, alongside a fantastic, passionate, international team of technical experts from around the globe developing new sensors, algorithms, and platforms to shape the future of mobility. Want to join us?
Your Role: Work closely with architects and developers to concept and design auto-scaling solutions across the world. Be part of development and operations and help build and enhance a new, groundbreaking CI/CD platform hosted in multiple clouds. Script, configure, and create state-of-the-art solutions in a scalable hybrid cloud environment. Connect and deploy innovative solutions to a game-changing CI platform. Participate in technical discussions with our Agile team. Ensure that all applicable data privacy requirements are met. Apply and consider cost optimization while working in a cloud environment. Follow coding standards and guidelines in the software development process; debug, troubleshoot, and fix bugs.
Your Background: Bachelor's (BE) / Master's (MS) degree in a technical discipline (engineering, computer science, mathematics, physics, or a related field of study). Skills: Linux, TypeScript, Node.js, Angular, MongoDB (NoSQL), Azure, AWS, microservices, CI/CD. 3+ years of experience writing software using scripting languages such as JavaScript/TypeScript and/or Python, preferably on Linux. Experience with cloud infrastructure such as Azure (preferred), AWS, or Google Cloud. Experience with a NoSQL database like MongoDB.
Experience with relational SQL and NoSQL databases. Experience building data analysis and visualization dashboards using tools such as Qlik Sense, Grafana/Kibana, and the ELK stack. Experience with Rust is a bonus; hands-on experience with TypeScript development on the server side.
Why join us? You can grow at Aptiv. Whether you are working towards a promotion, stepping into leadership, considering a lateral career move, or simply expanding your network, you can do it here. Aptiv provides an inclusive work environment where all individuals can grow and develop, regardless of gender, ethnicity, or beliefs. You can have an impact. Safety is a core Aptiv value; we want a safer world for us and our children, one with zero fatalities, zero injuries, and zero accidents. You have support. Our team is our most valuable asset. We ensure you have the resources and support you need to take care of your family and your physical and mental health, with a competitive health insurance package.
Your Benefits at Aptiv: Higher education opportunities (Udacity, Udemy, and Coursera are available for your continuous growth and development); life and accident insurance; a well-being program that includes regular workshops and networking events; access to fitness clubs (T&C apply). Apply today, and together let's change tomorrow!

Posted 3 months ago

Apply

4 - 6 years

6 - 11 Lacs

Bengaluru

Work from Office


Job Purpose: You will support streamlining and automating the deployment, monitoring, and management of applications in cloud environments, ensuring scalability, reliability, and efficiency. You will act as the bridge between development and operations by supporting the implementation of continuous integration and continuous deployment (CI/CD) pipelines, optimizing cloud infrastructure, and enhancing system performance and security, facilitating seamless collaboration between development and operations teams to improve the speed and quality of software delivery and operations.
Reporting Manager: Service Delivery Manager. This is an individual contributor role.
Roles & Responsibilities:
Infrastructure Management: Support the design, deployment, and management of scalable, reliable cloud infrastructure. Understand and use Infrastructure as Code (IaC) tools such as Terraform, Ansible, ARM, and Bicep to automate provisioning. Implement and maintain automated testing frameworks to ensure code quality and application reliability.
Continuous Integration and Continuous Deployment (CI/CD): Support the development and maintenance of CI/CD pipelines to automate code testing, integration, and deployment. Help ensure smooth and fast delivery of applications and updates.
Incident Management: Respond to and help resolve incidents, ensuring minimal downtime and impact on users. Conduct root cause analysis and implement preventive measures for recurring issues. Participate in on-call support for critical incidents.
Stakeholder Management: Collaborate with the development team, cloud security team, and operations teams to support project requirements, deploy and manage applications, and resolve issues.
Research and Development: Stay updated with the latest cloud technologies, tools, and best practices.
Continuously explore and evaluate new solutions to enhance the cloud infrastructure and DevOps processes.
Education & Work Experience: Bachelor's degree in computer science, engineering, or a related field. 4-6 years of experience as a DevOps engineer. In-depth knowledge of cloud infrastructure and services, specifically Azure, AWS, or another cloud platform. Hands-on experience with tools like Git, Jenkins, Docker, Kubernetes, or similar technologies. Strong scripting skills using Bash, Python, PowerShell, or similar languages. Strong knowledge of Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation. Strong knowledge of monitoring and logging tools such as Prometheus, Grafana, or similar technologies. Proficient in Linux and typical Unix tools. Excellent problem-solving and analytical skills, strong abstraction capabilities, and the ability to troubleshoot complex issues in production environments. Knowledge of best practices in disaster recovery planning and execution. An independent and autonomous approach to work. A strong focus on customers and results. Excellent communication and collaboration skills, the ability to work in a team, and a professional attitude. Willing to provide on-call support as needed.

Posted 3 months ago

Apply

7 - 12 years

20 - 35 Lacs

Bengaluru

Remote


Design, automate, and manage CI/CD pipelines, cloud infrastructure, and containerized applications using Docker, Kubernetes, and Terraform. Collaborate with teams to ensure scalable, secure, and efficient deployments.
Required candidate profile: 5+ years in DevOps; expertise in AWS/Azure/GCP, Docker, Kubernetes, Terraform, CI/CD tools, and Git; scripting skills in Python or Bash; cloud certifications; Agile teamwork experience.

Posted 3 months ago

Apply

4 - 9 years

5 - 11 Lacs

Pune

Work from Office


Job Title: OpenShift Administrator
Job Summary: We are seeking a skilled OpenShift Administrator to manage, maintain, and optimize our OpenShift Container Platform environments. This role involves configuring clusters, managing deployments, and supporting application teams in delivering highly available and scalable containerized applications. The ideal candidate will have hands-on experience with OpenShift and Kubernetes, strong problem-solving skills, and a proactive approach to security and automation.
Key Responsibilities:
Cluster Management and Administration: Install, configure, and maintain OpenShift Container Platform clusters across various environments (on-premises). Manage nodes, load balancers, networking, and storage within the OpenShift environment. Perform regular upgrades and patches for OpenShift, Kubernetes, and underlying infrastructure components to maintain platform stability and security. Deploy services across environments (DEV, UAT, PROD, DR). Knowledge and implementation of disaster recovery. Create service requests (SRs) in the portal, follow up, and close them.
Environment Monitoring and Optimization: Monitor cluster performance, resource usage, and system health, ensuring high availability and stability. Set up and configure monitoring and alerting tools like Prometheus and Grafana. Optimize infrastructure and application resources for cost-efficiency and performance in collaboration with development teams.
Automation and Scripting: Automate repetitive tasks such as deployments, backups, and maintenance using Bash or other scripting languages. Integrate CI/CD pipelines with OpenShift for streamlined application delivery and continuous deployment (Tekton/Argo CD).
Security and Compliance: Configure Role-Based Access Control (RBAC) to enforce secure access to OpenShift resources. Implement network policies, security context constraints, and other security configurations to protect applications and data.
Ensure OpenShift clusters and workloads comply with organizational security policies and industry standards.
Troubleshooting and Issue Resolution: Diagnose and resolve platform and application issues related to OpenShift, Kubernetes, containers, networking, and storage. Perform root cause analysis on issues and create action plans to prevent recurrence. Collaborate with cross-functional teams to provide support and guidance on resolving application-level issues.
Documentation and Knowledge Sharing: Document environment configurations, standard operating procedures, troubleshooting steps, and best practices. Provide knowledge transfer and training sessions for team members and other stakeholders to facilitate efficient support and development.
Collaboration and Support: Work closely with DevOps, development, and infrastructure teams to support their application deployments and containerization efforts. Provide guidance on container best practices, application optimization, and troubleshooting within the OpenShift environment. Act as a point of contact for OpenShift platform issues, working to ensure effective and timely resolutions.
Qualifications:
Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Experience: 1.5+ years of experience managing OpenShift or Kubernetes platforms.
Skills: Proficiency in OpenShift administration, Kubernetes concepts, and containerization using Docker. Strong knowledge of Linux systems administration and networking fundamentals. Familiarity with monitoring tools (Prometheus, Grafana) and CI/CD tools (Tekton/Argo CD, GitLab). Security and compliance experience, including RBAC, SELinux, network policies, and security contexts. Working knowledge of image registries (Quay/Nexus).
Certifications (preferred): Red Hat Certified in OpenShift Administration (DO180, DO280); Certified Kubernetes Administrator (CKA).
Soft Skills: Excellent problem-solving and analytical abilities.
Strong communication and collaboration skills. Ability to work independently and manage multiple projects in a fast-paced environment.
Experience: 5 years. Employment: On the payroll of Vinsys; the client will be Saraswat Bank. Location flexibility: Must visit Vashi, Mumbai as per business requirements.
Required skills: DO322: Installing OpenShift on cloud, virtual, or physical infrastructure. DO180: Red Hat OpenShift Administration I – Operating a Production Cluster. DO280: Red Hat OpenShift Administration II – Operating a Production Kubernetes Cluster.
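The RBAC duties this posting describes (secure, role-scoped access to OpenShift/Kubernetes resources) can be illustrated with a minimal manifest sketch; the namespace, group, and role names below are hypothetical, not from the posting.

```yaml
# Sketch: grant a developer group read-only access to pods in one namespace.
# All names (team-a, pod-reader, dev-viewers) are illustrative placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]                      # core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]      # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: team-a
subjects:
  - kind: Group
    name: dev-viewers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A Role scopes permissions to a single namespace; a ClusterRole would be the equivalent cluster-wide construct.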

Posted 3 months ago

Apply

5 - 10 years

10 - 20 Lacs

Pune, Bengaluru, Hyderabad

Work from Office


Cohesity Consultant (Hyderabad & Bangalore locations). 5+ years of experience; 5 days work from office.
Key responsibilities include:
1. Develop and implement automated solutions for data backup, recovery, and protection using the Cohesity and Commvault platforms
2. Create and maintain Python scripts to automate routine tasks and enhance data protection workflows
3. Align automation efforts with the customer's data protection framework, ensuring compliance with privacy and security standards
4. Collaborate with cross-functional teams to identify areas for improvement in data protection processes
Supported workloads – Operating systems: AIX, Red Hat Linux, Red Hat Linux on PowerPC, Solaris, Stratus, Windows. Databases & applications: Cassandra, CockroachDB, DB2, Elasticsearch, MarkLogic, MongoDB, MS SQL, MS SQL on Linux, MySQL, Neo4j, Oracle, Oracle Exadata, Oracle ZDLRA, SAP HANA, SAP Oracle, Sybase, TigerGraph. Storage: Isilon NAS, NetApp NAS.
Required skills:
• Software Development & Automation: Proficiency with Python API development using the Flask/FastAPI frameworks. Experience with RESTful web services and SDK integration. Experience integrating applications with databases such as Oracle, MongoDB, and Cassandra (relational, NoSQL, and document-based). Experience using Grafana, ELK, and Dynatrace for monitoring.
• Backup Infrastructure Knowledge: Understanding of the Cohesity and Commvault data protection product offerings and architecture. Proficiency with accessing Cohesity and Commvault via GUI, command line, and API. Understanding of data backup and recovery workflows for virtual machines, physical servers, databases, S3, and NAS/file storage, and how they are configured on Cohesity and Commvault. Familiarity with test-driven development using the Behave framework for Python.
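The Python/REST automation requirement above can be sketched with the standard library alone. This is a minimal, hedged example of forming an authenticated API request; the host, endpoint path, and bearer-token scheme are placeholders, not actual Cohesity or Commvault API routes.

```python
import urllib.request

def build_backup_status_request(base_url: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for a hypothetical backup-jobs endpoint."""
    # /api/v1/jobs is illustrative only, not a documented vendor route
    req = urllib.request.Request(f"{base_url}/api/v1/jobs", method="GET")
    req.add_header("Authorization", f"Bearer {token}")  # token auth is an assumption
    req.add_header("Accept", "application/json")
    return req

# No network call is made here; the request object is just constructed.
req = build_backup_status_request("https://backup.example.com", "TOKEN123")
```

In practice the request would be sent with `urllib.request.urlopen(req)` (or a client library) and the JSON response parsed to drive the automation workflow.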

Posted 3 months ago

Apply

10 - 15 years

15 - 25 Lacs

Bengaluru

Work from Office


10-15 years of experience in backend architecture, API security, and cloud-based microservices. Experience in Spring Boot, API Gateway, OAuth, and Kubernetes (EKS) orchestration. Hands-on experience in CI/CD pipeline automation, DevSecOps best practices, and performance tuning.
Required candidate profile: Strong knowledge of AWS networking, IAM policies, and security compliance. Proven ability to mentor backend developers, optimize system performance, and scale cloud-based architectures.

Posted 3 months ago

Apply

3 - 4 years

5 - 6 Lacs

Pune

Work from Office


Job Purpose
VoIP Infrastructure Design & Maintenance: Configure, deploy, and maintain FreeSWITCH and Kamailio-based systems. Design scalable and reliable VoIP architectures to support business needs. Implement call routing, DID management, and trunk configurations.
SIP & Call Routing: Develop and manage SIP-based call routing for internal and external communication. Troubleshoot SIP signaling issues using tools like Wireshark or sngrep. Optimize routing rules for least-cost routing (LCR) and high availability.
Monitoring & Performance Optimization: Monitor VoIP systems for performance, security, and uptime. Conduct capacity planning and optimize system resources. Implement call quality monitoring tools (e.g., RTCP, QoS metrics).
Collaboration & Support: Work with cross-functional teams to integrate VoIP systems with CRM and other platforms. Provide Level 2/3 support for VoIP-related issues. Train team members on VoIP best practices and system usage.
Security & Compliance: Implement VoIP security measures to prevent fraud and mitigate risks. Ensure compliance with industry standards and regulations (e.g., GDPR, HIPAA). Configure firewalls and SBCs for secure SIP trunking.
Database & Messaging Integration: Integrate VoIP systems with databases like MongoDB and PostgreSQL. Leverage Redis for caching and RabbitMQ for messaging queues. Implement efficient data storage and retrieval mechanisms to support VoIP services.
Programming & Scripting: Develop custom VoIP features and modules using languages like Golang, Lua, Python, C, and C++. Automate repetitive tasks and processes through scripting. Integrate WebRTC solutions for real-time communication.
Duties and Responsibilities: VoIP infrastructure design & maintenance; monitoring & performance optimization; programming & scripting.
Required Qualifications and Experience
VoIP Expertise: Strong hands-on experience with FreeSWITCH and Kamailio. In-depth knowledge of SIP protocols, RTP, and VoIP troubleshooting. Familiarity with codecs like G.711, G.729, and Opus.
Networking Proficiency: Solid understanding of networking concepts such as NAT, RTP, and STUN/TURN. Experience with firewall configurations and SBCs.
Development Skills: Proficiency in programming languages like Golang, Lua, Python, C, and C++. Experience with WebRTC for real-time communication. Familiarity with database systems like MongoDB and PostgreSQL. Experience with Redis for caching and RabbitMQ for asynchronous messaging.
Tools & Platforms: Familiarity with monitoring tools like Homer, Grafana, or Nagios. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in containerization tools like Docker and orchestration with Kubernetes.
Soft Skills: Strong analytical and problem-solving skills. Excellent communication and documentation abilities. Ability to work collaboratively in a team environment.
Educational Qualifications: B.Tech in IT or CS.

Posted 3 months ago

Apply

1 - 6 years

3 - 8 Lacs

Pune

Work from Office


You will be part of the Storage Development business of the Infrastructure organization, with the following key responsibilities:
Responsibilities: You will handle the cases most highly escalated by our support/L2 teams, ensuring customers receive top-level help on their most impactful issues. You will be responsible for providing help to L2 support engineers. You will be part of the Ceph development teams, where you will fix customer-reported issues to ensure our customers and partners receive an enterprise-class product. You will work to exceed customer expectations by providing outstanding sustaining service and ensuring that regular updates are provided to L2 teams. You will need to understand our customers' and partners' needs and work with product management teams on driving these features and fixes directly into the product.
Required education: Bachelor's degree. Preferred education: Master's degree.
Required technical and professional expertise: 2+ years of experience working as an L3, sustaining, or development engineer, or directly related experience. Senior-level Linux storage system administration experience, including system installation, configuration, maintenance, scripting via Bash, and using Linux tooling for advanced log debugging. Advanced troubleshooting and debugging skills, with a passion for problem-solving and investigation. Must be able to work and collaborate with a global team and strive to share knowledge with peers. 1+ years of working with Ceph/OpenShift/Kubernetes technologies. Strong scripting (Python, Bash, etc.) and programming (C/C++) skills. Able to send upstream patches to fix customer-reported issues. In-depth knowledge of Ceph storage architecture, components, and deployment. Hands-on experience configuring and tuning Ceph clusters. Understanding of RADOS, CephFS, and RBD (RADOS Block Device).
Preferred technical and professional experience: Knowledge of open-source development and working experience in open-source projects. Certifications related to Ceph storage and performance testing are a plus. Familiarity with cloud platforms (AWS, Azure, GCP) and their storage services. Experience with container orchestration tools such as Kubernetes. Knowledge of monitoring tools (Prometheus, Grafana) and logging frameworks. Ability to work effectively in a collaborative, cross-functional team environment. Knowledge of AI/ML and exposure to generative AI.

Posted 3 months ago

Apply

2 - 7 years

5 - 10 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office


a) Containerization: K8s/Docker/OpenShift
b) Maintainability and observability of products using CNI
c) Working knowledge of message queues: RabbitMQ/Kafka/Redis Streams
d) Good experience in scripting languages (any one mandatory): JavaScript/Node.js/Python, etc.
e) Understanding of performance engineering, with past experience scaling a product tenfold
f) Performance engineering fundamentals: APM, context switches, bandwidth, etc.
g) Distributed tracing
h) Good knowledge of relational/NoSQL databases, from Oracle/PostgreSQL to Redis/Mongo/Elastic, etc.
i) CI/CD pipelines using Jenkins
j) Good knowledge of Ansible
k) Security: SAST/DAST/SCA and associated tools
l) Containerization security
m) Automation via a shift-left approach
n) Working experience in deployments: canary/blue-green
o) Working experience in cloud environments for both VM and containerized workloads

Posted 3 months ago

Apply

3 - 8 years

10 - 20 Lacs

Noida

Work from Office


Role & Responsibilities: Design, implement, and manage CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps. Automate infrastructure provisioning and configuration using Terraform, Ansible, or CloudFormation. Deploy, monitor, and manage cloud-based infrastructure on AWS, Azure, or Google Cloud Platform (GCP). Implement and manage containerized applications using Docker and Kubernetes. Ensure high availability, scalability, and security of cloud infrastructure and applications. Monitor system performance and troubleshoot issues using Prometheus, Grafana, the ELK Stack, or Datadog. Maintain version control and best practices in Git repositories (GitHub, GitLab, Bitbucket). Enhance security with DevSecOps best practices, vulnerability scanning, and secrets management. Work closely with development teams to integrate DevOps practices into the software development lifecycle. Continuously improve deployment strategies, system reliability, and automation processes.
Preferred Candidate Profile: 2-6 years of experience in DevOps, cloud infrastructure, or automation. Strong hands-on experience with the AWS, Azure, or GCP cloud platforms. Proficiency in CI/CD tools like Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps. Experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible. Good knowledge of Docker, Kubernetes, and container orchestration. Strong scripting skills in Python, Bash, or PowerShell. Experience with monitoring and logging tools like Prometheus, Grafana, the ELK Stack, or Datadog. Familiarity with networking, security best practices, and Linux system administration. Strong problem-solving and troubleshooting skills. Ability to work in an Agile and collaborative environment.
Perks & Benefits: Competitive salary and performance-based bonuses. On-site travel opportunities. Career development and learning opportunities. Exposure to cutting-edge DevOps technologies and tools. A supportive and dynamic work environment.

Posted 3 months ago

Apply

6 - 10 years

18 - 20 Lacs

Chennai, Noida

Hybrid

For the Observability role we are looking to fill, you can use the details below as a starting point to find the right resource. The skillsets we are looking for are:

  1. Experience in AWS environments
  2. Experience in Kubernetes environments as an administrator
  3. Experience with Linux operating systems
  4. Experience in Python and shell scripting is a must
  5. Experience in Jenkins pipelines
  6. Strong knowledge of DevOps principles
  7. Preferably, experience with open-source monitoring tools like Telegraf, Prometheus, Grafana, and Loki
  8. Experience in developing dashboards in Grafana using various data sources like Loki, Prometheus, and AWS CloudWatch
  9. Experience in using Git/Bitbucket
  10. Knowledge of Agile methodologies

Keywords: DevOps, Docker, AWS, Azure, Kubernetes, Pipelines, Deployment, Python/Java/any language, Bash, Linux, Jenkins, Jira, Bitbucket
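
Point 8 above treats Grafana dashboards as a development task, and dashboards are often generated as code. Below is a rough sketch of emitting a minimal dashboard JSON from Python, to be imported via Grafana's UI or HTTP API; the field names follow Grafana's dashboard JSON model, but the exact schema varies by Grafana version, and the PromQL query is illustrative:

```python
import json

def make_dashboard(title: str, expr: str) -> dict:
    """Build a minimal Grafana dashboard with one Prometheus time-series panel.

    The keys below follow Grafana's dashboard JSON model; treat this as a
    starting sketch, since the exact schema depends on the Grafana version.
    """
    return {
        "title": title,
        "schemaVersion": 39,  # set to match your Grafana version
        "panels": [
            {
                "id": 1,
                "type": "timeseries",
                "title": "CPU usage (non-idle)",
                "datasource": {"type": "prometheus", "uid": "prometheus"},
                "targets": [{"refId": "A", "expr": expr}],
            }
        ],
    }

dashboard = make_dashboard(
    "Node health",
    'rate(node_cpu_seconds_total{mode!="idle"}[5m])',  # illustrative PromQL
)
print(json.dumps(dashboard, indent=2))
```

Keeping dashboards in Git this way also lines up with point 9: dashboards become reviewable, versioned artifacts rather than hand-edited UI state.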

Posted 3 months ago

Apply

5 - 8 years

18 - 20 Lacs

Pune

Work from Office

5+ years of experience in a technical support role on a data-based software product, at least at L3 level. Respond to customer inquiries and provide in-depth technical support. The candidate will work during the EMEA time zone (2 PM to 10 PM shift).

Posted 3 months ago

Apply

0 - 1 years

1 - 3 Lacs

Hyderabad

Work from Office

Requirements:

  • Operations or systems administration experience, particularly on Linux.
  • Passionate about open-source technologies and experienced in dealing with open-source packages.
  • Experience with Docker and/or cloud deployment technologies.
  • Experience with container networking on Docker.
  • Must have knowledge of Linux internals, Shell/AWK/Sed scripting, and Terraform.
  • Experience with application deployment using CI/CD.
  • Experience with monitoring tools like Prometheus, Grafana, Datadog, etc.
  • Experience in OCI, AWS, GCP, Azure and PaaS, IaaS, SaaS.
  • B.E or B.Tech in Computer Science or Electronics/Electrical is a MUST.

Responsibilities:

  • You'll convert software packages into Dockerfile(s).
  • Maintain and map the life cycle of a software component/package using Docker.
  • Implement and monitor tools like GitLab and GitHub.
  • Implement and improve monitoring and alerting; experience with tools like Nagios and Zabbix.
  • Implement cluster and broker technologies using Docker.
  • Implement and manage CI/CD pipelines, preferably using Jenkins/Jenkins X or Argo CD.
  • Implement an auto-scaling system for our Kubernetes nodes.
  • Install various open-source components and debug issues related to software package installation in Docker/Kubernetes environments, ELK, Kafka, and Cassandra.

Posted 3 months ago

Apply

8 - 10 years

10 - 15 Lacs

Hyderabad

Work from Office

Responsibilities and Requirements:

  • Applies technical knowledge and problem-solving methodologies to projects of moderate scope, with a focus on improving the data and systems running at scale, and ensures end-to-end monitoring of applications.
  • Resolves most nuances and determines the appropriate escalation path.
  • Builds, supports, monitors, and automates the web product on private cloud infrastructure.
  • Drives initiatives to improve the reliability and stability of web hosting platforms, using data-driven analytics to improve service levels.
  • Collaborates with team members to identify comprehensive service level indicators, and with stakeholders to establish reasonable service level objectives and error budgets with customers.
  • Strong knowledge of one or more infrastructure disciplines such as hardware, networking terminology, databases, storage engineering, deployment practices, integration, automation, scaling, resilience, and performance assessments.
  • Experience with multiple cloud technologies (private and public), with the ability to operate in and migrate across public and private clouds.
  • Working experience with and understanding of resiliency, scalability, observability, and monitoring.
  • Understanding of data objects and structures, with the ability to write SQL queries based on client tickets, as needed.
  • Experience as an SRE on complex, mission-critical applications involving a multitude of components of varying technical generations.
  • Deep proficiency in reliability, scalability, performance, security, enterprise system architecture, toil reduction, and other site reliability best practices, with the ability to implement these practices within an application or platform.
  • Strong knowledge and experience in observability, monitoring, alerting, and telemetry collection using tools such as CloudWatch, Grafana, Dynatrace, Prometheus, Splunk, etc.
  • Fluency in at least one programming or automation language (e.g., Python, Terraform, Ansible, Java Spring Boot, Shell Scripting, .NET).
  • Demonstrates a high level of technical expertise within one or more technical domains and proactively identifies and solves technology-related bottlenecks in those areas of expertise.
  • Collaborates with technical experts, key stakeholders, and team members to resolve complex problems.

Required Qualifications, Capabilities, and Skills: Formal training or certification in engineering infrastructure disciplines and concepts, and 6+ years of applied experience.
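
The error-budget work mentioned above reduces to simple arithmetic: an SLO target implies a fixed allowance of unavailability per window. A minimal sketch in Python (function names are illustrative, not from any particular SRE toolkit):

```python
def error_budget_minutes(slo_target: float, window_minutes: float) -> float:
    """Minutes of allowed downtime implied by an SLO over a window."""
    return window_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, window_minutes: float,
                     downtime_minutes: float) -> float:
    """Unspent error budget after observed downtime (floored at zero)."""
    budget = error_budget_minutes(slo_target, window_minutes)
    return max(budget - downtime_minutes, 0.0)

# A 99.9% availability SLO over a 30-day window allows ~43.2 minutes of downtime.
monthly_minutes = 30 * 24 * 60
print(round(error_budget_minutes(0.999, monthly_minutes), 1))  # → 43.2
```

Teams typically alert on the burn rate of this budget rather than on raw downtime, so a fast-burning incident pages immediately while slow erosion surfaces in review.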

Posted 3 months ago

Apply

8 - 13 years

25 - 30 Lacs

Chennai, Hyderabad

Work from Office

Primary Skills: Airflow, Autosys, Python
Good to have: Finance domain experience

JD:

  1. Design, implement, and maintain Airflow DAGs (Directed Acyclic Graphs) to automate ETL (Extract, Transform, Load) processes and other data workflows.
  2. Configure and manage task dependencies, scheduling, retries, and error handling in Airflow.
  3. Integrate Apache Airflow with different data sources, processing systems, and storage solutions (e.g., databases, cloud services, data lakes).

Responsibilities:

  • 7+ years of experience in managing workflow automation and job scheduling, with at least 3 years focused on Apache Airflow.
  • Strong background in Autosys (preferably 3+ years), including job scheduling, dependency management, and error handling.
  • Experience migrating jobs from Autosys to Airflow and automating complex workflows.
  • Familiarity with Airflow's components, such as operators, sensors, and hooks.
  • Strong knowledge of Python for writing Airflow tasks and custom operators.
  • Experience working with cloud environments (AWS, GCP, Azure).
  • Solid understanding of SQL, databases, and data engineering concepts.
  • Familiarity with containerization technologies (Docker, Kubernetes) is a plus.
  • Experience with monitoring, alerting, and logging systems (e.g., Prometheus, Grafana).
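
The retry and error-handling work in the JD above maps to Airflow task parameters such as `retries` and `retry_delay`. That behaviour can be illustrated in plain Python without Airflow installed (a toy sketch of the semantics, not Airflow's actual implementation):

```python
import time

def run_with_retries(task, retries=3, retry_delay=0.0):
    """Re-run a callable up to `retries` extra times on failure,
    mimicking Airflow's `retries`/`retry_delay` task parameters
    (toy sketch only; Airflow also tracks state, logs, backoff, etc.)."""
    attempt = 0
    while True:
        try:
            return task()
        except Exception:
            attempt += 1
            if attempt > retries:
                raise  # budget exhausted: surface the failure
            time.sleep(retry_delay)

calls = {"n": 0}
def flaky_extract():
    """Hypothetical extract step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return "rows"

print(run_with_retries(flaky_extract, retries=3))  # → rows
```

In a real DAG the same idea is declarative: the operator is constructed with `retries=3`, and the scheduler handles re-execution and delay between attempts.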

Posted 3 months ago

Apply

10 - 15 years

12 - 15 Lacs

Bengaluru

Work from Office

Roles & Responsibilities:

  • Oversee MES ASRE activity; provide guidance and mentoring.
  • Ensure rapid, automated, and safe deployment of technical solutions and streamline the processes.
  • Handle complex environment requests.
  • Carry out enhancements to maintenance housekeeping scripts as required and monitor DB growth. Put a process in place to schedule periodic purging of the DB after agreeing with relevant stakeholders.
  • Participate in release activity and coordinate with QA/Release teams.
  • Monitor spending and cost attribution.
  • Contribute to the environment management enhancement roadmap; take end-to-end ownership of tasks.
  • Participate in AWS stack deployment, AWS AMI patching, and stack configuration to ensure optimal performance and cost-efficiency using CloudFormation, Git, and CI/CD pipelines.
  • Troubleshoot and resolve Murex environment-specific issues, including infrastructure-related issues, to ensure the system does not hit its thresholds.
  • Troubleshoot and resolve Murex environment-specific issues during regression, failures in EOD runs, and UAT.
  • Address ad hoc requests like warehouse rebuilds and maintenance; perform health/sanity checks; create the XVA engine; and handle environment restores and backups in AWS as per project need.

Experience in: 8 to 12 years of experience in the Murex platform and environment management; experience in AWS Cloud.

Mandatory Skills:

  • AWS Certified DevOps Engineer/Solutions Architect with relevant working experience with CI/CD tools like Git, GitHub Actions, flows, Ansible, and AWS including CDK.
  • Murex environment/support experience with RCA and troubleshooting.
  • Experienced in Python, shell scripting, and web development.
  • Linux/Unix server and Oracle RDS knowledge.
  • Experienced in release and CI/CD processes.
  • Working experience with automation/job scheduling tools such as Autosys and GitHub Actions.
  • Working experience with monitoring tools like Grafana, Splunk, Obstack, and PagerDuty.
  • Working experience with cloud technologies on AWS (CloudFormation and networking are highly desirable).
  • Good communication and organisation skills, working within a DevOps team supporting a wider IT delivery team.

Nice-to-have skills: PL/SQL, programming languages (Java), technical solution design experience, and start-to-end solution ownership.

Qualification: Bachelor's or Master's degree in Engineering.

Posted 3 months ago

Apply

8 - 13 years

10 - 14 Lacs

Hyderabad

Work from Office

Overview: We are seeking a highly skilled Tech Lead to spearhead our API Platform team. This role demands deep expertise in Azure services and advanced skills in Terraform and Kubernetes.

Key Responsibilities and Requirements:

  • Proven experience as a Tech Lead or in a senior role with Azure, Kubernetes, and Terraform.
  • Expert-level knowledge of and experience with Azure services such as AKS, APIM, Application Gateway, Front Door, Load Balancers, Azure SQL, Event Hub, Application Insights, ACR, Key Vault, VNet, Prometheus, Grafana, Storage Account, Monitoring, Notification Hub, VMs, DNS, and more.
  • Expert-level, hands-on experience designing and implementing complex Terraform modules for Azure and Kubernetes environments, incorporating various providers such as azurerm, azapi, kubernetes, and helm.
  • Expert-level, hands-on experience deploying and managing Kubernetes clusters (AKS), with a deep understanding of Helm chart writing, Helm deployments, AKS addons, application troubleshooting, monitoring with Prometheus and Grafana, GitOps, and more.
  • Lead application troubleshooting and performance tuning, and ensure high availability and resilience of APIs deployed in AKS and exposed internally and externally through APIM.
  • Drive GitOps and APIOps practices for continuous integration and deployment strategies.
  • Strong analytical and problem-solving skills with keen attention to detail.
  • Excellent leadership skills and the ability to take ownership and deliver platform requirements at a fast pace.

Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Certifications in Azure, Kubernetes, and Terraform are highly preferred.

Posted 3 months ago

Apply

8 - 13 years

10 - 15 Lacs

Pune

Work from Office

Required:

  1. Experience with API testing using Pytest and Allure Reports.
  2. Experience with Jenkins and New Relic or Grafana.
  3. Infrastructure-wide knowledge of AWS and Azure services.
  4. Python knowledge, with experience using the Flask and Django frameworks as well as core Python functionality.

Desired:

  1. Terraform knowledge.
  2. Kubernetes and pod execution knowledge.
  3. Design patterns.
  4. MySQL database knowledge.

Posted 3 months ago

Apply

18 - 22 years

50 - 60 Lacs

Bengaluru

Work from Office

We are looking for a leader for our Site Reliability Engineering (SRE) and Observability team. As a leader of SRE/Observability, you will create compelling offerings in SRE, Observability, and Resiliency for customers and contribute to business growth.

  • Deliver solutions to our customers, maintain the highest standards, and develop and implement the Observability and SRE team and offerings for Virtusa.
  • Be a strong thought leader in site reliability engineering, observability, operational excellence, and DevOps principles.
  • Strong technical acumen in cloud architecture, observability, performance benchmarking, capacity planning, and reliability tools.
  • Experience with observability platforms, application monitoring tools, and performance analysis techniques.
  • Experience managing and growing technical leaders and teams.
  • Be responsible for building and mentoring a new team of SRE and Observability specialists.

KEY QUALIFICATIONS & EXPERIENCE:

  • 15+ years of IT experience, with a minimum of 5 years of experience in SRE/Observability/monitoring tools.
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field.
  • Expert-level experience in monitoring and logging technologies, both open source and closed source (e.g., AppDynamics, New Relic, Datadog, Prometheus, Grafana, LogicMonitor, SumoLogic, ELK).
  • Experience implementing metrics, logs, and tracing for end-to-end observability.
  • A working knowledge of systems tooling is needed: Terraform, Ansible, Chef, Puppet, Jenkins; designing and implementing CI/CD pipelines; infrastructure provisioning and management.
  • Ability to communicate and coordinate with cross-functional engineering teams across multiple geographic regions.
  • Experience with AIOps and machine learning is highly desirable.
  • Experience with other monitoring tools like Prometheus, Grafana, etc.
  • Experience with observability solutions like Dynatrace, Datadog, Instana, etc. is highly desirable.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Ability to work independently and manage multiple projects simultaneously.
  • Knowledge of IT operations concepts and processes, such as monitoring, incident management, root cause analysis, and remediation.

Posted 3 months ago

Apply

Exploring Grafana Jobs in India

Grafana is a popular tool used for monitoring and visualizing metrics, logs, and other data. In India, the demand for Grafana professionals is on the rise as more companies are adopting this tool for their monitoring and analytics needs.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

Average Salary Range

The average salary range for Grafana professionals in India varies based on experience level:

  • Entry-level: ₹4-6 lakhs per annum
  • Mid-level: ₹8-12 lakhs per annum
  • Experienced: ₹15-20 lakhs per annum

Career Path

A typical career path in Grafana may include roles such as:

  1. Junior Grafana Developer
  2. Grafana Developer
  3. Senior Grafana Developer
  4. Grafana Tech Lead

Related Skills

In addition to Grafana expertise, professionals in this field often benefit from having knowledge or experience in:

  • Monitoring tools such as Prometheus
  • Data visualization tools like Tableau
  • Scripting languages (e.g., Python, Bash)
  • Databases (e.g., SQL, NoSQL)

Interview Questions

  • What is Grafana and how is it used? (basic)
  • Explain the difference between Grafana and Kibana. (basic)
  • How do you create a dashboard in Grafana? (medium)
  • What are plugins in Grafana and how can they be used? (medium)
  • How can you integrate Grafana with Prometheus for monitoring? (advanced)
  • Explain how alerting works in Grafana. (advanced)
  • How do you optimize queries in Grafana for better performance? (advanced)
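
For the Grafana–Prometheus integration question above: Grafana is configured with Prometheus as a data source, and each panel's PromQL expression is sent to Prometheus's HTTP API. Below is a hedged sketch of the request a panel effectively issues; the `/api/v1/query` endpoint is Prometheus's instant-query API, while the base URL and query are illustrative:

```python
import json
from urllib.parse import urlencode

def instant_query_url(base_url: str, promql: str) -> str:
    """Build the Prometheus instant-query URL for a PromQL expression,
    roughly what Grafana issues on behalf of a dashboard panel."""
    return f"{base_url}/api/v1/query?" + urlencode({"query": promql})

print(instant_query_url("http://localhost:9090", "up"))
# → http://localhost:9090/api/v1/query?query=up

# A successful response has roughly this shape (sample, not live data):
sample = json.loads(
    '{"status": "success", "data": {"resultType": "vector", "result": []}}'
)
assert sample["data"]["resultType"] == "vector"
```

Grafana parses the `result` array into series and renders them; range queries for graphs use the analogous `/api/v1/query_range` endpoint with start, end, and step parameters.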

Closing Remark

As the demand for Grafana professionals continues to grow in India, it is essential to stay updated with the latest trends and technologies in this field. Prepare thoroughly for interviews and showcase your skills confidently to land your dream job in Grafana. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies