
6 PromQL Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Network Monitoring Engineer at OSTTRA India, you will be an integral part of the global network infrastructure team responsible for overseeing the office, data center, and cloud network infrastructure. Your role will involve engaging in all aspects of the network infrastructure lifecycle, including design, implementation, and maintenance to ensure optimal network performance. To excel in this role, you should possess a degree in Computer Science or a related field, or demonstrate equivalent knowledge and work experience. With a minimum of 3 years of experience in network operations and architecture, you will implement and configure monitoring solutions such as Grafana and Prometheus to visualize network performance and metrics effectively.

Your responsibilities will include designing and maintaining Grafana dashboards that offer real-time insights into network performance, traffic patterns, and system health. By utilizing Prometheus's querying language (PromQL), you will create custom alerts and notifications based on specific thresholds to ensure proactive monitoring of network health. Additionally, you will integrate AIOps network monitoring tools to leverage artificial intelligence for proactive issue detection and resolution. Collaboration with network engineers to optimize configurations, troubleshoot performance-related issues, and develop comprehensive documentation for network monitoring processes and procedures will be essential. You will also analyze network traffic patterns to provide insights for capacity planning and performance improvements while implementing automation solutions to streamline network monitoring processes and enhance operational efficiency.

This role offers a unique opportunity to work with a team based in Gurgaon and collaborate with colleagues across multiple regions globally. If you are a highly motivated technology professional looking to contribute to the development of high-performance, resilient platforms that process millions of messages daily, then this position at OSTTRA India is the perfect fit for you. Join us at OSTTRA, a market leader in derivatives post-trade processing, where innovation, expertise, and networks converge to address the post-trade challenges of global financial markets. Visit www.osttra.com to learn more about our company and the exciting opportunities that await you.

At S&P Global, we prioritize the well-being and growth of our employees. Our comprehensive benefits package includes healthcare coverage, generous time off, continuous learning resources, family-friendly perks, and various other incentives to support your personal and professional development. We are committed to providing a supportive and inclusive work environment for all our employees. If you are passionate about network engineering and eager to make a significant impact in the financial services industry, apply now to join our dynamic team at OSTTRA India. Your journey towards building a successful career and contributing to the global financial markets starts here.
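The posting above calls for PromQL-based threshold alerts on network health. As a minimal, hedged sketch of what such a check can look like (the Prometheus endpoint, the node_exporter metric, and the 10 errors/second threshold are illustrative assumptions, not details from the listing), here is a small Python script that evaluates a threshold-style PromQL expression against Prometheus's HTTP query API:

```python
# Minimal sketch (not from the posting): evaluating a network-health threshold
# with PromQL via Prometheus's HTTP API. Endpoint and metric are assumptions.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

# Alert-style expression: network interfaces whose transmit error rate
# exceeded 10 errors/second over the last 5 minutes.
QUERY = 'rate(node_network_transmit_errs_total[5m]) > 10'


def failing_interfaces():
    """Return (instance, device, errors/sec) for every series breaching the threshold."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": QUERY},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return [
        (r["metric"].get("instance"), r["metric"].get("device"), float(r["value"][1]))
        for r in result
    ]


if __name__ == "__main__":
    for instance, device, value in failing_interfaces():
        print(f"ALERT: {instance} {device} transmit errors at {value:.1f}/s")
```

In practice the same expression would typically live in a Prometheus alerting rule and be routed through Alertmanager or Grafana alerting rather than an ad hoc script.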

Posted 2 days ago


8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Netskope

Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

Staff / Sr. Staff Software Development Engineer in Performance Test, Data Security (DLP)

About the Role

Please note, this team is hiring across multiple levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Netskope Data Security team is focused on providing unrivaled visibility and real-time data protection. Our products ensure that customers' data is protected wherever it lives. Our engineers architect, design, test, and deliver the next generation of highly scalable, effective, and high-performance data loss prevention solutions. You will have the opportunity to test best-in-class data security solutions using different technologies while working with a talented and experienced team of engineers in a dynamic and collaborative environment.

What's in It for You

We are looking for software engineers who are passionate about testing highly scalable services and are motivated to create an impact on the product and provide value to the customer. Our outstanding QE organization is composed of engineers who love to write code to test, have the aptitude to find ways to break things under load, understand how cloud services should work, and can build performance automation for day-to-day tasks that machines can run. Our SDETs have built Python-based automation tools to do functional, regression, security, performance, and load/scale testing of our services and clients. Your contribution is key to our continued ability to deliver highly effective, scalable, and high-performance data inspection software that large Fortune 500 enterprises use and rely on every day.

What You Will Be Doing

- Design performance test plans, scripts, scenarios, and datasets for enterprise applications based on requirements and acceptance criteria.
- Continue to develop and optimize test procedures to improve the efficiency of test plan execution.
- Identify metrics to monitor and work with development counterparts to obtain the desired metrics.
- Develop expertise in our cloud security solutions, and use that expertise and your experience to help design and qualify the solution as a whole.
- Be a team player who helps the team grow beyond the sum of its parts.
- Work closely with the development and design teams to help create an amazing user experience.
- Identify and communicate risks about our releases.
- Look beyond your specific area of responsibility and ensure our solutions deliver value to our customers.
- Own and make quality decisions for the solution; own the release and be a customer advocate.

Required Skills and Experience

- 8+ years of relevant experience in performance testing (preferably in cloud/distributed systems) with a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative, and working solutions in a collaborative environment.
- Expertise in performance testing tools such as JMeter and Locust.
- Experience working with Grafana/Prometheus and querying in InfluxQL and PromQL.
- Strong analytical skills; able to find a needle in a haystack when analyzing data from various sources and pulling it all together.
- Analyze CPU utilization, memory usage, network usage, and service-level metrics to verify the performance of the application.
- Experience building custom tools or simulators/mocks for performance testing.
- Execute benchmark, load, stress, endurance, and other non-functional tests.
- Monitor application logs to determine system behavior.
- Expertise in writing code in Python, Go, or similar languages.
- Expertise in debugging issues in Docker/Kubernetes environments.
- Attention to detail with strong verbal and written communication skills.
- Strong knowledge of Linux, Windows, and Mac systems.
- Comfortable with ambiguity and taking the initiative regarding issues and decisions.

Education

BS in Computer Science or equivalent technical degree required; MS strongly preferred.

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation, and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
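Since the role asks for querying Grafana/Prometheus in PromQL and analyzing CPU, memory, and network metrics during performance tests, here is a minimal sketch of pulling CPU utilization for a load-test window via a PromQL range query; the endpoint, metric names, and window are illustrative assumptions rather than anything from the posting:

```python
# Minimal sketch: fetch per-instance CPU utilization for a load-test window
# with a PromQL range query. Endpoint and node_exporter metric are assumptions.
import time

import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

# Per-instance CPU utilization (%) derived from node_exporter counters.
CPU_QUERY = (
    '100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m])))'
)


def cpu_during_test(start_ts: float, end_ts: float, step: str = "30s"):
    """Fetch CPU utilization time series covering the given test window."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query_range",
        params={"query": CPU_QUERY, "start": start_ts, "end": end_ts, "step": step},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]


if __name__ == "__main__":
    end = time.time()
    start = end - 30 * 60  # e.g. a 30-minute load-test window
    for series in cpu_during_test(start, end):
        peak = max(float(v) for _, v in series["values"])
        print(f'{series["metric"].get("instance")}: peak CPU {peak:.1f}%')
```

Swapping the query for memory (for example node_memory_MemAvailable_bytes) or for service-level latency histograms follows the same pattern.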

Posted 6 days ago


3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position: Monitoring and Observability Engineer (Prometheus & Grafana Specialist)
Experience: 3+ years (must)
Location: Bengaluru, Karnataka, India
Job Type: Full-time (immediate joiners within 20 days only)
Send your CV to [HIDDEN TEXT]

About the Role

We are seeking a talented Monitoring and Observability Engineer with proven expertise in Prometheus and Grafana. The ideal candidate will have hands-on experience in designing, implementing, and optimizing observability solutions for complex systems, along with advanced skills in Grafana dashboard customization and custom plugin development.

Key Responsibilities

- Design, implement, and maintain robust monitoring and alerting solutions using Prometheus and Grafana for mission-critical systems.
- Write and optimize PromQL queries for efficient data retrieval and analysis.
- Create highly customized Grafana dashboards for large, complex datasets with a focus on performance, readability, and actionable insights.
- Develop and maintain custom Grafana plugins (data source, panel, app) using JavaScript, TypeScript, React, and Go.
- Integrate Prometheus and Grafana with various data sources (databases, cloud services, APIs, and log aggregation tools such as Loki or ELK).
- Configure and manage Alertmanager for alert routing, notifications, and escalations.
- Troubleshoot performance, data collection, and visualization issues.
- Collaborate with SRE, DevOps, and development teams to translate monitoring needs into effective observability solutions.
- Implement best practices for monitoring, alerting, and scalability.
- Automate setup and configuration using Terraform, Ansible, or similar IaC tools.
- Keep up to date with emerging trends in the Prometheus and Grafana ecosystem.
- Document configurations, dashboards, and troubleshooting processes.

Required Skills & Qualifications

- Bachelor's in Computer Science, IT, or a related field.
- 2+ years of hands-on production experience with Prometheus and Grafana.
- Strong PromQL expertise.
- Advanced Grafana dashboard customization for large-scale datasets.
- Experience developing Grafana plugins using JavaScript, TypeScript, React, and/or Go.
- Knowledge of monitoring best practices and alerting strategies.
- Familiarity with Prometheus exporters.
- Experience with Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP).
- Proficiency in scripting (Python, Bash) for automation.
- Strong troubleshooting, analytical, and communication skills.

Preferred (Good to Have)

- Experience with Loki, Jaeger, and OpenTelemetry.
- Knowledge of distributed tracing and log management.
- GitOps experience for monitoring configuration management.
- Contributions to Prometheus or Grafana open-source projects.
- Relevant Prometheus/Grafana certifications.

Hashtags: #MonitoringEngineer #ObservabilityEngineer #Prometheus #Grafana #PromQL #GrafanaPlugins #SREJobs #DevOpsJobs #MonitoringAndAlerting #DashboardDevelopment #WeAreHiring #HiringNow #JobOpening #TechJobs #BangaloreJobs #ITJobs #EngineeringJobs #CareerOpportunity #JoinOurTeam #AWS #Azure #GCP #Kubernetes #Docker #CloudComputing #InfrastructureAsCode
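As a hedged illustration of the "Prometheus exporters" and "Python scripting" items in the posting above, here is a minimal custom-exporter sketch using the prometheus_client library; the metric name, label, port, and simulated reading are hypothetical:

```python
# Minimal sketch of a custom Prometheus exporter. Metric name, label, and the
# simulated reading are illustrative assumptions, not part of the posting.
import random
import time

from prometheus_client import Gauge, start_http_server

# Gauge exposed at /metrics on the port below; Prometheus scrapes this endpoint.
QUEUE_DEPTH = Gauge(
    "app_queue_depth",  # hypothetical metric name
    "Current depth of the application work queue",
    ["queue"],
)


def read_queue_depth(queue: str) -> int:
    """Placeholder probe; a real exporter would query a database, API, etc."""
    return random.randint(0, 100)


if __name__ == "__main__":
    start_http_server(9105)  # arbitrary exporter port
    while True:
        QUEUE_DEPTH.labels(queue="ingest").set(read_queue_depth("ingest"))
        time.sleep(15)  # roughly aligned with the scrape interval
```

A Grafana panel or alert rule could then query the exposed series with PromQL, for example max_over_time(app_queue_depth[10m]) > 80.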

Posted 1 week ago


1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Goldman Sachs, our Engineers are dedicated to making the impossible possible. We are committed to changing the world by bridging the gap between people and capital with innovative ideas. Our mission is to tackle the most complex engineering challenges for our clients, crafting massively scalable software and systems, designing low-latency infrastructure solutions, proactively safeguarding against cyber threats, and harnessing the power of machine learning in conjunction with financial engineering to transform data into actionable insights. Join our engineering teams to pioneer new businesses, revolutionize finance, and seize opportunities in the fast-paced world of global markets.

Engineering at Goldman Sachs, consisting of our Technology Division and global strategists groups, stands at the heart of our business. Our dynamic environment demands creative thinking and prompt, practical solutions. If you are eager to explore the limits of digital possibilities, your journey starts here. Goldman Sachs Engineers embody innovation and problem-solving skills, developing solutions in domains such as risk management, big data, and mobile technology. We seek imaginative collaborators who can adapt to change and thrive in a high-energy, global setting.

The Data Engineering group at Goldman Sachs plays a pivotal role across all aspects of our business. Focused on offering a platform, processes, and governance to ensure the availability of clean, organized, and impactful data, Data Engineering aims to scale, streamline, and empower our core businesses. As a Site Reliability Engineer (SRE) on the Data Engineering team, you will oversee observability, cost, and capacity, with operational responsibility for some of our largest data platforms. We are actively involved in the entire lifecycle of platforms, from design to decommissioning, employing an SRE strategy tailored to this lifecycle.

We are looking for individuals who have a development background and are proficient in code. Candidates should prioritize Reliability, Observability, Capacity Management, DevOps, and the SDLC (Software Development Lifecycle). As a self-driven leader, you should be comfortable tackling problems of varying complexity and translating them into data-driven outcomes. You should be actively engaged in strategy development, participate in team activities, conduct postmortems, and possess a problem-solving mindset.

Your responsibilities as a Site Reliability Engineer (SRE) will include driving the adoption of cloud technology for data processing and warehousing, formulating SRE strategies for major platforms such as Lakehouse and Data Lake, collaborating with data consumers and producers to align reliability and cost objectives, and devising strategies with data using relevant technologies such as Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, and GitLab.

Basic qualifications for this role include a Bachelor's or Master's degree in a computational field, 1-4+ years of relevant work experience in a team-oriented environment, at least 1-2 years of hands-on developer experience, familiarity with DevOps and SRE principles, experience with cloud infrastructure (AWS, Azure, or GCP), a proven track record in driving data-oriented strategies, and a deep understanding of data multi-dimensionality, curation, and quality.

Preferred qualifications include familiarity with Data Lake / Lakehouse technologies, experience with cloud databases such as Snowflake and BigQuery, an understanding of data modeling concepts, working knowledge of open-source tools such as AWS Lambda and Prometheus, and proficiency in coding with Java or Python. Strong analytical skills, excellent communication abilities, a commercial mindset, and a proactive approach to problem-solving are essential traits for success in this role.
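The SRE role above centers on observability, cost, and capacity management, with PromQL among the listed technologies. As a minimal sketch of a capacity check an SRE might automate (the Prometheus endpoint and node_exporter metric are illustrative assumptions), the following uses PromQL's predict_linear to flag filesystems projected to fill within four hours:

```python
# Minimal capacity-management sketch: endpoint and metric are assumptions.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

# Filesystems whose available bytes, extrapolated from the last 6 hours,
# are projected to reach zero within the next 4 hours.
CAPACITY_QUERY = (
    'predict_linear(node_filesystem_avail_bytes{fstype!="tmpfs"}[6h], 4 * 3600) < 0'
)


def filesystems_at_risk():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": CAPACITY_QUERY},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        (r["metric"].get("instance"), r["metric"].get("mountpoint"))
        for r in resp.json()["data"]["result"]
    ]


if __name__ == "__main__":
    for instance, mountpoint in filesystems_at_risk():
        print(f"Capacity risk: {instance} {mountpoint} projected to fill within 4h")
```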

Posted 1 month ago


1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Goldman Sachs, our Engineers don't just make things - we make things possible. We change the world by connecting people and capital with ideas, solving the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets.

Engineering, which comprises our Technology Division and global strategists groups, is at the critical center of our business. Our dynamic environment requires innovative strategic thinking and immediate, real solutions. If you want to push the limit of digital possibilities, start here. Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile, and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Data plays a critical role in every facet of the Goldman Sachs business. The Data Engineering group is at the core of that offering, focusing on providing the platform, processes, and governance that enable the availability of clean, organized, and impactful data to scale, streamline, and empower our core businesses. As a Site Reliability Engineer (SRE) on the Data Engineering team, you will be responsible for observability, cost, and capacity, with operational accountability for some of Goldman Sachs's largest data platforms. We engage in the full lifecycle of platforms, from design to demise, with an SRE strategy adapted to that lifecycle.

We are looking for individuals with a background as a developer who can express themselves in code, with a focus on Reliability, Observability, Capacity Management, DevOps, and the SDLC (Software Development Lifecycle). As a self-leader comfortable with problem statements, you should structure them into data-driven deliverables. You will drive strategy with skin in the game, participate in the team's activities, drive postmortems, and have an attitude that the problem stops with you.
**How You Will Fulfil Your Potential**

- Drive adoption of cloud technology for data processing and warehousing
- Drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake
- Engage with data consumers and producers to match reliability and cost requirements
- Drive strategy with data

**Relevant Technologies**: Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, GitLab

**Basic Qualifications**

- A Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or in a related quantitative discipline)
- 1-4+ years of relevant work experience in a team-focused environment
- 1-2 years of hands-on developer experience at some point in your career
- Understanding and experience of DevOps and SRE principles and automation, managing technical and operational risk
- Experience with cloud infrastructure (AWS, Azure, or GCP)
- Proven experience in driving strategy with data
- Deep understanding of the multi-dimensionality of data, data curation, and data quality
- In-depth knowledge of relational and columnar SQL databases, including database design
- Expertise in data warehousing concepts
- Excellent communication skills
- Independent thinker, willing to engage, challenge, or learn
- Ability to stay commercially focused and to always push for quantifiable commercial impact
- Strong work ethic, a sense of ownership and urgency
- Strong analytical and problem-solving skills
- Ability to build trusted partnerships with key contacts and users across business and engineering teams

**Preferred Qualifications**

- Understanding of Data Lake / Lakehouse technologies, incl. Apache Iceberg
- Experience with cloud databases (e.g., Snowflake, BigQuery)
- Understanding of data modeling concepts
- Working knowledge of open-source tools such as AWS Lambda and Prometheus
- Experience coding in Java or Python

Posted 1 month ago


1.0 - 4.0 years

1 - 4 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

- Drive adoption of cloud technology for data processing and warehousing
- Drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake
- Engage with data consumers and producers to match reliability and cost requirements
- Drive strategy with data

Relevant Technologies: Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, GitLab

Basic Qualifications

- A Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or in a related quantitative discipline)
- 1-4+ years of relevant work experience in a team-focused environment
- 1-2 years of hands-on developer experience at some point in your career
- Understanding and experience of DevOps and SRE principles and automation, managing technical and operational risk
- Experience with cloud infrastructure (AWS, Azure, or GCP)
- Proven experience in driving strategy with data
- Deep understanding of the multi-dimensionality of data, data curation, and data quality, such as traceability, security, performance latency, and correctness across supply and demand processes
- In-depth knowledge of relational and columnar SQL databases, including database design
- Expertise in data warehousing concepts (e.g., star schema, entitlement implementations, SQL vs. NoSQL modelling, milestoning, indexing, partitioning)
- Excellent communication skills and the ability to work with subject matter experts to extract critical business concepts
- Independent thinker, willing to engage, challenge, or learn
- Ability to stay commercially focused and to always push for quantifiable commercial impact
- Strong work ethic, a sense of ownership and urgency
- Strong analytical and problem-solving skills
- Ability to build trusted partnerships with key contacts and users across business and engineering teams

Preferred Qualifications

- Understanding of Data Lake / Lakehouse technologies, incl. Apache Iceberg
- Experience with cloud databases (e.g., Snowflake, BigQuery)
- Understanding of data modelling concepts
- Working knowledge of open-source tools such as AWS Lambda and Prometheus
- Experience coding in Java or Python

Posted 1 month ago
