27 InfluxDB Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0.0 - 5.0 years

2 - 7 Lacs

Bengaluru

Work from Office

Ensemble Energy is an exciting startup in the industrial IoT space focused on energy. Our mission is to accelerate the clean energy revolution by making it more competitive using the power of data. Ensemble's AI-enabled SaaS platform provides prescriptive analytics to power plant operators by combining machine learning, big data, and deep domain expertise. As a Full Stack/IoT Intern, you will participate in developing and deploying frontend/backend applications, creating visualization dashboards, and developing ways to integrate high-frequency data from devices onto our platform (see the ingestion sketch below).

Required Skills & Experience:
- React/Redux, HTML5, CSS3, JavaScript, Python, Django, and REST APIs
- BS or MS in Computer Science or a related field
- Strong foundation in Computer Science, with deep knowledge of data structures, algorithms, and software design
- Experience with Git, CI/CD tools, Sentry, Atlassian software, and AWS CodeDeploy a plus

Responsibilities:
- Contribute ideas to the overall product strategy and roadmap
- Improve the codebase with continuous refactoring
- Take ownership of platform engineering and application development as a self-starter
- Work on multiple projects simultaneously and get things done
- Take products from prototype to production
- Collaborate with the team in Sunnyvale, CA to lead 24x7 product development

Bonus (highlight these projects when applying):
- Experience with time-series databases: M3DB, Prometheus, InfluxDB, OpenTSDB, ELK Stack
- Experience with visualization tools like Tableau, KeplerGL, etc.
- Experience with MQTT or other IoT communication protocols a plus
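
For context on the device-data integration mentioned above, here is a minimal, hypothetical sketch of bridging high-frequency telemetry from an MQTT topic into InfluxDB with Python. The broker address, topic layout, bucket, token, and field names are illustrative assumptions, not Ensemble's actual setup.

```python
# Hypothetical sketch: bridge device telemetry from MQTT into InfluxDB.
# Broker, topic, bucket, token, and payload fields are placeholder assumptions.
import json

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style constructor; 2.x also takes a CallbackAPIVersion
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

influx = InfluxDBClient(url="http://localhost:8086", token="DEV_TOKEN", org="dev-org")
write_api = influx.write_api(write_options=SYNCHRONOUS)


def on_message(client, userdata, msg):
    # Expect payloads like {"turbine_id": "T01", "rpm": 14.2, "power_kw": 1820.5}
    payload = json.loads(msg.payload)
    point = (
        Point("turbine_telemetry")
        .tag("turbine_id", payload["turbine_id"])
        .field("rpm", float(payload["rpm"]))
        .field("power_kw", float(payload["power_kw"]))
    )
    write_api.write(bucket="sensors", record=point)


mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.connect("localhost", 1883)
mqttc.subscribe("plant/+/telemetry")
mqttc.loop_forever()
```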

Posted 1 day ago

Apply

10.0 - 13.0 years

20 - 25 Lacs

Pune

Work from Office

Company Overview
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment to childcare to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

About the Role
Site Reliability Engineers at UKG are team members with a breadth of knowledge encompassing all aspects of service delivery. They develop software solutions to enhance, harden, and support our service delivery processes. This can include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering, and auto-remediation. Site Reliability Engineers must have a passion for learning and evolving with current technology trends. They strive to innovate and are relentless in their pursuit of a flawless customer experience. They have an automate-everything mindset, helping us bring value to our customers by deploying services with incredible speed, consistency, and availability.

Primary/Essential Duties and Key Responsibilities:
- Proficient in Splunk/ELK and Datadog
- Experience with observability tools such as Prometheus/InfluxDB and Grafana
- Strong knowledge of at least one scripting language such as Python, Bash, PowerShell, or another relevant language
- Design, develop, and maintain observability tools and infrastructure
- Collaborate with other teams to ensure observability best practices are followed
- Develop and maintain dashboards and alerts for monitoring system health (a minimal exporter sketch follows this listing)
- Troubleshoot and resolve issues related to observability tools and infrastructure
- Engage in and improve the lifecycle of services from conception to EOL, including system design consulting and capacity planning
- Define and implement standards and best practices related to system architecture, service delivery, metrics, and the automation of operational tasks
- Support services, product, and engineering teams by providing common tooling and frameworks to deliver increased availability and improved incident response
- Improve system performance, application delivery, and efficiency through automation, process refinement, postmortem reviews, and in-depth configuration analysis
- Collaborate closely with engineering professionals within the organization to deliver reliable services
- Identify and eliminate operational toil by treating operational challenges as a software engineering problem
- Actively participate in incident response, including on-call responsibilities
- Partner with stakeholders to influence and help drive the best possible technical and business outcomes
- Guide junior team members and serve as a champion for Site Reliability Engineering

Qualifications:
- Engineering degree, or a related technical discipline, and 10+ years of experience in SRE
- Experience coding in higher-level languages (e.g., Python, JavaScript, C++, or Java)
- Knowledge of cloud-based applications and containerization technologies
- Demonstrated understanding of best practices in metric generation and collection, log aggregation pipelines, time-series databases, and distributed tracing
- Ability to analyze the technology and engineering practices currently used within the company and develop steps and processes to improve and expand upon them
- Working experience with industry standards like Terraform and Ansible

(Experience, Education, Certification, License and Training)
- Must have hands-on experience working within Engineering or Cloud
- Experience with public cloud platforms (e.g., GCP, AWS, Azure)
- Experience in configuration and maintenance of applications and systems infrastructure
- Experience with distributed system design and architecture
- Experience building and managing CI/CD pipelines

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!

UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process.

Disability Accommodation
For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com
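
As a small illustration of the monitoring duties listed above, here is a minimal sketch of exposing a custom service-health metric that Prometheus can scrape and Grafana can chart. The metric names, port, and checked URL are illustrative assumptions, not UKG specifics.

```python
# Minimal sketch: expose a custom health metric for Prometheus to scrape.
# Metric names, port, and the checked URL are illustrative assumptions.
import time

import requests
from prometheus_client import Gauge, start_http_server

UP = Gauge("demo_service_up", "1 if the downstream service responds, else 0")
LATENCY = Gauge("demo_service_latency_seconds", "Last observed response time")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        start = time.monotonic()
        try:
            requests.get("http://localhost:8080/healthz", timeout=2).raise_for_status()
            UP.set(1)
        except requests.RequestException:
            UP.set(0)
        LATENCY.set(time.monotonic() - start)
        time.sleep(15)  # roughly align with a typical scrape interval
```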

Posted 1 week ago

Apply

3.0 - 7.0 years

15 - 20 Lacs

Noida, Pune

Work from Office

The duties of a Site Reliability Engineer will be to support and maintain various cloud infrastructure technology tools in our hosted production/DR environments. He/she will be the subject matter expert for specific tools or monitoring solutions, and will be responsible for testing, verifying, and implementing upgrades, patches, and implementations. He/she will also partner with other service teams and/or service functions to investigate and/or improve monitoring solutions. May mentor one or more tools team members or provide training to other cross-functional teams as required. May motivate, develop, and manage the performance of individuals and teams while on shift. May be assigned to produce regular and ad hoc management reports in a timely manner.

Responsibilities:
- Proficient in Splunk/ELK and Datadog
- Experience with observability tools such as Prometheus/InfluxDB and Grafana (a scripted health-check sketch follows this listing)
- Strong knowledge of at least one scripting language such as Python, Bash, PowerShell, or another relevant language
- Design, develop, and maintain observability tools and infrastructure
- Collaborate with other teams to ensure observability best practices are followed
- Develop and maintain dashboards and alerts for monitoring system health
- Troubleshoot and resolve issues related to observability tools and infrastructure

Qualifications:
- Bachelor's degree in Information Systems, Computer Science, or a related discipline with relevant experience of 5-8 years
- Proficient in Splunk/ELK and Datadog
- Experience with enterprise software implementations for large-scale organizations
- Extensive knowledge of new technology trends prevalent in the market such as SaaS, cloud, hosting services, and application management services
- Monitoring tools like Grafana, Prometheus, Datadog
- Experience in deployment of application and infrastructure clusters within a public cloud environment utilizing a cloud management platform
- Professional and positive with outstanding customer-facing practices
- Can-do attitude, willing to go the extra mile
- Consistently follows up and follows through on delegated tasks and actions
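
A minimal sketch of the kind of scripted health check an SRE in this role might automate: query Prometheus's HTTP API for a metric and flag hosts that breach a threshold. The Prometheus URL, PromQL expression, and threshold are illustrative assumptions.

```python
# Illustrative sketch: query the Prometheus HTTP API and flag hosts breaching a
# threshold. The Prometheus URL, PromQL expression, and threshold are assumptions.
import requests

PROM_URL = "http://localhost:9090/api/v1/query"
QUERY = "100 - avg by (instance) (rate(node_cpu_seconds_total{mode='idle'}[5m])) * 100"
THRESHOLD = 90.0  # percent CPU usage

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()

for sample in resp.json()["data"]["result"]:
    instance = sample["metric"].get("instance", "unknown")
    value = float(sample["value"][1])
    if value > THRESHOLD:
        print(f"ALERT: {instance} CPU at {value:.1f}%")
```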

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Data Analyst
Location: Bangalore | Experience: 8 - 15 Yrs | Type: Full-time

Role Overview
We are seeking a skilled Data Analyst to support our platform powering operational intelligence across airports and similar sectors. The ideal candidate will have experience working with time-series datasets and operational information to uncover trends, anomalies, and actionable insights. This role will work closely with data engineers, ML teams, and domain experts to turn raw data into meaningful intelligence for business and operations stakeholders.

Key Responsibilities
- Analyze time-series and sensor data from various sources (a small anomaly-detection sketch follows this listing)
- Develop and maintain dashboards, reports, and visualizations to communicate key metrics and trends
- Correlate data from multiple systems (vision, weather, flight schedules, etc.) to provide holistic insights
- Collaborate with AI/ML teams to support model validation and interpret AI-driven alerts (e.g., anomalies, intrusion detection)
- Prepare and clean datasets for analysis and modeling; ensure data quality and consistency
- Work with stakeholders to understand reporting needs and deliver business-oriented outputs

Qualifications & Required Skills
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Engineering, or a related field
- 5+ years of experience in a data analyst role, ideally in a technical/industrial domain
- Strong SQL skills and proficiency with BI/reporting tools (e.g., Power BI, Tableau, Grafana)
- Hands-on experience analyzing structured and semi-structured data (JSON, CSV, time-series)
- Proficiency in Python or R for data manipulation and exploratory analysis
- Understanding of time-series databases or streaming data (e.g., InfluxDB, Kafka, Kinesis)
- Solid grasp of statistical analysis and anomaly detection methods
- Experience working with data from industrial systems or large-scale physical infrastructure

Good-to-Have Skills
- Domain experience in airports, smart infrastructure, transportation, or logistics
- Familiarity with data platforms (Snowflake, BigQuery, or custom-built using open source)
- Exposure to tools like Airflow, Jupyter Notebooks, and data quality frameworks
- Basic understanding of AI/ML workflows and data preparation requirements
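
An illustrative sketch of the time-series anomaly detection this role involves: flag points that deviate strongly from a rolling baseline with pandas. The CSV path, column names, window, and threshold are assumptions for the example.

```python
# Illustrative sketch: flag anomalies in a sensor series with a rolling z-score.
# The CSV path, column names, window, and threshold are assumptions.
import pandas as pd

df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"], index_col="timestamp")
series = df["temperature_c"].resample("1min").mean()

rolling_mean = series.rolling("30min").mean()
rolling_std = series.rolling("30min").std()
zscore = (series - rolling_mean) / rolling_std

anomalies = series[zscore.abs() > 3]  # points more than 3 sigma from the rolling mean
print(anomalies)
```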

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Data Engineer
Location: Bangalore - Onsite | Experience: 8 - 15 years | Type: Full-time

Role Overview
We are seeking an experienced Data Engineer to build and maintain scalable, high-performance data pipelines and infrastructure for our next-generation data platform. The platform ingests and processes real-time and historical data from diverse industrial sources such as airport systems, sensors, cameras, and APIs. You will work closely with AI/ML engineers, data scientists, and DevOps to enable reliable analytics, forecasting, and anomaly detection use cases.

Key Responsibilities
- Design and implement real-time (Kafka, Spark/Flink) and batch (Airflow, Spark) pipelines for high-throughput data ingestion, processing, and transformation (a streaming sketch follows this listing)
- Develop data models and manage data lakes and warehouses (Delta Lake, Iceberg, etc.) to support both analytical and ML workloads
- Integrate data from diverse sources: IoT sensors, databases (SQL/NoSQL), REST APIs, and flat files
- Ensure pipeline scalability, observability, and data quality through monitoring, alerting, validation, and lineage tracking
- Collaborate with AI/ML teams to provision clean and ML-ready datasets for training and inference
- Deploy, optimize, and manage pipelines and data infrastructure across on-premise and hybrid environments
- Participate in architectural decisions to ensure resilient, cost-effective, and secure data flows
- Contribute to infrastructure-as-code and automation for data deployment using Terraform, Ansible, or similar tools

Qualifications & Required Skills
- Bachelor's or Master's in Computer Science, Engineering, or a related field
- 6+ years in data engineering roles, with at least 2 years handling real-time or streaming pipelines
- Strong programming skills in Python/Java and SQL
- Experience with Apache Kafka, Apache Spark, or Apache Flink for real-time and batch processing
- Hands-on with Airflow, dbt, or other orchestration tools
- Familiarity with data modeling (OLAP/OLTP), schema evolution, and format handling (Parquet, Avro, ORC)
- Experience with hybrid/on-prem and cloud platform (AWS/GCP/Azure) deployments
- Proficient in working with data lakes/warehouses like Snowflake, BigQuery, Redshift, or Delta Lake
- Knowledge of DevOps practices, Docker/Kubernetes, Terraform or Ansible
- Exposure to data observability, data cataloging, and quality tools (e.g., Great Expectations, OpenMetadata)

Good-to-Have
- Experience with time-series databases (e.g., InfluxDB, TimescaleDB) and sensor data
- Prior experience in domains such as aviation, manufacturing, or logistics is a plus
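
As a sketch of the Kafka-to-lake ingestion described above, here is a minimal Spark Structured Streaming job that reads sensor events from Kafka and lands them as Parquet. The broker, topic, schema, and paths are placeholder assumptions for the example.

```python
# Hypothetical sketch: Spark Structured Streaming job reading sensor events from
# Kafka and writing Parquet. Broker, topic, schema, and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("ts", TimestampType()),
    StructField("value", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "sensor-events")
    .load()
)

events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/sensor_events")
    .option("checkpointLocation", "/data/checkpoints/sensor_events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```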

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Minimum of 5 years of experience with DevOps tools. Strong hands-on experience with Grafana and InfluxDB for monitoring and visualization. Experience with ETL tools such as Pentaho and Apache Hop. Experience with visualization tools such as Grafana. Solid experience in Shell and Python scripting for automation. Experience in the Telco industry.

Skills required
- Programming languages: Python (Must)
- Databases: MySQL, InfluxDB, Hive (big data) (Must)
- Server ops: management of Red Hat Linux/CentOS 7, Flatcar (Must)
- Containerization and container platforms: Docker, Docker Compose (Must)
- Scripting: JavaScript, Shell, Bash (Must)
- Monitoring tools: Grafana (Must), Tableau (Nice)
- Big data tools (Nice)
- DevOps/design tools: Draw.io, JIRA, Confluence
- Software management tools: Maven (Nice)
- CI/CD: Bitbucket, GitLab, Jenkins

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Hybrid

Urgent Requirement for Grafana
Employment: C2H | Notice Period: Immediate

We are seeking a skilled Database Specialist with strong expertise in time-series databases, specifically Loki for logs, InfluxDB, and Splunk for metrics. The ideal candidate will have a solid background in query languages, Grafana, Alertmanager, and Prometheus. This role involves managing and optimizing time-series databases, ensuring efficient data storage, retrieval, and visualization.

Key Responsibilities:
- Design, implement, and maintain time-series databases using Loki, InfluxDB, and Splunk to store and manage high-velocity time-series data
- Develop efficient data ingestion pipelines for time-series data from various sources (e.g., IoT devices, application logs, metrics)
- Optimize database performance for high write and read throughput, ensuring low latency and high availability
- Implement and manage retention policies, downsampling, and data compression strategies to optimize storage and query performance (a retention sketch follows this listing)
- Collaborate with DevOps and infrastructure teams to deploy and scale time-series databases in cloud or on-premise environments
- Build and maintain dashboards and visualization tools (e.g., Grafana) for monitoring and analyzing time-series data
- Troubleshoot and resolve issues related to data ingestion, storage, and query performance
- Work with development teams to integrate time-series databases into applications and services
- Ensure data security, backup, and disaster recovery mechanisms are in place for time-series databases
- Stay updated with the latest advancements in time-series database technologies and recommend improvements to existing systems

Key Skills:
- Strong expertise in time-series databases with Loki (for logs), InfluxDB, and Splunk (for metrics)
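
An illustrative sketch of the retention-policy management mentioned above: creating an InfluxDB 2.x bucket with a 30-day expiry rule from Python. The bucket name, org, and credentials are placeholder assumptions.

```python
# Illustrative sketch: create an InfluxDB 2.x bucket with a 30-day retention rule.
# Bucket name, org, and credentials are placeholder assumptions.
from influxdb_client import BucketRetentionRules, InfluxDBClient

with InfluxDBClient(url="http://localhost:8086", token="DEV_TOKEN", org="dev-org") as client:
    buckets_api = client.buckets_api()
    retention = BucketRetentionRules(type="expire", every_seconds=30 * 24 * 3600)
    bucket = buckets_api.create_bucket(
        bucket_name="metrics_raw_30d",
        retention_rules=retention,
        org="dev-org",
    )
    print("created bucket:", bucket.name)
```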

Posted 1 week ago

Apply

8.0 - 12.0 years

27 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

We are looking for a Sr. IoT Engineer / SME with a minimum of 8 years of experience. Contact: Atchaya (95001 64554)

Required Candidate Profile
- Basic understanding of IoT data routing
- Experience with databases and storage systems such as InfluxDB, PostgreSQL, Redis
- Strong knowledge of Azure and Azure Kubernetes Service (AKS)

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 - 3 Lacs

Hyderabad

Work from Office

Job Summary:
We are looking for a Machine Learning Engineer with strong data engineering capabilities to support the development and deployment of predictive models in a smart manufacturing environment. This role involves building robust data pipelines, developing high-accuracy ML models for defect prediction, and implementing automated control systems for real-time corrective actions on the production floor.

Key Responsibilities:

Data Engineering & Integration:
- Validate and ensure the correct flow of data from InfluxDB/CDL to Smart box/Databricks
- Assist data scientists in the initial modeling phase through reliable data provisioning
- Provide ongoing support for data pipeline corrections and ad-hoc data extraction

ML Model Development for Defect Prediction (a modeling sketch follows this listing):
- Develop 3 separate ML models for predicting 3 types of defects based on historical data
- Predict defect occurrence within a 5-minute window using artificial sampling techniques and dimensionality reduction
- Deliver results with accuracy ≥ 95%, precision and recall ≥ 80%, and feature importance insights

Closed-Loop Control System Implementation:
- Prescribe machine setpoint changes based on model outputs to prevent defect occurrence
- Design and implement a closed-loop system that includes: real-time data fetching from production line PLCs (via InfluxDB/CDL); deployment of ML models on Smart box; a pipeline to output recommendations to the appropriate PLC tag; and a retraining pipeline triggered by drift detection (cloud-based retraining when recommendations deviate from centerlines)

Qualifications:

Education: Bachelor's or Master's degree in Computer Science, Data Science, Electrical Engineering, or a related field.

Technical Skills:
- Proficient in Python and ML libraries (e.g., scikit-learn, XGBoost, pandas)
- Experience with: InfluxDB and CDL for industrial data integration; Smart box and Databricks for model deployment and data processing; real-time data pipelines and industrial control systems (PLCs); model performance tracking and retraining pipelines

Preferred:
- Experience in manufacturing analytics or predictive maintenance
- Familiarity with Industry 4.0 principles and edge/cloud hybrid architectures

Soft Skills:
- Strong analytical and problem-solving abilities
- Effective communication with cross-functional teams (data science, automation, production)
- Attention to detail and focus on solution reliability
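
A minimal sketch of the kind of modeling pipeline described above: synthetic oversampling (one common "artificial sampling" technique), dimensionality reduction, and a classifier evaluated on accuracy, precision, and recall. The dataset shape and the specific choices (SMOTE, PCA, gradient boosting) are assumptions standing in for the team's own techniques.

```python
# Illustrative sketch: oversampling + dimensionality reduction + classifier for
# defect prediction. Data shape and technique choices are assumptions.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))            # 40 sensor features per 5-minute window
y = (rng.random(5000) < 0.05).astype(int)  # ~5% defect rate (imbalanced)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("oversample", SMOTE(random_state=0)),   # synthesize minority-class samples
    ("reduce", PCA(n_components=10)),        # dimensionality reduction
    ("clf", GradientBoostingClassifier()),
])
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, zero_division=0))
print("recall:", recall_score(y_test, pred, zero_division=0))
```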

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 - 3 Lacs

Hyderabad

Work from Office

Job Overview:
We are seeking a skilled and proactive Machine Learning Engineer to join our smart manufacturing initiative. You will play a pivotal role in building data pipelines, developing ML models for defect prediction, and implementing closed-loop control systems to improve production quality.

Responsibilities:

Data Engineering & Pipeline Support:
- Validate and ensure correct data flow from InfluxDB/CDL to Smart box/Databricks platforms
- Collaborate with data scientists to support model development through accurate data provisioning
- Provide ongoing support in resolving data pipeline issues and performing ad-hoc data extractions

ML Model Development:
- Develop three distinct ML models to predict different types of defects using historical production data
- Predict short-term outcomes (next 5 minutes) using techniques like artificial sampling and dimensionality reduction
- Ensure high model performance: accuracy ≥ 95%, precision and recall ≥ 80%
- Extract and present feature importance to support model interpretability

Closed-Loop Control Architecture (a drift-check sketch follows this listing):
- Implement end-to-end ML-driven automation to proactively correct machine settings based on model predictions
- Key architecture components include: real-time data ingestion from PLCs via InfluxDB/CDL; model deployment and inference on Smart box; an output pipeline to share actionable recommendations via PLC tags; and an automated retraining pipeline in the cloud triggered by model drift or recommendation deviations

Qualifications:
- Proven experience with real-time data streaming from industrial systems (PLCs, InfluxDB/CDL)
- Hands-on experience in building and deploying ML models in production
- Strong understanding of data preprocessing, dimensionality reduction, and synthetic data techniques
- Familiarity with cloud-based retraining workflows and model performance monitoring
- Experience in smart manufacturing or predictive maintenance is a plus
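
A minimal sketch of the retraining trigger described above: compare recent setpoint recommendations against a process centerline and flag a retrain when the deviation drifts past a tolerance. The centerline value, tolerance, and window size are illustrative assumptions.

```python
# Illustrative sketch: trigger retraining when recent recommendations drift away
# from the process centerline. Centerline, tolerance, and window are assumptions.
from collections import deque


def should_retrain(recommendations, centerline=72.0, tolerance=1.5, window=50):
    """Return True when the mean absolute deviation of the last `window`
    recommended setpoints exceeds `tolerance` units from the centerline."""
    recent = deque(recommendations, maxlen=window)
    if len(recent) < window:
        return False  # not enough evidence yet
    mean_abs_dev = sum(abs(r - centerline) for r in recent) / len(recent)
    return mean_abs_dev > tolerance


# Example: recommendations creeping upward eventually trip the trigger.
history = [72.0 + 0.05 * i for i in range(200)]
print(should_retrain(history))  # True once the drift exceeds the tolerance
```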

Posted 2 weeks ago

Apply

6.0 - 9.0 years

32 - 35 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate,

We are hiring a Rust Developer to build safe, concurrent, and high-performance applications for system-level or blockchain development.

Key Responsibilities:
- Develop applications using Rust and its ecosystem (Cargo, Crates)
- Write memory-safe and zero-cost abstractions for systems or backends
- Build RESTful APIs, CLI tools, or blockchain smart contracts
- Optimize performance using async/await and the ownership model
- Ensure safety through unit tests, benchmarks, and fuzzing

Required Skills & Qualifications:
- Proficient in Rust, lifetimes, and borrowing
- Experience with Tokio, Actix, or Rocket frameworks
- Familiarity with WebAssembly, blockchain (e.g. Substrate), or embedded Rust
- Bonus: background in C/C++, systems programming, or cryptography

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Chennai

Work from Office

We are seeking a highly skilled and experienced Senior Full Stack Developer to join our dynamic team. The ideal candidate will be proficient in both front-end and back-end technologies and capable of leading the design, development, and maintenance of scalable web applications.

Key Responsibilities:
- Design and develop robust and scalable web applications using modern frameworks and technologies
- Lead the full software development lifecycle from requirements gathering to deployment and maintenance
- Collaborate with cross-functional teams including product managers, designers, and QA engineers
- Optimize applications for maximum speed and scalability
- Ensure code quality through test-driven development and code reviews
- Stay current with emerging technologies and best practices

Key Skills & Technologies:
- Frontend Development: React.js, JavaScript, Next.js
- Backend Development: Node.js, NestJS
- Databases: MongoDB
- Other Technologies: Redis, Kafka, InfluxDB, WebSocket
- Additional Skills: TypeScript, Architectural Design

Preferred (but not required) Skills:
- Experience with Blockchain and Cryptocurrency technologies
- Knowledge of Artificial Intelligence concepts and applications

Requirements:
- 5+ years of experience in full stack development
- Strong problem-solving skills and attention to detail
- Proven experience with RESTful APIs and modern application architectures
- Excellent communication skills and the ability to mentor junior developers

Why Join Us?
- Work with a forward-thinking, innovative team
- Competitive salary and benefits
- Opportunities for growth and learning

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Hybrid

Job description
Walk-in interview on 31st May 2025 - Azure DevOps Engineer - Bangalore
Years of Experience: 8 to 12 years
Work mode: Hybrid
Interview Date: 31st May 2025, Saturday
Time of Interview: 9.30 AM to 4.00 PM
Kindly carry 2 hard copies of your resume
Interview Location: Arrow Electronics India Pvt Ltd, Rockline Seethalaxmi (SKAV) Building, Kasturba Road, Shanthala Nagar, Opposite Vishweshwaraya Museum, Bengaluru - 560001

What you'll be doing:

Principal Accountabilities
- Designs and develops software solutions to meet business requirements
- Manages the full software development life cycle including testing, implementation, and auditing
- Performs product design, bug verification, and beta support, which may require research and analysis
- Operates under moderate supervision; usually reports to the Manager of Software Development
- Execute, assess, and troubleshoot software programs and applications
- Analyze and amend software errors in a timely and accurate fashion
- Coding, developing, and documenting software specifications throughout the project life cycle
- Participate in software upgrades, revisions, fixes, and patches as mandated by the vendor

Job Complexity
- Requires in-depth knowledge and experience
- Solves complex problems; takes a new perspective using existing solutions
- Works independently; receives minimal guidance
- Acts as a resource for colleagues with less experience
- Represents the level at which career may stabilize for many years or even until retirement
- Contributes to process improvements
- Typically resolves problems using existing solutions
- Provides informal guidance to junior staff
- Works with minimal guidance

What we are looking for:
- Typically requires 5-7 years of related experience with a 4-year degree; or 3 years and an advanced degree; or equivalent work experience
- Experience with InfluxDB and the Flux language (a query sketch follows this listing)
- PowerShell + SQL query experience
- Experience administering Telegraf agents
- Experience administering Grafana: building dashboards, creating alerts, log collection (Loki logs)
- Experience with monitoring and alerting tools such as InfluxDB, Grafana, and Loki logs
- Good communication skills; self-driven, bottom-line oriented, and takes ownership of assigned tasks
- Effective working relationships with all functional units of the organization; ability to work as part of a cross-cultural team, including flexibility to support multiple locations when necessary
- Excellent interpersonal skills in areas such as teamwork, facilitation, and negotiation; able to work independently or as part of a team
- Excellent problem-solving skills and the ability to work efficiently
- Working knowledge of Azure DevOps and Git
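
A minimal sketch of querying InfluxDB with the Flux language from Python, the kind of series that would back a Grafana panel or alert on Telegraf data. The bucket, measurement, and connection details are placeholder assumptions.

```python
# Illustrative sketch: run a Flux query against InfluxDB 2.x. Bucket, measurement,
# and credentials are placeholder assumptions for the example.
from influxdb_client import InfluxDBClient

FLUX = '''
from(bucket: "telegraf")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle")
  |> aggregateWindow(every: 1m, fn: mean)
'''

with InfluxDBClient(url="http://localhost:8086", token="DEV_TOKEN", org="dev-org") as client:
    for table in client.query_api().query(FLUX):
        for record in table.records:
            print(record.get_time(), record.values.get("host"), record.get_value())
```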

Posted 2 weeks ago

Apply

6.0 - 9.0 years

32 - 35 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate,

We are hiring a Lua Developer to create lightweight scripting layers in games, embedded systems, or automation tools.

Key Responsibilities:
- Develop scripts and integrations using Lua
- Embed Lua in C/C++ applications for extensibility
- Write custom modules or bindings for game engines or IoT devices
- Optimize Lua code for memory and execution time
- Integrate with APIs, data sources, or hardware systems

Required Skills & Qualifications:
- Proficient in Lua and its integration with host languages
- Experience with Love2D, Corona SDK, or custom engines
- Familiarity with C/C++, embedded Linux, or IoT
- Bonus: game scripting or automation experience

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies

Posted 3 weeks ago

Apply

12.0 - 20.0 years

45 - 65 Lacs

Bengaluru

Work from Office

Lead OT, IIoT, XR, and real-time data strategy across digital platforms and agile teams.

Required Candidate Profile
12-18 yrs in OT strategy, real-time data systems, Kafka/Spark, edge compute, and global team management.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office

What You'll Do
- Configure and manage observability agents across AWS, Azure & GCP
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack
- Work with different language stacks such as Java, Ruby, Python, and Go
- Instrument services using OpenTelemetry and integrate telemetry pipelines (a minimal instrumentation sketch follows this listing)
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs
- Create dashboards, set up alerts, and track SLIs/SLOs
- Enable RCA and incident response using observability data
- Secure the observability pipeline

You Bring
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong skills in reading and interpreting logs, metrics, and traces
- Proficiency with the LGTM (Loki, Grafana, Tempo, Mimir) or a similar stack, Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices
- Clear documentation of observability processes, logging standards & instrumentation guidelines
- Ability to proactively identify, debug, and resolve issues using observability data
- Focus on maintaining data quality and integrity across the observability pipeline
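
A minimal sketch of manual OpenTelemetry tracing instrumentation in Python; spans are exported to the console here, whereas a real pipeline would point the exporter at an OTLP collector. The service and span names are illustrative assumptions.

```python
# Minimal sketch: manual OpenTelemetry tracing that exports spans to the console.
# Service and span names are assumptions; a real setup would use an OTLP exporter.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")


def handle_order(order_id: str) -> None:
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # downstream call would be traced here


handle_order("ord-42")
```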

Posted 3 weeks ago

Apply

3.0 - 5.0 years

15 - 20 Lacs

Pune

Work from Office

What You'll Do
- Configure and manage observability agents across AWS, Azure & GCP
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack
- Work with different language stacks such as Java, Ruby, Python, and Go
- Instrument services using OpenTelemetry and integrate telemetry pipelines
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs
- Create dashboards, set up alerts, and track SLIs/SLOs
- Enable RCA and incident response using observability data
- Secure the observability pipeline

You Bring
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong skills in reading and interpreting logs, metrics, and traces
- Proficiency with the LGTM (Loki, Grafana, Tempo, Mimir) or a similar stack, Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices
- Clear documentation of observability processes, logging standards & instrumentation guidelines
- Ability to proactively identify, debug, and resolve issues using observability data
- Focus on maintaining data quality and integrity across the observability pipeline

Posted 3 weeks ago

Apply

6 - 10 years

15 - 20 Lacs

Pune

Work from Office

The duties of a Site Reliability Engineer will be to support and maintain various cloud infrastructure technology tools in our hosted production/DR environments. He/she will be the subject matter expert for specific tools or monitoring solutions, and will be responsible for testing, verifying, and implementing upgrades, patches, and implementations. He/she will also partner with other service teams and/or service functions to investigate and/or improve monitoring solutions. May mentor one or more tools team members or provide training to other cross-functional teams as required. May motivate, develop, and manage the performance of individuals and teams while on shift. May be assigned to produce regular and ad hoc management reports in a timely manner.

Responsibilities:
- Proficient in Splunk/ELK and Datadog
- Experience with observability tools such as Prometheus/InfluxDB and Grafana
- Strong knowledge of at least one scripting language such as Python, Bash, PowerShell, or another relevant language
- Design, develop, and maintain observability tools and infrastructure
- Collaborate with other teams to ensure observability best practices are followed
- Develop and maintain dashboards and alerts for monitoring system health
- Troubleshoot and resolve issues related to observability tools and infrastructure

Qualifications:
- Bachelor's degree in Information Systems, Computer Science, or a related discipline with relevant experience of 5-8 years
- Proficient in Splunk/ELK and Datadog
- Experience with enterprise software implementations for large-scale organizations
- Extensive knowledge of new technology trends prevalent in the market such as SaaS, cloud, hosting services, and application management services
- Monitoring tools like Grafana, Prometheus, Datadog
- Experience in deployment of application and infrastructure clusters within a public cloud environment utilizing a cloud management platform
- Professional and positive with outstanding customer-facing practices
- Can-do attitude, willing to go the extra mile
- Consistently follows up and follows through on delegated tasks and actions

Posted 2 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office

The core infrastructure team is responsible for this infrastructure, spread across 10 production deployments across the globe, 24/7, with 4 nines of uptime. Our infrastructure is managed using Terraform (for IaC) and GitLab CI, and monitored using Prometheus and Datadog.

We're looking for you if:
- You are a strong infrastructure engineer with a specialty in networking and site reliability
- You have strong networking fundamentals (DNS, subnets, VPN, VPCs, security groups, NATs, Transit Gateway, etc.)
- You have extensive and deep experience (~4 years) with IaaS cloud providers; AWS is ideal, but GCP/Azure would be fine too
- You have experience running cloud orchestration technologies like Kubernetes and/or Cloud Foundry, and designing highly resilient architectures for these
- You have strong knowledge of Unix/Linux fundamentals
- You have experience with infrastructure-as-code tools, ideally Terraform or OpenTofu, but CloudFormation or Pulumi are fine too
- You have experience designing cross cloud/on-prem connectivity and observability
- You have a DevOps mindset: you build it, you run it
- You care about code quality and know how to lead by example: from a clean Git history to well thought-out unit and integration tests

Even better (but not essential!) if you have:
- Experience with monitoring tools that we use, such as Datadog and Prometheus
- Experience with CI/CD tooling such as GitLab CI
- Programming experience with (ideally) Golang or Python
- Willingness and ability to use your technical expertise to mentor, train, and lead other engineers

You'll help drive digital innovation by:
- Continually improving our security and operational excellence
- Working directly with customers to set up connectivity between the Mendix Cloud platform and customers' backend infrastructure
- Rapidly scaling our infrastructure to match our rapidly increasing customer base
- Continuously improving the observability of our platform, so that we can fix problems before they occur
- Improving our automation and surrounding tooling to further streamline deployments and platform upgrades
- Improving the way we use AWS resources and defining cost optimization strategies (a small CloudWatch sketch follows this listing)

Here are many of the tools we make use of:
- Amazon Web Services (EC2, Fargate, RDS, S3, ELB, VPC, CloudWatch, Lambda, IAM, and more!)
- PaaS: (open source) Kubernetes, Docker, Open Service Broker API
- Eventing: AWS MSK and Confluent WarpStream BYOK
- Monitoring: Prometheus, InfluxDB, Grafana, Datadog
- CI/CD: GitLab CI, ArgoCD
- Automation: Terraform, Helm
- Programming languages: mostly Golang and Python, with a sprinkling of Ruby and Lua
- Scripting: Bash, Python
- Version control: Git + GitLab
- Database: PostgreSQL
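
A minimal sketch of the kind of AWS resource check mentioned above: pulling average CPU utilization for an EC2 instance from CloudWatch with boto3 to spot under-used capacity. The instance ID, region, and threshold are illustrative assumptions.

```python
# Illustrative sketch: check average EC2 CPU utilization over the last day to spot
# under-used instances. Instance ID, region, and threshold are assumptions.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=3600,
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
if datapoints:
    avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    if avg_cpu < 10.0:
        print(f"Candidate for downsizing: average CPU {avg_cpu:.1f}%")
```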

Posted 2 months ago

Apply

5 - 10 years

7 - 17 Lacs

Hyderabad, Noida

Hybrid

About the Role
We are seeking a highly skilled Grafana Developer with 5 years of experience in building robust, scalable, and visually appealing observability dashboards and solutions. The ideal candidate should specialize in developing custom Grafana plugins, including data sources, panel plugins, and app plugins, to extend Grafana's capabilities and integrate with a variety of data sources.

Key Responsibilities
• Design, develop, and maintain custom Grafana plugins (data source, panel, and app plugins) tailored to project requirements.
• Integrate Grafana with diverse backend systems and APIs (REST, GraphQL, etc.).
• Customize and enhance existing dashboards to support advanced visualization and user experience.
• Work closely with DevOps, SRE, and Engineering teams to build monitoring solutions aligned with system architecture.
• Implement access control, user authentication, and secure plugin deployment as per enterprise standards.
• Troubleshoot and resolve issues related to Grafana integrations and visualizations.
• Write clean, maintainable code with appropriate documentation.
• Stay up to date with the latest Grafana releases, plugin APIs, and best practices.

Required Skills & Qualifications
• 5+ years of experience in monitoring, observability, or dashboarding tools, with deep hands-on experience in Grafana.
• Proven expertise in developing custom Grafana plugins (data sources, panels, apps) using TypeScript, React, and the Grafana Plugin SDK.
• Strong knowledge of JavaScript/TypeScript, HTML/CSS, and frontend frameworks.
• Experience working with time-series databases such as Prometheus, InfluxDB, Graphite, or cloud-native services (e.g., AWS CloudWatch, Azure Monitor).
• Proficiency in integrating RESTful APIs and handling data transformations for visualization.
• Familiarity with CI/CD pipelines, Git, Docker, and cloud platforms.
• Understanding of authentication mechanisms (OAuth, LDAP, etc.) and RBAC within Grafana.

Posted 2 months ago

Apply

3 - 8 years

10 - 20 Lacs

Pune, Bengaluru, Hyderabad

Hybrid

Minimum of 3-5 years of experience with DevOps tools. Strong hands-on experience with Grafana and InfluxDB for monitoring and visualization. Experience with ETL tools such as Pentaho and Apache Hop. Experience with visualization tools such as Grafana. Solid experience in Shell and Python scripting for automation. Experience in the Telco industry.

Skills required
- Programming languages: Python (Must)
- Databases: MySQL, InfluxDB, Hive (big data) (Must)
- Server ops: management of Red Hat Linux/CentOS 7, Flatcar (Must)
- Containerization and container platforms: Docker, Docker Compose (Must)
- Scripting: JavaScript, Shell, Bash (Must)
- Monitoring tools: Grafana (Must), Tableau (Nice)
- Big data tools (Nice)
- DevOps/design tools: Draw.io, JIRA, Confluence
- Software management tools: Maven (Nice)
- CI/CD: Bitbucket, GitLab, Jenkins

Posted 2 months ago

Apply

7 - 12 years

0 - 3 Lacs

Hyderabad

Hybrid

Experience: 7+ years | Immediate joiners only | Hyderabad (Madhapur), hybrid mode

Responsibilities:
- Design, develop, and optimize Grafana dashboards for real-time data visualization and monitoring (a provisioning sketch follows this listing)
- Integrate Grafana with various data sources such as Prometheus, InfluxDB, Elasticsearch, MySQL, and others
- Troubleshoot and optimize existing Grafana implementations to improve performance and user experience
- Collaborate with data engineering and DevOps teams to define metrics, create monitoring solutions, and implement alerting systems
- Develop custom Grafana plugins and visualizations as needed
- Ensure security and data privacy in all Grafana-related solutions
- Provide training and support to internal teams on Grafana best practices and troubleshooting
- Keep up with the latest Grafana updates and trends to ensure continuous improvement of the monitoring infrastructure
- Troubleshoot complex monitoring issues and provide timely resolutions

Required Skills and Qualifications:
- 7+ years of experience in software development, with a focus on Grafana and data visualization
- Hands-on experience in Grafana dashboard creation, maintenance, and optimization
- Proficiency in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, Elasticsearch)
- Experience with time-series databases such as InfluxDB, Prometheus, or similar
- Strong understanding of metrics, monitoring systems, and alerting frameworks
- Proficient in Grafana configuration, customization, and data source integration
- Experience with automation tools such as Terraform, Ansible, or Kubernetes
- Familiarity with scripting languages (e.g., Python, Bash, Go) for automating tasks and improving workflows
- Strong troubleshooting, debugging, and problem-solving skills
- Experience working in Agile environments and using version control tools like Git
- Excellent communication skills, both written and verbal

Preferred Qualifications:
- Experience with cloud platforms like AWS, Azure, or GCP
- Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes)
- Knowledge of monitoring tools like Prometheus, Grafana, and the ELK Stack
- Understanding of IT infrastructure and cloud-native architectures
- Certifications related to Grafana, cloud platforms, or DevOps practices
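
A minimal sketch of automating dashboard provisioning through Grafana's HTTP API from Python. The Grafana URL, service-account token, and dashboard contents are placeholder assumptions.

```python
# Illustrative sketch: create (or update) a Grafana dashboard via the HTTP API.
# The Grafana URL, API token, and dashboard JSON are placeholder assumptions.
import requests

GRAFANA_URL = "http://localhost:3000"
API_TOKEN = "SERVICE_ACCOUNT_TOKEN"  # placeholder

payload = {
    "dashboard": {
        "id": None,
        "uid": None,
        "title": "Service Health (generated)",
        "panels": [],  # panel definitions would go here
    },
    "folderId": 0,
    "overwrite": True,
}

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("dashboard url:", resp.json().get("url"))
```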

Posted 2 months ago

Apply

10 - 20 years

45 - 65 Lacs

Bengaluru

Work from Office

Lead Messaging & Scheduling systems, manage real-time data processing & predictive analytics, work with time-series databases (InfluxDB, Kafka, OSIsoft), design event-driven architectures (Flink, Spark, Azure Event Hubs), and oversee IT teams

Posted 2 months ago

Apply

10 - 20 years

35 - 50 Lacs

Bengaluru

Work from Office

Design and implement SCADA, PLC, and IIoT solutions for industrial automation. Manage real-time data processing, edge computing, and cybersecurity for OT. Ensure secure IT-OT integration, optimize instrumentation, and lead FAT/SAT and compliance.

Required Candidate Profile
UG/PG in Instrumentation or Electrical, with 5-15 years in OT, industrial automation, and IIoT; SCADA, PLC, real-time data, cybersecurity, edge computing, and protocols like Modbus, OPC UA, and HART (a Modbus polling sketch follows this listing).
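
A minimal sketch of the kind of field read an IIoT gateway performs over one of the protocols named above: polling two holding registers from a Modbus TCP device with pymodbus. The device address, register offsets, scaling, and the `slave` keyword (pymodbus 2.x uses `unit=`) are assumptions.

```python
# Illustrative sketch: poll two holding registers from a Modbus TCP device.
# Device address, register layout, and scaling are assumptions; the slave/unit
# keyword differs between pymodbus versions.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.1.50", port=502)
if client.connect():
    result = client.read_holding_registers(address=0, count=2, slave=1)
    if not result.isError():
        raw_temp, raw_pressure = result.registers
        print("temperature:", raw_temp / 10.0, "C")      # assumed 0.1 C scaling
        print("pressure:", raw_pressure / 100.0, "bar")  # assumed 0.01 bar scaling
    client.close()
```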

Posted 2 months ago

Apply

8 - 12 years

10 - 20 Lacs

Noida

Remote

Senior Full Stack .NET Developer – Build high-performance web/desktop apps using C#, .NET Core, SQL Server, Angular/React, REST APIs, Microservices. Exp. in Financial domain, InfluxDB, kdb+, OneTick required.

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
