Bengaluru, Karnataka, India
INR Not disclosed
On-site
Full Time
Position: Monitoring and Observability Engineer (Prometheus & Grafana Specialist)
Experience: 3+ years (must)
Location: Bengaluru, Karnataka, India
Job Type: Full-time (immediate joiners within 20 days only)
Send your CV: [HIDDEN TEXT]

About the Role
We are seeking a talented Monitoring and Observability Engineer with proven expertise in Prometheus and Grafana. The ideal candidate will have hands-on experience designing, implementing, and optimizing observability solutions for complex systems, along with advanced skills in Grafana dashboard customization and custom plugin development.

Key Responsibilities
- Design, implement, and maintain robust monitoring and alerting solutions using Prometheus and Grafana for mission-critical systems.
- Write and optimize PromQL queries for efficient data retrieval and analysis.
- Create highly customized Grafana dashboards for large, complex datasets with a focus on performance, readability, and actionable insights.
- Develop and maintain custom Grafana plugins (data source, panel, app) using JavaScript, TypeScript, React, and Go.
- Integrate Prometheus and Grafana with various data sources (databases, cloud services, APIs, and log aggregation tools such as Loki or ELK).
- Configure and manage Alertmanager for alert routing, notifications, and escalations.
- Troubleshoot performance, data collection, and visualization issues.
- Collaborate with SRE, DevOps, and development teams to translate monitoring needs into effective observability solutions.
- Implement best practices for monitoring, alerting, and scalability.
- Automate setup and configuration using Terraform, Ansible, or similar IaC tools.
- Keep up to date with emerging trends in the Prometheus and Grafana ecosystem.
- Document configurations, dashboards, and troubleshooting processes.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, IT, or a related field.
- 2+ years of hands-on production experience with Prometheus & Grafana.
- Strong PromQL expertise.
- Advanced Grafana dashboard customization for large-scale datasets.
- Experience developing Grafana plugins using JavaScript, TypeScript, React, and/or Go.
- Knowledge of monitoring best practices and alerting strategies.
- Familiarity with Prometheus exporters.
- Experience with Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP).
- Proficiency in scripting (Python, Bash) for automation.
- Strong troubleshooting, analytical, and communication skills.

Preferred (Good to Have)
- Experience with Loki, Jaeger, OpenTelemetry.
- Knowledge of distributed tracing and log management.
- GitOps experience for monitoring configuration management.
- Contributions to Prometheus or Grafana open-source projects.
- Relevant Prometheus/Grafana certifications.
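To give a flavor of the PromQL work this role centers on, here is a minimal sketch (the server URL and metric names are illustrative assumptions, not details from the posting) that builds an instant-query request for the Prometheus HTTP API:

```python
from urllib.parse import urlencode

# Hypothetical Prometheus server address -- replace with your own.
PROM_URL = "http://localhost:9090"

def instant_query_url(promql: str) -> str:
    """Build a URL for the Prometheus HTTP API instant-query endpoint."""
    return f"{PROM_URL}/api/v1/query?{urlencode({'query': promql})}"

# Example PromQL: per-instance rate of HTTP 5xx responses over 5 minutes.
expr = 'sum by (instance) (rate(http_requests_total{status=~"5.."}[5m]))'
print(instant_query_url(expr))
```

The `rate(...[5m])` / `sum by (...)` pattern shown in the query string is the bread-and-butter shape of alerting and dashboard queries; the URL helper is just a hypothetical convenience wrapper.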
Karnataka
INR Not disclosed
On-site
Full Time
We are looking for a self-driven and technically strong Data Engineer with over 6 years of experience to join our team in Bangalore. If you are passionate about developing efficient data solutions and have expertise in PySpark, Python, Kafka, SQL, and Azure tools, we are interested in hearing from you.

Key Responsibilities
- Develop and optimize data pipelines using Databricks and PySpark.
- Write efficient, complex SQL queries for large-scale data processing.
- Deliver end-to-end solutions from data ingestion to reporting.
- Collaborate with teams to meet data requirements.
- Integrate data from multiple sources, including REST APIs and cloud storage.
- Work with ADF, ADL, and BI tools to enable reporting.

Required Skills
- Strong SQL skills.
- Hands-on experience with Databricks and PySpark.
- Familiarity with Azure Data Factory (ADF) and Azure Data Lake (ADL).
- Knowledge of REST API integration.
- Exposure to BI tools (Power BI/Tableau preferred).
- Strong analytical and problem-solving skills.
- Proficiency in PySpark, Python, and Kafka.

If you meet the above requirements and are excited about working in a dynamic environment where you can contribute to cutting-edge projects, please apply for this Data Engineer position.
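As a hedged illustration of the "efficient and complex SQL queries" requirement, the sketch below uses the stdlib `sqlite3` module as a stand-in for a warehouse engine (the `events` table and its columns are invented for the example) to compute a per-user running total with a window function, a pattern that carries over directly to Databricks SQL:

```python
import sqlite3

# In-memory database standing in for a real warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id TEXT, ts INTEGER, amount REAL);
INSERT INTO events VALUES
  ('a', 1, 10.0), ('a', 2, 20.0), ('b', 1, 5.0), ('b', 3, 15.0);
""")

# Window function: cumulative amount per user, ordered by timestamp.
rows = conn.execute("""
SELECT user_id, ts, amount,
       SUM(amount) OVER (PARTITION BY user_id ORDER BY ts) AS running_total
FROM events
ORDER BY user_id, ts
""").fetchall()

for row in rows:
    print(row)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` shape works unchanged in Spark SQL, where it scales to the large datasets this role targets.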
Bengaluru, Karnataka, India
INR Not disclosed
On-site
Full Time
Job Title: Python Developer (Linux and shell scripting mandatory skills)
Experience: 4 to 7 Years
Location: Bengaluru
Employment Type: Full-time, work from office (immediate joiners within 20 days)

Job Summary:
We are seeking a skilled Python Automation Engineer with 4-7 years of experience in automation, scripting, and DevOps practices. The ideal candidate should have strong hands-on expertise in Python and Linux environments, along with experience in CI/CD tools, Infrastructure as Code (IaC), and container orchestration technologies.

Key Responsibilities:
- Develop and maintain automation scripts and tools using Python and shell scripting.
- Manage and automate infrastructure tasks using Ansible, Terraform, or similar IaC tools.
- Build and optimize CI/CD pipelines using tools like Jenkins, Tekton, and GitHub Actions.
- Collaborate with development, QA, and DevOps teams to streamline deployment processes.
- Troubleshoot and resolve issues in development, test, and production environments.
- Implement containerization using Docker and orchestration with Kubernetes.
- Work within a Linux-based environment to support automation and deployment tasks.
- Maintain version control and workflow practices using Git and GitHub.

Required Skillset:
- Strong programming/scripting experience in Python and shell scripting.
- Proficiency with Linux operating systems.
- Experience with Git, GitHub, and CI/CD pipelines.
- Hands-on experience with Terraform, Tekton, Ansible, and/or Jenkins.
- Good understanding of DevOps methodologies and automation principles.

Nice to Have (Additional Skillset):
- Working experience with Docker and Kubernetes.
- Familiarity with cloud platforms (AWS, GCP, or Azure) is a plus.
- Exposure to monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack) is beneficial.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
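A minimal sketch of the Python-plus-shell automation at the heart of this role: a small wrapper around `subprocess.run` that runs a command, fails loudly on a non-zero exit code, and returns its output. The `run` helper is a hypothetical convenience function, not a tool named in the posting.

```python
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command, raise on non-zero exit, return stripped stdout."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Example: shell out to a standard Linux command from Python.
print(run(["echo", "hello"]))
```

Passing the command as a list (rather than a single string with `shell=True`) avoids shell-injection pitfalls, which matters once such scripts run unattended in CI/CD pipelines.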
Karnataka
INR Not disclosed
On-site
Full Time
The role of HPC Research Engineer at Infobell IT Solutions Pvt. Ltd., Bangalore is open to PhD holders, whether fresh graduates or experienced professionals, with expertise in High-Performance Computing (HPC) or related fields such as Computer Science, Engineering, Physics, or Computational Science. If you are enthusiastic about performance optimization and parallel computing and enjoy tackling intricate computational challenges, this position is well suited for you.

Key Qualifications:
- A PhD in HPC or a related discipline.
- Strong programming skills in C, C++, or Python.
- Familiarity with MPI, OpenMP, CUDA, or other parallel computing frameworks is preferred.
- Demonstrated passion for performance, scalability, and impactful problem-solving.
- Excellent analytical, research, and problem-solving abilities.

Who Can Apply:
- Fresh PhD graduates with a keen interest in cutting-edge HPC research.
- Experienced researchers or professionals with an HPC background from academia or industry.

Why Join Us:
Joining Infobell IT will give you the opportunity to work on high-impact projects alongside a talented team, using cutting-edge HPC technologies to address real-world problems.
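As a toy sketch of the parallel-computing theme (real HPC work would use MPI, OpenMP, or CUDA as the posting notes; this example only illustrates the domain-decomposition idea with stdlib `multiprocessing`), a reduction can be split into independent chunks and fanned out across cores:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over [lo, hi) -- a stand-in for a real compute kernel."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # Decompose [0, 1000) into 4 chunks, map to workers, reduce the results.
    chunks = [(i, i + 250) for i in range(0, 1000, 250)]
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # equals sum(i * i for i in range(1000))
```

The split-map-reduce structure here is the same pattern an MPI program expresses with ranks and `MPI_Reduce`, just at a much smaller scale.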