2.0 - 6.0 years
50 - 55 Lacs
Chandigarh
Work from Office
As a Golang developer, you will be immersed in our backend infrastructure, taking charge of complex architecture and coding challenges. Your primary focus will be pure backend coding, strategic thinking, and working closely with the frontend development team to deliver seamless and innovative solutions.

Role and Responsibilities:
- Backend Language: Using Go as your backend language throughout all development and maintenance work
- Backend Development: Developing scalable and robust backend solutions, including maintaining, enhancing, and integrating backend technologies
- Hands-on Coding: Implementing end-to-end solutions that integrate seamlessly with the frontend, ensuring a cohesive user experience
- API Development: Creating and implementing robust RESTful APIs to extend application functionality and facilitate integration with third-party services for enhanced features
- Third-Party Integrations: Incorporating external services into the system, including payment gateways, real-time call functionality, and live communication features
- Collaboration: Collaborating closely with the frontend team to ensure smooth functionality and user experiences
- Reliability and Scalability: Ensuring the reliability, scalability, and efficiency of our backend systems to meet the demands of our applications
- Technical Guidance: Providing technical guidance and mentorship to the development team as needed, fostering a culture of excellence and continuous improvement
- Innovation: Staying current with the latest technologies and industry trends, driving technical innovation within the organization

Work experience requirements:
- Experience in national and/or global technology projects with significant demand
- Experience implementing and managing payment solutions and real-time data systems
- Experience implementing cybersecurity protocols, with a preference for military-grade protocols
- Experience implementing highly complex database architectures in AWS or similar

Qualifications: To be successful in this role, you must possess the following:
- Education: Bachelor's and Master's degree in Computer Science or Software Engineering
- Experience: 4 to 8 years of professional experience as a backend developer, with a proven track record of building complex, scalable applications
- Go Proficiency: Proficiency in Go or similar languages
- Additional Backend Languages: Proficiency and prior experience in other backend languages such as Java, Node.js, or Python is a plus
- Frameworks: Experience with frameworks such as Gin, Echo, Spring Boot, Express.js, or Django
- Database Expertise: Solid understanding of database systems, including SQL and NoSQL databases
- Containerization Technologies: Master-level skill in containerization technologies (Kubernetes and Docker)
- Cloud Experience: Prior in-depth experience with AWS or Azure
- Software Engineering: Strong knowledge of software engineering principles, design patterns, and best practices
- Problem-Solving: Excellent problem-solving skills and attention to detail
- Communication: Ability to motivate the team with exceptional communication and interpersonal skills
To be successful in your application for this position, you must have master-level experience in the following technologies:
- Golang
- Postgres
- gRPC
- Redis
- RabbitMQ
- OAuth2

You should have advanced knowledge of the following technologies:
- Kubernetes
- Docker
- GitLab CI/CD
- Prometheus
- Grafana
- Kong
- ArgoCD
Posted 3 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Pune
Remote
We are seeking a Grafana Implementation Expert with deep expertise in Grafana and Prometheus, focusing on core development and customization rather than SRE or DevOps responsibilities. This role requires a specialist in monitoring tools, responsible for designing, developing, and optimizing Grafana dashboards, plugins, and data sources to provide real-time observability and analytics.

Key Responsibilities:
- Develop, customize, and optimize Grafana dashboards with advanced visualizations, queries, and alerting mechanisms.
- Integrate Grafana with Prometheus and other data sources (e.g., Loki, InfluxDB, Elasticsearch, MySQL, PostgreSQL, OpenTelemetry).
- Extend Grafana capabilities by developing custom plugins, panels, and data sources using JavaScript, TypeScript, React, and Go.
- Optimize Prometheus queries (PromQL) and storage solutions to ensure efficient data retrieval and visualization.
- Automate dashboard provisioning using JSON, Terraform, or the Grafana API for seamless deployment across environments (see the sketch after this listing).
- Work closely with engineering teams to translate monitoring requirements into scalable and maintainable solutions.
- Troubleshoot and enhance Grafana performance, including load balancing, scaling, and security hardening.
- Implement advanced alerting mechanisms using Alertmanager, Grafana Alerts, and webhook integrations.
- Stay updated on Grafana ecosystem advancements and contribute to best practices in observability tooling.
- Document configurations, implementation guidelines, and best practices for internal stakeholders.

Required Skills & Experience:
- 5+ years of experience in monitoring and observability tools with a strong focus on Grafana and Prometheus.
- Expertise in Grafana internals, including API usage, dashboard templating, and custom plugin development.
- Strong hands-on experience with Prometheus, including metric collection, relabeling, and PromQL queries.
- Proficiency in JavaScript, TypeScript, React, and Go for Grafana plugin and dashboard development.
- Familiarity with infrastructure monitoring, including Kubernetes, cloud services (AWS, GCP, Azure), and system-level metrics.
- Experience with time-series databases and log aggregation tools (e.g., Loki, Elasticsearch, InfluxDB).
- Knowledge of security best practices in Grafana, including authentication, RBAC, and API security.
- Experience with automation and infrastructure-as-code (IaC) for monitoring stack deployment.
- Strong problem-solving skills with the ability to debug and optimize dashboards and alerting configurations.
- Excellent communication and documentation skills to collaborate with cross-functional teams.

Preferred Qualifications:
- Grafana Certified Observability Engineer or equivalent certifications.
- Experience contributing to open-source Grafana projects or plugin development.
- Knowledge of distributed tracing tools like Jaeger or Zipkin.
- Familiarity with service meshes (Istio, Linkerd) and their monitoring strategies.

This is a high-impact role focused on developing and enhancing Grafana-based monitoring solutions for enterprise-grade observability.
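The dashboard-provisioning responsibility above is easy to make concrete. Below is a minimal Python sketch, not this employer's setup, that pushes a dashboard JSON to Grafana's HTTP API; the `GRAFANA_TOKEN` environment variable, the panel fields, and the PromQL query are illustrative assumptions, and exact panel schema fields vary by Grafana version.

```python
# Minimal sketch: provision a Grafana dashboard over the HTTP API.
# Assumes a reachable Grafana and a service-account token in GRAFANA_TOKEN
# (hypothetical env var); panel JSON fields vary by Grafana version.
import os
import requests

GRAFANA_URL = os.environ.get("GRAFANA_URL", "http://localhost:3000")
TOKEN = os.environ["GRAFANA_TOKEN"]

dashboard = {
    "dashboard": {
        "id": None,          # None => create a new dashboard
        "uid": None,
        "title": "Service Latency (provisioned)",
        "panels": [
            {
                "type": "timeseries",
                "title": "p95 request latency",
                "gridPos": {"h": 8, "w": 12, "x": 0, "y": 0},
                "targets": [
                    {
                        # PromQL: 95th-percentile latency over 5m windows
                        "expr": "histogram_quantile(0.95, "
                                "sum(rate(http_request_duration_seconds_bucket[5m])) by (le))",
                        "refId": "A",
                    }
                ],
            }
        ],
    },
    "folderId": 0,
    "overwrite": True,  # idempotent re-provisioning across environments
}

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    json=dashboard,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # returns uid, url, and version on success
```

Because `overwrite` is true, re-running the script converges to the same state, which is what makes API-based provisioning practical across dev/stage/prod.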
Posted 3 weeks ago
3.0 - 8.0 years
16 - 20 Lacs
Mumbai
Work from Office
What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve the reliability, quality, and time-to-market of our suite of software solutions.
- Be the first person to report an incident.
- Debug production issues across services and levels of the stack.
- Envision the overall solution for defined functional and non-functional requirements, and define the technologies, patterns, and frameworks to realise it.
- Build automated tools in Python, Java, GoLang, Ruby, etc.
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead the design of software components and systems to ensure the availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating, and reporting on production incidents, ensuring SLAs are met and driving problem management for permanent remediation.
- Participate in an on-call rotation to ensure coverage for planned and unplanned events.
- Perform other tasks such as load testing and generating system health reports.
- Periodically check that all dashboards are ready.
- Engage with other engineering organizations to implement processes, identify improvements, and drive consistent results.
- Work with your SRE and engineering counterparts to drive game days, training, and other response-readiness efforts.
- Participate in 24x7 support coverage as needed, troubleshooting and problem-solving complex issues with thorough root cause analysis on customer and SRE production environments.
- Collaborate with service engineering organizations to build and automate tooling, implement best practices to observe and manage the services in production, and consistently achieve our market-leading SLA.
- Improve the scalability and reliability of our systems in production.
- Evaluate, design, and implement new system architectures.

Some specific requirements:
- B.Tech. in Engineering, Computer Science, a technical degree, or equivalent work experience.
- At least 3 years of managing production infrastructure.
- Leading/managing a team is a huge plus.
- Experience with cloud platforms like AWS and GCP.
- Experience developing and operating large-scale distributed systems with Kubernetes, Docker, and serverless (Lambdas).
- Experience running real-time, low-latency, highly available applications (Kafka, gRPC, RTP).
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic, Zabbix, Prometheus, Grafana, CloudWatch, Kafka, PagerDuty, etc. (see the sketch after this listing).
- Experience with one or more orchestration/deployment tools, e.g., CloudFormation, Terraform, Ansible, Packer, Chef.
- Experience with configuration management systems such as Ansible, Chef, or Puppet.
- Knowledge of load testing methodologies and tools like Gatling and Apache JMeter.
- Able to work your way around a Unix shell.
- Experience running hybrid clouds and on-prem infrastructure on Red Hat Enterprise Linux/CentOS.
- A focus on delivering high-quality code through strong testing practices.
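As a hedged illustration of the monitoring and alerting requirement above, here is a small Python sketch that queries the Prometheus HTTP API for a per-service 5xx error ratio. The Prometheus URL and the `http_requests_total` metric labels are assumptions, not Fynd's actual setup.

```python
# Minimal sketch: pull an error-rate signal from the Prometheus HTTP API
# to feed alert analysis or a system health report. Assumes Prometheus at
# localhost:9090 and a conventional http_requests_total counter.
import requests

PROM_URL = "http://localhost:9090"  # assumption: locally reachable Prometheus

# PromQL: fraction of 5xx responses over the last 5 minutes, per service
QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m])) by (service)'
    " / sum(rate(http_requests_total[5m])) by (service)"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    service = sample["metric"].get("service", "unknown")
    ts, value = sample["value"]  # instant vector: [timestamp, "value"]
    print(f"{service}: {float(value):.4%} errors")
```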
Posted 3 weeks ago
3.0 - 6.0 years
4 - 8 Lacs
Karnataka
Work from Office
Key Responsibilities:
- Design, develop, and maintain backend services and automation tools using Golang.
- Build scalable and efficient microservices, RESTful APIs, and background jobs.
- Automate repetitive tasks and system processes across CI/CD, deployments, and data pipelines.
- Optimize code and systems for performance, reliability, and scalability.
- Collaborate with DevOps, QA, and other engineering teams to streamline operations and workflows.
- Write scripts and automation for provisioning, monitoring, and self-healing infrastructure (see the sketch after this listing).
- Maintain technical documentation for developed services, APIs, and scripts.
- Debug and troubleshoot issues across services and systems.
- Participate in code reviews, testing, and continuous integration activities.
- Research and implement tools and frameworks to improve development and automation efficiency.

Required Technical Skills:
Programming Languages:
- Strong proficiency in Go (Golang).
- Familiarity with Python, Bash, or shell scripting is a plus.
Automation & DevOps:
- Hands-on experience with CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins).
- Proficiency in writing automation scripts and job schedulers.
- Familiarity with Ansible, Terraform, or other automation tools is a plus.
API Development:
- RESTful API design, development, testing, and documentation.
- JSON, gRPC, and protocol buffers experience is a bonus.
Database Technologies:
- Experience with both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
- Understanding of database schema design and query optimization.
Cloud & Containers:
- Hands-on experience with Docker, Kubernetes, or other container orchestration tools.
- Familiarity with cloud platforms like AWS, GCP, or Azure.
Monitoring & Logging:
- Working knowledge of tools like Prometheus, Grafana, the ELK Stack, or Splunk.
Version Control:
- Proficient in Git and Git workflows.

Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3-6 years of backend development experience, with 2+ years in Golang.
- Experience working in Agile/Scrum teams.
- Exposure to event-driven architecture and message brokers like Kafka, RabbitMQ, or NATS.

Soft Skills:
- Strong problem-solving and debugging skills.
- Good communication and documentation habits.
- Ability to work independently and within a team.
- Strong attention to detail and a proactive attitude.
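To make the self-healing responsibility concrete, here is a minimal Python sketch, illustrative only, that polls a health endpoint and restarts the backing container after repeated failures. The endpoint, container name, and thresholds are hypothetical.

```python
# Minimal self-healing sketch: poll a health endpoint and restart the
# backing container when checks fail repeatedly. Endpoint, container
# name, and thresholds are illustrative assumptions.
import subprocess
import time

import requests

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical service endpoint
CONTAINER = "orders-api"                      # hypothetical container name
MAX_FAILURES = 3

failures = 0
while True:
    try:
        ok = requests.get(HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        ok = False
    failures = 0 if ok else failures + 1
    if failures >= MAX_FAILURES:
        # docker CLI restart; on Kubernetes you would instead delete the
        # pod and let the controller reschedule it.
        subprocess.run(["docker", "restart", CONTAINER], check=True)
        failures = 0
    time.sleep(10)
```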
Posted 3 weeks ago
4.0 - 8.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Roles & Responsibilities:
- Working closely with the CTO and members of technical staff to meet deadlines.
- Working with an agile team to set up and configure GitOps (CI/CD) based pipelines on GitLab.
- Creating and deploying Edge AIoT pipelines using AWS Greengrass or Azure IoT.
- Designing and developing secure cloud system architectures in accordance with enterprise standards.
- Packaging and automating deployment of releases using Helm charts.
- Analyzing and optimizing resource consumption of deployments.
- Integrating with Prometheus, Grafana, Kibana, etc. for application monitoring.
- Adhering to best practices to deliver secure and robust solutions.

Requirements:
- Experience with Kubernetes and AWS.
- Knowledge of cloud architecture concepts (IaaS, PaaS, SaaS).
- Knowledge of Docker and Linux bash scripting.
- Strong desire to expand knowledge of modern cloud architectures.
- Knowledge of system security concepts (SAST, DAST, penetration testing, vulnerability analysis).
- Familiarity with version control concepts (Git).
Posted 3 weeks ago
4.0 - 7.0 years
10 - 15 Lacs
Noida
Work from Office
As a Consultant in the Automation domain, you will be responsible for delivering automation use cases enabled by AI and cloud technologies. In this role, you play a crucial part in building the next generation of autonomous networks. To develop efficient and scalable automation solutions, you will leverage your technical expertise, problem-solving abilities, and domain knowledge to drive innovation and efficiency.

You have:
- A Bachelor's degree in Computer Science, Engineering, or a related field (preferred), with 8-10+ years of experience in automation or telecommunications.
- An understanding of telecom network architecture, including Core networks, OSS, and BSS ecosystems, along with industry frameworks like TM Forum Open APIs and eTOM.
- Practical experience in programming and scripting languages such as Python, Go, Java, or Bash, and automation tools like Terraform, Ansible, and Helm.
- Hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or ArgoCD, as well as containerization (Docker) and orchestration (Kubernetes, OpenShift).

It would be nice if you also had:
- Exposure to agile development methodologies and cross-functional collaboration.
- Experience with real-time monitoring tools (Prometheus, ELK Stack, OpenTelemetry, Grafana) and AI/ML for predictive automation and network optimization.
- Familiarity with GitOps methodologies and automation best practices for telecom environments.

Responsibilities:
- Design, develop, test, and deploy automation scripts using languages such as Python, Go, Bash, or YAML.
- Automate the provisioning, configuration, and lifecycle management of network and cloud infrastructure.
- Design and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, or Tekton.
- Automate continuous integration, testing, deployment, and rollback mechanisms for cloud-native services.
- Implement real-time monitoring, logging, and tracing using tools such as Prometheus, Grafana, ELK, and OpenTelemetry.
- Develop AI/ML-driven observability solutions for predictive analytics and proactive fault resolution, integrating AI/ML models to enable predictive scaling.
- Automate self-healing mechanisms to remediate network and application failures.
- Collaborate with DevOps and network engineers to align automation with business goals.
Posted 3 weeks ago
3.0 - 6.0 years
8 - 13 Lacs
Bengaluru
Work from Office
As a vLab R&D Cloud Engineer, your job requires expertise in cloud computing platforms, Linux, and networking.

You have:
- 8+ years of relevant experience in deployment and troubleshooting of infrastructure/platforms, especially OpenShift and ACM.
- Red Hat Certified OpenShift Administrator certification (a must).
- Prior troubleshooting experience on OpenStack, Kubernetes, and OpenShift platforms.
- Expertise in software engineering practices like DevOps, agile methodologies, continuous integration, and test automation.
- Practical experience with Kubernetes (K8s), Podman, and containerized infrastructure management.
- Expertise in Git, Gerrit, Jenkins, ArgoCD, Ansible, and Python scripting for automation, and in deploying and maintaining common services like Kafka, Redis, Prometheus, Grafana, etc.
- Expertise in Layer 2/Layer 3 data networking.

It would be nice if you also had:
- A BE/BTech/MTech engineering degree.
- Good knowledge of troubleshooting Ceph and OpenShift Data Foundation (ODF) issues, and good knowledge of HP/Airframe/Dell NFVI x.x hardware.
- Good communication, organizational, and problem-solving skills.
- The ability to identify and implement platform/process improvements, create new procedures, and work with a global team.
- Red Hat Certified Specialist in Multicluster Management certification.

You will:
- Learn to deploy and maintain common cloud services platforms for Cloud and Network Services to meet security, performance, scalability, and reliability requirements.
- Collaborate with global cross-functional teams to design and implement solutions in a microservices architecture.
- Explore and implement best practices for continuous integration and continuous deployment (CI/CD).
- Contribute to short- and mid-term decisions in your own area and be part of a high-performance team.
- Learn new platforms as they evolve.
Posted 3 weeks ago
3.0 - 5.0 years
60 - 65 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a talented and passionate engineer to design, develop, and enhance our SaaS platform. As a key member of the team, you will work to create the best developer tools, collaborate with designers and engineers, and ensure our platform scales as it grows. The ideal candidate will have strong expertise in backend development and cloud infrastructure, and a commitment to delivering reliable systems. Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.
Posted 3 weeks ago
8.0 - 12.0 years
35 - 60 Lacs
Pune
Work from Office
About the Role: We are seeking a skilled Site Reliability Engineer (SRE) / DevOps Engineer to join our infrastructure team. In this role, you will design, build, and maintain scalable infrastructure, CI/CD pipelines, and observability systems to ensure the high availability, reliability, and security of our services. You will work cross-functionally with development, QA, and security teams to automate operations, reduce toil, and enforce best practices in cloud-native environments.

Key Responsibilities:
- Design, implement, and manage cloud infrastructure (GCP/AWS/Azure) using Infrastructure as Code (Terraform).
- Maintain and improve CI/CD pipelines using tools like CircleCI, GitLab CI, or ArgoCD.
- Ensure high availability and performance of services using Kubernetes (GKE/EKS/AKS) and container orchestration.
- Implement monitoring, logging, and alerting using Prometheus, Grafana, ELK, or similar tools.
- Collaborate with developers to optimize application performance and deployment processes.
- Manage and automate security controls such as IAM, RBAC, network policies, and vulnerability scanning.

Basic Qualifications:
- Strong knowledge of Linux.
- Experience with scripting languages such as Python, Bash, or Go.
- Experience with cloud platforms (GCP preferred; AWS or Azure acceptable).
- Proficient in Kubernetes operations, including Helm, operators, and service meshes.
- Experience with Infrastructure as Code (Terraform).
- Solid experience with CI/CD pipelines (GitLab CI, CircleCI, ArgoCD, or similar).
- Familiarity with monitoring and observability tools (Prometheus, Grafana, ELK, etc.).
- Knowledge of networking concepts (TCP/IP, DNS, load balancers, firewalls).

Preferred Qualifications:
- Experience with advanced networking solutions.
- Familiarity with SRE principles such as SLOs, SLIs, and error budgets (see the sketch after this listing).
- Exposure to multi-cluster or hybrid-cloud environments.
- Knowledge of service meshes (Istio).
- Experience participating in incident management and postmortem processes.
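For readers new to the SRE terms in the preferred qualifications, the error-budget arithmetic behind SLOs is simple enough to show directly. The Python sketch below assumes a 99.9% availability SLO and an illustrative observed error rate; both numbers are examples, not this employer's targets.

```python
# Minimal sketch of SLO / error-budget arithmetic: a 99.9% availability
# SLO over a 30-day window leaves about 43 minutes of allowed downtime.
SLO = 0.999
WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

budget_minutes = (1 - SLO) * WINDOW_MINUTES
print(f"Total error budget: {budget_minutes:.1f} min")  # 43.2 min

# Burn-rate check: how fast is the budget being consumed right now?
observed_error_rate = 0.005  # assumption: 0.5% of requests failing
burn_rate = observed_error_rate / (1 - SLO)
print(f"Burn rate: {burn_rate:.1f}x")  # 5x => budget exhausted in ~6 days
```

Multi-window burn-rate alerts (e.g., page on a fast burn over 1 hour, ticket on a slow burn over 3 days) are the standard way teams turn this arithmetic into the alerting mentioned above.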
Posted 3 weeks ago
1.0 - 6.0 years
3 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
Posted 3 weeks ago
1.0 - 5.0 years
3 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift.

Key Responsibilities:
- Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments.
- Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD).
- Troubleshoot Kubernetes/OpenShift-related issues and optimize performance.
- Implement security policies and best practices for containerized workloads.
- Work with developers to containerize applications and manage microservices.
- Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
Posted 3 weeks ago
6.0 - 11.0 years
16 - 31 Lacs
Noida, Hyderabad, Gurugram
Hybrid
Sr Site Reliability Engineer

Role Overview: Our team supports a range of critical functions, including:
- Resiliency and Reliability Initiatives: Partnering with teams on various improvement projects.
- Observability: Ensuring comprehensive visibility into our systems.
- Alert Analysis and Optimization: Refining alert mechanisms to minimize disruptions.
- Automation and Self-Healing: Implementing automated solutions to proactively address issues.
- Incident and Problem Management: Supporting priority incidents, assisting with restoration, root cause analysis, and preventative actions.
- Release Management: Streamlining the release process for seamless updates.
- Monitor Engineering: Supporting the installation, configuration, and review of monitoring instrumentation.
- Cloud Operations Support: Supporting and augmenting activities handled by the OptumRx Public Cloud team.

Responsibilities:
- Support priority incidents.
- Gather requirements for automation opportunities and instrumentation improvements.
- Recommend improvements and changes to monitoring configurations and service architecture design.
- Summarize and provide updates to key stakeholders.
- Assist with incident/problem root cause analysis (RCA) and identification of trends.
- Help teams define service level objectives and build views within monitoring tools.
- Conduct analysis on alert and incident data and recommend changes and improvements.
- Drive improvements to monitoring and instrumentation for services.
- Assess and monitor overall application stability and performance, providing insights for potential improvements.
- Build automation and self-healing capabilities to improve the efficiency, stability, and reliability of services.
- Participate in rotational on-call support.

Technical Skills:
- Proficiency in monitoring and instrumentation tools.
- Understanding of application performance monitoring (APM) and log management tools, with a preference for Dynatrace and Splunk.
- Experience with automation and scripting languages (e.g., Python, Bash, PowerShell), with a preference for Python.
- Experience implementing comprehensive monitoring for services to detect anomalies and trigger timely alerts.
- Understanding of cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of incident management and root cause analysis (RCA) processes.
- Familiarity with service level objectives (SLOs) and service level agreements (SLAs).

Analytical Skills:
- Ability to analyze alert and incident data to identify trends and recommend improvements.
- Strong problem-solving skills and attention to detail.

Communication Skills:
- Excellent verbal and written communication skills.
- Ability to summarize and present updates to key stakeholders effectively.

Collaboration Skills:
- Experience working in cross-functional teams.
- Ability to collaborate with different teams to define service level objectives, gather requirements, discuss opportunities, and recommend improvements.
- Experience working across geographies.

Operational Skills:
- Ability to participate in rotational on-call support.
Posted 3 weeks ago
5.0 - 7.0 years
30 - 40 Lacs
Bengaluru
Hybrid
Senior Software Developer (Python)
Experience: 5-7 years
Salary: Up to USD 40,000/year
Preferred Notice Period: Within 60 days
Shift: 11:00 AM to 8:00 PM IST
Opportunity Type: Hybrid (Bengaluru)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Apache Airflow, Astronomer, Pandas/PySpark/Dask, RESTful APIs, Snowflake, Docker, Python, SQL
Good-to-have skills: CI/CD, data visualization, Matplotlib, Prometheus, AWS, Kubernetes

A Single Platform for Loans/Securities & Finance (one of Uplers' clients) is looking for a Senior Software Developer (Python) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Job Summary: We are seeking a highly skilled Senior Python Developer with expertise in large-scale data processing and Apache Airflow. The ideal candidate will be responsible for designing, developing, and maintaining scalable data applications and optimizing data pipelines. You will be an integral part of our R&D and Technical Operations team, focusing on data engineering, workflow automation, and advanced analytics.

Key Responsibilities:
- Design and develop sophisticated Python applications for processing and analyzing large datasets.
- Implement efficient and scalable data pipelines using Apache Airflow and Astronomer.
- Create, optimize, and maintain Airflow DAGs for complex workflow orchestration (see the sketch after this listing).
- Work with data scientists to implement and scale machine learning models.
- Develop robust APIs and integrate various data sources and systems.
- Optimize application performance for handling petabyte-scale data operations.
- Debug, troubleshoot, and enhance existing Python applications.
- Write clean, maintainable, and well-tested code following best practices.
- Participate in code reviews and mentor junior developers.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.

Required Skills & Qualifications:
- Strong programming skills in Python with 5+ years of hands-on experience.
- Proven experience working with large-scale data processing frameworks (e.g., Pandas, PySpark, Dask).
- Extensive hands-on experience with Apache Airflow for workflow orchestration.
- Experience with the Astronomer platform for Airflow deployment and management.
- Proficiency in SQL and experience with the Snowflake database.
- Expertise in designing and implementing RESTful APIs.
- Basic knowledge of Java programming.
- Experience with containerization technologies (Docker).
- Strong problem-solving skills and the ability to work independently.

Preferred Skills:
- Experience with cloud platforms (AWS).
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Kubernetes for container orchestration.
- Experience with data visualization libraries (Matplotlib, Seaborn, Plotly).
- Background in financial services or experience with financial data.
- Proficiency in monitoring tools like Prometheus, Grafana, and the ELK stack.

Engagement Type: Full-time direct hire on RiskSpan payroll
Job Type: Permanent
Location: Hybrid (Bangalore)
Working time: 11:00 AM to 8:00 PM
Interview Process: 3-4 rounds

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: RiskSpan uncovers insights and mitigates risk for mortgage loans and structured products. The Edge Platform provides data and predictive models to run forecasts under a range of scenarios and analyze Agency and non-Agency MBS, loans, and MSRs. Leverage our bleeding-edge cloud, machine learning, and AI capabilities to scale faster, optimize model builds, and manage information more efficiently.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this one on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
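Since the role centers on Airflow DAG design, a minimal sketch may help orient candidates. The DAG below is illustrative only — the task logic, sample data, and `dag_id` are assumptions, not RiskSpan's pipelines; it wires an extract and a pandas transform together and omits the Snowflake load step.

```python
# Minimal Airflow 2.x DAG sketch: extract -> transform with pandas.
# (On Airflow < 2.4, use schedule_interval instead of schedule.)
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: in practice, read from S3/an API; here, a tiny frame.
    df = pd.DataFrame({"loan_id": [1, 2], "balance": [1000.0, 2500.0]})
    context["ti"].xcom_push(key="rows", value=df.to_dict("records"))


def transform(**context):
    rows = context["ti"].xcom_pull(task_ids="extract", key="rows")
    df = pd.DataFrame(rows)
    df["balance_k"] = df["balance"] / 1000  # simple derived column
    print(df)  # the load-to-Snowflake step would go here


with DAG(
    dag_id="loan_positions_daily",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2
```

Real pipelines would pass references (file paths, table names) through XCom rather than data, and push the heavy lifting into Snowflake or Spark.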
Posted 3 weeks ago
4.0 - 6.0 years
25 - 30 Lacs
Hyderabad
Hybrid
Senior AI Developer
Experience: 4-6 years
Salary: INR 20-30 lacs per annum
Preferred Notice Period: Within 60 days
Shift: 2:30 PM to 11:30 PM IST
Opportunity Type: Hybrid (Hyderabad)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: AI RAG, FastAPI or Flask, vector DBs, PostgreSQL, Python, LangChain or LlamaIndex
Good-to-have skills: Grafana or Prometheus, Docker, Kubernetes

K-3 Innovations (one of Uplers' clients) is looking for a Senior AI Developer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

About K3-Innovations: K3-Innovations, Inc. is building a cutting-edge, AI-driven SaaS platform that automates critical workflows in the biopharma industry. We are expanding our team with an AI RAG Engineer who will help us design and optimize retrieval-augmented generation (RAG) pipelines for knowledge workflows. This role combines deep expertise in database design, vector search optimization, backend architecture, and LLM (Large Language Model) integration. You will play a key role in building a scalable AI platform, bridging structured and unstructured biopharma data with next-generation AI. If you're passionate about building intelligent retrieval systems, fine-tuning prompt pipelines, and optimizing LLM-based applications for real-world datasets, we want to hear from you!

Key Responsibilities:
1. RAG Pipeline Design and Optimization (Priority #1)
- Architect and implement retrieval-augmented generation pipelines integrating document retrieval and LLM response generation (see the sketch after this listing).
- Design and maintain knowledge bases and vector stores using tools like FAISS, Weaviate, or PostgreSQL PGVector.
- Optimize retrieval mechanisms (chunking, indexing strategies, reranking) to maximize response accuracy and efficiency.
- Integrate context-aware querying from structured (Postgres) and unstructured (text/PDF) sources.
2. Database and Embedding Management
- Design relational schemas to support knowledge base metadata and chunk-level indexing.
- Manage embedding pipelines using open-source models (e.g., Hugging Face sentence transformers) or custom embedding services.
- Optimize large-scale vector search performance (indexing, sharding, partitioning).
3. LLM and Prompt Engineering
- Develop prompt engineering strategies for retrieval-augmented LLM pipelines.
- Experiment with prompt chaining, memory-augmented generation, and adaptive prompting techniques.
- Fine-tune lightweight LLMs or integrate APIs from OpenAI, Anthropic, or open-source frameworks (e.g., LlamaIndex, LangChain).
4. Backend API and Workflow Orchestration
- Build scalable, secure backend services (FastAPI/Flask) to serve RAG outputs to applications.
- Design orchestration workflows integrating retrieval, generation, reranking, and response streaming.
- Implement system monitoring for LLM-based applications using observability tools (Prometheus, OpenTelemetry).
5. Collaboration and Platform Ownership
- Work closely with platform architects, AI scientists, and domain experts to evolve the knowledge workflows.
- Take ownership from system design to model integration and continuous improvement of RAG performance.

Required Skills:
AI RAG Engineering (most critical):
- Knowledge retrieval: experience building RAG architectures in production environments; expertise with vector stores (e.g., FAISS, Weaviate, Pinecone, PGVector); experience with embedding models and retrieval optimization strategies.
- Prompt engineering: deep understanding of prompt construction for factuality, context augmentation, and reasoning; familiarity with frameworks like LangChain, LlamaIndex, or Haystack.
Database and Backend Development (essential):
- PostgreSQL expertise: strong proficiency in relational and vector extension design (PGVector preferred); SQL optimization and indexing strategies for large datasets.
- Python development: experience building backend services using FastAPI or Flask; proficiency with async programming and API integrations.
Observability and DevOps (supportive):
- System monitoring for AI workflows using Prometheus, Grafana, and OpenTelemetry.
- Familiarity with Docker and Kubernetes-based deployment pipelines.

Preferred Experience (bonus, but not required):
- Working with large-scale scientific or healthcare datasets.
- Exposure to clinical standards like SDTM and ADaM (advantageous for biopharma workflows).
- Experience integrating domain-specific ontologies into retrieval systems.
- Familiarity with fine-tuning LLMs on private knowledge bases.

What We're Looking For:
- AI problem solver: you are excited by combining retrieval, reasoning, and generative capabilities to solve real-world problems.
- Backend and data specialist: you understand database performance and scalable architectures for retrieval and serving.
- Builder's mindset: you thrive in dynamic, evolving environments where you can architect and implement end-to-end solutions.

What We Offer:
- Meaningful impact: build AI systems that accelerate workflows in the critical biopharma space.
- Technical growth: deepen your expertise in retrieval-augmented generation and scalable AI systems.
- Remote flexibility: results-driven work culture with location flexibility.
- Competitive compensation: attractive salary, benefits, and learning opportunities.

Join us and help revolutionize how biopharma manages and accesses knowledge through the power of AI.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: K3-Innovations is redefining clinical research with a strategic scaling approach, blending AI-powered automation, adaptive clinical resourcing, and advanced data science. As a next-generation CRO, we provide flexible FSP models, regulatory-compliant statistical programming, and AI-driven analytics to accelerate clinical trial execution and regulatory submissions.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this one on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
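For candidates new to RAG, the retrieval core described in responsibility 1 can be sketched in a few lines of Python. This is a hedged illustration, not K3-Innovations' pipeline: the model name, sample chunks, and prompt format are assumptions, and production systems layer chunking, metadata filters, and reranking on top.

```python
# Minimal RAG-retrieval sketch: embed chunks with a sentence-transformers
# model, index them in FAISS, and fetch top-k context for a query.
import faiss
from sentence_transformers import SentenceTransformer

chunks = [  # toy stand-ins for parsed document chunks
    "Protocol X-101 enrolled 240 patients across 12 sites.",
    "Adverse events are coded with MedDRA before SDTM mapping.",
    "The primary endpoint was assessed at week 24.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open model (assumption)
emb = model.encode(chunks, normalize_embeddings=True).astype("float32")

# With normalized vectors, inner product == cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

query = "How many patients were in the trial?"
q = model.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q, 2)  # top-2 chunks

context = "\n".join(chunks[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"
print(prompt)  # this prompt is then sent to the LLM in the generation step
```

Swapping FAISS for PGVector keeps the same shape: store the embedding column in Postgres and replace `index.search` with an `ORDER BY embedding <=> query` SQL query.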
Posted 3 weeks ago
3.0 - 5.0 years
15 - 20 Lacs
Pune
Work from Office
About the job: Sarvaha would like to welcome a Kafka Platform Engineer (or a seasoned backend engineer aspiring to move into platform architecture) with a minimum of 4 years of solid experience in building, deploying, and managing Kafka infrastructure on Kubernetes platforms. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website for more information.

What You'll Do:
- Deploy and manage scalable Kafka clusters on Kubernetes using Strimzi, Helm, Terraform, and StatefulSets (see the sketch after this listing).
- Tune Kafka for performance, reliability, and cost-efficiency.
- Implement Kafka security: TLS, SASL, ACLs, Kubernetes Secrets, and RBAC.
- Automate deployments across AWS, GCP, or Azure.
- Set up monitoring and alerting with Prometheus, Grafana, and the JMX Exporter.
- Integrate Kafka ecosystem components: Connect, Streams, Schema Registry.
- Define autoscaling, resource limits, and network policies for Kubernetes workloads.
- Maintain CI/CD pipelines (ArgoCD, Jenkins) and container workflows.

You Bring:
- BE/BTech/MTech (CS/IT or MCA), with an emphasis on software engineering.
- A strong foundation in the Apache Kafka ecosystem and internals (brokers, ZooKeeper/KRaft, partitions, storage).
- Proficiency in Kafka setup, tuning, scaling, and topic/partition management.
- Skill in managing Kafka on Kubernetes using Strimzi, Helm, and Terraform.
- Experience with CI/CD, containerization, and GitOps workflows.
- Monitoring expertise using Prometheus, Grafana, and JMX.
- Experience on EKS, GKE, or AKS (preferred).
- A strong troubleshooting and incident-response mindset.
- A high sense of ownership and automation-first thinking.
- Excellent collaboration with SREs, developers, and platform teams.
- Clear communication, documentation-driven habits, and eagerness to mentor and share knowledge.
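The Strimzi deployment bullet can be illustrated with a short Python sketch that declares a Kafka cluster as a Strimzi custom resource and applies it with the Kubernetes client. Cluster sizing, namespace, and listener settings are assumptions; real clusters would add TLS/SASL listeners, resource limits, and JMX-exporter metrics config, and most teams template the equivalent YAML via Helm or Terraform instead.

```python
# Minimal sketch: declare a Strimzi-managed Kafka cluster as a custom
# resource and apply it with the Kubernetes Python client.
from kubernetes import client, config

kafka_cr = {
    "apiVersion": "kafka.strimzi.io/v1beta2",
    "kind": "Kafka",
    "metadata": {"name": "demo-cluster", "namespace": "kafka"},
    "spec": {
        "kafka": {
            "replicas": 3,  # illustrative sizing
            "listeners": [
                {"name": "plain", "port": 9092, "type": "internal", "tls": False}
            ],
            "storage": {"type": "persistent-claim", "size": "100Gi"},
        },
        "zookeeper": {
            "replicas": 3,
            "storage": {"type": "persistent-claim", "size": "10Gi"},
        },
        "entityOperator": {"topicOperator": {}, "userOperator": {}},
    },
}

config.load_kube_config()  # assumes a local kubeconfig with cluster access
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="kafka.strimzi.io",
    version="v1beta2",
    namespace="kafka",
    plural="kafkas",
    body=kafka_cr,
)
```

The Strimzi operator watches for this resource and materializes the StatefulSets, Services, and Secrets that the listing's other bullets (security, monitoring, autoscaling) then build on.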
Posted 3 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Role Purpose: The purpose of the role is to resolve, maintain, and manage the client's software/hardware/network based on service requests raised by end-users, as per the defined SLAs, ensuring client satisfaction.

Do:
- Ensure timely response to all tickets raised by the client's end-users.
- Solution service requests while maintaining quality parameters.
- Act as a custodian of the client's network/server/system/storage/platform/infrastructure and other equipment, keeping track of their proper functioning and upkeep.
- Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring the right solutioning within the defined resolution timeframe.
- Perform root cause analysis of the tickets raised and create an action plan to resolve the problem and ensure client satisfaction.
- Provide acceptance and immediate resolution for high-priority tickets/service requests.
- Install and configure software/hardware as per service requests.
- Adhere 100% to timelines according to the priority of each issue, to manage client expectations and ensure zero escalations.
- Provide application/user access as per client requirements and requests to ensure timely solutioning.
- Track all tickets from acceptance to resolution, within the resolution time defined by the customer.
- Maintain timely backups of important data/logs and management resources to ensure the solution is of acceptable quality and maintains client satisfaction.
- Coordinate with the on-site team for complex problem resolution and ensure timely client servicing.
- Review the logs gathered by chatbots and ensure all service requests/issues are resolved in a timely manner.

Deliver:
No. | Performance Parameter | Measure
1 | 100% adherence to SLA/timelines | Multiple cases of red time; zero customer escalations; client appreciation emails

Mandatory Skills: AIOps, Grafana, Observability.
Experience: 3-5 years.
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Experienced OpsRamp Developer/Architect with:
- Hands-on experience with Prometheus and OpenTelemetry.
- Experience with data pipelines and redirecting Prometheus metrics to OpsRamp.
- Proficiency in scripting and programming languages such as Python, Ansible, and Bash.
- Familiarity with CI/CD deployment pipelines (Ansible, Git).
- Strong knowledge of performance monitoring, metrics, capacity planning, and management.
- Excellent communication skills, with the ability to articulate technical details to different audiences.
- Experience with application onboarding, capturing requirements, understanding data sources, and architecture diagrams.
- A collaborative working style with clients and the team, adhering to critical timelines and deliverables.

The general scope of the work for this position is as follows:
- Design, implement, and optimize OpsRamp solutions in a multi-tenant model.
- Implement and configure components of OpsRamp: gateway, discovery, OpsRamp agents, instrumentation via Prometheus, etc.
- Use OpsRamp for infrastructure, network, and application observability.
- Handle OpsRamp event management.
- Create and maintain comprehensive documentation for OpsRamp configurations and processes.
- Ensure seamless integration between OpsRamp and other element monitoring tools and ITSM platforms.
- Develop and maintain advanced dashboards and visualizations.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 3 weeks ago
7.0 - 10.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Job Title: DevOps Lead
Experience: 7-10 years
Location: Bengaluru

Description:
- Overall, 7-10 years of experience in IT.
- In-depth knowledge of GCP services and resources to design, deploy, and manage cloud infrastructure efficiently. Certification is a big plus.
- Proficiency in Java, Shell, or Python scripting.
- Develop, maintain, and optimize Infrastructure as Code scripts and templates using tools like Terraform and Ansible, ensuring resource automation and consistency.
- Strong expertise in Kubernetes (using Helm), HAProxy, and containerization technologies.
- Manage and fine-tune databases, including Neo4j, MySQL, PostgreSQL, and Redis Cache clusters, to ensure performance and data integrity.
- Skill in managing and optimizing Apache Kafka and RabbitMQ to facilitate efficient data processing and communication.
- Design and maintain Virtual Private Cloud (VPC) network architecture for secure and efficient data transmission.
- Implement and maintain monitoring tools such as Prometheus, Zipkin, Loki, and Grafana.
- Utilize Helm charts and Kubernetes (K8s) manifests for containerized application management.
- Proficient with Git, Jenkins, and ArgoCD to set up and enhance CI and CD pipelines.
- Utilize Google Artifact Registry and Google Container Registry for artifact and container image management.
- Familiarity with CI/CD practices, version control and branching, and DevOps methodologies.
- Strong understanding of cloud network design, security, and best practices.
- Strong Linux and network debugging skills.

Primary Skills: Kubernetes (GKE clusters), Grafana, Prometheus, Terraform and Ansible (good working knowledge), DevOps.

Why Join Us: Opportunity to work in a fast-paced and innovative environment, with a collaborative team culture and continuous learning and growth opportunities.
Posted 3 weeks ago
8.0 - 12.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Title: Performance Testing
Experience: 8-12 years
Location: Bangalore
JMeter (minimum 5+ years)

- 8+ years of strong experience in performance testing; the candidate should be able to code and design performance test scripts (see the sketch after this listing).
- Able to set up, maintain, and execute performance test scripts from scratch.
- Good in JMeter and Azure DevOps; Grafana is good to have.
- Excellent business communication skills.
- Experience in performance testing for both web and API.
- Experience in Agile methodology and process.
- Customer interaction, and able to work independently on daily tasks.
- Good in team handling, effort estimation, performance metrics tracking, and task management.
- Should be proactive, a solution provider, and good at status reporting.
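For context on what a performance test script measures, here is a deliberately small Python sketch (not JMeter, and not this employer's framework) that fires concurrent requests at a hypothetical endpoint and reports throughput and latency percentiles, the same headline numbers a JMeter plan would produce.

```python
# Minimal load-test sketch: N requests at fixed concurrency, then report
# throughput and latency percentiles. URL and load shape are assumptions.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/api/ping"  # hypothetical endpoint
N_REQUESTS, CONCURRENCY = 200, 20


def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=5)
    return time.perf_counter() - start


t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(N_REQUESTS)))
elapsed = time.perf_counter() - t0

print(f"throughput: {N_REQUESTS / elapsed:.1f} req/s")
print(f"p50: {statistics.median(latencies) * 1000:.0f} ms")
print(f"p95: {latencies[int(0.95 * len(latencies))] * 1000:.0f} ms")
```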
Posted 3 weeks ago
3.0 - 5.0 years
13 - 15 Lacs
Gurugram
Work from Office
A skilled DevOps Engineer to manage and optimize both on-premises and AWS cloud infrastructure. The ideal candidate will have expertise in DevOps tools, automation, system administration, and CI/CD pipeline management, while ensuring security, scalability, and reliability.

Key Responsibilities:
1. AWS & On-Premises Solution Architecture:
- Design, deploy, and manage scalable, fault-tolerant infrastructure across both on-premises and AWS cloud environments.
- Work with AWS services like EC2, IAM, VPC, CloudWatch, GuardDuty, AWS Security Hub, Amazon Inspector, AWS WAF, and Amazon RDS with Multi-AZ.
- Configure Auto Scaling groups (ASGs) and implement load balancing techniques such as ALB and NLB.
- Optimize cost and performance leveraging Elastic Load Balancing and EFS.
- Implement logging and monitoring with CloudWatch, CloudTrail, and on-premises monitoring solutions.
2. DevOps Automation & CI/CD:
- Develop and maintain CI/CD pipelines using Jenkins and GitLab for seamless code deployment across cloud and on-premises environments.
- Automate infrastructure provisioning using Ansible and CloudFormation.
- Implement CI/CD pipeline setups using GitLab, Maven, and Gradle, and deploy on Nginx and Tomcat.
- Ensure code quality and coverage using SonarQube.
- Monitor and troubleshoot pipelines and infrastructure using Prometheus, Grafana, Nagios, and New Relic.
3. System Administration & Infrastructure Management:
- Manage and maintain Linux and Windows systems across cloud and on-premises environments, ensuring timely updates and security patches.
- Configure and maintain application servers like Apache Tomcat and web servers like Nginx and Node.js.
- Implement robust security measures, SSL/TLS configurations, and secure communications.
- Configure DNS and SSL certificates.
- Maintain and optimize on-premises storage, networking, and compute resources.
4. Collaboration & Documentation:
- Collaborate with development, security, and operations teams to optimize deployment and infrastructure processes.
- Provide best practices and recommendations for hybrid cloud and on-premises architecture, DevOps, and security.
- Document infrastructure designs, security configurations, and disaster recovery plans for both environments.

Required Skills & Qualifications:
- Cloud & On-Premises Expertise: Extensive knowledge of AWS services (EC2, IAM, VPC, RDS, etc.) and experience managing on-premises infrastructure.
- DevOps Tools: Proficiency in SCM tools (Git, GitLab), CI/CD (Jenkins, GitLab CI/CD), and containerization.
- Code Quality & Monitoring: Experience with SonarQube, Prometheus, Grafana, Nagios, and New Relic.
- Operating Systems: Experience managing Linux/Windows servers and working with CentOS, Fedora, Debian, and Windows platforms.
- Application & Web Servers: Hands-on experience with Apache Tomcat, Nginx, and Node.js.
- Security & Networking: Expertise in DNS configuration, SSL/TLS implementation, and AWS security services.
- Soft Skills: Strong problem-solving abilities, effective communication, and proactive learning.

Preferred Qualifications: AWS certifications (Solutions Architect, DevOps Engineer) and a bachelor's degree in Computer Science or a related field. Experience with hybrid cloud environments and on-premises infrastructure automation.
Posted 3 weeks ago
5.0 - 8.0 years
30 - 32 Lacs
Gurugram, Sector-39
Work from Office
Responsibilities: As a Senior DevOps Engineer at SquareOps, you'll be expected to:
- Drive the scalability and reliability of our customers' cloud applications.
- Work directly with clients, engineering, and infrastructure teams to deliver high-quality solutions.
- Design and develop various systems from scratch with a focus on scalability, security, and compliance.
- Develop deployment strategies and build configuration management systems.
- Lead a team of junior DevOps engineers, providing guidance and support on day-to-day activities.
- Drive innovation within the team, promoting the adoption of new technologies and practices to improve project outcomes.
- Demonstrate ownership and accountability for project implementations, ensuring projects are delivered on time and within budget.
- Act as a mentor to junior team members, fostering a culture of continuous learning and growth.

The ideal candidate has:
- A proven track record in architecting complex production systems with multi-tier application stacks.
- Expertise in designing solutions tailored to industry-specific requirements such as SaaS, AI, Data Ops, and highly compliant enterprise architectures.
- Extensive experience working with Kubernetes, various CI/CD tools, and cloud service providers, preferably AWS.
- Proficiency in automating cloud infrastructure management, primarily with tools like Terraform, shell scripting, AWS Lambda, and EventBridge.
- A solid understanding of cloud financial management strategies to ensure cost-effective use of cloud resources.
- Experience in setting up high availability and disaster recovery for cloud infrastructure.
- Strong problem-solving skills with an innovative mindset.
- Excellent communication skills, capable of effectively liaising with clients, engineering, and infrastructure teams.
- The ability to lead and mentor a team, guiding them to achieve their objectives.
- High levels of empathy and emotional intelligence, with a talent for managing and resolving conflict.
- An adaptable nature, comfortable working in a fast-paced, dynamic environment.

At SquareOps, we believe in the power of diversity and inclusion. We encourage applicants of all backgrounds, experiences, and perspectives to apply.
Posted 3 weeks ago
7.0 - 10.0 years
13 - 23 Lacs
Bengaluru
Work from Office
Title: DevOps Engineer
Location: Bangalore office (4 days WFO)
Experience: 7 to 10 years
Skills: DevOps, Kubernetes, CI/CD, Prometheus or Grafana, AWS, basic SRE
Posted 3 weeks ago
8.0 - 10.0 years
13 - 15 Lacs
Pune
Work from Office
We are seeking a hands-on Lead Data Engineer to drive the design and delivery of scalable, secure data platforms on Google Cloud Platform (GCP). In this role you will own architectural decisions, guide service selection, and embed best practices across data engineering, security, and performance disciplines. You will partner with data modelers, analysts, security teams, and product owners to ensure our pipelines and datasets serve analytical, operational, and AI/ML workloads with reliability and cost efficiency. Familiarity with Microsoft Azure data services (Data Factory, Databricks, Synapse, Fabric) is valuable, as many existing workloads will transition from Azure to GCP.

Key Responsibilities:
- Lead end-to-end development of high-throughput, low-latency data pipelines and lakehouse solutions on GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Composer, Dataplex, etc.).
- Define reference architectures and technology standards for data ingestion, transformation, and storage.
- Drive service-selection trade-offs (cost, performance, scalability, and security) across streaming and batch workloads.
- Conduct design reviews and performance tuning sessions; ensure adherence to partitioning, clustering, and query-optimization standards in BigQuery (see the sketch after this listing).
- Contribute to the long-term cloud data strategy, evaluating emerging GCP features and multi-cloud patterns (Azure Synapse, Data Factory, Purview, etc.) for future adoption.
- Lead code reviews and oversee development activities delegated to data engineers.
- Implement best practices recommended by Google Cloud.
- Provide effort estimates for data engineering activities.
- Participate in discussions to migrate existing Azure workloads to GCP; provide solutions to migrate the workloads for selected data pipelines.

Must-Have Skills:
- 8-10 years in data engineering, with 3+ years leading teams or projects on GCP.
- Expert in GCP data services (BigQuery, Dataflow/Apache Beam, Dataproc/Spark, Pub/Sub, Cloud Storage) and orchestration with Cloud Composer or Airflow.
- Proven track record designing and optimizing large-scale ETL/ELT pipelines (streaming and batch).
- Strong fluency in SQL and one major programming language (Python, Java, or Scala).
- Deep understanding of data lake/lakehouse architectures, dimensional and data-vault modeling, and data governance frameworks.
- Excellent communication and stakeholder-management skills; able to translate complex technical topics to non-technical audiences.

Nice-to-Have Skills:
- Hands-on experience with Microsoft Azure data services (Azure Synapse Analytics, Data Factory, Event Hub, Purview).
- Experience integrating ML pipelines (Vertex AI, Dataproc ML) or real-time analytics (BigQuery BI Engine, Looker).
- Familiarity with open-source observability stacks (Prometheus, Grafana) and FinOps tooling for cloud cost optimization.

Preferred Certifications:
- Google Professional Data Engineer (strongly preferred) or Google Professional Cloud Architect.
- Microsoft Certified: Azure Data Engineer Associate (nice to have).

Education: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related technical field. Equivalent professional experience will be considered.
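The BigQuery partitioning and clustering standard mentioned above has a direct cost consequence: filtering on the partition column prunes the scan to a single day. Below is a minimal Python sketch using the official google-cloud-bigquery client; the project, dataset, and table names are illustrative assumptions.

```python
# Minimal sketch: query a (hypothetical) date-partitioned, clustered
# BigQuery table. Filtering on the partition column limits bytes scanned.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
SELECT user_id, SUM(amount) AS total
FROM `my-project.analytics.events`   -- assumed: partitioned on event_date,
WHERE event_date = '2024-06-01'      --          clustered on user_id
GROUP BY user_id
ORDER BY total DESC
LIMIT 10
"""

job = client.query(sql)   # returns a QueryJob
for row in job.result():  # blocks until the job finishes
    print(row.user_id, row.total)
print(f"bytes processed: {job.total_bytes_processed}")
```

Comparing `total_bytes_processed` with and without the `event_date` predicate is a quick way to verify, in a design review, that partition pruning is actually kicking in.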
Posted 3 weeks ago
2.0 - 4.0 years
11 - 12 Lacs
Bengaluru
Work from Office
Employment Type: Contract

iSource Services is hiring for one of their clients for the position of Commerce - DevOps - Engineer II.

About the Role: We are looking for a skilled DevOps Engineer (Level II) to support our commerce platform. The ideal candidate will have 2-4 years of experience with a strong foundation in DevOps practices and CI/CD pipelines, and solid exposure to React.js, Node.js, and MongoDB for build and deployment automation.

Key Responsibilities:
- Manage CI/CD pipelines and deployment automation for commerce applications.
- Collaborate with development teams using React.js, Node.js, and MongoDB.
- Monitor system performance, automate infrastructure, and troubleshoot production issues.
- Maintain and improve infrastructure as code using tools like Terraform, Ansible, or similar.
- Ensure the security, scalability, and high availability of environments.
- Participate in incident response and post-mortem analysis.

Qualifications:
- 2-4 years of hands-on experience in DevOps engineering.
- Proficiency in CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI).
- Working knowledge of React.js, Node.js, and MongoDB.
- Experience with containerization (Docker, Kubernetes).
- Familiarity with monitoring tools (e.g., Prometheus, Grafana, ELK stack).
- Good scripting skills (Shell, Python, or similar).
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Pune
Work from Office
Role Overview: Join our Pune AI Center of Excellence to drive software and product development in the AI space. As an AI/ML Engineer, you'll build and ship core components of our AI products, owning end-to-end RAG pipelines, persona-driven fine-tuning, and scalable inference systems that power next-generation user experiences.

Key Responsibilities:
- Model Fine-Tuning & Persona Design: Adapt and fine-tune open-source large language models (LLMs) (e.g., CodeLlama, StarCoder) to specific product domains. Define and implement "personas" (tone, knowledge scope, guardrails) at inference time to align with product requirements.
- RAG Architecture & Vector Search: Build retrieval-augmented generation systems: ingest documents, compute embeddings, and serve with FAISS, Pinecone, or ChromaDB. Design semantic chunking strategies and optimize context-window management for product scalability.
- Software Pipeline & Product Integration: Develop production-grade Python data pipelines (ETL) for real-time vector indexing and updates. Containerize model services in Docker/Kubernetes and integrate into CI/CD workflows for rapid iteration.
- Inference Optimization & Monitoring: Quantize and benchmark models for CPU/GPU efficiency; implement dynamic batching and caching to meet product SLAs. Instrument monitoring dashboards (Prometheus/Grafana) to track latency, throughput, error rates, and cost.
- Prompt Engineering & UX Evaluation: Craft, test, and iterate on prompts for chatbots, summarization, and content extraction within the product UI. Define and track evaluation metrics (ROUGE, BLEU, human feedback) to continuously improve the product's AI outputs.

Must-Have Skills:
- ML/AI Experience: 3-4 years in machine learning and generative AI, including 18 months on LLM-based products.
- Programming & Frameworks: Python, PyTorch (or TensorFlow), Hugging Face Transformers.
- RAG & Embeddings: Hands-on with FAISS, Pinecone, or ChromaDB and semantic chunking.
- Fine-Tuning & Quantization: Experience with LoRA/QLoRA, 4-bit/8-bit quantization, and model context protocol (MCP).
- Prompt & Persona Engineering: Deep expertise in prompt tuning and persona specification for product use cases.
- Deployment & Orchestration: Docker, Kubernetes fundamentals, CI/CD pipelines, and GPU setup.

Nice-to-Have:
- Multi-modal AI combining text, images, or tabular data.
- Agentic AI systems with reasoning and planning loops.
- Knowledge-graph integration for enhanced retrieval.
- Cloud AI services (AWS SageMaker, GCP Vertex AI, or Azure Machine Learning).
Posted 3 weeks ago
Prometheus is a popular monitoring and alerting tool used in the field of DevOps and software development. In India, the demand for professionals with expertise in Prometheus is on the rise. Job seekers looking to build a career in this field have a promising outlook in the Indian job market.
India's major tech hubs are known for their vibrant tech industries and have a high demand for professionals skilled in Prometheus.
The salary range for Prometheus professionals in India varies based on experience levels. Entry-level positions can expect to earn around ₹5-8 lakhs per annum, whereas experienced professionals can earn up to ₹15-20 lakhs per annum.
A typical career path in Prometheus may include roles such as:
- Junior Prometheus Engineer
- Prometheus Developer
- Senior Prometheus Engineer
- Prometheus Architect
- Prometheus Consultant
As professionals gain experience and expertise, they can progress to higher roles with increased responsibilities.
In addition to Prometheus, professionals in this field are often expected to have knowledge and experience in:
- Kubernetes
- Docker
- Grafana
- Time-series databases
- Linux system administration
Having a strong foundation in these related skills can enhance job prospects in the Prometheus domain.
As you explore opportunities in the Prometheus job market in India, remember to continuously upgrade your skills and stay updated with the latest trends in monitoring and alerting technologies. With dedication and preparation, you can confidently apply for roles in this dynamic field. Good luck!