71 Loki Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an engineer joining Zinier's Customer Engineering team, you will focus on a low-code platform. Your role will involve debugging, analyzing JavaScript code, optimizing queries, solving customer-facing issues, and automating routine tasks. You will be responsible for investigating and resolving customer-reported issues in a JavaScript + JSON low-code environment. This includes identifying and fixing bugs, implementing enhancements that improve product performance, reliability, and usability, and supporting customers globally. Additionally, you will create and maintain documentation covering program development, logic, coding, testing, and changes.

Collaboration with cross-functional teams is a key aspect of this role. You will partner with customer success and solution/engineering teams to address issues promptly, provide feedback from field operations to improve product robustness, and participate in continuous improvement cycles. You should be able to drive outcomes, meet delivery milestones, and coordinate effectively across multiple teams.

Required skills include a minimum of 3 years of experience in Solution Development or Engineering roles, a strong understanding of JavaScript, JSON handling, and API interactions, proficiency in SQL with the ability to debug query bottlenecks, familiarity with observability stacks such as Grafana, Loki, and Tempo, and knowledge of AWS. Desirable skills include exposure to the Field Service Management domain, experience with workflow-driven products, debugging scheduling-related algorithms, or working on backend systems.

Joining Zinier offers a unique opportunity to work closely with Solution Architects, influence product blueprints, and collaborate across the full tech stack. You will debug backend services in Java and Spring Boot, explore front-end interfaces in React, and contribute to mobile UI development. You will also build internal tools, address production issues, and contribute to engineering stability while learning from experienced platform, product, and solution engineers. Being part of a high-impact team at Zinier means bridging engineering and customer experience to improve product quality and customer trust. The company values learning, ownership, and long-term growth, providing a rewarding environment in which to grow your skills and expertise.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Telangana

On-site

You will be joining our team as a System Development Engineer focusing on the Hybrid Scientific Computing Stack. A strong background in computer science and software development is required for this role, and knowledge of quantum computing would be an added advantage. Your responsibilities will include working on backend services such as FastAPI, Celery, OAuth, PostgreSQL, and Redis. You will also be involved in hybrid job orchestration using tools like Celery, RabbitMQ, Slurm, and Kubernetes. Containerized workflows using Docker, Singularity, and Helm will be part of your tasks. Monitoring and observability work will involve tools like Prometheus, Grafana, Loki, and Flower. Cloud-based deployment on platforms like GCP, AWS, and Azure, as well as secure on-prem server management, will also be within your purview. Additionally, you will work in scientific environments involving CUDA, Qiskit, Conda, GROMACS, and Lmod.

To qualify for this position, you should hold at least a Bachelor's degree in Computer Science or a related field and have at least 2 years of professional experience in full-stack systems engineering. Proficiency in Python (FastAPI/Celery), Linux (Ubuntu/Debian), and DevOps is required. Familiarity with cloud-native tools like Docker, Kubernetes, Helm, and GitHub Actions is essential. Experience with Slurm, GPU resource allocation, and secure job execution will be beneficial. Any familiarity with quantum SDKs such as Qiskit, PennyLane, and Cirq will be considered a bonus.
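
The posting pairs FastAPI with Celery and RabbitMQ for hybrid job orchestration. A minimal sketch of that pattern is shown below; the endpoint path, task name, broker URL, and result backend are illustrative assumptions, not details from the listing.

# Illustrative sketch: a FastAPI endpoint that enqueues a long-running job on
# a Celery worker backed by RabbitMQ, with Redis as the result backend.
from celery import Celery
from fastapi import FastAPI

celery_app = Celery(
    "hybrid_jobs",
    broker="amqp://guest:guest@localhost:5672//",  # assumed RabbitMQ broker
    backend="redis://localhost:6379/0",            # assumed Redis result backend
)

@celery_app.task(name="jobs.run_simulation")
def run_simulation(input_file: str) -> str:
    # Placeholder for real work (e.g. handing off to Slurm or a GROMACS run).
    return f"completed: {input_file}"

api = FastAPI()

@api.post("/jobs")
def submit_job(input_file: str) -> dict:
    """Enqueue the task and return its id so the client can poll for status."""
    result = run_simulation.delay(input_file)
    return {"task_id": result.id}

@api.get("/jobs/{task_id}")
def job_status(task_id: str) -> dict:
    """Look up the task state from the result backend."""
    result = celery_app.AsyncResult(task_id)
    return {"task_id": task_id, "state": result.state}

In practice the API process and the Celery worker run separately (for example, uvicorn for the API and `celery -A <module> worker` for the worker), which is what makes the orchestration hybrid.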

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Telangana

On-site

You will be joining our team as a Systems Development Engineer for the Hybrid Scientific Computing Stack. A strong background in computer science and software development is essential for this role, with knowledge of quantum computing considered a valuable asset. Your responsibilities will include managing backend services such as FastAPI, Celery, OAuth, PostgreSQL, and Redis. You will also be involved in hybrid job orchestration using tools like Celery, RabbitMQ, Slurm, and Kubernetes, as well as containerized workflows with Docker, Singularity, Helm, and Kubernetes. Monitoring and observability work will involve tools like Prometheus, Grafana, Loki, and Flower. Additionally, you will be responsible for cloud-based deployment on platforms like GCP, AWS, and Azure, as well as secure on-prem server management including GPU/CPU scheduling, RBAC, and SSH-only access. Familiarity with scientific environments such as CUDA, Qiskit, Conda, GROMACS, and Lmod will also be part of your role.

To qualify for this position, you should hold at least a Bachelor's degree in Computer Science or a related field and have at least 2 years of professional experience in full-stack systems engineering. Proficiency in Python (FastAPI/Celery), Linux (Ubuntu/Debian), and DevOps is required. You should also be familiar with cloud-native tools like Docker, Kubernetes, Helm, and GitHub Actions. Experience with Slurm, GPU resource allocation, and secure job execution will be beneficial. Any familiarity with quantum SDKs such as Qiskit, PennyLane, and Cirq will be considered a bonus.
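
For the Prometheus/Grafana side of this stack, a common pattern is to expose custom metrics from the Python services so Prometheus can scrape them. The sketch below is illustrative only; metric names and the port are assumptions.

# Illustrative sketch: exposing job metrics with the prometheus_client library.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

JOBS_SUBMITTED = Counter(
    "hybrid_jobs_submitted_total", "Total jobs submitted to the scheduler"
)
JOB_DURATION = Histogram(
    "hybrid_job_duration_seconds", "Wall-clock duration of completed jobs"
)

def run_one_job() -> None:
    JOBS_SUBMITTED.inc()
    with JOB_DURATION.time():                 # records elapsed time as an observation
        time.sleep(random.uniform(0.1, 0.5))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        run_one_job()

A Prometheus scrape job pointed at that port then feeds Grafana dashboards and alert rules.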

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Senior Software DevOps Engineer, you will lead the design, implementation, and evolution of telemetry pipelines and DevOps automation that enable next-generation observability for distributed systems. You will blend a deep understanding of OpenTelemetry architecture with strong DevOps practices to build a reliable, high-performance, self-service observability platform across hybrid cloud environments (AWS and Azure). Your mission is to empower engineering teams with actionable insights through rich metrics, logs, and traces, while championing automation and innovation at every layer. You will be responsible for:

Observability Strategy & Implementation: Architect and manage scalable observability solutions using OpenTelemetry (OTel), encompassing Collectors, instrumentation, export pipelines, and processors and extensions for advanced enrichment and routing.

DevOps Automation & Platform Reliability: Own the CI/CD experience using GitLab Pipelines, integrating infrastructure automation with Terraform, Docker, and scripting in Bash and Python. Build resilient and reusable infrastructure-as-code modules across AWS and Azure ecosystems.

Cloud-Native Enablement: Develop observability blueprints for cloud-native apps across AWS (ECS, EC2, VPC, IAM, CloudWatch) and Azure (AKS, App Services, Monitor). Optimize cost and performance of telemetry pipelines while ensuring SLA/SLO adherence for observability services.

Monitoring, Dashboards, and Alerting: Build and maintain intuitive, role-based dashboards in Grafana and New Relic, enabling real-time visibility into service health, business KPIs, and SLOs. Implement alerting best practices integrated with incident management systems.

Innovation & Technical Leadership: Drive cross-team observability initiatives that reduce MTTR and elevate engineering velocity. Champion innovation projects including self-service observability onboarding, log/metric reduction strategies, AI-assisted root cause detection, and more. Mentor engineering teams on instrumentation, telemetry standards, and operational excellence.

Requirements:
- 6+ years of experience in DevOps, Site Reliability Engineering, or Observability roles
- Deep expertise with OpenTelemetry, including Collector configurations, receivers/exporters (OTLP, HTTP, Prometheus, Loki), and semantic conventions
- Proficiency in GitLab CI/CD, Terraform, Docker, and scripting (Python, Bash, Go); strong hands-on experience with AWS and Azure services, cloud automation, and cost optimization
- Proficiency with observability backends: Grafana, New Relic, Prometheus, Loki, or equivalent APM/log platforms
- Passion for building automated, resilient, and scalable telemetry pipelines
- Excellent documentation and communication skills to drive adoption and influence engineering culture

Nice to Have:
- Certifications in AWS, Azure, or Terraform
- Experience with OpenTelemetry SDKs in Go, Java, or Node.js
- Familiarity with SLO management, error budgets, and observability-as-code approaches
- Exposure to event streaming (Kafka, RabbitMQ), Elasticsearch, Vault, Consul
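
To give a feel for the instrumentation side of such a pipeline, here is a minimal sketch of exporting traces from a Python service to an OpenTelemetry Collector over OTLP/gRPC. It assumes a Collector listening on the default port 4317; the service and span names are illustrative, not taken from the posting.

# Illustrative sketch: OpenTelemetry Python SDK -> OTLP exporter -> Collector.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Identify the service so backends (Tempo, New Relic, etc.) can group its spans.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> None:
    # Each request becomes a trace; nested spans capture sub-operations.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # real work would go here

if __name__ == "__main__":
    handle_request("ord-42")
    provider.shutdown()  # flush buffered spans before exit

The Collector's own configuration (receivers, processors, exporters) then decides where those spans are routed, which is the "export pipeline" piece described above.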

Posted 2 months ago

Apply

1.0 - 5.0 years

0 Lacs

Chandigarh

On-site

You will be part of our team as a Junior DevOps Engineer, where you will contribute to building, maintaining, and optimizing our cloud-native infrastructure. Your role will involve collaborating with senior DevOps engineers and development teams to automate deployments, monitor systems, and ensure the high availability, scalability, and security of our applications.

Your key responsibilities will include managing and optimizing Kubernetes (EKS) clusters, Docker containers, and Helm charts for deployments. You will support CI/CD pipelines using tools like Jenkins, Bitbucket, and GitHub Actions, and help deploy and manage applications using ArgoCD for GitOps workflows. Monitoring and troubleshooting infrastructure will be an essential part of your role, utilizing tools such as Grafana, Prometheus, Loki, and OpenTelemetry. Working with AWS services like EKS, ECR, ALB, EC2, VPC, S3, and CloudFront will also be crucial to ensuring reliable cloud infrastructure. Automating infrastructure provisioning using IaC tools like Terraform and Ansible will be another key responsibility. Additionally, you will assist in maintaining Docker image registries and collaborate with developers to enhance observability, logging, and alerting while adhering to security best practices for cloud and containerized environments.

To excel in this role, you should have a basic understanding of Kubernetes, Docker, and Helm, along with familiarity with AWS cloud services like EKS, EC2, S3, VPC, and ALB. Exposure to CI/CD tools such as Jenkins and GitHub/Bitbucket pipelines, basic scripting skills (Bash, Python, or Groovy), and knowledge of observability tools like Prometheus, Grafana, and Loki will be beneficial. Understanding of GitOps (ArgoCD) and infrastructure as code (IaC), experience with Terraform/CloudFormation, and knowledge of Linux administration and networking are also required.

This is a full-time position that requires you to work in person. If you are interested in this opportunity, please feel free to reach out to us at +91 6284554276.
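
The monitoring duties above typically involve querying Prometheus directly during troubleshooting. A minimal sketch, assuming a Prometheus server at the URL below (the query and address are illustrative, not from the listing):

# Illustrative sketch: check for scrape targets reporting down via the
# Prometheus HTTP API instant-query endpoint.
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed address

def scrape_targets_down() -> list[str]:
    """Return instance labels for targets whose `up` metric is 0."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [sample["metric"].get("instance", "unknown") for sample in results]

if __name__ == "__main__":
    down = scrape_targets_down()
    print("all targets up" if not down else f"down targets: {down}")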

Posted 2 months ago

Apply

10.0 - 15.0 years

20 - 30 Lacs

Mumbai, Powai

Work from Office

Notice period: Immediate to 30 days, or currently serving notice period.

Job Responsibilities: Engineer and automate various database platforms and services. Assist in the ongoing process of rationalizing the technology and usage of databases. Participate in the creation and implementation of operational policies, procedures, and documentation. Provide database administration and production support for databases hosted on private cloud across all regions. Perform database version upgrades and security patching. Performance tuning. Database replication administration. Collaborate with development teams and use coding skills to design and implement database solutions for new and existing applications. Willing to work on weekends and outside office hours as part of a wider scheduled support group. Willingness to learn and adapt to new technologies and methodologies.

Required Skills (Mandatory): The candidate must have the following skills and experience: 10+ years of experience in MSSQL DBA administration. Proven ability to navigate Linux operating systems and use command-line tools proficiently. Clear understanding of MS SQL availability groups. Exposure to scripting languages like Python and automation tools like Ansible. A proven, effective, and efficient troubleshooting skill set. Ability to cope well under pressure. Strong organizational skills and practical sense. Quick and eager to learn and explore both technical and semi-technical work types. Engineering mindset.

Preferred Skills: Experience/knowledge of the following will be an added advantage (but is not mandatory): experience in MySQL and Oracle; experience in infrastructure automation development; experience with monitoring systems and log management/reporting tools (e.g., Loki, Grafana, Splunk).

Posted 2 months ago

Apply

7.0 - 12.0 years

10 - 15 Lacs

Pune

Work from Office

Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 7 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments at petabyte data scales, helping teams enhance reliability, detect anomalies early, and drive operational excellence. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe.

What You'll Do:
- Configure and manage observability agents across AWS, Azure & GCP.
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack.
- Work with different language stacks such as Java, Ruby, Python, and Go.
- Instrument services using OpenTelemetry and integrate telemetry pipelines.
- Optimize telemetry metrics storage using time-series databases such as Mimir and NoSQL DBs.
- Create dashboards, set up alerts, and track SLIs/SLOs.
- Enable RCA and incident response using observability data.
- Secure the observability pipeline.

You Bring:
- BE/BTech/MTech (CS/IT or MCA), with an emphasis on Software Engineering.
- Strong skills in reading and interpreting logs, metrics, and traces.
- Proficiency with the LGTM (Loki, Grafana, Tempo, Mimir) or a similar stack: Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices.
- Clear documentation of observability processes, logging standards & instrumentation guidelines.
- Ability to proactively identify, debug, and resolve issues using observability data.
- Focus on maintaining data quality and integrity across the observability pipeline.
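
For the log-reading and RCA side of the LGTM stack, a minimal sketch of pulling recent error lines from Loki over its HTTP API (the Loki address, label names, and LogQL selector are illustrative assumptions, not from the listing):

# Illustrative sketch: query Loki's query_range API with a simple LogQL filter.
import time

import requests

LOKI_URL = "http://localhost:3100"  # assumed address

def recent_errors(app: str, minutes: int = 15, limit: int = 50) -> list[str]:
    """Pull recent error lines for one app via Loki's query_range endpoint."""
    now_ns = int(time.time() * 1e9)
    params = {
        "query": f'{{app="{app}"}} |= "error"',  # LogQL: label selector + line filter
        "start": now_ns - minutes * 60 * 10**9,  # timestamps are nanosecond epochs
        "end": now_ns,
        "limit": limit,
    }
    resp = requests.get(f"{LOKI_URL}/loki/api/v1/query_range", params=params, timeout=10)
    resp.raise_for_status()
    streams = resp.json()["data"]["result"]
    return [line for stream in streams for _, line in stream["values"]]

if __name__ == "__main__":
    for line in recent_errors("payments"):
        print(line)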

Posted 2 months ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Senior Software DevOps Engineer, you will be responsible for leading the design, implementation, and evolution of telemetry pipelines and DevOps automation to enable next-generation observability for distributed systems. Your main focus will be on leveraging a deep understanding of OpenTelemetry architecture along with strong DevOps practices to construct a reliable, high-performance, and self-service observability platform that spans hybrid cloud environments such as AWS and Azure. Your primary goal will be to provide engineering teams with actionable insights through rich metrics, logs, and traces while promoting automation and innovation at all levels. In your role, you will be involved in the following key activities:

Observability Strategy & Implementation: Design and manage scalable observability solutions using OpenTelemetry (OTel), including deploying OTel Collectors for ingesting and exporting telemetry data, guiding teams on instrumentation best practices, building telemetry pipelines for data routing, and utilizing processors and extensions for advanced enrichment and routing.

DevOps Automation & Platform Reliability: Take ownership of the CI/CD experience using GitLab Pipelines, integrate infrastructure automation with Terraform, Docker, and scripting in Bash and Python, and develop resilient and reusable infrastructure-as-code modules across AWS and Azure ecosystems.

Cloud-Native Enablement: Create observability blueprints for cloud-native applications on AWS and Azure, optimize cost and performance of telemetry pipelines, and ensure SLA/SLO adherence for observability services.

Monitoring, Dashboards, and Alerting: Build and maintain role-based dashboards in tools like Grafana and New Relic for real-time visibility into service health and business KPIs, implement alerting best practices, and integrate with incident management systems.

Innovation & Technical Leadership: Drive cross-team observability initiatives to reduce MTTR and enhance engineering velocity, lead innovation projects such as self-service observability onboarding and AI-assisted root cause detection, and mentor engineering teams on telemetry standards and operational excellence.

Qualifications and Skills:
- 10+ years of experience in DevOps, Site Reliability Engineering, or Observability roles
- Deep expertise with OpenTelemetry, GitLab CI/CD, Terraform, Docker, and scripting languages (Python, Bash, Go)
- Hands-on experience with AWS and Azure services, cloud automation, and cost optimization
- Proficiency with observability backends such as Grafana, New Relic, Prometheus, and Loki
- Strong passion for building automated, resilient, and scalable telemetry pipelines
- Excellent documentation and communication skills to drive adoption and influence engineering culture

Nice to Have:
- Certifications in AWS, Azure, or Terraform
- Experience with OpenTelemetry SDKs in Go, Java, or Node.js
- Familiarity with SLO management, error budgets, and observability-as-code approaches
- Exposure to event streaming technologies (Kafka, RabbitMQ), Elasticsearch, Vault, and Consul
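
Dashboard maintenance at this scale is often scripted rather than clicked together. A minimal sketch of pushing a dashboard through Grafana's HTTP API; the Grafana URL and token are assumptions, and the dashboard JSON here is a trivial placeholder rather than a real dashboard definition.

# Illustrative sketch: create/overwrite a Grafana dashboard from a script.
import requests

GRAFANA_URL = "http://localhost:3000"  # assumed
API_TOKEN = "REPLACE_ME"               # assumed service-account token

def push_dashboard() -> str:
    payload = {
        "dashboard": {
            "id": None,      # None asks Grafana to create a new dashboard
            "uid": None,
            "title": "Service Health (generated)",
            "panels": [],    # panels omitted in this sketch
        },
        "folderId": 0,
        "overwrite": True,   # makes re-runs from CI idempotent
    }
    resp = requests.post(
        f"{GRAFANA_URL}/api/dashboards/db",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["url"]

if __name__ == "__main__":
    print("dashboard available at:", push_dashboard())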

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining as a talented SDE1 - DevOps Engineer with the exciting opportunity to contribute to building a top-notch DevOps infrastructure that can scale to accommodate the next 100M users. As an ideal candidate, you will be expected to tackle a variety of challenges with enthusiasm and take full ownership of your responsibilities.

Your main responsibilities will include running a highly available cloud-based software product on AWS, designing and implementing new systems in close collaboration with the Software Development team, setting up and maintaining CI/CD systems, and automating the deployment of software. You will also be tasked with continuously enhancing the security posture and operational efficiency of the Amber platform, as well as optimizing operational costs.

To excel in this role, you should have 2-3 years of experience in a DevOps/SRE role. You must have hands-on experience with AWS services such as ECS, EKS, RDS, ElastiCache, and CloudFront, as well as familiarity with Google Cloud Platform. Proficiency in Infrastructure as Code tools like Terraform, CI/CD tools like Jenkins and GitHub Actions, and scripting languages such as Python and Bash is essential. Additionally, you should have a strong grasp of SCM in GitHub, networking concepts, and experience with observability and monitoring tools like Grafana, Loki, Prometheus, and ELK. Prior exposure to an on-call rotation and mentoring junior DevOps Engineers would be advantageous. While not mandatory, knowledge of NodeJS and Ruby, including their platforms and workflows, would be considered a plus for this role.
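
Operational scripting against the AWS services listed above is a routine part of this kind of role. A minimal sketch, assuming AWS credentials are already configured and an ECS cluster name that is purely illustrative:

# Illustrative sketch: flag ECS services running fewer tasks than desired.
import boto3

def underprovisioned_services(cluster: str = "amber-prod") -> list[str]:
    """List ECS services whose running task count is below the desired count."""
    ecs = boto3.client("ecs")
    lagging = []
    paginator = ecs.get_paginator("list_services")
    for page in paginator.paginate(cluster=cluster):
        if not page["serviceArns"]:
            continue
        described = ecs.describe_services(cluster=cluster, services=page["serviceArns"])
        for svc in described["services"]:
            if svc["runningCount"] < svc["desiredCount"]:
                lagging.append(svc["serviceName"])
    return lagging

if __name__ == "__main__":
    print(underprovisioned_services())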

Posted 2 months ago

Apply

3.0 - 6.0 years

12 - 22 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Work from Office

In the role of a DevOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure and CI/CD pipelines necessary to support our Generative AI projects. Furthermore, you will have the opportunity to critically assess and influence the engineering design, architecture, and technology stack across multiple products, extending beyond your immediate focus.

- Design, deploy, and manage scalable, reliable, and secure Azure cloud infrastructure to support Generative AI workloads.
- Implement monitoring, logging, and alerting solutions to ensure the health and performance of AI applications.
- Optimize cloud resource usage and costs while ensuring high performance and availability.
- Work closely with Data Scientists and Machine Learning Engineers to understand their requirements and provide the necessary infrastructure and tools.
- Automate repetitive tasks, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, and Azure Resource Manager (ARM).
- Use APM (Application Performance Monitoring) to identify and resolve performance bottlenecks.
- Maintain comprehensive documentation for infrastructure, processes, and workflows.

Must Have Skills:
- Extensive knowledge of Azure services: Kubernetes, Azure App Service, Azure API Management (APIM), Application Gateway, AAD, GitHub Actions, Istio, and Datadog. Proficiency in CI/CD and orchestration tools such as Jenkins, GitLab CI/CD, and Azure DevOps.
- Knowledge of API management platforms like APIM for API governance, security, and lifecycle management.
- Expertise in monitoring and observability tools like Datadog, Loki, Grafana, and Prometheus for comprehensive monitoring, logging, and alerting solutions. Good scripting skills (Python, Bash, PowerShell).
- Experience with infrastructure as code (Terraform, ARM Templates).
- Experience in optimizing cloud resource usage and costs using insights from Azure cost and monitoring metrics.

Posted 2 months ago

Apply

5.0 - 8.0 years

6 - 16 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Job Title: DevOps Engineer
Location: Mumbai/Bangalore/Chennai/Delhi NCR/Hyderabad
Experience Required: 5+ Years

Job Description - Key Responsibilities:
• Implement and maintain the cloud infrastructure
• Ensure the smooth operation of the environment
• Evaluate new technologies in the field of infrastructure automation and cloud computing
• Look for opportunities to improve performance, reliability, and automation
• Provide DevOps capability to teammates and customers
• Perform code deployments
• Release management activities
• Resolve incidents and change requests
• Document solutions and communicate them to users
• Perform optimizations on existing solutions
• Diagnose, troubleshoot, and resolve issues, ensuring smooth operation of services
• Show attitude and aptitude for owning responsibility for your own work and collaborate with other team members on their activities
• Update job knowledge through self-learning or by participating in learning initiatives provided by the organization

Required Skills & Qualifications:
• Bachelor's degree in IT, computer science, computer engineering, or similar
• 6 years of overall experience with 3+ years as a DevOps Engineer
• Advanced experience with cloud infrastructure / cloud services (preferably on Microsoft Azure)
• Container orchestration (Kubernetes, Docker, Helm)
• Experience with Linux, including scripting (Bash, Python)
• Log and metrics management (ELK Stack); monitoring (Prometheus, Loki, Grafana, Dynatrace)
• Infrastructure as code / deployment and configuration automation (Terraform)
• Continuous Integration / Continuous Delivery (GitLab CI, Jenkins, Nexus, etc.)
• Infrastructure security principles
• Advanced experience in Helm and CI/CD pipelines
• Advanced experience in configuration of DevOps tools such as Jenkins, SonarQube, Nexus, etc.
• Exposure to SDLC & Agile processes
• Experience with SSO integrations
• Knowledgeable about AI tools and their efficient usage in day-to-day work
• Attitude, soft & communication skills
• Experience in handling technically critical escalated situations, driving a team of experts, and coming up with best-in-class workarounds / solutions
• Critical thinking generated by observation, experience, reflection, reasoning, and communication
• DevOps mindset (you build it, you run it; taking end-to-end responsibility and accountability)
• Able to demonstrate how customer-centric thinking is expressed and reinforced through the digital product design process
• Fluent English (written and spoken) is a must; other languages (e.g., German, French) are a plus

Nice to Have:
• Knowledge of Python
• Databases (e.g., PostgreSQL, Elasticsearch)

Posted 2 months ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Bengaluru

Work from Office

The Opportunity: Are you a self-starter with a strong background in UI development, automation, and cloud technologies, who thrives in a collaborative environment? If so, you'll find an exciting opportunity on our team, where you'll engage in innovative projects, deliver impactful demos, and work closely with diverse experts to drive real-world customer outcome solutions. This team strives to promote continuous learning and growth in a flexible and supportive culture.

About the Team: The team for this role is part of the Solutions & Performance Engineering organization within R&D at Nutanix, a global organization which operates out of various geographic locations. The team is known for its collaborative culture, where innovation and continuous learning are highly valued. The mission of the Solutions & Performance Engineering team is to engage customers on their technological and business challenges, leverage advanced technologies to develop impactful solutions, and provide efficient, seamless automation processes for clients worldwide.

Your Role: We are seeking a highly skilled Front-End Engineer to design, build, and optimize user interfaces with a focus on scalability and efficiency that empower our engineering teams with deep insights into system performance. This role is ideal for someone with strong React.js expertise, a passion for building high-performing UIs, and a problem-solving mindset. You'll work closely with backend engineers and infrastructure teams to develop dashboards, integrate with APIs, and automate the visualization of complex data. Your work will help drive decisions, detect performance regressions, and streamline infrastructure automation workflows.

1. UI/UX Design & Front-End Development: Build scalable and responsive front-end applications using React.js. Optimize UI/UX by managing cookies, caching, and performance tuning for large-scale apps (1,000+ pages). Revamp and modernize legacy front-end codebases for better maintainability and performance. Integrate with microservices-based backend architectures to ensure seamless data flow. Collaborate with design teams to create intuitive and visually appealing user interfaces.

2. Data Visualization & Insights Generation: Develop interactive dashboards to visualize system performance trends and analytics. Work with APIs and performance benchmarks to translate backend data into actionable visual insights. Collaborate with backend engineers to define and optimize API contracts for UI needs. Utilize tools like Figma for UI design and translate wireframes into high-quality front-end components.

What You Will Bring

Required Skills & Experience: Proficiency in React.js, JavaScript, and front-end architecture. Strong experience with UI/UX design principles and tools such as Figma. Familiarity with REST APIs and microservices integration. Version control with Git; experience in CI/CD pipelines, Docker, and Kubernetes. Experience building UIs that scale and perform efficiently under large data loads.

Soft Skills & Qualities: Problem Solver: can troubleshoot complex issues and design innovative, scalable solutions. Effective Communicator: comfortable explaining technical concepts to both engineers and non-technical stakeholders. Team Player: works well across teams and contributes to a collaborative, solution-oriented environment. Self-Starter: independent learner who adapts quickly to new technologies and challenges. Detail-Oriented: produces high-quality, efficient, and reliable code. Accountable: takes ownership of tasks and delivers end-to-end solutions. Organized: strong time management and prioritization skills in fast-paced environments.

Preferred / Bonus Skills: Experience with distributed systems and cloud-native architectures. Familiarity with observability tools (e.g., Prometheus, Grafana, Loki, Jaeger, ELK stack). Background in cloud infrastructure automation using AWS, Azure, GCP, or OpenStack. Hands-on experience with infrastructure as code and workload orchestration tools like Terraform, Ansible, or Kubernetes.

Posted 2 months ago

Apply

3.0 - 6.0 years

10 - 14 Lacs

Bengaluru

Hybrid

Hi all, we are looking for a DevOps Engineer.
Experience: 3 - 6 years
Notice period: Immediate - 15 days
Location: Bengaluru

Description:
Job Title: DevOps Engineer with 4+ years of experience

Job Summary: We're looking for a dynamic DevSecOps Engineer to lead the charge in embedding security into our DevOps lifecycle. This role focuses on implementing secure, scalable, and observable cloud-native systems, leveraging Azure, Kubernetes, GitHub Actions, and security tools like Black Duck, SonarQube, and Snyk.

Key Responsibilities:
• Architect, deploy, and manage secure Azure infrastructure using Terraform and Infrastructure as Code (IaC) principles
• Build and maintain CI/CD pipelines in GitHub Actions, integrating tools such as Black Duck, SonarQube, and Snyk
• Operate and optimize Azure Kubernetes Service (AKS) for containerized applications
• Configure robust monitoring and observability stacks using Prometheus, Grafana, and Loki
• Implement incident response automation with PagerDuty
• Manage and support MS SQL databases and perform basic operations on Cosmos DB
• Collaborate with development teams to promote security best practices across the SDLC
• Identify vulnerabilities early and respond to emerging security threats proactively

Required Skills:
• Deep knowledge of Azure services, AKS, and Terraform
• Strong proficiency with Git, GitHub Actions, and CI/CD workflow design
• Hands-on experience integrating and managing Black Duck, SonarQube, and Snyk
• Proficiency in setting up monitoring stacks: Prometheus, Grafana, and Loki
• Familiarity with PagerDuty for on-call and incident response workflows
• Experience managing MSSQL and understanding of Cosmos DB basics
• Strong scripting ability (Python, Bash, or PowerShell)
• Understanding of DevSecOps principles and secure coding practices
• Familiarity with Helm, Bicep, container scanning, and runtime security solutions

Posted 2 months ago

Apply

2.0 - 4.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Skills Required - Technical areas (hands-on experience in academic projects/internships): Experience with Kubernetes, Jenkins, GitLab, GitHub, CI/CD, Terraform, Linux, Bash, Python, AWS, GCP, GKE, and EKS. Understanding of public/private/hybrid cloud solutions. Own the responsibility for platform management, supporting services, and all related tooling and automation. Proficient in cloud-native technologies, automation, and containerization. Experience in setting up and managing cloud infrastructure and services for a wide range of applications. Some experience in ReactJS/NodeJS, PHP, Python, and UNIX shell, so a background in system-oriented languages is important. Managing and deploying cloud-native applications on Kubernetes clusters, setting up CI/CD pipelines (Jenkins, GitLab, GitHub), database migrations (MySQL, PostgreSQL, Cassandra), and setting up monitoring (Grafana, Loki, Prometheus, Mimir, ELK Stack). Certified in Kubernetes and Jenkins. Experienced in using Terraform to automate infrastructure provisioning. We are looking for bright, passionate, and dedicated people with clearly demonstrated initiative and a history of success in their past positions to join our growing team.
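
Managing applications on Kubernetes clusters often starts with scripted health checks. A minimal sketch using the official Kubernetes Python client, assuming a local kubeconfig (cluster details here are illustrative, not part of the listing):

# Illustrative sketch: list pods that are not in a healthy phase.
from kubernetes import client, config

def pods_not_running() -> list[tuple[str, str, str]]:
    """Return (namespace, pod, phase) for pods that are not Running/Succeeded."""
    config.load_kube_config()  # inside a cluster, use load_incluster_config()
    v1 = client.CoreV1Api()
    unhealthy = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            unhealthy.append((pod.metadata.namespace, pod.metadata.name, phase))
    return unhealthy

if __name__ == "__main__":
    for ns, name, phase in pods_not_running():
        print(f"{ns}/{name}: {phase}")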

Posted 2 months ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 322342. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer - Azure to join our team in Bangalore, Karnataka (IN-KA), India (IN). NTT DATA, Inc. is seeking a talented DevOps Engineer to join our dynamic team. As a leading solution provider company, we are committed to delivering exceptional solutions to our clients. Our success is driven by the dedication and expertise of our employees, who play a vital role in shaping our growth and staying ahead of the competition. By joining our team, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our clients' success.

Primary responsibilities of this role: As a DevOps Engineer you will be responsible for the smooth operation of our customers' infrastructure. You will collaborate closely with internal teams and client organizations, focusing on automation, continuous integration/delivery (CI/CD), infrastructure management, and collaboration to improve software delivery speed and quality for our clients.

What you will do: Support the GCP environment. Engage in Azure DevOps administration. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Respond to platform performance and availability issues. Open and follow tickets with vendor product owners. Manage license assignments and allocations. Install approved Azure Marketplace plugins. Provide general support to app teams for supported DevOps tools. Troubleshoot Azure DevOps issues related to DevOps toolsets and deployment capabilities. Work the general backlog of support tickets. Manage and support artifact management (JFrog). Manage and support SonarQube. Operate as a member of global, distributed teams that deliver quality services. Collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues. Rapidly gain knowledge of emerging cloud technologies and their potential application in the customer environment / impact analysis.

What you will bring: 5+ years of experience in IT. 2-3 years of experience in GCP, GKE, and Azure DevOps as well as general DevOps toolsets. Azure DevOps administration experience. Experience with Agile and Scrum concepts. Solid working knowledge of GitOps concepts and of CI/CD pipeline design and tools including Azure DevOps, Git, Jenkins, JFrog, and SonarQube. Ability to work independently and as a productive team member, actively participating in agile ceremonies. Ability to identify potential issues and take ownership of resolving existing issues. Strong analytical skills, a curious nature, and strong written and verbal communication skills. Team-oriented, motivated, and enthusiastic, with the willingness to take initiative and maintain a positive approach. Ability to work with a virtual team. Excellent communication, presentation, and relationship-building skills. Solid understanding of enterprise operational processes, planning, and operations.

Preferred Skills - Good to have: Grafana experience is a plus. Jira (ticketing tool) is good to have.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success.
As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.

Posted 2 months ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 322341. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer - Azure to join our team in Bangalore, Karnataka (IN-KA), India (IN). NTT DATA, Inc. is seeking a talented DevOps Engineer to join our dynamic team. As a leading solution provider company, we are committed to delivering exceptional solutions to our clients. Our success is driven by the dedication and expertise of our employees, who play a vital role in shaping our growth and staying ahead of the competition. By joining our team, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our clients' success.

Primary responsibilities of this role: As a DevOps Engineer you will be responsible for the smooth operation of our customers' infrastructure. You will collaborate closely with internal teams and client organizations, focusing on automation, continuous integration/delivery (CI/CD), infrastructure management, and collaboration to improve software delivery speed and quality for our clients.

What you will do: Support the GCP environment. Engage in Azure DevOps administration. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Respond to platform performance and availability issues. Open and follow tickets with vendor product owners. Manage license assignments and allocations. Install approved Azure Marketplace plugins. Provide general support to app teams for supported DevOps tools. Troubleshoot Azure DevOps issues related to DevOps toolsets and deployment capabilities. Work the general backlog of support tickets. Manage and support artifact management (JFrog). Manage and support SonarQube. Operate as a member of global, distributed teams that deliver quality services. Collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues. Rapidly gain knowledge of emerging cloud technologies and their potential application in the customer environment / impact analysis.

What you will bring: 5+ years of experience in IT. 2-3 years of experience in GCP, GKE, and Azure DevOps as well as general DevOps toolsets. Azure DevOps administration experience. Experience with Agile and Scrum concepts. Solid working knowledge of GitOps concepts and of CI/CD pipeline design and tools including Azure DevOps, Git, Jenkins, JFrog, and SonarQube. Ability to work independently and as a productive team member, actively participating in agile ceremonies. Ability to identify potential issues and take ownership of resolving existing issues. Strong analytical skills, a curious nature, and strong written and verbal communication skills. Team-oriented, motivated, and enthusiastic, with the willingness to take initiative and maintain a positive approach. Ability to work with a virtual team. Excellent communication, presentation, and relationship-building skills. Solid understanding of enterprise operational processes, planning, and operations.

Preferred Skills - Good to have: Grafana experience is a plus. Jira (ticketing tool) is good to have.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success.
As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.

Posted 2 months ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Req ID: 322318. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Engineer - GCP to join our team in Bangalore, Karnataka (IN-KA), India (IN). NTT DATA, Inc. is seeking a talented DevOps Engineer to join our dynamic team. As a leading solution provider company, we are committed to delivering exceptional solutions to our clients. Our success is driven by the dedication and expertise of our employees, who play a vital role in shaping our growth and staying ahead of the competition. By joining our team, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our clients' success.

Primary responsibilities of this role: As a DevOps Engineer you will be responsible for the smooth operation of our customers' infrastructure. You will collaborate closely with internal teams and client organizations, focusing on automation, continuous integration/delivery (CI/CD), infrastructure management, and collaboration to improve software delivery speed and quality for our clients.

What you will do: Support the GCP environment. Engage in Azure DevOps administration. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Respond to platform performance and availability issues. Open and follow tickets with vendor product owners. Manage license assignments and allocations. Install approved Azure Marketplace plugins. Provide general support to app teams for supported DevOps tools. Troubleshoot Azure DevOps issues related to DevOps toolsets and deployment capabilities. Work the general backlog of support tickets. Manage and support artifact management (JFrog). Manage and support SonarQube. Operate as a member of global, distributed teams that deliver quality services. Collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues. Rapidly gain knowledge of emerging cloud technologies and their potential application in the customer environment / impact analysis.

What you will bring: 5+ years of experience in IT. 2-3 years of experience in GCP, GKE, and Azure DevOps as well as general DevOps toolsets. Azure DevOps administration experience. Experience with Agile and Scrum concepts. Solid working knowledge of GitOps concepts and of CI/CD pipeline design and tools including Azure DevOps, Git, Jenkins, JFrog, and SonarQube. Ability to work independently and as a productive team member, actively participating in agile ceremonies. Ability to identify potential issues and take ownership of resolving existing issues. Strong analytical skills, a curious nature, and strong written and verbal communication skills. Team-oriented, motivated, and enthusiastic, with the willingness to take initiative and maintain a positive approach. Ability to work with a virtual team. Excellent communication, presentation, and relationship-building skills. Solid understanding of enterprise operational processes, planning, and operations.

Preferred Skills - Good to have: Grafana experience is a plus. Jira (ticketing tool) is good to have.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success.
As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.

Posted 2 months ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Hyderabad

Work from Office

About the Role: We are looking for a highly skilled Site Reliability Engineer (SRE) to lead the implementation and management of our observability stack across Azure-hosted infrastructure and .NET Core applications. This role will focus on configuring and managing OpenTelemetry, Prometheus, Loki, and Tempo, along with setting up robust alerting systems across all services, including Azure infrastructure and MSSQL databases. You will work closely with developers, DevOps, and infrastructure teams to ensure the performance, reliability, and visibility of our .NET Core applications and cloud services.

Key Responsibilities:
• Observability Platform Implementation: Design and maintain distributed tracing, metrics, and logging using OpenTelemetry, Prometheus, Loki, and Tempo. Ensure complete instrumentation of .NET Core applications for end-to-end visibility. Implement telemetry pipelines for application logs, performance metrics, and traces.
• Monitoring & Alerting: Develop and manage SLIs, SLOs, and error budgets. Create actionable, noise-free alerts using Prometheus Alertmanager and Azure Monitor. Monitor key infrastructure components, applications, and databases with a focus on reliability and performance.
• Azure & Infrastructure Integration: Integrate Azure services (App Services, VMs, Storage, etc.) with the observability stack. Configure monitoring for MSSQL databases, including performance tuning metrics and health indicators. Use Azure Monitor, Log Analytics, and custom exporters where necessary.
• Automation & DevOps: Automate observability configurations using Terraform, PowerShell, or other IaC tools. Integrate telemetry validation and health checks into CI/CD pipelines. Maintain observability as code for repeatable deployments and easy scaling.
• Resilience & Reliability Engineering: Conduct capacity planning to anticipate scaling needs based on usage patterns and growth. Define and implement disaster recovery strategies for critical Azure-hosted services and databases. Perform load and stress testing to identify performance bottlenecks and validate infrastructure limits. Support release engineering by integrating observability checks and rollback strategies in CI/CD pipelines. Apply chaos engineering practices in lower environments to uncover potential reliability risks proactively.
• Collaboration & Documentation: Partner with engineering teams to promote observability best practices in .NET Core development. Create dashboards (Grafana preferred) and runbooks for system insights and incident response. Document monitoring standards, troubleshooting guides, and onboarding materials.

Required Skills and Experience:
• 4+ years of experience in SRE, DevOps, or infrastructure-focused roles.
• Deep experience with .NET Core application observability using OpenTelemetry.
• Proficiency with Prometheus, Loki, Tempo, and related observability tools.
• Strong background in Azure infrastructure monitoring, including App Services and VMs.
• Hands-on experience monitoring MSSQL databases (deadlocks, query performance, etc.).
• Familiarity with Infrastructure as Code (Terraform, Bicep) and scripting (PowerShell, Bash).
• Experience building and tuning alerts, dashboards, and metrics for production systems.

Preferred Qualifications:
• Azure certifications (e.g., AZ-104, AZ-400).
• Experience with Grafana, Azure Monitor, and Log Analytics integration.
• Familiarity with distributed systems and microservice architectures.
• Prior experience in high-availability, regulated, or customer-facing environments.
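
The SLI/SLO and error-budget work mentioned above boils down to simple arithmetic. A minimal sketch of that math (the example numbers are made up, not from the posting):

# Illustrative sketch: error budget and burn-rate arithmetic behind SLO alerts.
def error_budget(slo: float) -> float:
    """Allowed failure fraction, e.g. 0.001 for a 99.9% SLO."""
    return 1.0 - slo

def burn_rate(bad_events: int, total_events: int, slo: float) -> float:
    """How fast the budget is being consumed; 1.0 means exactly on budget."""
    if total_events == 0:
        return 0.0
    observed_error_rate = bad_events / total_events
    return observed_error_rate / error_budget(slo)

if __name__ == "__main__":
    # 99.9% availability SLO over a 1h window: 120 failed of 50,000 requests.
    rate = burn_rate(bad_events=120, total_events=50_000, slo=0.999)
    print(f"burn rate: {rate:.1f}x")  # 2.4x -> a 30-day budget gone in ~12.5 days

Multi-window burn-rate alerts in Alertmanager or Azure Monitor are typically thresholds on exactly this ratio, with high multipliers for short windows and lower ones for long windows.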

Posted 3 months ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Chennai, Bengaluru

Work from Office

Observability-related: Knowledge of observability - monitoring and alerting. Experience with the Kube Prometheus stack (Kubernetes, Prometheus, Loki, Grafana, Alertmanager); specifically, must be able to plan, build, test, and launch an observability platform end-to-end. Experience with AWS. Proficiency in Python. Recent portfolio projects will be required for review, and interviews will include coding challenges. PagerDuty experience. Experience with CI/CD pipelines. Infrastructure as Code experience. Strong troubleshooting and problem-solving skills.

Mandatory skills: Looking for senior SRE profiles who are strong in coding/automation (Python), Terraform, Kubernetes, Prometheus, Loki, Grafana, Alertmanager, PagerDuty, and AWS.
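
Wiring alerting into PagerDuty usually goes through the Events API v2. A minimal sketch, assuming a valid routing key; the summary and source strings are illustrative, not from the listing.

# Illustrative sketch: trigger (or deduplicate into) a PagerDuty incident.
import requests

PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def trigger_alert(routing_key: str, summary: str, source: str, severity: str = "critical") -> str:
    """Send a trigger event and return the dedup key PagerDuty assigns."""
    payload = {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {
            "summary": summary,
            "source": source,
            "severity": severity,  # critical | error | warning | info
        },
    }
    resp = requests.post(PAGERDUTY_EVENTS_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["dedup_key"]

if __name__ == "__main__":
    key = trigger_alert(
        "REPLACE_WITH_ROUTING_KEY",
        "Loki ingester error rate high",
        "observability-platform",
    )
    print("dedup key:", key)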

Posted Date not available

Apply

10.0 - 20.0 years

30 - 40 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

We're Hiring: DevOps Architect | Navi Mumbai (CBD Belapur). Join a high-impact Banking Payment Aggregator project where you'll lead large-scale DevOps initiatives in a fast-paced, compliance-driven environment.

Job Title: DevOps Architect
Location: Navi Mumbai (CBD Belapur)
Employment Type: Full-time (alternate Saturdays working)
Experience: 10+ years in DevOps (3+ years in a leadership role)

Key Responsibilities: Lead the architecture and deployment of scalable, secure microservices on OpenShift and Kubernetes. Drive CI/CD pipelines, infrastructure automation, and service mesh (Istio/Linkerd) integrations. Collaborate with stakeholders to ensure compliance with PCI-DSS, SOX, and banking regulations. Mentor and manage a high-performing DevOps team. Continuously improve performance, observability, and security in a containerized environment.

Mandatory Skills: Deep understanding of microservices, OpenShift, service meshes, Istio, Helm charts, Loki, and GitLab. Strong experience with Docker, cloud platforms (AWS/Azure/GCP), and Infrastructure as Code. Hands-on with scripting tools (e.g., Bash, Python) and configuration management (Ansible, Chef, Puppet).

Good to Have: Exposure to the banking/payments domain. Experience with tools like Prometheus, Grafana, and Terraform.

This is a great leadership role for someone passionate about driving innovation in cloud-native DevOps environments. Ready to take the lead? Share your resume at rahul@furation.tech

Posted Date not available

Apply

10.0 - 17.0 years

30 - 45 Lacs

Pune, Chennai, Bengaluru

Hybrid

Please find below the JD and company portfolio for your reference:

Senior Observability Specialist
Location: [Chennai, Pune, Bangalore]
Employment Type: Full-time
Experience Required: [15-18]

Job Summary: We are seeking a highly skilled Senior Observability Specialist to design, implement, and manage end-to-end observability strategies across cloud and on-premises environments. This role requires expertise in modern monitoring, logging, and tracing tools, ensuring system reliability, performance optimization, and proactive incident detection. The ideal candidate will have experience with Dynatrace, Datadog, and various open-source solutions, including Grafana, Loki, Tempo, Mimir, and Prometheus.

Key Responsibilities: Design and implement full-stack observability architectures that provide seamless monitoring, logging, and tracing capabilities. Define best practices for observability across hybrid cloud, multi-cloud, and on-premises environments. Ensure scalability, availability, and resilience of monitoring solutions in high-traffic applications.

Monitoring & Dashboarding Architecture: Architect Grafana-based observability platforms for real-time visualization and analysis of metrics. Establish Prometheus-based metric collection pipelines optimized for high-volume environments. Integrate Dynatrace and Datadog into cloud-native infrastructure for proactive monitoring.

Centralized Logging & Distributed Tracing: Design and implement centralized logging solutions using Loki, ensuring efficient log ingestion, indexing, and querying. Develop distributed tracing strategies with Tempo to enhance performance monitoring in microservices architectures. Optimize Mimir-based metric storage for seamless data retrieval and scalability.

Observability Strategy & Automation: Define and implement observability-driven DevOps methodologies to improve system reliability. Lead automation initiatives for log analysis, alerting, and anomaly detection using machine learning models. Architect automated alerting workflows using Prometheus Alertmanager, Dynatrace AI alerts, and Datadog event notifications. Ensure efficient KPI tracking and proactive troubleshooting based on observability insights.

Scripting & API Integrations: Develop custom API integrations using Python or Go to query, retrieve, and process monitoring data. Architect event-driven observability pipelines for automated data collection and reporting.

DevOps & CI/CD Integration: Collaborate with DevOps teams to integrate observability tooling within CI/CD pipelines. Optimize system performance and resource utilization through proactive monitoring. Advocate for best practices in observability-driven software development.

Cloud-Native Observability & DevOps Alignment: Design observability strategies tailored for Kubernetes-based microservices and cloud-native architectures. Collaborate with DevOps teams to embed observability practices within CI/CD pipelines for continuous monitoring. Optimize logging and metrics pipelines to support containerized and serverless environments.

Qualifications & Skills - Architectural Focus:
• Strong expertise in designing observability frameworks across Dynatrace, Datadog, Grafana, Loki, Tempo, Mimir, and Prometheus.
• Proficiency in observability architecture, ensuring scalable and reliable monitoring solutions.
• Advanced experience in scripting with Python or Go for custom API integrations.
• Deep understanding of DevOps methodologies, CI/CD best practices, and cloud-native observability tools.
• Experience in microservices architecture and distributed systems monitoring. • Ability to troubleshoot bottlenecks, optimize performance, and implement predictive observability insights. Preferred Certifications (Optional): • Certified Kubernetes Administrator (CKA) • AWS Certified DevOps Engineer • Dynatrace Performance Monitoring Certification • Prometheus Certified Associate. Who we are CitiusTech - Shaping Healthcare Possibilities. CitiusTech is a global IT services, consulting, and business solutions enterprise 100% focused on the healthcare and life sciences industry. We enable 140+ enterprises to build a human-first ecosystem that is efficient, effective, and equitable with deep domain expertise and next-gen technology. With over 8,500 healthcare technology professionals worldwide, CitiusTech powers healthcare digital innovation, business transformation, and industry-wide convergence through next-generation technologies, solutions, and products. Our Purpose We are shaping healthcare possibilities to make our clients businesses successful, which is not just a statement but our purpose, driving us to explore whats next in healthcare. Our goal is clear: to make healthcare better for all more efficient, effective, and equitable. We are investing in people, technology, innovation, and partnerships to create meaningful change. We see technology not just as a tool but as a catalyst that amplifies human ingenuity to solve complex healthcare challenges. 100% healthcare focus | Trusted by 140+ healthcare and life sciences enterprises | 40% of Fortune 500 healthcare enterprises are our clients | #1 Rated as a leader by top analyst firms Our vision To inspire new possibilities for the health ecosystem with technology and human ingenuity. Our commitment To combine the best of IT services, consulting, products, accelerators, and frameworks with a client-first mindset and next-gen tech understanding. Together, we’re humanizing healthcare to make a positive impact on human lives. What drives us At CitiusTech, we believe in making a tangible difference in healthcare. We constantly explore new ways to transform the industry, from AI-driven solutions to advanced data analytics and cloud computing. Our collaborative culture, combined with a relentless drive for excellence, positions us as innovators reshaping the healthcare landscape, one solution at a time. Life@CitiusTech We focus on building highly motivated engineering teams and thought leaders with an entrepreneurial mindset centered on our core values of Passion, Respect, Openness, Unity, and Depth (PROUD) of knowledge . Our success lies in creating a fun, transparent, non-hierarchical, diverse work culture that focuses on continuous learning and work-life balance. Rated by our employees as the ‘Great Place to Work for’ according to the Great Place to Work survey. We offer you comprehensive benefits to ensure you have a long and rewarding career with us. Our EVP Be You Be Awesome is our EVP. It reflects our continuing efforts to create CitiusTech as a great workplace where our employees can thrive, personally and professionally. It encompasses the unique benefits and opportunities we offer to support your growth, well-being, and success throughout your journey with us and beyond. Together with our clients, we are solving some of the greatest healthcare challenges and positively impacting human lives. Welcome to the world of Faster Growth, Higher Learning, and Stronger Impact. 
Here is an opportunity for you to make a difference and collaborate with global leaders to shape the future of healthcare and positively impact human lives. To learn more about CitiusTech, visit https://www.citiustech.com/careers and follow us on Happy applying!

Posted Date not available

Apply
Page 3 of 3

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies