
3020 Datadog Jobs - Page 37

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 6.0 years

12 - 22 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Work from Office

In the role of a DevOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure and CI/CD pipelines that support our Generative AI projects. You will also have the opportunity to assess and influence engineering design, architecture, and the technology stack across multiple products, beyond your immediate focus.

Responsibilities:
- Design, deploy, and manage scalable, reliable, and secure Azure cloud infrastructure to support Generative AI workloads.
- Implement monitoring, logging, and alerting solutions to ensure the health and performance of AI applications (a monitoring sketch follows this listing).
- Optimize cloud resource usage and costs while ensuring high performance and availability.
- Work closely with Data Scientists and Machine Learning Engineers to understand their requirements and provide the necessary infrastructure and tools.
- Automate repetitive tasks, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, and Azure Resource Manager (ARM).
- Use APM (Application Performance Monitoring) tooling to identify and resolve performance bottlenecks.
- Maintain comprehensive documentation for infrastructure, processes, and workflows.

Must-Have Skills:
- Extensive knowledge of Azure services: Kubernetes, Azure App Service, Azure API Management (APIM), Application Gateway, AAD, GitHub Actions, Istio, Datadog.
- Proficiency in CI/CD and orchestration tools such as Jenkins, GitLab CI/CD, and Azure DevOps.
- Knowledge of API management platforms like APIM for API governance, security, and lifecycle management.
- Expertise in monitoring and observability tools like Datadog, Loki, Grafana, and Prometheus for comprehensive monitoring, logging, and alerting.
- Good scripting skills (Python, Bash, PowerShell).
- Experience with infrastructure as code (Terraform, ARM templates).
- Experience optimizing cloud resource usage and costs using insights from Azure Cost Management and Azure Monitor metrics.
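As an illustration of the Datadog-plus-Python-scripting skills this listing asks for, here is a minimal, hedged sketch that creates a metric alert through Datadog's public v1 monitor API. The metric name, tags, thresholds, and notification handle are assumptions made up for the example, not details from the posting.

```python
# Minimal sketch: create a Datadog metric monitor via the public v1 API.
# Assumes DD_API_KEY and DD_APP_KEY are set; query and thresholds are illustrative.
import os
import requests

DD_MONITOR_API = "https://api.datadoghq.com/api/v1/monitor"

monitor = {
    "name": "High p95 latency on GenAI inference service",
    "type": "metric alert",
    # Hypothetical metric/tag names for illustration only.
    "query": "avg(last_5m):p95:trace.http.request.duration{service:genai-inference} > 2",
    "message": "p95 latency above 2s. @slack-devops-alerts",
    "options": {"thresholds": {"critical": 2.0, "warning": 1.5}, "notify_no_data": False},
}

resp = requests.post(
    DD_MONITOR_API,
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=monitor,
    timeout=10,
)
resp.raise_for_status()
print("Created monitor id:", resp.json()["id"])
```

The same monitor could equally be declared in Terraform; the API call is shown here to keep all examples in one language.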

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Get to know Okta

Okta is The World's Identity Company. We free everyone to safely use any technology, anywhere, on any device or app. Our flexible and neutral products, Okta Platform and Auth0 Platform, provide secure access, authentication, and automation, placing identity at the core of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we're looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We're building a world where Identity belongs to you.

Who Is Okta?

Okta is an enterprise-grade identity management service, built from the ground up in the cloud and delivered with an unwavering focus on customer success. With Okta you can manage access across any application, person, or device. Whether the people are employees, partners, or customers, or the applications are in the cloud, on-premises, or on a mobile device, Okta helps you become more secure, make people more productive, and maintain compliance. The Okta service provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.

Location: India Innovation Centre, Bangalore

Position Description: We are looking for engineering graduates who are ready to kickstart their careers! You will join one of our Site Reliability Engineering domains, where you will address real-world challenges and create innovative value for our customers. We are looking for problem solvers who care about making a meaningful impact for our customers and enjoy working as part of a collaborative, distributed team. As a new grad, you will be supported along the way and will have opportunities for continuous learning and mentorship.

Job Duties and Responsibilities:
- Assist in monitoring the health and performance of production systems using internal tools and dashboards.
- Help automate routine tasks through scripting and simple infrastructure-as-code solutions.
- Collaborate with SREs and developers to improve system reliability and deployment workflows.
- Contribute to the creation and maintenance of internal documentation for operational processes and runbooks.
- Participate in testing and validation of system updates, deployments, and configuration changes.
- Learn and apply best practices in observability, alerting, and service-level objectives (SLOs); a worked error-budget example follows this listing.
- Perform basic troubleshooting of services, network issues, and infrastructure components.
- Attend team standups and planning meetings to align with ongoing priorities and projects.
- Take ownership of a small project or deliverable under the guidance of a mentor.
- Write and maintain documentation.

Minimum Required Knowledge, Skills, and Abilities:
- Bachelor's degree in Computer Science.
- Basic understanding of Linux/Unix operating systems and command-line tools.
- Familiarity with networking fundamentals (e.g., DNS, HTTP, TCP/IP).
- Awareness of software development and deployment lifecycles.
- Introductory knowledge of monitoring tools (e.g., Prometheus, Grafana, Nagios, Datadog) and logging frameworks (e.g., ELK stack, Splunk).
- Willingness to learn new technologies and tools quickly.
- Ability to work both independently and as part of a collaborative team.
- Strong attention to detail and a proactive approach to problem-solving.
- Understanding of basic programming concepts, data structures, and algorithms, which will help you contribute to SRE tools and automation efforts.
- Basic understanding of database concepts and SQL.

Skills:
- Proficiency in at least one scripting language (e.g., Python, Go, Bash, or Shell).
- Understanding of cloud computing principles (AWS or GCP preferred).
- Basic troubleshooting and problem-solving capabilities.
- Comfort with Git and version control workflows.
- Good communication skills, especially in explaining technical concepts.
- Enjoyment of a highly collaborative, remote-friendly environment.
- Basic knowledge of container technologies like Docker and orchestration tools like Kubernetes.
- Familiarity with Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation, and configuration management tools like Ansible, Puppet, or Chef, is a plus.

What You Can Look Forward to as a Full-Time Employee or Intern at Okta:
- Projects with real-world impact
- Mentorship and career growth
- Inclusive culture and fun communities
- Social impact opportunities

What you can look forward to as a Full-Time Okta employee:
- Amazing Benefits
- Making Social Impact
- Developing Talent and Fostering Connection + Community at Okta

Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work, so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding.

Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/.
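Since the listing calls out service-level objectives, here is a small worked example of the error-budget arithmetic behind an SLO. The 99.9% target and the request counts are illustrative assumptions, not numbers from the posting.

```python
# Sketch: error-budget math for a service-level objective (SLO).
# The 99.9% target and request counts are illustrative assumptions.
SLO_TARGET = 0.999           # 99.9% of requests should succeed
total_requests = 12_500_000  # requests served this 30-day window (example)
failed_requests = 9_800      # requests that violated the SLI (example)

error_budget = (1 - SLO_TARGET) * total_requests  # failures we can afford
budget_spent = failed_requests / error_budget     # fraction of budget used

print(f"Error budget: {error_budget:,.0f} requests")   # 12,500 requests
print(f"Budget consumed: {budget_spent:.1%}")          # 78.4%
if budget_spent > 1.0:
    print("SLO breached: freeze risky deploys, prioritize reliability work.")
```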

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Equifax is where you can power your possible. We seek individuals ready to achieve their potential, develop new skills, and collaborate with bright minds.

The Technology Operations Resilience Center - Incident Management Supervisor will lead a team providing 24x7 support for event and incident management of all Equifax applications and infrastructure. The primary goal is to identify and mitigate incidents proactively. This role requires partnership with business sponsors, project managers, application support, networking, system administrators, and business owners.

What You'll Do
- Lead and manage a team of Incident Coordinators / Managers in India.
- Monitor Equifax applications and infrastructure.
- Perform initial analysis of alert events and guide the team in determining next steps (a triage sketch follows this listing).
- Ensure the team performs basic system administration tasks to provide Level 1 NixSA and WinSA services.
- Lead and participate actively in Incident Management bridge lines and chats.
- Oversee the team's coordination of low-priority issues, ensuring proper team engagement, incident investigation progress, timely resolution, and accurate documentation.
- Oversee fault handling and escalation (identifying and responding to faults, liaising with 3rd-party suppliers, and handling escalation).
- Ensure 24x7 support coverage with team shift management across time zones.
- Provide an "eyes on glass" presence, ensuring immediate identification of system degradation or failure, and guide the team in the same.
- Provide the team with tools, training, and guidance to react to alerting, provide first-level analysis, and perform mitigation actions.
- Manage communication with external customers during overflow situations.
- Mentor, train, and develop Incident Coordinators, fostering a high-performing team environment.
- Conduct performance reviews and provide feedback to team members.
- Ensure adherence to ITIL Incident Management processes and Equifax policies.
- Drive continuous improvement initiatives within the team and the incident management process.

What Experience You Need
- A Bachelor's Degree in a technology field OR 5+ years of equivalent work experience.
- English (B2+).
- High school diploma.
- 3+ years of experience in Incident Management or a related field.
- 2+ years of experience in a supervisory or team lead role.
- Experience managing a team in India or remotely.

What Could Set You Apart
- Experience in any of the following technologies: ServiceNow, PagerDuty, Datadog, SolarWinds, GCP, Statuspage.
- Experience with performance management and team development.

We offer comprehensive compensation and healthcare packages, 401k matching, paid time off, and organizational growth potential. Equifax is an Equal Opportunity Employer.
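To make the "initial analysis of alert events" duty concrete, here is a hedged, generic sketch of alert triage logic: group incoming events by service and decide which warrant escalation. The event shape, severity labels, and threshold are assumptions invented for the example.

```python
# Sketch: group raw alert events by service and flag repeat offenders
# for escalation. Event shape and thresholds are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AlertEvent:
    service: str
    severity: str  # "P1".."P4"

ESCALATE_AFTER = 3  # repeated alerts for one service within the window

def triage(events: list[AlertEvent]) -> dict[str, str]:
    """Return a per-service action: 'escalate' or 'monitor'."""
    counts = Counter(e.service for e in events)
    actions = {}
    for service, n in counts.items():
        has_p1 = any(e.service == service and e.severity == "P1" for e in events)
        actions[service] = "escalate" if has_p1 or n >= ESCALATE_AFTER else "monitor"
    return actions

events = [AlertEvent("payments", "P3"), AlertEvent("payments", "P3"),
          AlertEvent("payments", "P4"), AlertEvent("auth", "P1")]
print(triage(events))  # {'payments': 'escalate', 'auth': 'escalate'}
```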

Posted 4 weeks ago

Apply

7.0 - 12.0 years

19 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Project description

We're seeking a solid and creative AWS Cloud DevOps engineer eager to solve scale problems and work on cutting-edge and open-source technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture strives to solve challenging problems through product engineering based on hypothesis testing, empowering people to come up with ideas. We do it with a truly flexible environment, high-impact projects in Agile environments, a culture focused on results, training, and strong support to grow your career. In this project, you will be a member of the Information Technology Team, within the Information Technology Division.

Responsibilities
7+ years of experience as an AWS DevOps Engineer with technical expertise in build and release management, covering:
- ECS Fargate (a service health-check sketch follows this listing)
- CloudFormation
- ElastiCache Redis
- OpenSearch
- Solace/ActiveMQ
- GitHub and GitHub Actions
- Route 53
- DR setup (Active/Active or Active/Passive)
- Monitoring tools: Kibana, Datadog, Dynatrace, or similar

Skills

Must have:
- Strong communication skills.
- Hands-on AWS experience; 5+ years as a senior DevOps engineer and 10+ years of overall IT experience.
- 7+ years of CI/CD experience.
- A minimum of 2-3 years of lead experience.
- Experience doing production support and the ability to guide the team.
- Able to drive innovation and leadership.

Nice to have:
- EKS
- DocumentDB, DynamoDB, Neptune
- Harness
- Quantum Metrics
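As a small illustration of the ECS Fargate operations work this listing describes, here is a hedged boto3 sketch that compares desired versus running task counts for a set of services. The cluster, service names, and region are hypothetical.

```python
# Sketch: compare desired vs. running task counts for ECS Fargate services.
# Cluster/service names are hypothetical; requires AWS credentials and boto3.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def check_services(cluster: str, services: list[str]) -> None:
    resp = ecs.describe_services(cluster=cluster, services=services)
    for svc in resp["services"]:
        desired, running = svc["desiredCount"], svc["runningCount"]
        status = "OK" if running >= desired else "DEGRADED"
        print(f"{svc['serviceName']}: {running}/{desired} tasks -> {status}")

check_services("prod-cluster", ["api", "worker"])  # example names
```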

Posted 4 weeks ago

Apply

10.0 - 15.0 years

13 - 18 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Project description

We're seeking a solid and creative AWS Cloud DevOps engineer eager to solve scale problems and work on cutting-edge and open-source technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture strives to solve challenging problems through product engineering based on hypothesis testing, empowering people to come up with ideas. We do it with a truly flexible environment, high-impact projects in Agile environments, a culture focused on results, training, and strong support to grow your career. In this project, you will be a member of the Information Technology Team, within the Information Technology Division.

Responsibilities
- Support and transform existing and new mission-critical and highly visible operational websites and applications spanning multiple technology stacks, through all phases of the SDLC, while working collaboratively across IT, business, and third-party suppliers from around the globe in a 24x7, fast-paced, Agile-based environment.
- Assist in design and development of the account application.
- Influence peers, juniors, and seniors both within the organization and across the account.
- Coordinate and collaborate with the Product and Engineering teams to understand and solve problems, come up with creative solutions, and help with tracking and delivering within the release plan.
- Collaborate with Engineering and QA to resolve bugs.

Skills

Must have:
- 10+ years of experience in IT, including 5+ years as a senior DevOps engineer with AWS hands-on experience.
- 7+ years of CI/CD experience.
- 2+ years of experience in a technical leadership role.
- Experience provisioning infrastructure using Terraform and CloudFormation.
- Experience doing production support and the ability to guide the team.
- Hands-on experience in AWS provisioning and good knowledge of AWS services like ECS Fargate (mandatory) and OpenSearch.
- Knowledge of CloudFormation scripts for automation purposes.
- Experience creating ElastiCache Redis clusters and monitoring their metrics (a CloudWatch sketch follows this listing).
- Experience working with GitHub and GitHub Actions.
- Experience with DR setups (Active/Active or Active/Passive).
- Experience monitoring with Kibana / Datadog / Dynatrace and investigating any latency issues.
- Experience with Route 53; understanding of Akamai.

Nice to have:
- Experience creating DynamoDB tables, DocumentDB, and Aurora databases.
- Harness
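Since the listing pairs ElastiCache Redis with metric monitoring, here is a hedged boto3 sketch that pulls an ElastiCache node's CPU utilization from CloudWatch. The cluster id and region are assumptions for the example.

```python
# Sketch: pull CPU utilization for an ElastiCache Redis node from CloudWatch.
# The cluster id is hypothetical; requires AWS credentials and boto3.
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch", region_name="us-east-1")

resp = cw.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "CacheClusterId", "Value": "prod-redis-001"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,          # 5-minute datapoints
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```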

Posted 4 weeks ago

Apply

7.0 - 12.0 years

17 - 32 Lacs

Hyderabad

Hybrid

The GCP CloudOps Engineer is accountable for continuous, repeatable, secure, and automated deployment, integration, and test solutions using Infrastructure as Code (IaC) and DevSecOps techniques.

Requirements
- 8+ years of hands-on experience in infrastructure design, implementation, and delivery.
- 5+ years of hands-on experience with cloud technologies; GCP is preferred.
- 4+ years of hands-on experience with container orchestration services, including Docker or Kubernetes (GKE).
- 3+ years of hands-on experience with monitoring tools (Datadog, New Relic, or Splunk).
- Experience working across time zones and with different cultures.
- Hands-on experience with IaC patterns and practices and related automation tools such as Terraform, Jenkins, Spinnaker, CircleCI, etc.; has built automation and tools using Python, Go, Java, or Ruby.
- Deep knowledge of CI/CD processes, tools, and platforms like GitHub workflows and Azure DevOps.
- Proactive collaborator who can work on cross-team initiatives, with excellent written and verbal communication skills.
- Experience automating long-term solutions to problems rather than applying quick fixes.
- Extensive knowledge of improving platform observability and implementing optimizations to monitoring and alerting tools.
- Experience measuring and modeling cost and performance metrics of cloud services and establishing a vision backed by data.
- Experience debugging applications and a deep understanding of deployment architectures.

Responsibilities
- Maintain an outstanding level of documentation, including principles, standards, practices, and project plans.
- Develop tools and CI/CD frameworks to make it easier for teams to build, configure, and deploy applications (a small inventory-automation sketch follows this listing).
- Contribute to cloud strategy discussions and decisions on overall cloud design and the best approach for implementing cloud solutions.
- Follow and develop standards and procedures for all aspects of a digital platform in the cloud.
- Identify system enhancements and automation opportunities for installing and maintaining digital platforms.
- Adhere to best practices for incident, problem, and change management.
- Implement automated procedures to handle issues and alerts proactively.

Pluses
- Databricks and MongoDB; experience building a data warehouse using Databricks is a huge plus.
- Experience with multi-cloud environments (GCP, AWS, Azure); GCP is the preferred cloud provider.
- Experience with GitHub and GitHub Actions.
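As one hedged example of the lightweight GCP automation this role involves, the sketch below shells out to the gcloud CLI to inventory Compute Engine instances and flag any that are not running. It assumes gcloud is installed and authenticated against a configured project; the alerting action is left as a print.

```python
# Sketch: inventory GCE instances and flag any that are not RUNNING,
# by shelling out to the gcloud CLI. Assumes gcloud is installed and
# authenticated; project/zone come from the active gcloud config.
import json
import subprocess

out = subprocess.run(
    ["gcloud", "compute", "instances", "list", "--format=json"],
    capture_output=True, text=True, check=True,
).stdout

for inst in json.loads(out):
    if inst["status"] != "RUNNING":
        zone = inst["zone"].rsplit("/", 1)[-1]  # zone field is a full URL
        print(f"ALERT: {inst['name']} in {zone} is {inst['status']}")
```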

Posted 4 weeks ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Project description

We're seeking a solid and creative .NET Developer eager to solve scale problems and work on cutting-edge and open-source technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture strives to solve challenging problems through product engineering based on hypothesis testing, empowering people to come up with ideas. We do it with a truly flexible environment, high-impact projects in Agile environments, a culture focused on results, training, and strong support to grow your career. In this project, you will be a member of the Information Technology Team, within the Information Technology Division. This position supports and transforms existing and new mission-critical and highly visible operational websites and applications - spanning multiple technology stacks - through all phases of the SDLC, while working collaboratively across IT, business, and third-party suppliers from around the globe in a 24x7, fast-paced, and Agile-based environment.

Responsibilities
- Backend development in .NET.

Skills

Must have:
- 8-12 years of experience in .NET technologies.
- Hands-on service design, schema design, and application integration design.
- Hands-on software development using C# and .NET Core.
- Use of multiple cloud-native database platforms, including DynamoDB, SQL, ElastiCache, and others.
- Conduct code reviews and peer reviews.
- Unit testing and unit test automation, defect resolution, and software optimization.
- Code deployment using CI/CD processes.
- Understand business requirements and technical limitations.
- Ability to learn new technologies and influence the team and leadership to constantly implement modern solutions.
- Experience using the Elasticsearch, Logstash, Kibana (ELK) stack for logging and analytics (a query sketch follows this listing).
- Experience in container orchestration using Kubernetes.
- Knowledge of, and experience working with, public cloud AWS services.
- Knowledge of cloud architecture and design patterns.
- Ability to prepare documentation for microservices.
- Monitoring tools such as Datadog and Logstash.
- Excellent communication skills.

Nice to have:
- Airline industry knowledge is preferred but not required.
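The role itself is .NET, but to keep all examples in this page in one language, here is a hedged Python sketch of the ELK log-search pattern the listing mentions: querying an Elasticsearch index for recent error logs. The host, index pattern, and field names are assumptions for the example.

```python
# Sketch: query an Elasticsearch (ELK) index for recent error logs.
# Host, index pattern, and field names are illustrative assumptions.
import requests

query = {
    "query": {
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    "size": 10,
    "sort": [{"@timestamp": {"order": "desc"}}],
}

resp = requests.get(
    "http://localhost:9200/app-logs-*/_search",  # hypothetical index pattern
    json=query,
    timeout=5,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("message"))
```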

Posted 4 weeks ago

Apply

0 years

10 - 40 Lacs

Gurugram, Haryana, India

On-site

DevOps Engineer
AiSensy, Gurugram, Haryana, India (On-site)

About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ businesses with WhatsApp Engagement & Marketing
- 400+ crores of WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact, as businesses drive 25-80% of revenues using the AiSensy platform
- Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors

Now, we're looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀

What You'll Do (Key Responsibilities)
🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime (a sample alarm script follows this listing). Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging.

What We're Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation

Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 technologies
➕ Hands-on knowledge of database servers

Skills: Amazon Web Services (AWS), GitHub, Jenkins, Terraform, Ansible, Kubernetes, Prometheus, AWS Bedrock, Chef, Puppet, Docker, Google Cloud Platform (GCP), and Python
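To illustrate the CloudWatch monitoring work this listing describes, here is a hedged boto3 sketch that creates a CPU alarm on an EC2 instance. The instance id, SNS topic ARN, region, and thresholds are assumptions made up for the example.

```python
# Sketch: create a CloudWatch CPU alarm for an EC2 instance with boto3.
# Instance id, SNS topic ARN, and thresholds are illustrative assumptions.
import boto3

cw = boto3.client("cloudwatch", region_name="ap-south-1")

cw.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # evaluate 5-minute averages
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:devops-alerts"],
)
print("Alarm created.")
```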

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary - IMMEDIATE JOINERS ONLY (ON-SITE IN CHENNAI)

We are seeking an ITSM manager to lead and evolve our change management strategy, ensuring software and infrastructure changes are delivered safely, reliably, and with minimal risk to business operations. You will collaborate with engineering, DevOps, SRE, security, and compliance teams to drive process maturity, automation, and cultural adoption of safe change practices.

Key Responsibilities

Change Governance
- Own and continuously improve the change management framework across the organization.
- Lead or participate in daily/weekly Change Review Board (CRB) meetings and ensure timely approvals.

Risk & Reliability Oversight
- Assess the risk of planned changes and verify readiness of rollout, rollback, and validation plans.
- Track key reliability metrics such as change failure rate, MTTR, and deployment lead time (a worked example follows this listing).

Incident Correlation & Analysis
- Investigate change-related incidents and contribute to post-incident reviews.
- Identify patterns and systemic issues in failed or high-risk changes.

Automation & Tooling
- Partner with DevOps/SRE teams to integrate change validation, canary rollouts, and automated approvals into CI/CD pipelines.
- Champion the use of observability tools to monitor live changes and detect anomalies early.

Stakeholder Communication
- Provide clear and actionable reporting to leadership on change success, risk trends, and improvement areas.
- Coordinate with product, engineering, and operations teams for major releases or changes during high-risk periods.

Compliance & Audit Support
- Ensure adherence to regulatory and internal audit requirements (e.g., SOX, ISO, PCI-DSS).
- Maintain documentation and audit trails for all changes.

Qualifications

Required:
- 3+ years of experience in ITSM.
- Strong knowledge of change management principles.
- Experience with CI/CD platforms (e.g., Jenkins, Spinnaker, ArgoCD).
- Proficiency with monitoring and observability tools (e.g., Datadog, Splunk, Prometheus).
- Excellent stakeholder management and communication skills.

Preferred:
- Background in high-availability or regulated industries (e.g., fintech).
- Experience with automated risk scoring, canary analysis, or feature flag systems.
- SRE training is a plus.

Key Metrics You'll Drive
- Change Failure Rate (CFR)
- Successful Change Audits (SCAs)
- Mean Time to Recovery (MTTR)
- Lead Time for Changes
- % of Automated Change Validations
- Emergency Change Volume

Pay: $15-17 USD
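Two of the metrics this role owns, change failure rate and MTTR, reduce to simple arithmetic over change records. The sketch below shows that calculation; the record structure and numbers are illustrative assumptions.

```python
# Sketch: compute change failure rate (CFR) and MTTR from change records.
# The record structure and values are illustrative assumptions.
from datetime import timedelta

changes = [
    {"id": "CHG-101", "failed": False},
    {"id": "CHG-102", "failed": True, "time_to_recover": timedelta(minutes=42)},
    {"id": "CHG-103", "failed": False},
    {"id": "CHG-104", "failed": True, "time_to_recover": timedelta(minutes=18)},
]

failed = [c for c in changes if c["failed"]]
cfr = len(failed) / len(changes)
mttr = sum((c["time_to_recover"] for c in failed), timedelta()) / len(failed)

print(f"Change failure rate: {cfr:.0%}")  # 50%
print(f"MTTR: {mttr}")                    # 0:30:00
```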

Posted 4 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting - AWS Staff-Senior

The opportunity

We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture that supports analytics, BI, and data science use cases. As a problem-solver with a keen ability to diagnose a client's unique needs, you should be able to see the gap between where clients currently are and where they need to be, and be capable of creating a blueprint to help clients achieve their end goal.

Key Responsibilities:
- Design and implement data ingestion pipelines from various sources, including on-premise Oracle databases, batch files, and Confluent Kafka.
- Develop Python producers and AWS Glue jobs for batch data processing.
- Build and manage Spark streaming applications on Amazon EMR (a streaming sketch follows this listing).
- Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
- Develop and maintain data sinks in Redshift and Oracle.
- Automate and orchestrate workflows using Apache Airflow.
- Monitor, debug, and optimize data pipelines for performance and reliability.
- Collaborate with cross-functional teams, including data analysts, scientists, and DevOps.

Required Skills and Experience:
- Good programming skills in Python and Spark (PySpark).
- Hands-on experience with Amazon S3, Glue, and EMR.
- Good SQL knowledge of Amazon Redshift and Oracle.
- Proven experience handling streaming data with Kafka and building real-time pipelines.
- Good understanding of data modeling, ETL frameworks, and performance tuning.
- Experience with workflow orchestration tools like Airflow.

Nice-to-Have Skills:
- Infrastructure as Code using Terraform.
- Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
- Familiarity with DataSync for file movement and medallion architecture for data lakes.
- Monitoring and alerting using CloudWatch, Datadog, or Splunk.

Qualifications: BTech / MTech / MCA / MBA

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
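As a hedged illustration of the Kafka-to-S3 streaming work this listing describes, here is a minimal Spark Structured Streaming job. The broker address, topic, and S3 paths are assumptions, and the Kafka connector package (spark-sql-kafka) must be available on the cluster.

```python
# Sketch: a minimal Spark Structured Streaming job reading from Kafka
# and landing raw records in a "bronze" S3 layer. Broker, topic, and
# paths are illustrative assumptions; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
parsed = stream.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/bronze/orders/")          # hypothetical
    .option("checkpointLocation", "s3://example-bucket/chk/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```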

Posted 4 weeks ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior DevOps Engineer (GCP | DevSecOps | Monitoring)
Employment Type: Full-time
Experience: 7+ Years

Job Summary:
We are seeking a highly experienced and results-driven Senior DevOps Engineer to join our dynamic team. The ideal candidate will bring 7+ years of hands-on experience in cloud infrastructure, monitoring, security, and DevSecOps practices, especially within the Google Cloud Platform (GCP) ecosystem. This role demands strong expertise in designing, implementing, and leading complex DevSecOps and monitoring initiatives across cloud-native environments.

Key Responsibilities:
- Lead the end-to-end design, implementation, and delivery of scalable and secure DevSecOps solutions.
- Implement and maintain monitoring and observability tools such as New Relic, Datadog, Grafana, and Prometheus (a Prometheus sketch follows this listing).
- Manage and optimize GCP infrastructure for performance, security, and cost efficiency.
- Define and enforce DevSecOps best practices, integrating security at every stage of the development lifecycle.
- Work closely with Data Engineering teams to support data pipelines and infrastructure automation.
- Manage CI/CD pipelines using GitLab and ensure smooth deployment workflows.
- Maintain containerized environments using Docker and Kubernetes.
- Collaborate with cross-functional teams to ensure system reliability, scalability, and security.

Required Skills & Experience:
- 7+ years of experience in a DevOps/DevSecOps role with a strong background in GCP.
- Proven experience with monitoring/observability tools: New Relic, Datadog, Grafana, Prometheus.
- Deep understanding of DevSecOps principles, cloud security, and compliance practices.
- Strong hands-on experience with Docker and Kubernetes.
- Proficiency with GitLab for CI/CD automation.
- Familiarity with infrastructure-as-code and configuration management tools.
- Solid scripting and automation skills (e.g., Bash, Python, Terraform).
- Experience collaborating with Data Engineers and supporting data-driven applications.

Preferred Qualifications:
- GCP certifications (e.g., Professional Cloud DevOps Engineer, Cloud Architect).
- Experience with other cloud platforms (e.g., AWS, Azure) is a plus.
- Exposure to data pipeline tools and big data platforms is advantageous.

About Encora
Encora is the preferred digital engineering and modernization partner of some of the world's leading enterprises and digital native companies. With over 9,000 experts in 47+ offices and innovation labs worldwide, Encora's technology practices include Product Engineering & Development, Cloud Services, Quality Engineering, DevSecOps, Data & Analytics, Digital Experience, Cybersecurity, and AI & LLM Engineering. At Encora, we hire professionals based solely on their skills and qualifications, and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.
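To make the Prometheus side of the observability stack concrete, here is a hedged sketch that exposes two custom metrics from a Python process using the official prometheus_client library. The metric names and the probed URL are assumptions for the example.

```python
# Sketch: expose custom Prometheus metrics from a Python probe process.
# Metric names and the probed URL are illustrative assumptions.
import time
import requests
from prometheus_client import Gauge, start_http_server

UP = Gauge("app_probe_up", "1 if the probe succeeded, else 0")
LATENCY = Gauge("app_probe_latency_seconds", "Probe round-trip time")

def probe(url: str) -> None:
    start = time.monotonic()
    try:
        requests.get(url, timeout=3).raise_for_status()
        UP.set(1)
    except requests.RequestException:
        UP.set(0)
    LATENCY.set(time.monotonic() - start)

if __name__ == "__main__":
    start_http_server(8000)        # Prometheus scrape target at :8000/metrics
    while True:
        probe("http://localhost:8080/healthz")  # hypothetical endpoint
        time.sleep(15)
```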

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

As a Senior Platform Product Manager, you will be responsible for defining and managing the core platform services and APIs that power our technology ecosystem. You will work closely with Platform Engineering and Software Development teams to design and unify foundational business APIs and services that enable consistency, scalability, and efficiency across the company. Your role will focus on standardizing platform capabilities, improving system interoperability, and ensuring that all teams can leverage common infrastructure services. The ideal candidate has a strong technical background, understands API design principles, and is experienced in defining enterprise-wide platform strategies that reduce duplication and streamline engineering efforts. The role requires a strong understanding of the Fintech, Healthcare, and related business domains, including industry regulations, operational workflows, and technology trends.

Responsibilities
- Define and execute the platform API strategy, ensuring a unified and scalable approach across teams.
- Partner with Product, DevOps, Platform Engineering, and Software Engineering teams to design, develop, and maintain foundational APIs that support critical business functions.
- Implement APIs that follow outlined governance and best practices, including security, versioning, monitoring, and lifecycle management (a versioning sketch follows this listing).
- Collaborate with engineering teams to identify redundant or fragmented business capabilities and drive standardization.
- Own the product roadmap for core platform services, ensuring alignment with engineering and business objectives.
- Develop clear API documentation, usage guidelines, and adoption strategies to ensure cross-team consistency and efficiency.
- Work with stakeholders to prioritize platform improvements, balancing short-term needs with long-term scalability.

Critical Skills & Experience
- 5+ years of experience in Product Management, with a focus on platform, infrastructure, or API-driven products.
- Strong understanding of API design, microservices architecture, and API lifecycle management.
- Ability to assess organizational priorities and capabilities while maintaining a strategic mindset and a long-term product vision.
- Technical background or hands-on experience with software development, DevOps, or infrastructure is highly preferred.
- Strong problem-solving and analytical skills, with the ability to translate technical challenges into platform solutions.
- Excellent stakeholder management skills, working across engineering, security, compliance, and business teams.
- Familiarity with observability and monitoring tools (e.g., Datadog, Prometheus, New Relic) is a plus.
- Experience with API management platforms (e.g., Kong, Apigee, AWS API Gateway) is a plus.
- Proficiency in data models, SQL, and database fundamentals is a plus.
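One common pattern behind the API versioning and lifecycle management this listing mentions is URL-based versioning, where old versions stay frozen for existing consumers while new ones evolve. Here is a hedged FastAPI sketch; the route names and payload shapes are invented for illustration.

```python
# Sketch: URL-based API versioning, one pattern for API lifecycle
# management. Routes and payloads are illustrative assumptions.
from fastapi import FastAPI

app = FastAPI(title="platform-api-sketch")

@app.get("/v1/accounts/{account_id}")
def get_account_v1(account_id: str) -> dict:
    # v1 response shape, frozen once v2 ships; kept for existing consumers.
    return {"id": account_id, "name": "Example Co"}

@app.get("/v2/accounts/{account_id}")
def get_account_v2(account_id: str) -> dict:
    # v2 adds structured metadata without breaking v1 consumers.
    return {"id": account_id, "name": "Example Co",
            "metadata": {"region": "us", "tier": "standard"}}

# Run with: uvicorn sketch:app --reload
```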

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Associate Director, Software Engineering.

In this role, you will:
- Be a lead automation engineer with deep hands-on experience of software automation testing and performance testing tools, practices, and processes.
- Bring a deep understanding of desktop, web, and data warehouse applications, API development, design patterns, the SDLC, IaC tools, testing, and site reliability engineering, and related ways to design and develop automation frameworks.
- Define and implement best practices for software automation testing and performance testing, frameworks, and patterns, including testing methodologies.
- Be a generalist with breadth and depth of experience in CI/CD best practices and core experience in testing (i.e., TDD/BDD, automated testing, contract testing, API testing, desktop/web apps, DW test automation).
- See a problem or an opportunity and engineer a solution; be respected for what you deliver, not just what you say; think about the business impact of your work; and take a holistic view of problem-solving.
- Bring proven industry experience of running an engineering team with a focus on optimization of processes, introduction of new technologies, solving challenges, building strategy, business planning, governance, and stakeholder management.
- Apply thinking to problems across multiple technical domains and suggest ways to solve them.
- Contribute to architectural discussions by asking the right questions to ensure a solution matches the business needs.
- Identify opportunities for system optimization, performance tuning, and scalability enhancements; implement solutions to improve system efficiency and reliability.
- Use excellent verbal and written communication skills to articulate technical concepts to both technical and non-technical stakeholders.
- Build performance assurance procedures with the latest feasible tools and techniques, and establish a performance test automation process to improve testing productivity.
- Be responsible for the end-to-end software testing, performance testing, and engineering life cycle: technical scoping, performance scripting, testing, and tuning.
- Analyse test assessment results and provide recommendations to improve performance or save infrastructure costs.
- Represent the team at Scrum meetings and all other key project meetings, and provide a single point of accountability and escalation for performance testing within the scrum teams.
- Advise on needed infrastructure and performance engineering and testing guidelines, and be responsible for performance risk assessment of various application features.
- Work with cross-functional teams, including software product, development, and support teams, handling tasks to accelerate testing delivery and improve application quality at HSBC.
- Provide support in product and application design from a performance point of view.
- Communicate plans, status, and results appropriately for the target audience.
- Be willing to adapt, learn innovative technologies and trades, and be flexible to work on projects as demanded by the business.
- Define and implement best practices for software automation testing, including testing standards, test reviews, coverage, testing methodologies, and traceability between requirements and test cases.
- Prepare, develop, and maintain a test automation framework that can be used for software testing and performance testing; write automation test scripts and conduct reviews.
- Develop and execute regression, smoke, and integration tests in a timely manner.

Requirements

To be successful in this role, you must meet the following requirements:
- Experience in software testing approaches to automation testing using Tosca, Selenium, and the Cucumber BDD framework.
- Experience writing test plans, test strategies, and test data management, including test artifact management for both automation and manual testing.
- Experience setting up CI/CD pipelines and work experience with GitHub and Jenkins, along with integration with Cucumber and Jira.
- Experience in agile methodology and proven experience working on agile projects.
- Experience in analysis of bug tracking, prioritizing, and bug reporting with bug-tracking tools.
- Experience in SQL, Unix, Control-M, ETL, data testing, API testing, and API automation using Rest Assured (a pytest analogue follows this listing).
- Familiarity with the following performance testing tools: Micro Focus LoadRunner Enterprise (VuGen, Analysis, LRE OneLG); protocols: HTTP/HTML, Citrix; JMeter, Postman, Insomnia.
- Familiarity with the following observability tools: AppDynamics, New Relic, Splunk, Geneos, Datadog, Grafana.
- Knowledge of GitHub, Jenkins, Kubernetes, Jira, and Confluence is an added advantage.
- Programming and scripting language skills in Java, Shell, Scala, Groovy, and Python; WebLogic server administration.
- Familiarity with the BMC Control-M tool.
- CI/CD tools: Ansible, AWS RO, G3.
- UNIX/Linux/web monitors and performance analysis tools to diagnose and resolve performance issues.
- Experience working in an Agile environment, a "DevOps" team, or a similar multi-skilled team in a technically demanding function.
- Experience working on performance testing and tuning of microservices/APIs, desktop applications, web apps, cloud services, ETL apps, and database queries.
- Experience writing and modifying performance testing scripts, and implementing and using automated tools for result analysis.
- Experience working on performance testing and tuning of data warehouse applications doing batch processing across various stages of ETL and information delivery components.

Good-to-have skills:
- Knowledge of the latest technologies and tools, like Python scripting, Tricentis Tosca, Dataflow, Hive, DevOps, REST APIs, Hadoop, the Kafka framework, GCP, and AWS, will be an added advantage.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSDI
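The listing's API automation is framed around Rest Assured (a Java library); to keep examples in one language, here is the equivalent idea as a hedged pytest + requests sketch. The base URL, endpoint, and response fields are assumptions invented for the example.

```python
# Sketch: API regression tests in pytest + requests, standing in for the
# Rest Assured-style checks the posting describes. URL/fields are assumptions.
import requests

BASE_URL = "https://api.example.internal"  # hypothetical service

def test_get_account_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/v1/accounts/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == "42"
    assert "name" in body

def test_unknown_account_returns_404():
    resp = requests.get(f"{BASE_URL}/v1/accounts/does-not-exist", timeout=5)
    assert resp.status_code == 404
```

Run with `pytest test_api.py`; in a CI/CD pipeline the same suite would gate deployments.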

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

Requisition Number: 101578

Cloud Engineer III - Azure Infra/Migration/IaC/DevOps
Shift: 2 PM - 11 PM IST
Location: Delhi NCR, Hyderabad, Bangalore, Pune, Mumbai, Chennai; this is a hybrid work opportunity.

Insight at a Glance
- 14,000+ engaged teammates globally, with operations in 25 countries across the globe.
- Received 35+ industry and partner awards in the past year.
- $9.2 billion in revenue.
- #20 on Fortune's World's Best Workplaces™ list.
- #14 on Forbes World's Best Employers in IT - 2023.
- #23 on Forbes Best Employers for Women in IT - 2023.

Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organizations through complex digital decisions.

About the role

As a Cloud Engineer III, you will be part of the consulting practice, utilizing cutting-edge automation tools and provisioning in public cloud providers, preferably Azure, AWS, or GCP. You will be responsible for designing and deploying well-architected cloud solutions. The ideal candidate will have experience in customer-facing roles and a proven track record of delivering cloud solutions with Infrastructure as Code (IaC) automation on various projects.

Along the way, you will:
- Design scalable, secure, and resilient cloud infrastructure (primarily on Azure, AWS, or GCP).
- Create architecture diagrams, deployment strategies, and cloud roadmaps.
- Deploy and configure cloud resources such as VMs, storage, networking, containers, and databases (an inventory sketch follows this listing).
- Automate infrastructure provisioning using tools like Terraform, ARM templates, or Bicep.
- Set up CI/CD pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.
- Implement Infrastructure as Code (IaC) and configuration management.
- Support microservices-based architecture designs.
- Set up application and infrastructure monitoring with tools like Prometheus, Grafana, Datadog, New Relic, or Azure Monitor.
- Perform cost optimization and performance tuning.
- Implement cloud security best practices, including identity and access management (IAM), encryption, firewall rules, and network security groups.
- Collaborate with Insight and client teams, following Agile/Scrum methodologies and ceremonies.
- Communicate effectively and professionally with teammates, client personnel, and stakeholders.

What we're looking for:
- Bachelor's degree in Information Technology, Computer Science, or a related field preferred, or equivalent practical experience.
- 4-6 years of relevant experience in a similar or related role is required.
- Any relevant cloud certification is a plus.
- Hands-on experience with one or more cloud providers (AWS, Azure, GCP) is a must, with Azure being the primary cloud.
- Familiarity with writing infrastructure as code (e.g., Terraform, Azure Bicep, ARM templates, CloudFormation) is a must.
- Working experience with at least one of the CI/CD tools and version control systems (e.g., Azure DevOps, GitHub Actions, Jenkins, Git, GitHub, Azure Repos) is required.
- Familiarity with Windows and Linux/Unix-based systems is a must.
- Proficiency in Azure infrastructure cloud services like Azure VM, VNET, Storage, Monitoring, Azure Functions, Load Balancers, Azure AD, Azure DNS, Traffic Manager, and Application Gateway for network optimization.
- Knowledge of Azure Kubernetes Service (AKS), Docker containers, and application monitoring services such as Prometheus, Grafana, Datadog, and New Relic is highly desirable.
- Experience in application deployment and management within cloud environments.
- Hands-on knowledge of Docker and container lifecycle management.
- Experience in deploying and managing distributed applications in production-grade environments.

What you can expect:
We're legendary for taking care of you, your family, and helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include:
- Freedom to work from another location, even an international destination, for up to 30 consecutive calendar days per year.
- Medical Insurance
- Health Benefits
- Professional Development: Learning Platform and Certificate Reimbursement
- Shift Allowance

But what really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities. Join us today; your ambitious journey starts here.

Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process. At Insight, we celebrate diversity of skills and experience, so even if you don't feel like your skills are a perfect match, we still want to hear from you! Today's talent leads tomorrow's success.

Learn more about Insight: https://www.linkedin.com/company/insight/

Insight India Location: Level 16, Tower B, Building No 14, DLF Cyber City IT/ITES SEZ, Sector 24 & 25A, Gurugram, Haryana 122002, India
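As a small illustration of working with Azure resources programmatically, here is a hedged sketch that lists a subscription's VMs with the Azure SDK for Python (azure-identity plus azure-mgmt-compute). Reading the subscription id from an environment variable is an assumption for the example.

```python
# Sketch: list VMs in a subscription with the Azure SDK for Python.
# Requires azure-identity and azure-mgmt-compute; the subscription id
# is read from an environment variable (an assumption for the example).
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```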

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurgaon

On-site

Requisition Number: 101594

Cloud Solution Specialist/CSS - Azure Infra/DevOps/Migration
Shift: 2 PM - 11 PM IST
Location: Delhi NCR, Hyderabad, Bangalore, Pune, Mumbai, Chennai; this is a hybrid work opportunity.

Insight at a Glance
- 14,000+ engaged teammates globally, with operations in 25 countries across the globe.
- Received 35+ industry and partner awards in the past year.
- $9.2 billion in revenue.
- #20 on Fortune's World's Best Workplaces™ list.
- #14 on Forbes World's Best Employers in IT.
- #23 on Forbes Best Employers for Women in IT.

Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organizations through complex digital decisions.

About the role

As a Cloud Solution Specialist, you will be part of the consulting practice, utilizing cutting-edge automation tools and provisioning in public cloud providers, preferably Azure, AWS, or GCP. You will be responsible for designing and deploying well-architected cloud solutions. The ideal candidate will have experience in customer-facing roles and a proven track record of delivering cloud solutions with Infrastructure as Code (IaC) automation on various projects.

Along the way, you will:
- Design scalable, secure, and resilient cloud infrastructure (primarily on Azure, AWS, or GCP).
- Create architecture diagrams, deployment strategies, and cloud roadmaps.
- Deploy and configure cloud resources such as VMs, storage, networking, containers, and databases.
- Automate infrastructure provisioning using tools like Terraform, ARM templates, or Bicep.
- Set up CI/CD pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.
- Implement Infrastructure as Code (IaC) and configuration management.
- Set up application and infrastructure monitoring with tools like Prometheus, Grafana, Datadog, New Relic, or Azure Monitor.
- Perform cost optimization and performance tuning (a cost-hygiene sketch follows this listing).
- Implement cloud security best practices, including identity and access management (IAM), encryption, firewall rules, and network security groups.
- Collaborate with Insight and client teams, following Agile/Scrum methodologies and ceremonies.
- Communicate effectively and professionally with teammates, client personnel, and stakeholders.

What we're looking for:
- Bachelor's degree in Information Technology, Computer Science, or a related field preferred, or equivalent practical experience.
- 7-10 years of relevant experience in a similar or related role is required.
- Any relevant cloud certification is a plus.
- Hands-on experience with one or more cloud providers (AWS, Azure, GCP) is a must, with Azure being the primary cloud.
- Familiarity with writing infrastructure as code (e.g., Terraform, Azure Bicep, ARM templates, CloudFormation) is a must.
- Working experience with at least one of the CI/CD tools and version control systems (e.g., Azure DevOps, GitHub Actions, Jenkins, Git, GitHub, Azure Repos) is required.
- Experience in cloud migration (Azure Migrate or a similar tool).
- Familiarity with Windows and Linux/Unix-based systems is a must.
- Proficiency in Azure infrastructure cloud services like Azure VM, VNET, Storage, Monitoring, Azure Functions, Load Balancers, Azure AD, Azure DNS, Traffic Manager, and Application Gateway for network optimization.
- Knowledge of Azure Kubernetes Service (AKS), Docker containers, and application monitoring services such as Prometheus, Grafana, Datadog, and New Relic is highly desirable.
- Experience in application deployment and management within cloud environments.
- Hands-on knowledge of Docker and container lifecycle management.
- Experience in deploying and managing distributed applications in production-grade environments.

What you can expect:
We're legendary for taking care of you, your family, and helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include:
- Freedom to work from another location, even an international destination, for up to 30 consecutive calendar days per year.
- Medical Insurance
- Health Benefits
- Professional Development: Learning Platform and Certificate Reimbursement
- Shift Allowance

But what really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities. Join us today; your ambitious journey starts here.

Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process. At Insight, we celebrate diversity of skills and experience, so even if you don't feel like your skills are a perfect match, we still want to hear from you! Today's talent leads tomorrow's success.

Learn more about Insight: https://www.linkedin.com/company/insight/

Insight India Location: Level 16, Tower B, Building No 14, DLF Cyber City IT/ITES SEZ, Sector 24 & 25A, Gurugram, Haryana 122002, India
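For the cost-optimization duty this listing mentions, one common Azure hygiene check is flagging managed disks that no VM references. Here is a hedged SDK sketch; the environment-variable subscription id is an assumption for the example.

```python
# Sketch: flag unattached managed disks, a common Azure cost-optimization
# check. Requires azure-identity and azure-mgmt-compute; subscription id
# via environment variable is an assumption for the example.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

for disk in compute.disks.list():
    if disk.managed_by is None:  # no VM references this disk
        print(f"Unattached: {disk.name} ({disk.disk_size_gb} GB) in {disk.location}")
```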

Posted 1 month ago

Apply

3.0 - 6.0 years

4 - 8 Lacs

Gurgaon

Remote

Experience Required: 3-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure using AWS cloud services.
- Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices (a pod-health sketch follows this listing).
- Develop Infrastructure as Code using Terraform, ensuring modular, reusable, and secure provisioning.
- Containerize applications and optimize Docker images for performance and security.
- Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
- Drive SRE principles, including monitoring, alerting, SLIs/SLOs, and incident response.
- Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate routine tasks with scripting languages (Python, Bash, etc.).
- Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
- Collaborate closely with development teams to enable DevSecOps best practices.
- Participate in on-call rotations, handle outages calmly, and conduct postmortems.

Must-Have Technical Skills
- Kubernetes (EKS, Helm, Operators)
- Docker & Docker Compose
- Terraform (modular, state management, remote backends)
- AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
- Linux system administration
- CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
- Site Reliability Engineering practices
- Load balancing, autoscaling, and HA architectures

Good-To-Have
- GCP or Azure exposure
- Service mesh (Istio, Linkerd)
- Secrets management (Vault, AWS Secrets Manager)
- Security hardening of containers and infrastructure
- Chaos engineering exposure
- Knowledge of networking (DNS, firewalls, VPNs)

Soft Skills
- Strong problem-solving attitude; calm under pressure
- Good documentation and communication skills
- Ownership mindset with a drive to automate everything
- Collaborative and proactive with cross-functional teams
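As an illustration of routine Kubernetes reliability automation, here is a hedged sketch using the official Kubernetes Python client to flag pods with excessive container restarts. The namespace and threshold are assumptions for the example.

```python
# Sketch: flag pods with excessive container restarts using the official
# Kubernetes Python client. Namespace and threshold are assumptions.
from kubernetes import client, config

RESTART_THRESHOLD = 5

config.load_kube_config()  # or load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("production").items:
    for status in pod.status.container_statuses or []:
        if status.restart_count > RESTART_THRESHOLD:
            print(f"{pod.metadata.name}/{status.name}: "
                  f"{status.restart_count} restarts")
```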

Posted 1 month ago

Apply

20.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Tech9 is shaking up a 20-year-old industry, and we're not slowing down. Recognized by Inc. 5000 as one of the nation's fastest-growing companies, we're also proud to be ranked the 23rd fastest-growing company in Utah and to have been named to Forbes' Top 500 Startup Companies to Work For (second year in a row!). We invite you to interview with us, show us what you can do, and find out how Tech9 can provide you the AI career you're looking for!

At Tech9 India, we offer the following benefits:
- Full health insurance for you and your immediate family
- 23 days of paid leave with 8 paid holidays
- 100% remote work (you can opt to work fully remote, hybrid, or in person at our Pune office)
- Learning and development stipend
- Cloud certification reimbursement
- Laptop reimbursement program
- Generous matching contribution to PF

If that sounds attractive, please apply! We'd love to talk to you.

Main Responsibilities:
- Develop, maintain, and optimize backend services using .NET 9
- Design and build scalable APIs and microservices
- Work extensively with PostgreSQL, and occasionally with MySQL, Redis, and DynamoDB
- Deploy and maintain services in AWS (or be willing to ramp up quickly on cloud infrastructure)
- Participate in code reviews, testing, and quality assurance processes
- Collaborate cross-functionally with product managers, designers, and QA engineers
- Write clean, maintainable, and well-documented code
- Diagnose and resolve performance and reliability issues

Minimum Qualifications:
- 6+ years of professional experience in software development, with at least 3 years using .NET (C#)
- Experience with .NET 6+ (preferably .NET 9) and modern application architecture
- Solid understanding of relational databases, especially PostgreSQL
- Familiarity with microservices, API design, and distributed systems
- Experience deploying and operating applications in a cloud environment (AWS preferred; Azure or GCP acceptable)
- Comfort with Git, CI/CD pipelines, and agile development practices
- Strong analytical and debugging skills
- Ability to ramp up quickly on new technologies and systems

Preferred Qualifications:
- Direct experience with AWS services (e.g., Lambda, ECS, RDS, S3)
- Familiarity with Redis and DynamoDB
- Exposure to infrastructure-as-code tools like Terraform or CloudFormation
- Basic understanding of containerization (Docker, Kubernetes)
- Experience with observability tools (e.g., Datadog, CloudWatch)

Additional Information: Interview Process Overview
Below is an outline of the interview plan for our Senior AI Engineer positions. Please note that this is what we expect the process to look like; we may ask you for supplemental information or require an additional step before making a final decision.
- 30-minute recruiter screen
- 1-hour technical interview with a Senior Engineer
- 1-hour technical interview with a Lead Engineer
- 1-hour hiring manager interview

We recognize that this is a significant investment of time. We want you to know what to expect, and we believe that if you are looking for a great place to work, the time is worth it!

Tech9 Values:
Our success is not just a product of what we do, but how we do it. Our culture is defined by values that are vital to our collective and individual achievements. We believe in "Quality by Choice," "Win-Win is the Only Win," "Continuous Improvement," "Integrity and Transparency," and "Extreme Ownership." These core values guide the actions and decisions we make every day. They are not just words; they are the compass that guides our actions and defines our commitment to one another and our customers.
- Quality by Choice: We choose quality in everything we do, owning our impact, exceeding expectations, and earning trust.
- Win-Win is the Only Win: Every win is shared, built on collaboration, respect, and a belief that success thrives together.
- Continuous Improvement: We never stop growing, embracing feedback, learning from mistakes, and continuously crafting better together.
- Integrity and Transparency: We act with unwavering integrity, building trust through transparency, honesty, and open communication.
- Extreme Ownership: We own it all, taking extreme control, driving results, facing every challenge head-on, and innovating like entrepreneurs, because our actions ripple outward, building trust and collective success.

To ensure you've received our notifications, please whitelist the domains jazz.co, jazz.com, and applytojob.com.
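A pattern implied by the stack above (PostgreSQL as the system of record, Redis as a cache) is a read-through cache in front of hot queries. Below is a minimal sketch of that pattern, written in Python rather than C# purely for brevity, using the `redis` and `psycopg2` client libraries; the `products` table and all connection settings are hypothetical stand-ins, not anything from the posting.

```python
import json

import psycopg2  # PostgreSQL driver
import redis     # Redis client

# Hypothetical connection settings -- adjust for a real environment.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
db = psycopg2.connect(dbname="shop", user="app", password="secret", host="localhost")

CACHE_TTL_SECONDS = 300  # serve cached rows for up to 5 minutes


def get_product(product_id: int) -> dict | None:
    """Read-through cache: try Redis first, fall back to PostgreSQL."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit

    with db.cursor() as cur:       # cache miss: query the database
        cur.execute("SELECT id, name, price FROM products WHERE id = %s", (product_id,))
        row = cur.fetchone()
    if row is None:
        return None

    product = {"id": row[0], "name": row[1], "price": float(row[2])}
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(product))  # populate with a TTL
    return product
```

The TTL bounds staleness; invalidating the key on writes is the usual refinement when stale reads are unacceptable.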

Posted 1 month ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What You Will Do
- Take part in new business case discussions, including creating test plans and test strategies for business case verification.
- Review test cases for complete functional coverage.
- Independently develop scalable and reliable automated tests and frameworks for testing applications.
- Develop regression suites, automated tests, and test data for projects, and move automation to an agile, continuous-testing model.
- Support and conduct functional and non-functional testing, ensuring that products meet SLAs/SLOs.
- Collaborate with development teams to integrate automated tests into the CI/CD pipeline.
- Ensure communications are thorough and accurate for all work documentation, including status and project updates.
- Conduct bug triage meetings and work with Product Owners, QE and development team leads to track and prioritize defect fixes and support root cause analysis.
- Share and present quality metrics on the status of deliverables to leadership teams and stakeholders.
- Guide junior QEs in the team to accomplish business goals.

What Experience You Need
- BS or MS degree in Computer Science or Business, or equivalent job experience
- 5+ years of experience in automation testing for both front-end and API testing
- Strong programming skills in core Java, Python, or JavaScript
- Understanding of SQL and experience working with databases such as MySQL, PostgreSQL, or Oracle
- Good understanding of software development methodologies (preferably Agile) and testing methodologies
- Expertise in creating test strategies and plans, and experience collaborating with Product Owners, SREs, and Technical Architects to define them
- Proficiency in framework design for web and API automation using Selenium, Appium, TestNG, Rest Assured, Karate, Gauge, Cucumber, or Bruno
- Experience with performance testing tools such as JMeter and Gatling
- Experience deploying and releasing products using Jenkins CI/CD pipelines; understanding of infrastructure-as-code concepts and Helm charts
- Knowledge of security testing concepts, to coordinate with the team on analysing security vulnerabilities in deployed features

What Could Set You Apart
- Experience with cloud-based testing environments (AWS, GCP)
- Knowledge of API testing tools (Bruno, Swagger)
- Cloud certification (GCP)
- Expertise with cross-device testing strategies and automation via device clouds
- Experience in application and performance monitoring using tools like Grafana and Datadog
- Knowledge of test management tools such as Zephyr
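The API-automation skills above map onto Rest Assured or Karate on the Java side; a minimal equivalent in Python (which the listing also names) uses pytest with the `requests` library. The base URL, login endpoint, and response fields below are hypothetical examples, not a real service's contract.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


@pytest.fixture(scope="session")
def auth_header():
    """Obtain a bearer token once per test session (endpoint is illustrative)."""
    resp = requests.post(f"{BASE_URL}/login",
                         json={"user": "qa", "password": "secret"}, timeout=10)
    resp.raise_for_status()
    return {"Authorization": f"Bearer {resp.json()['token']}"}


def test_get_order_returns_expected_fields(auth_header):
    resp = requests.get(f"{BASE_URL}/orders/42", headers=auth_header, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract checks: required fields and types
    assert {"id", "status", "total"} <= body.keys()
    assert isinstance(body["total"], (int, float))


@pytest.mark.parametrize("bad_id", [-1, 0, "abc"])
def test_get_order_rejects_invalid_ids(auth_header, bad_id):
    resp = requests.get(f"{BASE_URL}/orders/{bad_id}", headers=auth_header, timeout=10)
    assert resp.status_code in (400, 404)
```

Wired into a Jenkins or GitLab CI stage, a suite like this becomes the continuous-testing gate the listing describes.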

Posted 1 month ago

Apply

3.0 years

0 Lacs

Delhi

On-site

As an Enterprise Sales Engineer, you will provide technical expertise through sales presentations, product demonstrations, and supporting technical evaluations (POVs). Sales Engineers help qualify and close opportunities with customers and partners, and have a voice with the product team to help prioritize features based on input from customers, competitors, and partners.

At Datadog, we place value in our office culture - the relationships and collaboration it builds and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create a work-life harmony that best fits them.

What You'll Do:
- Partner with the Sales team to articulate the overall Datadog value proposition, vision, and strategy to customers
- Own technical engagement with customers during the trial phase; communicate Datadog's value based on activities, and work with customers to bring any identified issues or concerns to a successful conclusion
- Technically close complex opportunities through advanced competitive knowledge, technical skill, and credibility
- Deliver product and technical briefings/presentations to potential clients
- Maintain accurate notes and feedback in the CRM regarding customer input, both wins and losses
- Proactively engage and communicate with customers and Datadog business/technical teams regarding product feedback and the competitive landscape

Who You Are:
- Passionate about educating customers on observability risks that are meaningful to their business, and able to build and execute an evaluation plan with a customer
- A strong written and oral communicator; this role requires the ability to understand and articulate both the business benefits (value proposition) and the technical advantages of our offering
- Experienced in programming/scripting with any of the following: Java, Python, Ruby, Go, Node.js, PHP, or .NET
- Someone with a minimum of 3 years in a Sales Engineering or DevOps Engineering role
- Able to sit for up to 4 hours traveling to and from client sites
- Able to travel via auto, train, or air up to 45% of the time

Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you're passionate about technology and want to grow your skills, we encourage you to apply.

Benefits and Growth:
- Best-in-breed onboarding
- Generous global benefits
- Intra-departmental mentor and buddy program for in-house networking
- New hire stock equity (RSUs) and employee stock purchase plan (ESPP)
- Continuous professional development, product training, and career pathing
- An inclusive company culture, with the chance to join our Community Guilds and Inclusion Talks

Benefits and Growth listed above may vary based on the country of your employment and the nature of your employment with Datadog.

About Datadog:
Datadog (NASDAQ: DDOG) is a global SaaS business, delivering a rare combination of growth and profitability. We are on a mission to break down silos and solve complexity in the cloud age by enabling digital transformation, cloud migration, and infrastructure monitoring of our customers' entire technology stacks. Built by engineers, for engineers, Datadog is used by organizations of all sizes across a wide range of industries. Together, we champion professional development, diversity of thought, innovation, and work excellence to empower continuous growth. Join the pack and become part of a collaborative, pragmatic, and thoughtful people-first community where we solve tough problems, take smart risks, and celebrate one another. Learn more about #DatadogLife on Instagram, LinkedIn, and the Datadog Learning Center.

Equal Opportunity at Datadog:
Datadog is an Affirmative Action and Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Here are our Candidate Legal Notices for your reference.

Your Privacy:
Any information you submit to Datadog as part of your application will be processed in accordance with Datadog's Applicant and Candidate Privacy Notice.

Posted 1 month ago

Apply

7.0 years

5 - 7 Lacs

Ahmedabad

On-site

Job Title: Senior DevOps Engineer (GCP | DevSecOps | Monitoring)
Employment Type: Full-time
Experience: 7+ years

Job Summary:
We are seeking a highly experienced and results-driven Senior DevOps Engineer to join our dynamic team. The ideal candidate will bring 7+ years of hands-on experience in cloud infrastructure, monitoring, security, and DevSecOps practices, especially within the Google Cloud Platform (GCP) ecosystem. This role demands strong expertise in designing, implementing, and leading complex DevSecOps and monitoring initiatives across cloud-native environments.

Key Responsibilities:
- Lead the end-to-end design, implementation, and delivery of scalable and secure DevSecOps solutions.
- Implement and maintain monitoring and observability tools such as New Relic, Datadog, Grafana, and Prometheus.
- Manage and optimize GCP infrastructure for performance, security, and cost efficiency.
- Define and enforce DevSecOps best practices, integrating security at every stage of the development lifecycle.
- Work closely with Data Engineering teams to support data pipelines and infrastructure automation.
- Manage CI/CD pipelines using GitLab and ensure smooth deployment workflows.
- Maintain containerized environments using Docker and Kubernetes.
- Collaborate with cross-functional teams to ensure system reliability, scalability, and security.

Required Skills & Experience:
- 7+ years of experience in a DevOps/DevSecOps role with a strong background in GCP.
- Proven experience with monitoring/observability tools: New Relic, Datadog, Grafana, Prometheus.
- Deep understanding of DevSecOps principles, cloud security, and compliance practices.
- Strong hands-on experience with Docker and Kubernetes.
- Proficiency with GitLab for CI/CD automation.
- Familiarity with infrastructure-as-code and configuration management tools.
- Solid scripting and automation skills (e.g., Bash, Python, Terraform).
- Experience collaborating with Data Engineers and supporting data-driven applications.

Preferred Qualifications:
- GCP certifications (e.g., Professional Cloud DevOps Engineer, Cloud Architect).
- Experience with other cloud platforms (e.g., AWS, Azure) is a plus.
- Exposure to data pipeline tools and big data platforms is advantageous.

About Encora:
Encora is the preferred digital engineering and modernization partner of some of the world's leading enterprises and digital native companies. With over 9,000 experts in 47+ offices and innovation labs worldwide, Encora's technology practices include Product Engineering & Development, Cloud Services, Quality Engineering, DevSecOps, Data & Analytics, Digital Experience, Cybersecurity, and AI & LLM Engineering. At Encora, we hire professionals based solely on their skills and qualifications, and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.
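For the monitoring stack named here, instrumenting a service so Prometheus can scrape it is the usual starting point. Below is a minimal sketch using the official `prometheus_client` Python package; the metric names, labels, and simulated workload are illustrative assumptions, not anything specified by the posting.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names and labels are illustrative.
REQUESTS = Counter("app_requests_total", "Total requests processed",
                   ["endpoint", "status"])
LATENCY = Histogram("app_request_duration_seconds", "Request latency in seconds",
                    ["endpoint"])


def handle_request(endpoint: str) -> None:
    start = time.perf_counter()
    try:
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
        REQUESTS.labels(endpoint=endpoint, status="200").inc()
    except Exception:
        REQUESTS.labels(endpoint=endpoint, status="500").inc()
        raise
    finally:
        LATENCY.labels(endpoint=endpoint).observe(time.perf_counter() - start)


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        handle_request("/checkout")
```

The same counter/histogram pair feeds Grafana dashboards and alert rules; Datadog and New Relic can also ingest Prometheus-format metrics.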

Posted 1 month ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP - Digital Expert Support Lead
Experience: 12+ years
Location: Pune

Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching.
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI support squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 12+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficiency in monitoring, logging, tracing, and alerting using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.
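The "fallback logic, circuit breakers, and context caching" responsibility above is worth making concrete. Here is a minimal, self-contained circuit breaker guarding a GenAI endpoint call, sketched in Python; the `call_llm` function, the thresholds, and the fallback behaviour are all hypothetical stand-ins, not the posting's actual architecture.

```python
import time


class CircuitBreaker:
    """Tiny circuit breaker: trip after N consecutive failures, retry after a cooldown."""

    def __init__(self, max_failures: int = 3, reset_after_s: float = 30.0):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: failing fast")  # skip the flaky dependency
            self.opened_at = None                                 # half-open: allow one probe
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()                 # trip the breaker
            raise
        self.failures = 0                                         # success resets the count
        return result


def call_llm(prompt: str) -> str:
    """Stand-in for a real GenAI endpoint call (hypothetical)."""
    raise TimeoutError("model endpoint timed out")


breaker = CircuitBreaker()
try:
    breaker.call(call_llm, "summarise this transaction history")
except Exception as exc:
    print(f"fallback answer served; reason: {exc}")  # route to a cached/fallback response
```

Failing fast while the circuit is open is what keeps a slow model endpoint from exhausting upstream threads; the cooldown gives it room to recover.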

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000 and our main objective is to create opportunities for our team members to explore, learn, and grow - all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

Job Details
As a senior SRE / Observability Engineer, you will be part of the Atlas Platform Engineering team and will:
- Create and maintain observability standards and best practices.
- Review the current observability platform, identify areas for improvement, and guide the team in enhancing monitoring, logging, tracing, and alerting capabilities.
- Expand the observability stack across multiple clouds, regions, and clusters, managing all observability data.
- Design and implement monitoring solutions for complex distributed systems to provide deep insights into systems and services, aiming at complete visibility of digital operations.
- Support the ongoing evaluation of new capabilities in the observability stack, conducting proofs of concept, pilots, and tests to validate their suitability.
- Assist teams in creating clear, informative, and actionable dashboards to improve system visibility.
- Automate monitoring and alerting processes, including enrichment strategies and ML-driven anomaly detection where applicable.
- Provide technical leadership to the observability team with clear priorities, ensuring agreed outcomes are achieved in a timely manner.
- Work closely with R&D and product development teams (understanding their requirements and challenges) to ensure seamless visibility into system and service performance.
- Work closely with the Traffic Management team to identify and standardise on existing and new observability tools as part of a holistic solution.
- Conduct training sessions and create documentation for internal teams.
- Support the definition of SLIs (service level indicators) and SLOs (service level objectives) for the Atlas services, and keep track of the error budget of each service.
- Participate in the emergency response process.
- Conduct RCAs (root cause analysis).
- Help to automate repetitive tasks and reduce toil.

Qualifications

People and Communication
- Be a strong team player.
- Have good collaboration and communication skills.
- Ability to translate technical concepts for non-technical audiences.
- Problem-solving and analytical thinking.

Technical Qualifications - General
- Familiarity with cloud platforms (ideally Azure).
- Familiarity with Kubernetes and Istio as the architecture on which the observability and Atlas services run, and how they integrate and scale.
- Experience with infrastructure as code and automation.
- Knowledge of common programming languages and debugging techniques.
- A strong, hands-on technical background: Linux and scripting languages (Bash, Python, Golang).
- Significant understanding of DevOps principles.

Technical Qualifications - Observability
- Strong understanding of observability principles (metrics, logs, traces).
- Experience with APM tools and distributed tracing.
- Proficiency in log aggregation and analysis.
- Knowledge and hands-on experience with monitoring, logging, and tracing tools such as Prometheus, Grafana, Datadog, New Relic, Sumo Logic, the ELK Stack, or others.
- Knowledge of OpenTelemetry, including the OTEL Collector and code instrumentation.
- Experience designing and building unified observability platforms that let teams use data (metrics, logs, and traces) to determine quickly whether their application or service is operating as desired.

Technical Qualifications - SRE
- Understanding of the Google SRE principles.
- Experience defining SLIs and SLOs.
- Experience performing RCAs (root cause analysis).
- Experience in system performance.
- Experience in incident response.
- Knowledge of status tools, such as Atlassian Statuspage or similar.
- Knowledge of incident management and paging tools, such as PagerDuty or similar.
- Knowledge of ITIL (Information Technology Infrastructure Library) processes.

Life at PTC is about more than working with today's most cutting-edge technologies to transform the physical world. It's about showing up as you are and working alongside some of today's most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you'll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us?

We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
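Since the role tracks the error budget of each Atlas service, a small worked example helps: for a request-based SLI, the budget is simply the fraction of requests the SLO allows to fail over the window. A minimal sketch in Python; all numbers are illustrative.

```python
def error_budget_report(slo: float, total: int, errors: int) -> dict:
    """Compute remaining error budget for a request-based SLI over a window.

    slo: availability target, e.g. 0.999
    total / errors: request counts observed in the SLO window
    """
    allowed = total * (1.0 - slo)  # requests the budget lets you fail
    remaining = allowed - errors
    return {
        "sli": 1.0 - errors / total,
        "budget_allowed": allowed,
        "budget_remaining": remaining,
        "budget_spent_pct": 100.0 * errors / allowed if allowed else float("inf"),
    }


# Example: 2M requests in 30 days against a 99.9% SLO, 1,400 of them failed.
report = error_budget_report(slo=0.999, total=2_000_000, errors=1_400)
print(report)  # 2,000 failures allowed -> 600 still in the budget, 70% spent
```

A budget that is 70% spent mid-window is the kind of signal that shifts a team from feature work toward reliability work under Google-style SRE practice.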

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Site Reliability Engineer (SRE) - Lead
Department: COE

Job Summary:
We are seeking a highly skilled and motivated Site Reliability Engineer (SRE) to join our engineering team. As an SRE, you will be responsible for ensuring the reliability, availability, and performance of our systems and services. You will work closely with development and operations teams to build scalable infrastructure, automate processes, and respond to incidents effectively.

Key Responsibilities:

Strategic Leadership & Governance
- Define and evolve the SRE CoE vision, strategy, and roadmap.
- Establish enterprise-wide SRE standards, frameworks, and maturity models.
- Drive adoption of SRE principles across product and platform teams.

Enablement
- Act as a subject matter expert and advisor to engineering teams on reliability, scalability, and performance.
- Conduct workshops, training sessions, and knowledge-sharing forums.
- Promote a culture of observability, automation, and continuous improvement.

Collaboration & Mentorship
- Partner with engineering, product, and operations leaders to align reliability goals with business outcomes.
- Mentor SREs and engineers across teams, fostering a community of practice.
- Lead cross-functional reliability reviews and architecture assessments.
- Collaborate with development, operations, and network teams.
- Align infrastructure reliability with application SLOs/SLIs.
- Advocate for best practices in system architecture and operations.

Infrastructure & Reliability
- Design, implement, and maintain scalable, reliable infrastructure.
- Ensure high availability and disaster recovery strategies.
- Improve reliability for legacy and hybrid (cloud/on-prem) systems.

Monitoring & Incident Management
- Develop and maintain monitoring, alerting, and incident response systems.
- Conduct root cause analysis and post-mortems.
- Participate in on-call rotations and respond to production issues.

Automation & Efficiency
- Automate repetitive tasks using scripting and tooling.
- Lead Infrastructure-as-Code (IaC) and automation for provisioning and scaling.
- Create sustainable systems through automation and continuous improvement.
- Evaluate and recommend tools for monitoring, alerting, incident management, and chaos engineering.
- Build reusable automation frameworks and templates for onboarding teams to SRE practices.
- Collaborate with DevOps and platform teams to integrate reliability tooling into CI/CD pipelines.
- Support rigorous testing and release procedures.

Performance & Capacity
- Lead capacity planning, system upgrades, and OS patching.
- Gather and analyze system/application metrics for performance tuning.

Containerization & Cloud
- Support Kubernetes and container platforms in hybrid environments.
- Work with OpenShift, GCP, Azure, and AWS for cloud-integrated services.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 8+ years of experience in SRE.
- Proficiency in at least one programming/scripting language (e.g., Python).
- Experience with cloud platforms (AWS, GCP, Azure).

Preferred Qualifications:
- Experience in setting up or leading a CoE or similar strategic function.
- Certifications in cloud, DevOps, or SRE-related domains.
- Experience with chaos engineering and resilience testing.
- Experience with observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Experience with incident management and SLO/SLI/SLA frameworks.
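As a concrete illustration of the SLO/SLI/SLA frameworks mentioned above, alerting on error-budget burn rate is a common SRE pattern: page only when fast budget consumption shows up in both a short and a long window, which filters out brief blips. A minimal Python sketch; the 14.4x threshold is borrowed from the multi-window examples in the Google SRE workbook, and all observed ratios are illustrative.

```python
def burn_rate(error_ratio: float, slo: float) -> float:
    """Burn rate = observed error ratio / error ratio the SLO allows."""
    return error_ratio / (1.0 - slo)


def should_page(short_window_ratio: float, long_window_ratio: float,
                slo: float = 0.999) -> bool:
    """Multi-window alert: page only if both the short (e.g. 5-minute) and
    long (e.g. 1-hour) windows burn budget at >= 14.4x the sustainable rate.
    """
    threshold = 14.4
    return (burn_rate(short_window_ratio, slo) >= threshold
            and burn_rate(long_window_ratio, slo) >= threshold)


# 2% of requests failing in both windows against a 99.9% SLO -> 20x burn -> page.
print(should_page(short_window_ratio=0.02, long_window_ratio=0.02))  # True
```

The short window makes the alert reset quickly once the problem is fixed; the long window stops a momentary spike from paging anyone.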

Posted 1 month ago

Apply

5.0 years

9 - 12 Lacs

New Delhi, Delhi, India

On-site

Job Title - DevOps Engineer - App Infrastructure & Scaling (Immediate Joiner)

Role Overview
We are seeking an experienced and highly motivated DevOps Engineer to join our growing technology team. In this role, you will be responsible for designing, implementing, and maintaining the scalable and secure cloud infrastructure that powers our mobile and web applications. You will play a critical role in ensuring system reliability, performance, and cost efficiency across environments.

Key Responsibilities
- Design, configure, and manage cloud infrastructure on Google Cloud Platform (GCP).
- Implement robust horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems.
- Develop and maintain CI/CD pipelines using tools such as GitHub Actions, Jenkins, or GitLab CI.
- Set up real-time monitoring, crash alerting, logging systems, and health dashboards using industry-leading tools.
- Manage and optimize Redis, job queues, caching layers, and backend request loads.
- Automate data backups, enforce secure access protocols, and implement disaster recovery systems.
- Collaborate with the Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load.
- Conduct infrastructure security audits and recommend best practices to prevent downtime and security breaches.
- Monitor and optimize cloud usage and billing, ensuring a cost-effective and scalable architecture.

Required Skills & Qualifications
- 3-5 years of hands-on experience in a DevOps or cloud infrastructure role, preferably with GCP.
- Strong proficiency with Docker, Kubernetes, NGINX, and load-balancing strategies.
- Proven experience with CI/CD pipelines and tools such as GitHub Actions, Jenkins, or GitLab CI.
- Familiarity with monitoring tools like Grafana, Prometheus, New Relic, or Datadog.
- Deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms.
- Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure.
- Working knowledge of Redis, Socket.IO, and message queuing systems (e.g., RabbitMQ, Kafka).

Preferred Qualifications
- Google Cloud Professional certification or equivalent is a plus.
- Experience in optimizing systems for high-concurrency, low-latency environments.
- Familiarity with Infrastructure as Code (IaC) tools such as Terraform or Ansible.
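Rate limiting, called out under API architecture above, is most often implemented as a token bucket. Below is a minimal in-memory sketch in Python; production setups typically keep the bucket state in Redis so the limit holds across multiple instances. The rate and burst size are arbitrary examples.

```python
import time


class TokenBucket:
    """In-memory token bucket: allows short bursts while capping the average rate."""

    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s      # tokens added per second
        self.capacity = burst       # maximum bucket size (burst allowance)
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0      # spend a token for this request
            return True
        return False                # caller should return HTTP 429


limiter = TokenBucket(rate_per_s=5, burst=10)
allowed = sum(limiter.allow() for _ in range(20))
print(f"{allowed}/20 requests admitted")  # roughly the first 10 pass, the rest throttle
```

The burst parameter is the knob that distinguishes "smooth 5 req/s forever" from "tolerate a 10-request spike, then settle to 5 req/s".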

Posted 1 month ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site

Position: AIOps Architect (Kolkata)

Job Summary:
We are seeking an experienced Lead AIOps Engineer to spearhead AIOps initiatives and drive transformative change for our customers. The role requires technical expertise in AIOps, IT infrastructure, and applications. The role will be responsible for assessing complex IT environments, developing comprehensive AIOps strategies and roadmaps, and leading the successful design, implementation, and adoption of AIOps solutions. The candidate is required to have experience in translating operational challenges into effective, AI-driven solutions that deliver measurable improvements in stability, efficiency, and performance.

Key Responsibilities:
- Assess existing IT landscapes, including infrastructure, server environments, applications, and existing operational tools and processes.
- Develop an AIOps strategy and roadmap aligned with business objectives, demonstrating clear ROI and value.
- Define the AIOps architecture, select appropriate tools and technologies, and establish best practices for AIOps adoption.
- Architect end-to-end AIOps solutions, encompassing data ingestion, processing, AI/ML model application, automation, and visualization.
- Lead and mentor the team in the deployment, configuration, and integration of AIOps platforms.
- Ensure solutions effectively address anomaly detection, event correlation, root cause analysis, predictive alerting, and automated remediation across diverse IT domains.
- Act as the primary AIOps subject matter expert and trusted advisor to customers, from technical teams to executive leadership.
- Collaborate with IT operations, SRE, DevOps, application development, security, and business teams to gather requirements, ensure buy-in, and drive successful AIOps adoption.
- Define KPIs and metrics to measure the effectiveness and impact of AIOps initiatives (e.g., reduction in MTTR/MTTD, incident volume, operational costs).
- Oversee the AIOps program lifecycle, from initial assessment and planning through implementation, go-live, and continuous improvement.
- Stay at the forefront of AIOps advancements, emerging technologies, and industry best practices.
- Drive innovation by identifying new opportunities to apply AI/ML to solve operational challenges.

Required Skills and Qualifications:
- Extensive experience (10+ years) in IT operations, enterprise architecture, or technology consulting, with a significant focus on AIOps.
- Proven track record (3+ years) in a lead role designing, strategizing, and implementing AIOps solutions for complex enterprise environments.
- Deep understanding of AIOps principles, methodologies, and the full AIOps lifecycle.
- Expertise in one or more leading AIOps platforms (e.g., Dynatrace, Splunk ITSI, Datadog, ServiceNow ITOM Health/AIOps, Moogsoft, BigPanda, Elastic Stack with ML) and their application to infrastructure, server, and application monitoring.
- Strong knowledge of IT infrastructure (networks, storage, virtualization, public/private/hybrid cloud environments - AWS, Azure, GCP), server operating systems (Linux, Windows), and application architectures (monolithic, microservices, containerization).
- Solid understanding of AI/ML concepts and hands-on experience with event correlation, anomaly detection, and automated remediation.
- Experience with data ingestion pipelines, log management, metrics collection, event correlation, and APM tools.
- Proficiency in scripting languages (e.g., Python, PowerShell) for automation and integration.
- Excellent analytical, strategic thinking, problem-solving, and decision-making skills.
- Strong communication, presentation, and interpersonal skills, with a strong ability to influence and build consensus.
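To make the anomaly-detection requirement concrete: the simplest AIOps-style detector flags metric samples whose z-score against a trailing window exceeds a threshold (the platforms listed above use far more sophisticated models, so treat this as a teaching sketch only). A self-contained Python example with simulated latency data:

```python
from statistics import mean, stdev


def detect_anomalies(series: list[float], window: int = 10,
                     threshold: float = 3.0) -> list[int]:
    """Flag points whose z-score vs. the trailing window exceeds the threshold."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue               # flat history: skip to avoid division by zero
        z = abs(series[i] - mu) / sigma
        if z > threshold:
            anomalies.append(i)    # index of the anomalous sample
    return anomalies


# Simulated latency metric (ms): steady around 100, one spike at index 15.
latency = [100, 102, 99, 101, 98, 100, 103, 97, 100, 101,
           99, 102, 100, 98, 101, 180, 100, 99]
print(detect_anomalies(latency))  # -> [15]
```

In a real deployment this logic runs per metric stream after ingestion, and flagged indices feed event correlation and automated remediation rather than a print statement.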

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

