Sr. Quality Assurance Engineer - AI

4 - 7 years

Work Mode: On-site

Job Type: Full Time

Job Description

OpenText - The Information Company

OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

AI-First. Future-Driven. Human-Centered.

At OpenText, AI is at the heart of everything we do, powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

YOUR IMPACT

We are seeking a passionate and detail-oriented Quality Assurance (QA) Engineer to join our AI Engineering and Enablement team. In this role, you will be responsible for validating Generative AI systems, multi-agent workflows, and Retrieval-Augmented Generation (RAG) pipelines developed using frameworks like LangGraph, LangChain, and Crew AI. You will work closely with AI engineers, data scientists, and product owners to ensure the accuracy, reliability, and performance of LLM-powered enterprise applications.

What The Role Offers

  • Be part of a next-generation AI engineering team delivering enterprise-grade GenAI solutions.
  • Gain hands-on experience testing LangGraph-based agentic workflows and RAG pipelines.
  • Learn from senior AI engineers working on production-grade LLM systems.
  • Opportunity to grow into AI Quality Specialist or AI Evaluation Engineer roles as the team expands.
  • Develop and execute test cases for validating RAG pipelines, LLM integrations, and agentic workflows.
  • Validate context retrieval accuracy, prompt behaviour, and response relevance across different LLM configurations.
  • Conduct functional, integration, and regression testing for GenAI applications exposed via APIs and microservices.
  • Test Agent-to-Agent (A2A) & Model Context Protocol (MCP) communication flows for correctness, consistency, and task coordination.
  • Verify data flow and embedding accuracy between vector databases (Milvus, Weaviate, pgvector, Pinecone).
  • Build and maintain automated test scripts for evaluating AI pipelines using Python and PyTest (a minimal illustrative sketch follows this list).
  • Leverage LangSmith, Ragas, or TruLens for automated evaluation of LLM responses (factuality, coherence, grounding).
  • Integrate AI evaluation tests into CI/CD pipelines (GitLab/Jenkins) to ensure continuous validation of models and workflows.
  • Support performance testing of AI APIs and RAG retrieval endpoints for latency, accuracy, and throughput.
  • Assist in creating automated reports summarizing evaluation metrics such as Precision@K, Recall@K, grounding scores, and hallucination rates.
  • Validate guardrail mechanisms, response filters, and safety constraints to ensure secure and ethical model output.
  • Use OpenTelemetry (OTEL) and Grafana dashboards to monitor workflow health and identify anomalies.
  • Participate in bias detection and red teaming exercises to test AI behavior under adversarial conditions.
  • Work closely with AI engineers to understand system logic, prompts, and workflow configurations.
  • Document test plans, results, and evaluation methodologies for repeatability and governance audits.
  • Collaborate with Product and MLOps teams to streamline release readiness and model validation processes.
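
As a minimal, hypothetical sketch of the kind of automated evaluation described in the PyTest item above, the example below computes Precision@K and Recall@K for a RAG retriever and gates on minimum thresholds. The retrieve() function, the sample query, the document IDs, and the thresholds are illustrative assumptions, not part of any actual OpenText pipeline.

    import pytest

    def precision_at_k(retrieved, relevant, k):
        """Fraction of the top-k retrieved document IDs that are relevant."""
        top_k = retrieved[:k]
        if not top_k:
            return 0.0
        return sum(1 for doc in top_k if doc in relevant) / len(top_k)

    def recall_at_k(retrieved, relevant, k):
        """Fraction of all relevant document IDs that appear in the top-k results."""
        if not relevant:
            return 0.0
        top_k = retrieved[:k]
        return sum(1 for doc in relevant if doc in top_k) / len(relevant)

    def retrieve(query):
        # Placeholder standing in for a call to the real RAG retriever / vector store.
        return ["doc_3", "doc_7", "doc_1", "doc_9", "doc_2"]

    @pytest.mark.parametrize("query, relevant", [
        ("What is the refund policy?", {"doc_3", "doc_1"}),
    ])
    def test_retrieval_quality(query, relevant):
        retrieved = retrieve(query)
        # Thresholds are illustrative; real gates would come from a baselined eval set.
        assert precision_at_k(retrieved, relevant, k=5) >= 0.4
        assert recall_at_k(retrieved, relevant, k=5) >= 0.5

In practice, checks like these would run against a curated query/ground-truth set and be wired into the CI/CD pipelines mentioned above so every model or prompt change is validated automatically.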

What You Need To Succeed

  • Education: Bachelor's degree in Computer Science, AI/ML, Software Engineering, or related field.
  • Experience: 4-7 years in Software QA or Test Automation, with at least 2 years of exposure to AI/ML or GenAI systems.
  • Solid hands-on experience with Python and PyTest for automated testing.
  • Basic understanding of LLMs, RAG architecture, and vector database operations.
  • Exposure to LangChain, LangGraph, or other agentic AI frameworks.
  • Familiarity with FastAPI, Flask, or REST API testing tools (Postman, PyTest APIs); a brief illustrative example follows this list.
  • Experience with CI/CD pipelines (GitLab, Jenkins) for test automation.
  • Working knowledge of containerized environments (Docker, Kubernetes).
  • Understanding of AI evaluation metrics (Precision@K, Recall@K, grounding, factual accuracy).
  • Exposure to AI evaluation frameworks like Ragas, TruLens, or OpenAI Evals.
  • Familiarity with AI observability and telemetry tools (OpenTelemetry, Grafana, Prometheus).
  • Experience testing LLM-powered chatbots, retrieval systems, or multi-agent applications.
  • Knowledge of guardrail frameworks (Guardrails.ai, NeMo Guardrails).
  • Awareness of AI governance principles, data privacy, and ethical AI testing.
  • Experience with cloud-based AI services (AWS Sagemaker, Azure OpenAI, GCP Vertex AI).
  • Curious and eager to learn emerging AI technologies.
  • Detail-oriented with strong problem-solving and analytical skills.
  • Excellent communicator who can work closely with engineers and product managers.
  • Passion for quality, reliability, and measurable AI performance.
  • Proactive mindset with ownership of test planning and execution.
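
As a self-contained illustration of the REST API testing skills listed above, the sketch below exercises a stub LLM-backed endpoint with FastAPI's TestClient under PyTest. The /ask route, its canned response, and the assertions are assumptions made for the example, not the real application under test.

    from fastapi import FastAPI
    from fastapi.testclient import TestClient
    from pydantic import BaseModel

    app = FastAPI()

    class Question(BaseModel):
        query: str

    @app.post("/ask")
    def ask(question: Question):
        # Stub standing in for the real RAG / LLM call behind the endpoint.
        return {"answer": "stub answer", "sources": ["doc_1"]}

    client = TestClient(app)

    def test_ask_returns_answer_and_sources():
        response = client.post("/ask", json={"query": "What does OpenText do?"})
        assert response.status_code == 200
        body = response.json()
        # Contract checks: the response must include a non-empty answer and citations.
        assert body["answer"]
        assert isinstance(body["sources"], list) and body["sources"]
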
OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at [HIDDEN TEXT]. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
