OPENTEXT - THE INFORMATION COMPANY
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.
AI-First. Future-Driven. Human-Centered.
At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.
YOUR IMPACT
We are seeking a passionate and detail-oriented Quality Assurance (QA) Engineer to join our AI Engineering and Enablement team.
In this role, you will be responsible for validating Generative AI systems, multi-agent workflows, and Retrieval-Augmented Generation (RAG) pipelines developed using frameworks like LangGraph, LangChain, and Crew AI.
You will work closely with AI engineers, data scientists, and product owners to ensure the accuracy, reliability, and performance of LLM-powered enterprise applications.
What The Role Offers
- Be part of a next-generation AI engineering team delivering enterprise-grade GenAI solutions.
- Gain hands-on experience testing LangGraph-based agentic workflows and RAG pipelines.
- Learn from senior AI engineers working on production-grade LLM systems.
- Opportunity to grow into AI Quality Specialist or AI Evaluation Engineer roles as the team expands.
- Develop and execute test cases for validating RAG pipelines, LLM integrations, and agentic workflows.
- Validate context retrieval accuracy, prompt behavior, and response relevance across different LLM configurations.
- Conduct functional, integration, and regression testing for GenAI applications exposed via APIs and microservices.
- Test Agent-to-Agent (A2A) and Model Context Protocol (MCP) communication flows for correctness, consistency, and task coordination.
- Verify data flow and embedding accuracy between vector databases (Milvus, Weaviate, pgvector, Pinecone).
- Build and maintain automated test scripts for evaluating AI pipelines using Python and PyTest.
- Leverage LangSmith, Ragas, or TruLens for automated evaluation of LLM responses (factuality, coherence, grounding).
- Integrate AI evaluation tests into CI/CD pipelines (GitLab/Jenkins) to ensure continuous validation of models and workflows.
- Support performance testing of AI APIs and RAG retrieval endpoints for latency, accuracy, and throughput.
- Assist in creating automated reports summarizing evaluation metrics such as Precision@K, Recall@K, grounding scores, and hallucination rates.
- Validate guardrail mechanisms, response filters, and safety constraints to ensure secure and ethical model output.
- Use OpenTelemetry (OTEL) and Grafana dashboards to monitor workflow health and identify anomalies.
- Participate in bias detection and red teaming exercises to test AI behavior under adversarial conditions.
- Work closely with AI engineers to understand system logic, prompts, and workflow configurations.
- Document test plans, results, and evaluation methodologies for repeatability and governance audits.
- Collaborate with Product and MLOps teams to streamline release readiness and model validation processes.
What You Need To Succeed
- Education: Bachelor’s degree in Computer Science, AI/ML, Software Engineering, or related field.
- Experience: 4–7 years in Software QA or Test Automation, with at least 2 years of exposure to AI/ML or GenAI systems.
- Solid hands-on experience with Python and PyTest for automated testing.
- Basic understanding of LLMs, RAG architecture, and vector database operations.
- Exposure to LangChain, LangGraph, or other agentic AI frameworks.
- Familiarity with FastAPI, Flask, or REST API testing tools (Postman, PyTest APIs).
- Experience with CI/CD pipelines (GitLab, Jenkins) for test automation.
- Working knowledge of containerized environments (Docker, Kubernetes).
- Understanding of AI evaluation metrics (Precision@K, Recall@K, grounding, factual accuracy).
- Exposure to AI evaluation frameworks like Ragas, TruLens, or OpenAI Evals.
- Familiarity with AI observability and telemetry tools (OpenTelemetry, Grafana, Prometheus).
- Experience testing LLM-powered chatbots, retrieval systems, or multi-agent applications.
- Knowledge of guardrail frameworks (Guardrails.ai, NeMo Guardrails).
- Awareness of AI governance principles, data privacy, and ethical AI testing.
- Experience with cloud-based AI services (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
- Curious and eager to learn emerging AI technologies.
- Detail-oriented with strong problem-solving and analytical skills.
- Excellent communicator who can work closely with engineers and product managers.
- Passion for quality, reliability, and measurable AI performance.
- Proactive mindset with ownership of test planning and execution.
OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.