AI-First. Future-Driven. Human-Centered.
At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.
YOUR IMPACT
We are seeking a highly skilled AI Systems Engineer to lead the design, development, and optimization of Retrieval-Augmented Generation (RAG) pipelines and multi-agent AI workflows within enterprise-scale environments. The role requires deep technical expertise across LLM orchestration, context engineering, and production-grade deployment practices. You will work cross-functionally with data, platform, and product teams to build scalable, reliable, and context-aware AI systems that power next-generation enterprise intelligence solutions.
What The Role Offers
- Be part of an enterprise AI transformation team shaping the future of LLM-driven applications.
- Work with cutting-edge technologies in AI orchestration, RAG, and multi-agent systems.
- Opportunity to architect scalable, secure, and context-aware AI systems deployed across global enterprise environments.
- Collaborative environment fostering continuous learning and innovation in Generative AI systems engineering.
- Architect, implement, and optimize enterprise-grade RAG pipelines covering data ingestion, embedding creation, and vector-based retrieval.
- Design, build, and orchestrate multi-agent workflows using frameworks such as LangGraph, Crew AI, or AI Development Kit (ADK) for collaborative task automation.
- Engineer prompts and contextual templates to enhance LLM performance, accuracy, and domain adaptability.
- Integrate and manage vector databases (pgvector, Milvus, Weaviate, Pinecone) for semantic search and hybrid retrieval.
- Develop and maintain data pipelines for structured and unstructured data using SQL and NoSQL systems.
- Expose RAG workflows through APIs using FastAPI or Flask, ensuring high reliability and performance.
- Containerize, deploy, and scale AI microservices using Docker, Kubernetes, and Helm within enterprise-grade environments.
- Implement CI/CD automation pipelines via GitLab or similar tools to streamline builds, testing, and deployments.
- Collaborate with cross-functional teams (Data, ML, DevOps, Product) to integrate retrieval, reasoning, and generation into end-to-end enterprise systems.
- Monitor and enhance AI system observability using Prometheus, Grafana, and OpenTelemetry for real-time performance and reliability tracking.
- Integrate LLMs with enterprise data sources and knowledge graphs to deliver contextually rich, domain-specific outputs.
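The responsibilities above trace a full RAG lifecycle: ingestion, embedding creation, vector-based retrieval, and context assembly for the LLM. Purely as an illustration of that flow, here is a minimal, dependency-free sketch; the bag-of-words "embedder" and in-memory index are hypothetical stand-ins for a real embedding model and a vector database such as pgvector or Milvus, and all names are invented for this example.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an embedding model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class InMemoryVectorIndex:
    """Stands in for a vector database (pgvector, Milvus, Weaviate, Pinecone)."""

    def __init__(self):
        self.docs = []  # list of (embedding, text) pairs

    def ingest(self, texts):
        # Ingestion + embedding creation stage.
        for t in texts:
            self.docs.append((embed(t), t))

    def retrieve(self, query: str, k: int = 2):
        # Vector-based retrieval stage: rank by similarity to the query.
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]


def build_prompt(query: str, index: InMemoryVectorIndex) -> str:
    # Context-engineering stage: retrieved chunks are injected into the
    # prompt that would be sent to the LLM.
    context = "\n".join(index.retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"


index = InMemoryVectorIndex()
index.ingest([
    "OpenText builds information management software.",
    "RAG pipelines combine retrieval with generation.",
    "Kubernetes orchestrates containerized workloads.",
])
print(build_prompt("What does a RAG pipeline do?", index))
```

In production, each stage above is swapped for its enterprise-grade counterpart (embedding service, managed vector store, observability hooks), but the data flow stays the same.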
What You Need To Succeed
- Education: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related technical discipline.
- Experience: 5–10 years in AI/ML system development, deployment, and optimization within enterprise or large-scale environments.
- Deep understanding of Retrieval-Augmented Generation (RAG) architecture and hybrid retrieval mechanisms.
- Proficiency in Python with hands-on expertise in FastAPI, Flask, and REST API design.
- Strong experience with vector databases (pgvector, Milvus, Weaviate, Pinecone).
- Proficiency in prompt engineering and context engineering for LLMs.
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes, Helm) in production-grade deployments.
- Experience with CI/CD automation using GitLab, Jenkins, or equivalent tools.
- Familiarity with LangChain, LangGraph, Google ADK, or similar frameworks for LLM-based orchestration.
- Knowledge of AI observability, logging, and reliability engineering principles.
- Understanding of enterprise data governance, security, and scalability in AI systems.
- Proven track record of building and maintaining production-grade AI applications with measurable business impact.
- Experience in fine-tuning or parameter-efficient tuning (PEFT/LoRA) of open-source LLMs.
- Familiarity with open-source model hosting, LLM governance frameworks, and model evaluation practices.
- Knowledge of multi-agent system design and Agent-to-Agent (A2A) communication frameworks.
- Exposure to LLMOps platforms such as LangSmith, Weights & Biases, or Kubeflow.
- Experience with cloud-based AI infrastructure (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
- Working understanding of distributed systems, API gateway management, and service mesh architectures.
- Strong analytical and problem-solving mindset with attention to detail.
- Effective communicator with the ability to collaborate across technical and business teams.
- Self-motivated, proactive, and capable of driving end-to-end ownership of AI system delivery.
- Passion for innovation in LLM orchestration, retrieval systems, and enterprise AI solutions.
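Several requirements above concern multi-agent system design and agent-to-agent hand-offs. As a rough, framework-free sketch of that pattern, the example below chains two toy "agents" sequentially; `Agent`, `Message`, and `run_pipeline` are hypothetical names for illustration only, not APIs from LangGraph, Crew AI, or ADK, and in a real system each handler would wrap an LLM call.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Message:
    # A minimal agent-to-agent (A2A) message envelope.
    sender: str
    content: str


class Agent:
    """A toy agent: a name plus a handler that turns an incoming
    message into a reply string."""

    def __init__(self, name: str, handle: Callable[[Message], str]):
        self.name = name
        self.handle = handle


def run_pipeline(agents, task: str) -> str:
    # Sequential hand-off: each agent's output becomes the next
    # agent's input, starting from the user's task.
    msg = Message(sender="user", content=task)
    for agent in agents:
        msg = Message(sender=agent.name, content=agent.handle(msg))
    return msg.content


researcher = Agent("researcher", lambda m: f"notes on: {m.content}")
writer = Agent("writer", lambda m: f"draft based on {m.content}")
result = run_pipeline([researcher, writer], "RAG observability")
print(result)  # draft based on notes on: RAG observability
```

Production frameworks add routing, state, retries, and tool use on top of this core idea, but the message-passing structure is the same.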
OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.