AI Evals & Test Engineer

Experience: 5 years

Posted: 3 weeks ago | Platform: LinkedIn

Work Mode: Remote

Job Type: Full Time

Job Description

Job Summary:

We are looking for an AI Evaluation & Test Engineer to join our growing team to ensure that our generative AI models and applications are safe, accurate, trustworthy, and deliver an elegant user experience. You will serve as the first customer of our AI systems. This role is ideal for product-minded engineers who obsess over product quality and customer-centricity, and are passionate about shaping the behavior of AI systems in the real world.


Key Responsibilities:

  • Build and maintain AI evaluation pipelines to test, measure, and evaluate the behavior and performance of AI systems.
  • Implement traces, spans, and session tracking for observability, and identify error propagation in multi-step pipelines.
  • Define AI quality metrics and KPIs around factuality, faithfulness, toxicity, grounding precision/recall, latency, cost, etc., with clear acceptance bars.
  • Implement evaluation and testing automation to enable end-to-end system and regression testing at scale.
  • Define criteria for and implement release gates in the CI/CD pipeline.
  • Find creative ways to break products.
  • Assist in root cause analysis and troubleshooting of bugs and field issues.
  • Collaborate with cross-functional teammates from product, engineering, linguistics, and customer support to shape human-AI interaction paradigms and ensure that our AI models and applications deliver the desired outcome and user experience.
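To illustrate the kind of "metric with a clear acceptance bar" and "release gate" work described above, here is a minimal sketch. The metric (token-overlap grounding), the thresholds, and all function names are hypothetical examples, not part of this role's actual stack:

```python
def grounding_precision_recall(answer_tokens, source_tokens):
    """Toy grounding metric: precision is the fraction of answer tokens
    found in the source; recall is the fraction of source tokens covered."""
    answer, source = set(answer_tokens), set(source_tokens)
    if not answer or not source:
        return 0.0, 0.0
    overlap = answer & source
    return len(overlap) / len(answer), len(overlap) / len(source)

def passes_release_gate(precision, recall, p_bar=0.8, r_bar=0.5):
    """Acceptance bars (p_bar, r_bar) are illustrative thresholds a team
    would tune; a CI/CD release gate would assert on this result."""
    return precision >= p_bar and recall >= r_bar

# Example: an answer fully grounded in the source clears the gate.
p, r = grounding_precision_recall(
    ["paris", "is", "the", "capital"],
    ["paris", "is", "the", "capital", "of", "france"],
)
# p == 1.0, r == 4/6, so passes_release_gate(p, r) is True here.
```

In a real pipeline, a check like this would run inside a Pytest suite so that a regression below the acceptance bar fails the build.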


Minimum Qualifications and Experience:

  • Bachelor’s or Master’s degree in CS/CE/IT/EE/E&TC or related fields with 5+ years of experience in manual and automation testing of software products, with at least 2 years in evaluating and testing AI/ML products.


Required Expertise:

  • Strong software testing fundamentals and expertise in writing test plans, executing test cases, and generating detailed reports and dashboards.
  • Strong analytical and debugging skills, and attention to detail.
  • Proficiency in Python, scripting, and software testing automation frameworks and tools such as Pytest, Selenium, Robot Framework, etc.
  • Working knowledge of generative AI models, AI agents, and related concepts such as retrieval augmented generation (RAG), prompt engineering, context engineering, explainability, traceability, observability, guard rails, reasoning, specificity, etc.
  • Sound understanding of the fundamental differences in the approach for testing conventional software versus evaluating generative AI systems.
  • Team player with excellent interpersonal skills and the ability to collaborate effectively with remote and cross-functional team members.
  • Go-getter attitude and ability to flourish in a fast-paced, startup environment.
  • Experience in any of the following would be a big plus:
    - AI evaluation frameworks such as Arize, Braintrust, DeepEval, LangSmith, and Ragas
    - AI safety and red teaming, e.g., prompt injection, jailbreaks, adversarial and stress testing
    - AI evaluation methods, e.g., human-in-the-loop, LLM-as-a-Judge
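As a sketch of the LLM-as-a-Judge method mentioned above: a judge model receives the question, a reference answer, and the candidate answer, and returns a verdict. The model call below is a deterministic stub (a stand-in, since a real harness would call an actual LLM), and all names are hypothetical:

```python
def call_judge_model(prompt: str) -> str:
    """Placeholder judge: a real system would send `prompt` to an LLM.
    This stub passes the candidate iff it contains the reference verbatim."""
    fields = dict(
        line.split(": ", 1) for line in prompt.splitlines() if ": " in line
    )
    ref = fields.get("Reference answer", "").lower()
    cand = fields.get("Candidate answer", "").lower()
    return "PASS" if ref and ref in cand else "FAIL"

def judge_answer(question: str, reference: str, candidate: str) -> bool:
    """Build a judging prompt and return True if the judge says PASS."""
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Reply PASS if the candidate is faithful to the reference, else FAIL."
    )
    return call_judge_model(prompt) == "PASS"
```

Human-in-the-loop evaluation would sample a slice of these judged pairs for human review to calibrate how well the judge's verdicts track human judgment.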
