We are a skilled team at TechUnity, where we invest deeply in AI technologies and explore emerging fields with the backing of a global leader. As an AI Testing Engineer, you will play a key role in testing and validating intelligent systems that power the next generation of expert solutions across the legal, tax, risk, and compliance domains.
About The Role
As a Senior Artificial Intelligence Tester, you will collaborate with a cross-functional team of product managers, UX designers, AI engineers, and software developers to design, implement, and execute comprehensive testing strategies for AI-powered solutions.
Test AI-Driven Systems
- Design and implement comprehensive test strategies for advanced AI systems, including multi-component pipelines, retrieval-augmented generation (RAG), and custom AI agents with multi-step reasoning.
- Develop automated testing frameworks for AI model performance, accuracy, and reliability in production environments.
- Create test cases specific to AI/ML systems, including model validation, output quality assessment, and edge case testing (a minimal pytest sketch of such a check follows this list).
- Validate AI model integrations with production software through robust APIs and scalable data pipelines.
- Test AI systems adapted to specialized domains, ensuring quality for expert systems in areas such as legal, tax, and compliance.
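For illustration, the sketch below shows what a pytest-based output-quality check for a RAG-style system might look like. The `rag_pipeline.answer` import, the sample questions, and the expected keywords are all hypothetical placeholders, not part of any actual codebase referenced in this role.

```python
# Minimal pytest sketch of an output-quality check for a RAG pipeline.
# `rag_pipeline.answer` is a hypothetical function assumed to return
# (answer_text, retrieved_chunks); the questions and expected keywords
# below are illustrative placeholders only.
import pytest

from rag_pipeline import answer  # hypothetical project module

CASES = [
    ("What is the VAT registration threshold?", ["threshold", "registration"]),
    ("Which form is used to report capital gains?", ["form", "capital gains"]),
]

@pytest.mark.parametrize("question,expected_keywords", CASES)
def test_answer_is_on_topic_and_grounded(question, expected_keywords):
    text, chunks = answer(question)

    # Output-quality assertions: non-empty answer that mentions the expected
    # domain terms for the question.
    assert text.strip(), "empty answer"
    for keyword in expected_keywords:
        assert keyword.lower() in text.lower(), f"missing expected term: {keyword}"

    # Grounding check: the answer should share some vocabulary with at least
    # one retrieved chunk, as a cheap proxy for being supported by context.
    answer_tokens = set(text.lower().split())
    assert any(len(answer_tokens & set(chunk.lower().split())) >= 3
               for chunk in chunks), "answer shares little text with retrieved context"
```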
Innovate In AI Testing
- Evaluate and prototype cutting-edge testing techniques for AI systems to address unique quality challenges.
- Develop proof-of-concept automated testing frameworks for new AI-driven features.
- Stay current with AI testing methodologies, quality metrics, and emerging technologies.
- Implement testing approaches for AI model drift, bias detection, and fairness validation; a minimal drift-check sketch follows this list.
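One lightweight way to check for drift, as referenced above, is to compare the distribution of model confidence scores in production against a saved baseline. The sketch below uses a two-sample Kolmogorov–Smirnov test; the file paths and thresholds are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of a score-drift check, assuming confidence scores are
# exported as plain text files (one float per line). The file names and the
# p-value / statistic thresholds are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def load_scores(path: str) -> np.ndarray:
    return np.loadtxt(path)

def check_drift(baseline_path: str, production_path: str,
                max_statistic: float = 0.1) -> bool:
    """Return True if the production score distribution has drifted
    noticeably from the baseline, using a two-sample KS test."""
    baseline = load_scores(baseline_path)
    production = load_scores(production_path)
    result = ks_2samp(baseline, production)
    # Flag drift only when the distributions differ both statistically and by
    # a practically meaningful margin.
    return result.pvalue < 0.05 and result.statistic > max_statistic

if __name__ == "__main__":
    if check_drift("baseline_scores.txt", "production_scores.txt"):
        print("Drift detected: investigate recent model or data changes.")
    else:
        print("No significant drift detected.")
```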
Provide Technical Leadership
- Break down functional requirements into comprehensive test specifications and automation strategies.
- Mentor junior QA engineers and facilitate technical discussions on AI testing best practices.
- Contribute to MLOps and LLMOps practices with a focus on quality assurance, continuous testing, and monitoring.
- Act as a thought leader in AI quality assurance, sharing expertise in company-wide forums and representing the organization in discussions of emerging testing technologies.
Ensure Quality & Operations
- Implement comprehensive automated testing frameworks and monitoring systems for AI model performance, accuracy, and reliability (an illustrative accuracy-gate sketch follows this list).
- Create and maintain test automation suites using industry-standard tools and frameworks (Selenium, pytest, etc.).
- Ensure compliance with ethical AI principles, security standards, and regulatory requirements through thorough testing.
- Conduct systems analysis and recommend operational improvements to testing processes.
- Report quality deviations to development teams and create informative bug reports with effective follow-up.
- Create detailed test reports documenting all issues found during testing iterations.
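A common way to monitor accuracy in CI, as mentioned above, is an automated regression gate that fails the build when accuracy on a labeled sample drops below an agreed threshold. The sketch below assumes a hypothetical `classify` function, a `labeled_sample.jsonl` file, and a 0.90 threshold, all of which are illustrative.

```python
# Minimal pytest sketch of an accuracy regression gate for CI. The
# `model_client.classify` import, the sample file, and the threshold are
# hypothetical assumptions for illustration only.
import json

from model_client import classify  # hypothetical: text -> predicted label

ACCURACY_THRESHOLD = 0.90

def load_labeled_sample(path: str = "labeled_sample.jsonl"):
    # Each line is a JSON object with "text" and "label" fields.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def test_accuracy_does_not_regress():
    sample = load_labeled_sample()
    correct = sum(1 for row in sample if classify(row["text"]) == row["label"])
    accuracy = correct / len(sample)
    # Fail the build if accuracy drops below the agreed release threshold.
    assert accuracy >= ACCURACY_THRESHOLD, (
        f"accuracy {accuracy:.2%} below threshold {ACCURACY_THRESHOLD:.0%}"
    )
```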
Collaborate Across Functions
- Work closely with AI researchers, engineers, designers, and product teams to translate AI quality requirements into effective test strategies.
- Participate in daily stand-ups and iteration meetings in an Agile/Scrum environment.
- Optimize testing processes considering factors like test coverage, execution time, and resource usage.
- Support application feature enhancements by ensuring robust quality assurance for AI capabilities.
Required Skills And Experience
- Bachelor's degree in Computer Science or equivalent experience.
- 5+ years of experience in software quality assurance and test automation; at least 2 years focused on testing AI/ML systems.
- Approximately 70% automation/development experience and 30% manual testing experience.
- Proficiency in Python and Java with strong coding skills for test automation.
- Experience with test automation tools and frameworks (e.g., Selenium, pytest, JUnit).
- Strong understanding of machine learning principles, evaluation metrics, and AI system testing methodologies.
- Knowledge of MLOps/LLMOps and the end-to-end lifecycle of AI-powered software applications.
- Experience testing AI model integrations in production systems using APIs and data pipelines.
- Good knowledge and experience working with relational databases (SQL, Oracle) and ability to write complex SQL scripts.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and containerization tools (e.g., Docker, Kubernetes).
- Hands-on experience with source control and CI/CD systems such as Git and Jenkins.
- Good knowledge and experience working with Web Services and RESTful APIs.
- Excellent problem-solving skills and ability to work independently in a fast-paced environment.
- Strong communication skills and experience working in cross-functional teams.
- Ability to work on multiple projects while setting priorities to complete tasks within set timelines.
Preferred Qualifications
- Experience testing AI-driven systems, agent-based architectures, or AI APIs from providers like OpenAI and Anthropic.
- Knowledge of testing vector databases, embeddings, or search-based AI systems.
- Experience with AI/ML frameworks (e.g., PyTorch, TensorFlow) for understanding model behavior.
- Familiarity with AI model evaluation techniques, including accuracy metrics, performance benchmarks, and bias testing.
- Experience creating effective end-to-end test plans for complex AI systems.
- Knowledge of NoSQL databases.
- Domain knowledge in legal, tax, or accounting.
- A portfolio of projects demonstrating expertise in testing AI/ML solutions and building quality frameworks for LLMs.
Job Type: Permanent
Pay: ₹8,086.00 - ₹54,236.52 per month
Benefits:
- Paid time off
- Provident Fund
Work Location: In person