Position: QA Lead
Experience: 8–11 years
Location: Noida, Gurgaon & Bengaluru
Key Responsibilities
Quality Strategy & Governance
- Define QA strategy and test governance for AI/GenAI projects within SDLC and MLOps pipelines.
- Establish quality metrics for AI outputs (e.g., factual accuracy, precision, recall, hallucination rate).
- Develop Responsible AI validation guidelines aligned to organizational compliance and ethical frameworks.
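To illustrate the kind of quality metrics referenced above, a minimal sketch of computing precision, recall, and F1 over labeled AI outputs (function and variable names here are illustrative, not part of any prescribed toolset):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for binary-labeled outputs."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 3 true positives, 1 false positive, 1 false negative
p, r, f = classification_metrics([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0])
```

In practice these would be computed with a library such as scikit-learn; the point is that each AI output must carry a ground-truth label before any of these metrics can be tracked.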
Test Planning & Execution
- Design functional, non-functional, and model validation test plans for AI-driven features.
- Create robust regression suites combining classical QA and AI model testing approaches.
- Collaborate with data scientists and model developers to define performance and accuracy benchmarks.
- Validate prompt engineering logic and LLM response consistency across datasets.
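One simple way to quantify LLM response consistency across repeated runs of the same prompt is to normalize each response and measure agreement with the modal answer. This is a sketch; the normalization rule and agreement measure are assumptions, not a prescribed method:

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase and strip punctuation so superficial variation is ignored."""
    return tuple(re.sub(r"[^a-z0-9 ]", "", text.lower()).split())

def consistency_rate(responses):
    """Fraction of responses matching the most common normalized answer."""
    normalized = [normalize(r) for r in responses]
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(responses)

runs = ["The capital is Paris.", "the capital is Paris", "It's Lyon."]
rate = consistency_rate(runs)  # 2 of 3 runs agree
```

A regression suite can then assert that the consistency rate for each benchmark prompt stays above an agreed threshold across model versions.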
Automation & Tools Integration
- Implement automation for test data generation, API validation, and result comparison using Python- or Java-based frameworks.
- Integrate AI testing workflows with CI/CD and MLOps pipelines (e.g., MLflow, Kubeflow, Jenkins).
- Develop scripts to validate inference latency, model version accuracy, and output reliability.
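A latency-validation script of the kind described above might time repeated calls against an inference endpoint and assert a tail-latency budget. The sketch below uses a stand-in function instead of a real endpoint, and the budget value is purely illustrative:

```python
import time

def measure_p95_latency(call, n=50):
    """Time n invocations and return an approximate p95 latency in seconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[int(0.95 * (len(samples) - 1))]

def fake_inference():
    """Stand-in for a real model endpoint call (hypothetical)."""
    time.sleep(0.001)

p95 = measure_p95_latency(fake_inference)
assert p95 < 0.5, f"p95 latency {p95:.3f}s exceeds budget"
```

Wired into CI, the same pattern gates deployments: a model version whose p95 latency regresses past the budget fails the pipeline before release.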
Defect Management & RCA
- Lead triage and RCA for model anomalies or performance deviations.
- Track model quality issues through ITSM or defect tracking tools, ensuring closed-loop feedback into retraining cycles.
Collaboration & Leadership
- Mentor QA analysts and test automation engineers on AI test design methodologies.
- Serve as liaison between QA, Data Science, and DevOps teams.
- Contribute to governance reviews, audits, and continuous quality improvement programs.
Technical Skills
- QA / Test Automation: Selenium, PyTest, Robot Framework, Postman, REST API testing, JMeter.
- AI / ML Fundamentals: Basic understanding of model training, validation, and evaluation metrics (precision, recall, accuracy, F1, ROC).
- Generative AI Frameworks: Experience with OpenAI APIs, LangChain, Hugging Face Transformers, or comparable LLM frameworks.
- Data Quality & Validation: Data preparation, cleansing, and validation using Python (pandas, NumPy) or Spark.
- Performance & Non-functional Testing: Load and stress testing of AI inference APIs and model endpoints.
- MLOps / CI-CD Tools: Jenkins, Docker, Kubernetes, MLflow, GitHub Actions.
- Monitoring & Logging: APM tools (Dynatrace, Grafana, Prometheus); anomaly detection and drift tracking.
- Cloud Platforms: AWS, Azure ML, or Google Vertex AI; handling AI environment configurations.
- Scripting: Python (preferred), SQL scripting for data validation.
- Model Validation Tools: BLEU/ROUGE scoring, embedding similarity measurement, factual accuracy scoring utilities.
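By way of illustration of the model-validation scoring listed above, here is a minimal ROUGE-1-style recall score (clipped unigram overlap). This is a simplified sketch for intuition, not a substitute for a full ROUGE or BLEU implementation:

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    """Fraction of reference unigrams recovered by the candidate, with clipped counts."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(c, cand_counts[w]) for w, c in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

score = rouge1_recall("the cat sat on the mat", "the cat lay on the mat")  # 5 of 6 reference unigrams matched
```

Embedding-similarity scoring follows the same shape, replacing unigram overlap with cosine similarity between sentence embeddings from a model such as a Hugging Face sentence transformer.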
Behavioral & Leadership Skills
- Strong analytical and problem-solving approach with a focus on data-driven decisions.
- Excellent communication and documentation skills for stakeholder reporting.
- Ability to mentor cross-functional QA teams in AI specialties.
- Experience working within Agile/DevOps environments.
- Commitment to ethical and responsible AI practices.
Educational & Experience Requirements
- Bachelor's or Master's in Computer Science, Information Technology, or a related field.
- 7–10 years of QA experience, with at least 2 years in AI, ML, or data-centric projects.
- Hands-on knowledge of automation frameworks, API testing, and cloud-based deployment validation.