Job Summary
Synechron is seeking a proactive and skilled Generative AI Testing & Prompt Engineer to join our innovative technology team. In this role, you will design and execute prompt strategies, develop automation workflows, and evaluate AI outputs to ensure high-quality, unbiased, and reliable generative models. Your expertise will help improve AI performance, safety, and ethical standards, enabling the organization to leverage cutting-edge AI capabilities effectively and responsibly.
Software Requirements
- Required:
  - Python (including libraries such as NumPy and Pandas, plus testing frameworks)
  - Bash or shell scripting for automation workflows
  - Version control tools (Git)
  - Tools and frameworks for AI evaluation metrics covering output quality, bias, and safety
- Preferred:
  - Experience with AI/ML frameworks such as TensorFlow or PyTorch
  - CI/CD automation tools for seamless testing and deployment workflows
  - Data analysis tools and platforms for metrics development
Overall Responsibilities
- Develop and refine prompts to evaluate the capabilities, accuracy, creativity, and safety of Generative AI models, including NLP and multimodal models.
- Design comprehensive testing strategies across diverse scenarios using prompt variations to assess performance, bias, and ethical considerations.
- Automate testing workflows to streamline data collection, prompt execution, output analysis, and reporting (see the sketch after this list).
- Analyze and interpret AI outputs, providing insights for improvements and bias mitigation.
- Collaborate closely with AI data scientists and developers to tune prompt techniques and enhance model behavior through automation and feedback.
- Document prompt scripts, testing procedures, automation workflows, and test results for transparency and reproducibility.
- Continuously refine prompt engineering and testing methodologies based on the latest AI advancements and industry best practices.
- Stay informed about new trends, tools, and ethical considerations related to generative AI, prompt engineering, and automation.
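To give candidates a concrete sense of the automation work described above, here is a minimal Python sketch of a prompt-variation test harness. The `generate` callable, prompt templates, checks, and CSV report format are illustrative placeholders, not a description of Synechron's actual tooling.

```python
import csv
from typing import Callable, Dict, List

# Hypothetical stand-in for a real model client; any callable that maps
# a prompt string to a model response string would work here.
GenerateFn = Callable[[str], str]

PROMPT_VARIANTS: List[Dict[str, str]] = [
    {"id": "baseline", "template": "Summarize the following text: {text}"},
    {"id": "persona", "template": "As a neutral analyst, summarize: {text}"},
    {"id": "constrained", "template": "Summarize in exactly two sentences: {text}"},
]


def run_suite(generate: GenerateFn, sample_text: str, report_path: str) -> None:
    """Execute each prompt variant, apply simple checks, and write a CSV report."""
    rows = []
    for variant in PROMPT_VARIANTS:
        prompt = variant["template"].format(text=sample_text)
        output = generate(prompt)
        rows.append(
            {
                "variant_id": variant["id"],
                "prompt": prompt,
                "output": output,
                "non_empty": bool(output.strip()),             # basic sanity check
                "under_100_words": len(output.split()) < 100,  # illustrative length check
            }
        )

    with open(report_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    # Echo "model" used purely so the sketch runs end to end without credentials.
    run_suite(lambda p: f"[stub response to: {p[:40]}...]", "Example input text.", "report.csv")
```

In practice the stub generator would be replaced by a call to the model endpoint under test, and the checks extended with the bias and safety evaluations described elsewhere in this posting.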
Technical Skills (By Category)
- Programming Languages:
  - Required: Python (advanced scripting and automation capabilities)
  - Preferred: Bash, shell scripting, or other automation languages
- Databases/Data Management:
  - Basic understanding of data storage and querying for managing testing datasets
  - Experience with data labeling and quality control processes
- Cloud Technologies:
  - Preferred: Basic familiarity with cloud platforms (AWS, Azure, GCP) for AI deployment and testing
- Frameworks and Libraries:
  - Required: Generative AI evaluation tools, scripting frameworks for automation, prompt management tools
  - Preferred: TensorFlow, PyTorch, or similar AI/ML libraries
- Development Tools and Methodologies:
  - Required: Version control (Git), scripting for automation, reporting tools
  - Preferred: CI/CD pipelines for automated testing and deployment in cloud or on-prem environments
- Security Protocols:
  - Awareness of data privacy, security standards, and bias mitigation techniques
Experience Requirements
- 3-5 years of professional experience in testing generative AI models, including prompt writing and optimization.
- Proven experience designing prompt strategies for NLP and multimodal models.
- Hands-on experience automating evaluation workflows for AI outputs, bias detection, and safety assessment.
- Familiarity with model evaluation metrics, including relevance, coherence, bias, and safety (a simple scoring sketch follows this list).
- Industry experience in AI development, testing, or related research is preferred; equivalent experience in related automation or data analysis roles may be considered.
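As an illustration of the evaluation metrics mentioned above, the sketch below scores a single prompt/output pair with rough heuristics. The term lists and scoring rules are invented placeholders; real evaluations would rely on vetted lexicons, classifier models, or human annotation rather than hard-coded keywords.

```python
from dataclasses import dataclass

# Placeholder term lists for illustration only.
FLAGGED_TERMS = {"always", "never", "obviously"}   # crude over-generalization cues (bias proxy)
UNSAFE_TERMS = {"how to harm", "build a weapon"}   # illustrative safety phrases


@dataclass
class OutputScores:
    relevance: float    # 0-1, overlap with the prompt's content words
    flagged_terms: int  # count of over-generalization cues
    unsafe: bool        # True if any illustrative unsafe phrase appears


def score_output(prompt: str, output: str) -> OutputScores:
    """Compute rough, heuristic scores for a single prompt/output pair."""
    prompt_words = {w.lower() for w in prompt.split() if len(w) > 3}
    output_lower = output.lower()
    output_words = set(output_lower.split())

    overlap = len(prompt_words & output_words) / max(len(prompt_words), 1)
    flags = sum(term in output_words for term in FLAGGED_TERMS)
    unsafe = any(phrase in output_lower for phrase in UNSAFE_TERMS)
    return OutputScores(relevance=round(overlap, 2), flagged_terms=flags, unsafe=unsafe)


if __name__ == "__main__":
    print(score_output("Explain transformer attention briefly.",
                       "Attention lets a transformer weigh tokens; it always works."))
```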
Day-to-Day Activities
- Develop, test, and optimize user prompts for AI model evaluations across multiple scenarios.
- Automate processes for prompt execution, data collection, and output assessment (see the test example after this list).
- Monitor and analyze AI model responses for quality, relevance, bias, and ethical compliance.
- Collaborate with data scientists, AI developers, and product teams to implement prompt tuning and automation improvements.
- Document testing strategies, automation scripts, and test outcomes thoroughly.
- Provide actionable feedback to improve model behavior and address bias or safety issues.
- Review industry trends and incorporate new testing tools or methodologies to advance testing capabilities.
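As a hypothetical example of folding these daily checks into the CI/CD pipelines mentioned earlier, a pytest-style regression test might look like the following; the stubbed `generate` function and acceptance criteria are placeholders for the team's actual model client and baselines.

```python
# test_prompt_regression.py -- run with `pytest` as part of a CI job.
import pytest


# Stub generator so the example is self-contained; in practice this would be
# replaced by a call to the model endpoint under test.
def generate(prompt: str) -> str:
    return f"Summary: {prompt[:60]}"


CASES = [
    ("Summarize: The quarterly report shows revenue growth.", "Summary:"),
    ("Summarize: The audit found no compliance issues.", "Summary:"),
]


@pytest.mark.parametrize("prompt,expected_prefix", CASES)
def test_output_meets_basic_expectations(prompt, expected_prefix):
    """Each prompt should yield a non-empty response with the expected shape."""
    output = generate(prompt)
    assert output.strip(), "model returned an empty response"
    assert output.startswith(expected_prefix), "response shape drifted from baseline"
    assert len(output.split()) < 200, "response unexpectedly long"
```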
Qualifications
- Bachelor's degree in Computer Science, Data Science, AI, or a related field.
- Relevant certifications in AI, data analysis, or automation practices are a plus.
- Demonstrated commitment to continuous learning of AI ethics, bias mitigation, and prompt engineering.
Professional Competencies
- Critical thinking and analytical skills to interpret complex AI outputs and identify issues.
- Problem-solving abilities to develop effective prompts, troubleshoot automation workflows, and improve testing strategies.
- Strong communication skills for documenting processes and collaborating with multidisciplinary teams.
- Ability to adapt rapidly to evolving AI technologies and industry standards.
- Interpersonal skills for engaging stakeholders and providing constructive feedback.
- Effective time management to handle multiple test scenarios and documentation tasks efficiently.