The AI Assurance Pilot is a global initiative launched in February 2025 to help codify emerging norms and best practices around technical testing of Generative AI applications.
The pilot will:
- Conduct technical testing of Generative AI applications
- Consolidate insights and showcase them at Asia Tech x Singapore 2025

Types of Use Cases

Risk dimensions to consider during testing (not exhaustive)

Expected outcomes of the pilot:
- Inputs into future standards for technical testing of Generative AI applications
- Greater awareness of the ways in which external assurance can help create greater trust in AI systems and enable adoption at scale
- Inputs into open-source and proprietary testing tools for Generative AI applications
Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) that has/have been developed, used or deployed in your organisation, and what it is used for (e.g. product recommendation, improving operational efficiency)?
Your AI Verify use case – Could you share the AI model and use case that were tested with AI Verify? Which version of AI Verify did you use?
Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work was needed for the testing, what challenges were faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?
Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions after using AI Verify?