Global AI Assurance Pilot

Technical Testing of Generative AI Applications

The AI Assurance Pilot is a global initiative to help codify emerging norms and best practices around technical testing of Generative AI applications.

The pilot will: 

  • Pair AI assurance and testing providers with organisations deploying Generative AI applications
  • Focus on technical testing of the real-life application (not the underlying foundation model)
  • Use the lessons learnt from specific examples to create generalisable insights on “what and how to test”

Timeline for 2025

  • Feb: Expression of interest and confirmation of participation
  • Mar/Apr: Technical testing of Generative AI applications
  • May: Consolidate insights and showcase at Asia Tech x Singapore 2025

Important role of external AI assurance

Our mission is to help build a trusted AI ecosystem, in Singapore and beyond. Making AI testing reliable and accessible is fundamental to that mission. To achieve this, the Foundation has been working with:

  • Regulators and standard setters to help drive clarity of expectations around testing
  • The AI testing practitioner community to enable sharing of (emerging) best practices
  • Open-source contributors and partners to create accessible AI testing libraries

Additionally, external AI assurance providers are important for AI testing to become truly ubiquitous and scalable. They can test AI systems developed or deployed by another organisation against a set of requirements, creating confidence in AI.

Targeted outcomes

The pilot aims to improve the value proposition, feasibility, and viability of external AI assurance. At the end of the pilot, recommendations could be made in each of these areas.

Pilot Scope

Types of Use Cases

  • Live or soon-to-launch Generative AI applications
  • Impacting individuals through automated or semi-automated recommendations or decisions

Risk dimensions to consider during testing (not exhaustive)

  • Safety and health risks
  • Unfair treatment of staff/customers/citizens
  • (Lack of) transparency and recourse
  • Inappropriate data disclosure
  • Malicious use
  • Other security risks
  • Trust/reputation concerns
  • Financial loss
  • (Lack of) appropriate level of human oversight
  • Breach of other industry-specific (non-AI) regulatory requirements
  • Breach of other internal compliance requirements

What’s in it for you?

For AI assurance and testing vendors

For firms deploying Generative AI applications

Keen to participate?

1. Read the summary here
2. Find out more details (Coming soon)

Express your interest in the pilot


Preview all the questions

1. Your organisation's background – Could you briefly share your organisation's background (e.g. sector, goods/services offered, customers), the AI solution(s) that has/have been developed, used, or deployed in your organisation, and what it is used for (e.g. product recommendation, improving operational efficiency)?

2. Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3. Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4. Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work was needed for the testing, what challenges were faced, and how were they overcome? How did you find the testing process? Did it take long to complete?

5. Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions after using AI Verify?

6. Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?

Enter your name and email address below to download the Discussion Paper by Aicadium and IMDA.

Disclaimer: By proceeding, you agree that your information will be shared with the authors of the Discussion Paper.