News

Your go-to hub for the latest updates and insights on ethical AI practices and AI testing tools.

Press Release – Enhanced AI Verify Testing Framework and Crosswalk with US NIST

The AI Verify Testing Framework has undergone significant enhancements to address the challenges of Generative AI. This update by IMDA offers organisations a comprehensive tool to build trust and ensure responsible AI deployment – for both traditional and generative AI systems.

The AI Verify Testing Framework for Traditional AI was first released in 2022. It enables organisations to assess their AI systems against 11 internationally recognised AI governance principles. A crosswalk to the US NIST AI Risk Management Framework was also released in 2023.

In continued collaboration with US NIST, IMDA today released a crosswalk that maps the enhanced Framework to the NIST AI Risk Management Framework: Generative AI Profile. This reaffirms the alignment between our two countries’ AI governance frameworks and reflects our shared commitment to cooperation. As announced by the Minister for Digital Development and Information, Mrs Josephine Teo, at the Asia Tech x Summit 2025, the mapping of frameworks will make it easier for businesses operating in both Singapore and the US to meet their AI safety obligations in both countries.

28 May 2025

Press Release – Global AI Assurance Pilot

The AI Verify Foundation and IMDA have launched a Global AI Assurance pilot. It is a global initiative to codify emerging norms and best practices around technical testing of Generative AI applications. The pilot was announced at the AI Action Summit in Paris, France.

11 Feb 2025

Press Release – Project Moonshot

The AI Verify Foundation launches Project Moonshot, one of the world’s first Large Language Model (LLM) evaluation toolkits, aimed at addressing the risks of bias and harmful content from unchecked LLMs. Now in beta and open-sourced on GitHub, it offers a seamless way to evaluate LLM applications’ performance, both pre- and post-deployment.

31 May 2024

Press Release – Launch of Model AI Governance Framework for Generative AI

As Generative AI evolves, so do the risks. The AI Verify Foundation and IMDA have released a Model AI Governance Framework for Generative AI that outlines 9 dimensions for creating a trusted environment. Stay tuned for what’s next as we co-develop implementation guidelines and resources with members and the community, providing greater certainty in AI governance.

30 May 2024

Press Release – Launch of Generative AI Evaluation Sandbox

Singapore unveils a first-of-its-kind Generative AI Evaluation Sandbox to develop testing capabilities and tools that encourage the responsible use of Generative AI.

31 Oct 2023

Press Release – Release of New Crosswalk with NIST AI RMF

IMDA releases a crosswalk mapping the AI Verify testing framework to the US NIST AI Risk Management Framework, aligning the two countries’ approaches to AI governance.

13 Oct 2023

Press Release – Launch of AI Verify Foundation

Singapore launches AI Verify Foundation to shape the future of international AI standards through collaboration.

7 Jun 2023

Preview all the questions

1. Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) developed, used, or deployed in your organisation, and what they are used for (e.g. product recommendation, improving operational efficiency)?

2. Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3. Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4. Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work did the testing require, what challenges did you face, and how were they overcome? How did you find the testing process? Did it take long to complete?

5. Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions after using AI Verify?

6. Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?