News and Resources

Your go-to hub for the latest updates and insights on ethical AI practices and AI testing tools.

Press release

Proposed Model Governance Framework for Generative AI

The Proposed Model Governance Framework for Generative AI sets out nine dimensions to create a trusted environment – one that enables end-users to use Generative AI confidently and safely, while allowing space for cutting-edge innovation. It offers practical suggestions that serve as initial steps, expanding on the existing Model Governance Framework that covers Traditional AI. It also aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally.

The AI Verify Foundation and IMDA welcome comments and views from the international community, which can be sent to [email protected]. This will support the finalisation of the Model AI Governance Framework for Generative AI in mid-2024.

Press release

Press Release – Launch of Generative AI Evaluation Sandbox

Singapore unveils a first-of-its-kind Generative AI Evaluation Sandbox to develop testing capabilities and tools that encourage the responsible use of Generative AI.

Press release

Catalogue of LLM Evaluations

The Catalogue of Large Language Model (LLM) Evaluations sets out a comprehensive taxonomy that organises the different domains of LLM evaluations, giving organisations a holistic overview of the tests available today.

It seeks to contribute to global discussions on safety standards by recommending a minimum baseline set of safety evaluations that LLM developers should conduct prior to LLM release.

The AI Verify Foundation welcomes initial comments and feedback on this draft release, which can be sent to [email protected]. We are establishing a more convenient way to receive and incorporate feedback from the community, and will provide an update in due course.

Press release

Press release - Release of new crosswalk with NIST AI RMF

Singapore's IMDA and the US NIST release a crosswalk mapping the AI Verify testing framework to the NIST AI Risk Management Framework, a step towards interoperable international AI governance.

AI Verify overview

Crosswalk: NIST AI Risk Management Framework and Singapore AI Verify Testing Framework

This crosswalk allows companies to use AI Verify to achieve the desired outcomes of both the AI Verify testing framework and the US NIST AI Risk Management Framework in promoting trustworthy and responsible AI.

The development of the crosswalk is an important step towards the harmonisation of international AI governance frameworks, reducing industry's cost of meeting multiple sets of requirements. The joint effort also signals Singapore's and the US' shared goal of balancing AI innovation: maximising the benefits of AI technology while mitigating its risks.

We invite interested organisations to use AI Verify to meet the common outcomes of both frameworks and to share their experience with us.

Press release

Press release - Launch of AI Verify Foundation

Singapore launches AI Verify Foundation to shape the future of international AI standards through collaboration.

AI Verify overview

AI Verify overview

Read the AI Verify primer to learn about our AI governance testing framework and software toolkit.

AI Verify Testing Framework

AI Verify Testing Framework

The framework is aligned with internationally recognised AI ethics principles, guidelines, and frameworks, such as those from the EU, OECD and Singapore. The framework comprises 11 AI ethics principles, namely transparency, explainability, repeatability/reproducibility, safety, security, robustness, fairness (i.e., mitigation of unintended discrimination), data governance, accountability, human agency & oversight, and inclusive growth, societal & environmental well-being.
The framework also contains testable criteria and testing processes for each of the principles. The scope of the testable criteria may overlap and could reinforce concepts that are important in ensuring trustworthy and transparent deployment of AI.
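To illustrate how the framework's structure – principles, each with testable criteria and associated testing processes – could be organised in practice, the sketch below models the 11 principles as a simple data structure. This is a minimal, hypothetical sketch for illustration only; the names Principle and TestableCriterion and the placeholder criterion text are assumptions and are not part of the official AI Verify toolkit.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: names and placeholder criteria below are illustrative
# assumptions, not the official AI Verify toolkit API.

# The 11 AI ethics principles named in the AI Verify testing framework.
PRINCIPLES = [
    "transparency",
    "explainability",
    "repeatability/reproducibility",
    "safety",
    "security",
    "robustness",
    "fairness",
    "data governance",
    "accountability",
    "human agency & oversight",
    "inclusive growth, societal & environmental well-being",
]


@dataclass
class TestableCriterion:
    """One testable criterion and the process used to check it."""
    description: str
    process: str  # e.g. a documentary (process) check or a technical test


@dataclass
class Principle:
    """An AI ethics principle together with its testable criteria."""
    name: str
    criteria: list[TestableCriterion] = field(default_factory=list)


# Example: a framework instance with one placeholder criterion per principle.
framework = [
    Principle(
        name=name,
        criteria=[
            TestableCriterion(
                description=f"Illustrative criterion for {name}",
                process="process check",
            )
        ],
    )
    for name in PRINCIPLES
]

if __name__ == "__main__":
    for principle in framework:
        print(f"{principle.name}: {len(principle.criteria)} criterion/criteria")
```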

Discussion paper

Discussion paper

This paper from Singapore’s IMDA and Aicadium raises policy ideas for discussion on building an ecosystem for the trusted and responsible adoption of AI, in a way that encourages a positive loop: spurring innovation and tapping the opportunities afforded by AI, all the more so with the advent of Generative AI.

By sharing ideas on practical pathways for governance, the paper seeks to enhance discourse and foster greater global collaboration to ensure AI is used in a safe and responsible manner, and that the most critical outcome – trust – is sustained.


Preview all the questions

1. Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) that has/have been developed, used, or deployed in your organisation, and what it is used for (e.g. product recommendation, improving operational efficiency)?

2. Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3. Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4. Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work was needed for the testing, what challenges were faced, and how were they overcome? How did you find the testing process? Did it take long to complete?

5. Your key learnings and insights – Could you share two to three key learnings and insights from the testing process? Have you taken any actions after using AI Verify?

6. Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?