Resource Library

Your go-to hub for the latest updates and insights on ethical AI practices and AI testing tools.
AI Playbook for Small States

Singapore and Rwanda have introduced the world’s first AI Playbook for Small States to shape inclusive global discourse on harnessing the potential of AI. First announced at the Asia Tech x Singapore Summit 2024 on 30 May 2024, the AI Playbook is developed by Singapore’s Infocomm Media Development Authority (IMDA) in collaboration with Rwanda’s Ministry of Information Communication Technology (ICT) and Innovation, with consultations taking place with Digital Forum of Small States (“Digital FOSS”) members since the start of the year.

The Playbook addresses common challenges small states face in adopting and harnessing the potential of AI, such as availability of resources and funding, access to data and AI talent, fostering a trusted environment such as developing holistic frameworks and practical testing tools such as AI Verify.

Given the rapidly evolving nature of AI, the Playbook is shaped as a living document that continuously pulls together the collective experiences and strategies from small states.

More information and the AI Playbook itself can be found in the following links.

Where is your Organisation on your Responsible AI Journey?

Understanding and implementing responsible AI is a continuous journey, and every organisation is at a different stage. Whether you’re just beginning to explore AI governance or you’re looking to refine an established framework, our resources are designed to guide you through every step of this important process.

Singapore’s collection of guidance documents and tools offers practical insights and actionable practices tailored to your organisation’s level of maturity in AI governance. From foundational frameworks for those starting out, to advanced practices and technical tests for more sophisticated AI implementations, our materials provide the support you need to develop and enhance your AI governance capabilities.

Use our flowchart to assess where your organisation currently stands in its responsible AI journey. This will help you identify the right resources and tools to move forward confidently and responsibly.

AI Verify and Project Moonshot at a Glance

Looking to test your AI models? Explore how AI Verify and Project Moonshot can help you assess and enhance your AI systems.

Find out which evaluation tools are best suited to your needs and how they can help you align with key AI governance principles and build trust with your stakeholders.

Tutorial Part I: How to use AI Verify
Uncover the principles of responsible AI and learn how to implement them with AI Verify. In this video, we provide a step-by-step walkthrough of AI Verify toolkit, designed to help you master its features and functionalities. Whether you are a beginner or looking to test your AI models, this workshop covers:
  1. Introduction to AI Verify
  2. Basic Operations and Features
By the end of video, you will have the knowledge to use AI Verify toolkit. If you are a developer looking to contribute to AI Verify, please watch Part II.
Tutorial Part II: Build Plugins and Extend AI Verify

Learn to create custom plugins to enhance AI Verify’s capabilities.

In this video, we will walk you through the entire process, from architecture to customising your first plugin. This video is tailored for developers of all levels who want to enhance and contribute to the open-source community. Here’s what we will cover:

  1. Introduction to AI Verify Architecture and Plugins
  2. Customising Your First Plugin
    1. Testing Algorithm
    2. Display Widget

By the end of this video, you will have the skills to customise plugins to fit your unique requirements and the knowledge to contribute effectively to AI Verify.

Model Governance Framework for Generative AI
The Model AI Governance Framework for Generative AI (MGF for GenAI) outlines 9 dimensions to create a trusted environment – one that enables end-users to use Generative AI confidently and safely, while allowing space for cutting-edge innovation. Recognising that no single intervention is enough to address existing and emerging AI risks, the framework offers a set of practical suggestions that apply as initial steps.
Project Moonshot Overview
Read the primer on Project Moonshot to learn about our evaluation toolkit for Large Language Models.
New crosswalk with ISO/IEC 42001: 2023 shows international alignment
AI Verify is an AI governance testing framework and software toolkit. AI Verify aims to help companies be more transparent about their AI systems, building trust through standardised tests and strengthening organisational processes.

“ISO/IEC 42001:2023 is a first of its kind international standards designed to ensure broad responsible adoption of AI. The novel approach, in conjunction with the family of standards that ISO/IEC JTC 1/SC 42 is developing, provides a portfolio of international standards that countries and regions can rely on to enable trustworthy and transparent AI.

This mapping between AI Verify Framework and ISO/IEC 42001 demonstrates Singapore’s strong support in advancing global harmonisation in a practical way. It also shows the growing number of countries that are leveraging ISO/IEC 42001 to enable trustworthy AI adoption.”

– Wael William Diab, Chair, ISO/IEC JTC1 / SC42 on AI standards

Both frameworks share a common goal in enabling organisations to strengthen their AI governance implementation. This crosswalk shows how the controls in ISO/IEC 42001:2023 are mapped to the process checks in AI Verify testing framework. Organisations can use AI Verify toolkit to strengthen their AI governance, and practically demonstrate alignment with ISO/IEC 42001:2023, without onerous cost.
We invite interested organisations to use AI Verify to meet the common outcomes of both frameworks and share your use case with us.
Catalogue of LLM Evaluations
The Catalogue of Large Language Model (LLM) Evaluations sets out a comprehensive taxonomy that organises different domains of LLM evaluations to provide organisations a holistic overview of the available tests today.
It seeks to contribute to global discussions on safety standards by recommending a minimum baseline set of safety evaluations that LLM developers should conduct prior to LLM release.

The AI Verify Foundation welcomes initial comments and feedback on this draft release, which can be sent to [email protected]. We are currently establishing a more convenient method to receive and incorporate feedback from the community, and will update in due course.

Crosswalk NIST AI Risk Management Framework and Singapore AI Verify testing framework
This crosswalk will allow companies to use AI Verify to achieve the desired outcomes of both AI Verify testing framework and US NIST AI Risk Management Framework in promoting trustworthy and responsible AI.
The development of the crosswalk is an important step towards harmonisation of international AI governance frameworks to reduce industry’s cost to meet multiple requirements. The joint effort also signals Singapore’s and the US’ common goal of balancing AI innovation, maximising benefits of AI technology while mitigating technology risks.
We invite interested organisations to use AI Verify to meet the common outcomes of both frameworks and share your experience with us.
AI Verify overview
Read the primer of AI Verify to learn about our AI governance testing framework and software toolkit.
Discussion paper
This paper from Singapore’s IMDA and Aicadium raises for discussion policy ideas on building an ecosystem for trusted and responsible adoption of AI, in a way that encourages a positive loop – to spur innovation and tap on opportunities afforded AI, ever more so with the advent of Generative AI.
By sharing ideas on practical pathways for governance, the paper seeks to enhance discourse and foster greater global collaboration to ensure AI is used in a safe and responsible manner, and that the most critical outcome – trust – is sustained.

Thank you for completing the form. Your submission was successful.

Preview all the questions

1

Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), AI solution(s) that has/have been developed/used/deployed in your organisation, and what it is used for (e.g. product recommendation, improving operation efficiency)?

2

Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3

Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4

Your experience with AI Verify – Could you share your journey in using AI Verify? For example, preparation work for the testing, any challenges faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?

5

Your key learnings and insights – Could you share key learnings and insights from the testing process? For example, 2 to 3 key learnings from the testing process? Any actions you have taken after using AI Verify?

6

Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to any other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?
Enter your name and email address below to download the Discussion Paper by Aicadium and IMDA.
Disclaimer: By proceeding, you agree that your information will be shared with the authors of the Discussion Paper.