AI Verify is an AI governance testing framework that helps companies assess the responsible implementation of their AI systems against 11 internationally recognised AI governance principles.
The AI Verify testing framework consists of 11 principles.
Disclosing that AI is used in a system makes individuals aware of this fact, so they can make an informed choice about whether to use the AI-enabled system.
Principle: Transparency
This allows individuals to know the factors contributing to the AI model’s output, which can be a decision or a recommendation. Individuals will also know that the AI model’s output is consistent and performs at the claimed level of accuracy under similar conditions.
Principles: Explainability, Repeatability/Reproducibility
Individuals know that the AI system will not cause harm, is reliable, and will perform according to its intended purpose even when encountering unexpected inputs.
Principles: Safety, Security, Robustness
Individuals know that the data used to train the AI model is sufficiently representative, and that the AI system does not unintentionally discriminate.
Principles: Fairness, Data Governance
Individuals know that there is human accountability and control in the development and/or deployment of AI systems, and that the AI system is used for the good of humans and society.
Principles: Accountability, Human agency and oversight, Inclusive growth, Societal and Environmental well-being
Each principle has desired outcomes that can be achieved through specified processes. The implementation of these processes can be validated through documentary evidence.
Principles are overarching considerations that AI applications should adhere to.
For every principle, there are desired outcomes, which can be achieved through technical and non-technical processes, alongside technical tests where applicable.
Testing processes are actionable steps to be carried out to achieve the desired outcomes
These processes are validated by documentary evidence
You can download a copy of the testing framework here
The AI Verify testing framework is consistent with international AI governance frameworks such as those from the European Union, the OECD, and the US. The framework is mapped to other international frameworks, including:
AI Verify was first developed in consultation with companies from different sectors and of different scales. These companies include AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (part of the Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI, and X0PA.AI.
On 25 May 2022, IMDA and PDPC launched the AI Verify testing framework and software toolkit as a Minimum Viable Product (MVP) for international piloting and feedback.
On 7 June 2023, the AI Verify testing framework and toolkit were open-sourced on GitHub.
On 29 May 2025, an updated AI Verify testing framework was released, enhanced to address the risks posed by generative AI (Gen AI). With this update, companies can apply the AI Verify testing framework to both traditional and Gen AI use cases.
Complementing the AI Verify testing framework are our open-source testing toolkits, which organisations can use to implement the testing framework.
Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) that has/have been developed, used, or deployed in your organisation, and what it is/they are used for (e.g. product recommendation, improving operational efficiency)?
Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?
Your experience with AI Verify – Could you share your journey in using AI Verify? For example, preparation work for the testing, any challenges faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?
Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions after using AI Verify?