AI is evolving rapidly and is a game changer, but it comes with risks. As the use of AI becomes more prevalent, it is increasingly important for AI developers and users to show that their AI systems are safe and will not result in unintended bias. AI testing is key to doing so. However, the sciences and technologies for AI testing are nascent, and there are significant gaps in AI testing and evaluation. We cannot develop AI testing alone. Industry and the research community need to come together and pool their collective expertise to drive development efforts.
View the full list of members here. The premier members are Aicadium, Google, Microsoft, IBM, IMDA, Red Hat, and Salesforce.
Anyone can use AI Verify.
AI Verify consists of a testing framework and a software toolkit.
The framework is aligned with internationally recognised AI ethics principles, guidelines, and frameworks, such as those from the EU, OECD and Singapore. The framework comprises 11 AI ethics principles, namely transparency, explainability, repeatability/reproducibility, safety, security, robustness, fairness (i.e., mitigation of unintended discrimination), data governance, accountability, human agency & oversight, and inclusive growth, societal & environmental well-being.
The framework also contains testable criteria and testing processes for each of the principles. The scope of the testable criteria may overlap and could reinforce concepts that are important in ensuring trustworthy and transparent deployment of AI.
The toolkit is designed to be extensible and enables AI models to be tested and evaluated from a black-box perspective. It was designed primarily as an audit and testing tool, supporting both third-party independent testing and self-assessment.
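To illustrate what black-box testing means in practice, the sketch below measures a simple fairness metric (the demographic parity gap) using only a model's prediction interface, never its internals. This is a minimal, hypothetical example; the function names and data are illustrative and are not AI Verify's actual API.

```python
# Minimal sketch of black-box fairness testing: the tester only calls the
# model's predict interface and never inspects its internals.
# All names and data here are illustrative, not AI Verify's actual API.

def demographic_parity_gap(predict, inputs, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = {}
    for group in set(groups):
        rows = [x for x, g in zip(inputs, groups) if g == group]
        preds = [predict(x) for x in rows]      # black-box calls only
        rates[group] = sum(preds) / len(preds)  # positive-outcome rate
    return max(rates.values()) - min(rates.values())

# Toy opaque model: approves any applicant whose score exceeds 0.5.
def model(applicant):
    return 1 if applicant["score"] > 0.5 else 0

applicants = [{"score": 0.9}, {"score": 0.2}, {"score": 0.7}, {"score": 0.6}]
group_labels = ["A", "A", "B", "B"]

gap = demographic_parity_gap(model, applicants, group_labels)
print(f"demographic parity gap: {gap}")  # 1.0 (group B) - 0.5 (group A) = 0.5
```

The key point of the black-box perspective is that the same test can be applied to any model, regardless of framework or architecture, as long as it exposes a prediction interface.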
Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), AI solution(s) that has/have been developed/used/deployed in your organisation, and what it is used for (e.g. product recommendation, improving operation efficiency)?
Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?
Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work was needed for the testing, any challenges faced, and how they were overcome? How did you find the testing process? Did it take long to complete the testing?
Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions as a result of using AI Verify?