Partnerships

Collaboration with MLCommons​

AI Verify Foundation and MLCommons inked a Memorandum of Intent to develop a common set of benchmarks, tools, and testing approaches for generative AI models. Leveraging the expertise and network of both organisations, we aim to advance the recognition of these Safety Benchmarks globally.

This marks our first step to advance global testing standards for AI safety. The collaboration is a pioneering effort to develop safety benchmarks for generative AI.

Peter Mattson, President of MLCommons and Dr Ong Chen Hui, Chair of Foundation’s Governing Committee signed an MOI on 29 May 2024
By leveraging the expertise and resources of Foundation and MLCommons, we aim to develop robust benchmarks that will set a new standard for AI safety. Both organisations will contribute towards the Safety Benchmarks, collaborate to engage our network of partners and communities, and promote the Safety Benchmarks globally.

You can use Project Moonshot to access and test v0.5 of the Safety Benchmark as well as contribute benchmarks.

Thank you for completing the form. Your submission was successful.

Preview all the questions

1

Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), AI solution(s) that has/have been developed/used/deployed in your organisation, and what it is used for (e.g. product recommendation, improving operation efficiency)?

2

Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3

Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4

Your experience with AI Verify – Could you share your journey in using AI Verify? For example, preparation work for the testing, any challenges faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?

5

Your key learnings and insights – Could you share key learnings and insights from the testing process? For example, 2 to 3 key learnings from the testing process? Any actions you have taken after using AI Verify?

6

Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to any other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?
Enter your name and email address below to download the Discussion Paper by Aicadium and IMDA.
Disclaimer: By proceeding, you agree that your information will be shared with the authors of the Discussion Paper.