AI Verify Foundation Celebrates First Anniversary with Renewed Commitment to AI Safety
The AI Verify Foundation marked its first anniversary with a celebratory event at Capella Singapore as part of ATxSG 2024. The event brought together over 200 Foundation members, experts and enthusiasts from the global open-source AI testing community, underscoring the Foundation’s growth and impact in just one year.
In his welcome remarks, Lew Chuen Hong, Chief Executive of IMDA, highlighted the Foundation’s mission to build a responsible, global AI community. He noted that the Foundation’s membership had doubled from 60 at launch to over 120 members, with diverse stakeholders such as end-user companies, model developers, app developers, and third-party AI testing companies—all committed to building safer and more responsible AI.
Dr Ong Chen Hui, Chair of the Foundation’s Governing Committee, shared exciting new developments, including the mapping of AI Verify to ISO/IEC 42001 and a partnership with the Monetary Authority of Singapore (MAS) to merge the AI Verify and VERITAS toolkits. These enhancements aim to make AI testing more seamless for organisations across different sectors. Dr Ong emphasised the Foundation’s commitment to engaging members, stating, “In the coming year, the Foundation will be engaging members more actively to flesh out policy and tech products across relevant aspects of the Model AI Governance Framework for Generative AI.”
The event also featured insights from new premier members Amazon Web Services (AWS) and Dell Technologies, who pledged to play leading roles in the community. Elsie Tan, Country Manager for Worldwide Public Sector Singapore at AWS, announced, “AWS is stepping up to contribute AI testing algorithms to the toolkit, helping to evaluate AI models more effectively.” Andy Sim, Vice President and Managing Director at Dell Technologies Singapore, added, “With our guiding principles of The Three S’s – Shared, Secure and Sustainable – in our approach to AI, Dell Technologies aims to work hand in hand with Foundation members to make AI safer and better for everyone.”
Lee Hickin from Microsoft Asia praised AI Verify’s open-source approach, stating, “Transparency is one of the key tenets of responsible AI, and in this regard, AI Verify is already winning the game.” Miguel Fernandes, Technical Partner at Resaro, highlighted the practical value of the AI Verify toolkit. He noted, “As AI adoption accelerates across Singapore, the AI Verify toolkit provides a critical foundation for organisations to deploy AI systems responsibly. We are excited to collaborate with the AI Verify Foundation to enhance the toolkit further and address the growing need for AI safety solutions.”
A key highlight of the event was the teaser of Project Moonshot, one of the world’s first Large Language Model (LLM) Evaluation Toolkits developed by the Foundation and IMDA. This open-source tool, hosted on GitHub, is designed to help companies test Generative AI models with confidence by providing red-teaming, benchmarking tests and scoring reports.
As the event drew to a triumphant close, attendees toasted to the Foundation’s first anniversary and its commitment to charting new frontiers in AI safety. The celebration echoed a thought-provoking quote Lew Chuen Hong shared in his opening address: “If your dreams do not scare you, they’re not big enough.”
The Foundation’s first-anniversary event showcased its role as a catalyst for collaboration and innovation in AI safety. With strong industry support and a clear vision for the future, the Foundation is poised to make an even greater impact in its second year and beyond.
Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) developed, used, or deployed in your organisation, and what they are used for (e.g. product recommendation, improving operational efficiency)?
Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?
Your experience with AI Verify – Could you share your journey in using AI Verify? For example, the preparation work for the testing, any challenges faced, and how they were overcome? How did you find the testing process? Did it take long to complete the testing?
Your key learnings and insights – Could you share 2 to 3 key learnings or insights from the testing process? Have you taken any actions after using AI Verify?