Veritas, AI Verify, and Beyond: Open source tools for AI testing

As AI systems become more deeply embedded in the finance industry, the responsibility to ensure they are fair, explainable, and robust grows with them. That’s why this finance community-focused event in Singapore marked a significant milestone — bringing together two of the most important open-source testing toolkits in the industry: the AI Verify Toolkit and Veritas.

Co-hosted by the AI Verify Foundation (AIVF), the Monetary Authority of Singapore (MAS), and Resaro, the event introduced a unified, practical framework designed specifically for finance professionals tasked with building or governing AI systems. The aim? To simplify and streamline the way developers, risk managers, and compliance teams test and deploy trustworthy AI.

What’s New: Veritas x AI Verify Integration

The highlight of the event was the official launch of the AI Verify Toolkit and Veritas integration. Built for traditional machine learning applications in finance, Veritas now works hand-in-hand with AI Verify’s broader suite of testing tools. The result is a one-stop repository of modular libraries for fairness, explainability, and robustness — complete with integrated governance checklists.
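At their core, the fairness checks that such libraries automate come down to comparing model outcomes across demographic groups. As a rough illustration of the kind of test involved, here is a minimal sketch of one standard metric, the demographic parity difference — note this is generic Python for explanation only, not the actual Veritas or AI Verify API, and the function name and loan-approval data are invented for the example:

```python
# Illustrative fairness metric: demographic parity difference.
# This is a generic sketch, NOT the Veritas or AI Verify toolkit API.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups "A" and "B".

    predictions: list of 0/1 model outputs (e.g. 1 = loan approved)
    groups: list of group labels ("A" or "B"), aligned with predictions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Hypothetical example: approval decisions for two demographic groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 (0.75 vs 0.25)
```

A gap of zero means both groups receive positive outcomes at the same rate; toolkits like these typically report such metrics alongside explainability and robustness results rather than in isolation.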

Timothy Lin, Lead Product Manager at Resaro, took the audience through a live demo of the new setup, showing how a configuration process that previously took 15 minutes now requires just two lines of code and under two minutes to install. That’s a huge win for developers looking for fast, accessible testing frameworks.

GenAI Testing: The Next Frontier

The second half of the session focused on the fast-evolving world of Generative AI (GenAI), which is already reshaping key workflows—from summarisation and document parsing to code generation and customer service chatbots. But GenAI also comes with a unique set of risks, which current regulatory frameworks are still catching up to.

Shameek Kundu, Executive Director of the AI Verify Foundation, shared a preview of AIVF’s GenAI testing roadmap, including an upcoming portal designed to compare and evaluate open-source GenAI testing tools. He also shared initial insights from the Global AI Assurance Testing pilot, which brought together 16 pairs of real-world GenAI application deployers and world-class GenAI application testing companies.

Miguel Fernandes, Technical Partner at Resaro, shared insights from their involvement in the Global AI Assurance Pilot. His presentation included a compelling case study on anti-money laundering, developed in collaboration with Tookitaki. Tookitaki’s Senior Research Director, Yuan L., joined the session to walk through their joint testing methodology and learnings so far.

Who Attended

The event brought together a diverse mix of AI developers, data scientists, risk managers, and compliance professionals from across the finance ecosystem. Many came not only to learn about the tools but also to help shape the standards around AI assurance for their industry.

What’s Next

With the toolkit now live and the GenAI roadmap in motion, the work continues. Resaro and AIVF welcome feedback from the community to co-create the next wave of trustworthy AI solutions.

Missed the event? Stay tuned for the upcoming public release of the Veritas x AI Verify integration and the GenAI testing portal — because when it comes to responsible AI, collaboration is key.
