Taking Initiative to Contribute to AI Verify Development

UBS, an international financial institution, implements AI solutions in areas such as fraud detection and trade activity surveillance to improve operational efficiency. Demonstrating trustworthy AI is paramount to UBS’ clients and investors, and the bank wanted to utilise AI in a trustworthy manner and manage its risks to the best of its ability.

Besides being aligned with the bank’s values, demonstrating responsible AI builds trust with its clients and investors. Hence, UBS took the initiative to participate in the international pilot of AI Verify. The bank believed that it could:

  • Help shape the foundational work in responsible AI and develop its own AI governance definitions
  • Assess the requirements of AI governance and be more prepared by testing with the AI Verify framework
  • Gain reputational goodwill

UBS piloted AI Verify on an income prediction use case:
  • AI model: Binary classification using LightGBM (a gradient-boosting machine)
  • Use case: A simulated use case that predicts the income level of individuals based on a set of variables such as age, occupation, and gender

During testing, UBS developed a classification model using demographic information. Because AI Verify was still in its early stages as a Minimum Viable Product, UBS encountered software quirks and bugs, such as having to reload the model every time the toolkit restarted, which delayed the testing process. Nevertheless, UBS completed the testing and provided useful feedback to enhance the AI Verify toolkit. As a result of UBS’ early feedback, the bug has been fixed, and users no longer have to reload their AI models for subsequent tests.

To UBS, the AI Verify testing framework is a codified list of requirements aligned with globally recognised responsible AI principles, a notable milestone and a first of its kind among the high-level guidance issued by regulators around the world. The framework’s granularity has enabled UBS to perform gap assessments against its own internal guidelines on AI, and the bank plans to incorporate relevant AI Verify requirements into those guidelines moving forward.