From beta testing to success: how SIA takes responsible AI implementation to the next level with AI Verify

Singapore Airlines (SIA), the national carrier of Singapore, is one of the world’s leading airlines, dedicated to providing air transportation services of the highest quality. Innovation is at the heart of SIA’s brand promise, enabling it to offer world-leading products and services while creating new business opportunities. To enhance customer experience and operational efficiency, the airline developed AI-powered solutions for customer-facing and operational applications. One such solution is Joey.

SIA piloted the AI Verify testing framework on Joey:

  • AI model: Question-Answering model
  • Use Case: Joey – HR Chatbot

Introducing Joey, SIA’s first chatbot developed in-house for HR-related queries. Launched in April 2021, Joey uses Smart Search technology to provide fast and accurate responses, improving efficiency, accuracy, and accessibility in HR processes by:

  • Providing staff with self-service access to HR-related information anytime, anywhere in the world, improving employee satisfaction.
  • Freeing up time for HR staff to focus on more value-added tasks.

An early adopter during beta testing

SIA was among the early users of AI Verify, participating in its beta testing even before the Minimum Viable Product was launched. Recognising the importance of responsible AI practices, SIA saw AI Verify as a strong foundation to kickstart its AI testing and aimed to:

  • Foster the development and use of AI systems that are fair, transparent, and accountable, while ensuring their practices adhere to AI ethical guidelines
  • Strengthen digital and data trust in AI by ensuring that AI systems are developed and deployed to maximise long-term value for stakeholders, considering the social, environmental, and economic impact of AI. This is particularly important due to potential risks such as algorithmic bias, personal data breaches, and safety issues that may undermine SIA’s customers’ confidence in AI systems
  • Demonstrate how trustworthy AI can mitigate business risks, while ensuring compliance with data protection regulations to protect sensitive information of SIA’s staff and customers

When SIA first started its journey with AI Verify, it was not without challenges, as the organisation had to spend a substantial amount of time to:

  • Familiarise itself with the way AI Verify organised the AI ethics principles and their respective testing criteria
  • Understand how these principles could be applied to technical tests and process checks
  • Set up an adequate environment and document all findings and results from the testing process

At the beta testing stage, AI Verify supported only a limited set of AI model types. SIA used a state-of-the-art Natural Language Processing model to retrieve and rank relevant HR documents and FAQs in response to user queries; however, this model type was not among AI Verify’s supported algorithms.
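
To make the description above concrete, here is a minimal, generic sketch of the retrieve-and-rank pattern behind such a question-answering chatbot. It is not SIA’s implementation: the FAQ entries are invented, and a simple TF-IDF ranker stands in for the state-of-the-art NLP model described above.

```python
# Illustrative sketch only: a generic retrieve-and-rank flow for an FAQ chatbot.
# The FAQ entries below are hypothetical, and TF-IDF with cosine similarity
# stands in for the production NLP model mentioned in the case study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base of HR answers/FAQs
faq_answers = [
    "Annual leave can be applied for through the HR portal.",
    "Medical claims must be submitted within 30 days of the receipt date.",
    "Staff travel benefits are described in the employee handbook.",
]

vectorizer = TfidfVectorizer()
faq_vectors = vectorizer.fit_transform(faq_answers)  # index the documents once

def answer(query: str, top_k: int = 1) -> list[str]:
    """Retrieve and rank the most relevant FAQ entries for a user query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, faq_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]  # highest similarity first
    return [faq_answers[i] for i in ranked]

print(answer("How do I apply for annual leave?"))
```

In a production system such as Joey, the stand-in ranker would be replaced by a far more capable model, but the overall flow of indexing documents, scoring them against a query, and returning the top-ranked answers is the same.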

These limitations in the technical testing tools did not stop SIA from pursuing further tests. SIA reviewed and tested its AI initiatives using the testing framework and process checks where possible. While the testing process was rigorous and intensive, SIA felt it was critical to integrate responsible AI practices into its AI development process. SIA made several improvements, including:

  • Recalibrating its AI development process to introduce relevant principle testing at different stages
  • Enhancing the model card and introducing a visual aid to quantify how well an AI model could be explained responsibly (an illustrative sketch of a model card follows this list)
  • Continuing to drive awareness and education on responsible AI to all staff
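
As an illustration of the model card improvement mentioned in the list above, the sketch below shows the kinds of fields such documentation commonly captures. The field names and values are assumptions for illustration only and do not reflect SIA’s actual model card.

```python
# Hypothetical model card sketch for a question-answering HR chatbot.
# All field names and values are illustrative assumptions, not SIA's documentation.
model_card = {
    "model_name": "HR question-answering chatbot",
    "intended_use": "Answering staff queries on HR policies via retrieval and ranking",
    "out_of_scope": ["Legal advice", "Decisions about individual employees"],
    "training_data": "Description of the internal HR documents and FAQs used",
    "evaluation": {"top-1 answer accuracy": None, "coverage of HR topics": None},
    "explainability_score": None,  # a visual aid could present this to reviewers
    "limitations": ["May miss queries phrased very differently from indexed FAQs"],
    "responsible_ai_checks": ["Fairness review", "Data protection review"],
}
```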

SIA believes that the AI Verify toolkit is a great starting point for organisations, as it provides a robust testing framework and checklist for conducting self-assessments of AI systems. Such self-assessments are also key to the development of international standards and industry benchmarks.