What our general members say

At Accenture, we define Responsible AI as the practice of designing and deploying AI systems that prioritize safety, fairness, and positive impact on people and society. Our aim is to build trust with users affected by AI. When AI is ethically designed and implemented, it enhances the potential for responsible collaborative intelligence.

Our commitment to Responsible AI aligns with the government's broader efforts to harness the power of AI for the greater public good. We take pride in supporting the Foundation to assist organisations in scaling AI with confidence, ensuring compliance, and maintaining robust security measures.

AI represents the future of innovation, unlocking the potential to harness and leverage its transformative capabilities across industries, and redefining our work and lifestyles. Access Partnership understands and values the importance of working with expert groups like the AI Verify Foundation to help navigate areas such as the ethical use of AI, AI principles and standards, data security and privacy, intellectual property, and disinformation.

AI is transforming the way we work and create. At Adobe, AI has been instrumental in helping to further unleash the creativity and efficiency of our customers through our creative, document, and experience cloud solutions.

Adobe is proud to be one of the first to join the AI Verify Foundation to help foster, advance, and build a community to share best practices here in Singapore. Partnering with government bodies such as IMDA is an important opportunity to share ideas and ensure that the full potential of AI is realised responsibly.

As a leading RegTech company relying on cutting-edge AI, we want to be a part of a community with other like-minded companies that equally value building fair and explainable AI. We believe that the AI Verify Foundation will benefit all entities that employ AI through the adoption of a set of world-leading AI ethics principles.

Companies recognise the power of AI to create significant business impact, but they are also cognisant of the need to deploy AI in a responsible manner. We believe the recommended processes and tools developed by the AI Verify Foundation will significantly aid companies seeking to demonstrate compliance to a proper AI design standard, thus lowering the time and cost of getting to market.

The next generation of AI will be responsible AI. Our company is targeting the development of an all-in-one AI model and data diagnosis solution for responsible and trustworthy AI. The AI Verify Foundation provides us with a platform and opportunities to collaborate, learn, and make a meaningful impact in advancing responsible AI practices on a broader scale.

As an AI solution provider, we recognise the incredible power and potential that AI has, as it has started to deeply integrate with our day-to-day lives and transform the world around us. However, we also understand the importance of responsible and ethical adoption of this technology to ensure a safer and more equitable future for all.

Our vision is a world where AI is harnessed for the greater good, where businesses, governments, and individuals equally emphasize and allocate resources to the development and implementation of responsible AI tools, frameworks, and standards as much as for commercial gains. We are committed to being a key member of the AI Verify Foundation, working together to shape a future where technology and humanity can thrive in harmony.

Cybersecurity risks to AI can impede innovation, adoption, and digital trust, ultimately hampering the growth of organizations and society. AIShield provides comprehensive and self-service AI security products, serving as crucial tools for AI-first organizations across multiple industries and for AI auditors. These solutions ensure AI systems are secure, responsible, and compliant with global regulations. As part of the AI Verify Foundation, AIShield remains committed to advancing AI security technology and expertise, while steadfastly pursuing its mission of "Securing AI Systems of the World".

Amdocs empowers the financial services and banking sectors with solutions to accelerate digital transformation in the digital-first world. It is critical for Amdocs to embed responsible governance and frameworks in our work.

The AI Verify Foundation sets the baseline for AI Verify across industry, aligned to the OECD, EU and Singapore governance frameworks. This sets a benchmark in the marketplace for responsible adoption.

There are many aspects to the world of AI, including gaps in social and ethical considerations. Amdocs sees this Foundation as critical to driving AI governance principles in the marketplace, and will be embedding the AI Verify toolkit into our service offerings. Through this participation, we will strive to roll this out to our counterparts in EMEA and the US, with Singapore at the heart of the operations.


Ant Group focuses on building a robust technology governance framework as the fundamental guideline for our technological development. To us, it is crucial to ensure that technology and AI can be used in a way that benefits people in a fair, respectful, trustworthy and responsible manner. There is immense potential for technology to help underprivileged people but the key to sustainable technological development requires established standards around the basic principles for AI governance and the institutional framework for evaluating the governance gaps. We need to ensure that technology and AI development we deploy will be in line with ethical principles and societal values, ultimately bringing positive impacts to our communities.

Asia Verify is committed to leveraging technology to make trust easy when doing business with Asia. Effective governance and shared ethics principles are essential to effective AI, which in the words of Stephen Hawking, could be the biggest event in the history of our civilisation. We are delighted to contribute to the AI Verify Foundation.

The Asia Internet Coalition would like to support our member companies in ensuring the ethical and safe development of artificial intelligence technologies and to promote user privacy and trust within the digital ecosystem.

Asurion is delighted to support the mission of the AI Verify Foundation in promoting trustworthy AI solutions. We recognise the importance of responsible AI development, and our commitment aligns with the efforts of IMDA in Singapore to establish robust AI governance frameworks and toolkits. By actively participating in Singapore's AI Governance Testing Framework and Toolkit, we aim to contribute to the adoption of best practices and accelerate the responsible development of AI technology. Asurion remains dedicated to harnessing the power of AI Verify to drive innovation while upholding ethical standards, ensuring a brighter future powered by trustworthy AI.

Avanade has a Responsible AI policy and governance framework, as we believe that an AI-first culture is inherently people-first. We believe joining the AI Verify Foundation will highlight our commitment to being part of a robust platform for responsible AI, building trust and goodwill within communities and among our customers. AI testing is important to Avanade because it demonstrates responsible AI through fundamental values: being ethical, legal, and fair. In that manner, it respects human rights and values and complies with up-to-date regulations.

AI applied through machine learning (ML) will be one of the most transformational technologies of our generation, tackling some of humanity’s most challenging problems, augmenting human performance, and maximising productivity. Responsible use of these technologies is key to fostering continued innovation. AWS is committed to developing fair and accurate AI and ML services and providing customers with the tools and guidance needed to build AI and ML applications responsibly. This is why we support initiatives such as AI Verify.

BPP is delighted to join the AI Verify Foundation and contribute to the building of responsible AI, which is a key facet of our energy-efficient AI solutions.

Our mission at Beamery is to create equal access to meaningful work, skills and careers for all. Ethical, explainable AI powers our Talent Lifecycle Management platform, helping large businesses to reduce bias in hiring, get better recommendations, and stay compliant across all stages of the candidate and employee journey. We believe in transparency and take pride in being the first HR Tech company to undergo a third-party audit to demonstrate the fairness of our algorithms. We are excited to join the AI Verify Foundation as it works to foster greater trust and transparency in AI, which we believe will unlock potential across the global workforce.

One of the major barriers to AI commercialisation is the inability to explain it, and testing AI models through data metrics is one way to facilitate understanding of how they work. Since no AI testing standards exist, the only way forward is to bring together regulators with technology providers, commercial institutions, and academia, who can address this challenge in an open-source manner, and that is exactly what the AI Verify Foundation has set out to do.

At Bosch, it is our vision to take the connected and digitalized world to the next level with the help of AI, making people's lives easier, safer and more comfortable. Being part of the AI Verify Foundation enables Bosch to collaborate and engage with other industry leaders, researchers, and experts in the field of AI. This collaborative environment allows for knowledge sharing, exchanging best practices, and staying up-to-date with the latest developments, so that we can deploy AI-enabled products that are "Invented For Life".

BGA supports the AI Verify Foundation as a pioneering path forward in bringing together important players to develop trustworthy AI. At BGA, we strive to promote constructive engagements between regulators, our partners, and the overall business community. The Foundation is one such platform that presents an opportunity for companies to shape the way AI technologies, testing, and regulation are co-developed. We hope to work closely with IMDA and our partners through the AI Verify Foundation so that Singapore can reap the full benefits of AI in the future to come.

As a governance tool that helps enterprise organizations document, manage, and monitor their AI models and datasets to ensure compliance with internal and external regulations, BreezeML is a staunch advocate for the responsible and ethical development and use of artificial intelligence. With our values aligning closely with AI Verify Foundation's mission of building trust through ethical AI, we are proud to join and support the AI Verify Foundation to promote governance and compliance in the greater AI community.

BrightRaven.ai's core corporate value is "AI For Good". This includes our supporting the formation of requisite AI Ethics, Regulation, Governance & Enforcement frameworks at the National, Regional & Global levels to ensure AI is used for Good and not Evil, in Singapore and around the world. IMDA's AI Verify platform is a key component of such frameworks in Singapore, our global headquarters.

Realizing the benefits of artificial intelligence requires public trust and confidence that these technologies can be developed and deployed responsibly. BSA | The Software Alliance has for years promoted the responsible development and deployment of AI, including through BSA's Framework to Build Trust in AI, which was published in 2021 and identifies concrete and actionable steps companies can take to identify and mitigate risks of bias in AI systems. BSA also works with governments worldwide toward establishing common rules to address the potential risks of AI while realizing the technology's many benefits. The AI Verify Foundation offers an important forum for industry, government, and other stakeholders to work together toward building trustworthy AI.

The AI Verify Foundation provides the essential platform for allowing safe AI development to come to fruition, connecting networks of all capabilities to ensure trustworthy AI usage for individuals, companies, and communities. At Calvin, we are proud to contribute our expertise to its core mission.

The dialogue of Responsible AI in all its facets is vital - we are proud to be a contributing factor to the AI Verify Foundation's mission and look forward to collaborating with leading innovators in the realm of Trustworthy AI.

AI testing is crucial for our company to showcase responsible AI, offering our customers reassurance that we are dedicated to ensuring our product aligns with responsible AI practice. Joining the AI Verify Foundation is of significant importance to our company as it allows us to contribute to and stay informed about the collective community efforts aimed at advancing the deployment of responsible and trustworthy AI.

The Chartered Software Developer Association believes in promoting cross-cultural ethical and industry-leading practices for the AI and ESG revolution. As a global professional association for technology professionals, we are confident that by joining the AI Verify Foundation, our synergy will benefit the community's adoption of responsible AI practices.

With today's scale and pace of AI innovation, we are working towards establishing foundational AI governance testing tools for responsible AI applications in society, protecting the public interest, along with the frameworks, code bases, standards and leading practices for AI.

Citadel AI is proud to be a member of the AI Verify Foundation. Our AI testing and monitoring technology is used by AI auditors and developers globally, and as part of the AI Verify Foundation, we hope to accelerate our shared mission of making the world's AI systems more reliable.

Responsible and ethical AI is the key to the future. CITYDATA.ai applies AI and machine learning to make our cities smarter, safer, equitable, and resilient. In joining the AI Verify Foundation, we hope to be able to contribute to the AI governance tools and frameworks in a neutral space for the AI ecosystem to thrive and produce outcomes for the betterment of humankind.

AI testing and EAI/XAI are important for any company adopting AI technology, as they provide transparency.

Inspection without transparency is pointless. With transparency and accountability in mind, people deploying AI will be more ethical and responsible. Joining the AI Verify Foundation is the responsibility of any AI-capable company; promoting ethical AI should be our core value for a better tomorrow, a better Singapore.

Credo AI is thrilled to join the AI Verify Foundation, and we look forward to harnessing the collective power and contributions of the international open-source community to develop AI governance testing tools that can better enable the development and deployment of trustworthy AI.

We strongly believe in the importance of fostering a diverse community of developers who can collectively contribute to the development of AI testing frameworks and best practices, and we look forward to contributing our expertise and thought leadership to this pathfinding community, as we continue to work together to develop and maintain responsible AI tools, frameworks, and standards. This Foundation will nurture a diverse network of advocates for AI testing, which we believe is essential to driving the broad adoption of responsible AI globally.

As AI becomes more pervasive and greatly impacts the way we work, it is our shared responsibility (with the IT community) to align with best practices and standards to enable responsible AI. Importantly, we want to ensure fairness and trust when it comes to AI adoption, and joining the AI Verify Foundation will help CrimsonLogic do exactly that.

One of our priorities as a data science platform provider is ensuring our customers safely, responsibly, and effectively leverage and scale AI. In support of this we launched Govern - a dedicated workspace to govern AI and analytics projects - that sits alongside platform features that enable reliability, accountability, fairness, transparency, and explainability.

Tools like AI Verify can be extremely important to organisations investing in AI and analytics governance and how we work with them: they serve as a foundation that can help to give shape to strong and well-conceived AI governance practices that enable the responsible use of the technology.

AI Verify provides the much-needed gold standard for the responsible use of AI. It provides the yardstick that attests to the trustworthiness of the AI that we build. This is a ray of hope amidst mounting ethical AI concerns!

As organisations worldwide continue to drive increased adoption of AI-based solutions, it is more important than ever to establish the guardrails to ensure this is done responsibly. Singapore's regulators have, for some time now, been at the forefront in ambitiously moving beyond high-level principles and guidelines towards developing frameworks and toolkits, providing organisations with increased capability to better manage and govern their AI-based solutions.

DBS is proud to have been able to work closely with PDPC and IMDA in developing and testing some of their approaches over the years as a trusted partner; being part of the AI Verify Foundation will enhance this collaboration and help shape the emerging initiatives in this space.

Beneficial, equitable, transparent, responsible, and accountable. These guiding principles ensure that AI will benefit society and people now and in the future. By harnessing the collective power of the community, the AI Verify Foundation works to enable honest, fair, and equitable systems.

Our collaboration with the AI Verify Foundation exemplifies our belief in the transformative power of collective innovation to advance transparent, ethical, and reliable AI solutions. By joining this pivotal initiative, we can proactively shape the future of trustworthy AI, underscoring our commitment to fostering technologies that respect user privacy, fairness, and transparency. We look forward to setting new industry standards, inspiring trust, and encouraging responsible innovation in the AI ecosystem.

DXC collaborates with leading technology vendors within the AI domain, enabling us to offer impartial guidance on leveraging AI for expansion while adhering to established best practices for responsible AI implementation. The true potential of AI remains unrealized in the presence of lingering apprehension and unease among certain businesses and consumers. Through our affiliation with the AI Verify Foundation, we aim to proactively formulate and institute a conscientious AI framework in collaboration with our clients from the outset.

AI testing is a process that we welcome and appreciate as a way to showcase the extreme innovation and responsibility we put into our offering. Verification of AI is the key to its growing use and value to individuals, organizations, and society at large.

At ELGO, we take pride in helping businesses design and implement responsible AI systems. By being part of the AI Verify Foundation, we are committed to pioneering and contributing to the advancement of responsible AI use that elevates not just individual businesses, but also enriches the broader AI landscape with accessibility and trust.

At EngageRocket, we believe that joining the AI Verify Foundation enables us to deploy trustworthy and responsible AI in our products. It aligns perfectly with our vision of shaping better workplaces with credible technology.

Envision Digital is delighted to support the launch of IMDA’s AI Verify Foundation. Responsible AI has been our focus, as we recognise the need for responsible practices with the increasing deployment and limitless potential of AI innovation to support our customers. Together with IMDA, the time is now for us to advance responsible AI into action as we harness the power of AI to create a more sustainable world.

As a company that specialises in AI governance and risk management, adherence to rigorous standards is critical for our customers to demonstrate credibility, build trust with stakeholders, and ensure their AI systems are ethically developed and deployed. Joining the AI Verify Foundation will help us support that through the development of shared standards, best practices and quality tooling.

As organisations around the world continue to adopt AI solutions at the current pace and scale, they need to put proper controls and guardrails in place to ensure these solutions are safe and compliant with existing and upcoming regulations. Fairly AI is focused on accelerating responsible AI innovation, and our partnership with the AI Verify Foundation hopefully enables even more organisations to accelerate the safe and responsible adoption of AI.

FairNow is proud to join the AI Verify Foundation. We believe that building societal trust is crucial to achieving the positive, transformative potential of AI. FairNow's mission to simplify AI compliance and governance aligns with AI Verify's own goal to advance responsible AI through standards, open source, and public-private partnerships. We look forward to contributing to and harnessing the work of the AI Verify Foundation.

The rapid adoption of AI technologies in the near future is undeniably going to change the contours of the way we work and engage our customers, employees and stakeholders. As such, focusing on working out the governance, ethical, and legal frameworks of how we use this technology is now more important than ever.

FairPrice Group is committed to partnering and working constructively with relevant stakeholders such as the AI Verify Foundation and IMDA. Our aim is to support the development of Singapore’s AI ecosystem and the resultant implementation of fair and practical frameworks and guidelines to regulate the technology appropriately and proportionately.

As disseminators of responsible technology, Fidutam recognizes the pivotal role of young people in advocating for and deploying responsible technology. Fidutam's innovative fin-tech and ed-tech products have been used by over 3,400 individuals in Latin America, Sub-Saharan Africa, and the United States, enabling upward economic and educational mobility. By joining AI Verify, Fidutam aims to amplify the voice of the youth in shaping responsible AI practices globally.

Building a future with AI that is fair, explainable, accountable, and transparent is our collective responsibility. Finbots.AI is delighted to have collaborated with IMDA and PDPC to be one of the pioneering Singapore startups to complete the AI Verify toolkit. We look forward to continuing our partnership through the AI Verify Foundation by innovating on transformative use cases with the AI community and building ethical AI frameworks that are benchmarked to global standards.

GovTech leads the Singapore government's efforts to adopt AI and improve delivery of citizen-centric services as well as accelerate digital transformation. In doing so, GovTech is committed to ensuring that AI development is safe and secure to maximise its benefits and instil public trust. We are excited to join the AI Verify Foundation to develop responsible and trustworthy AI that will transform the everyday lives of people in Singapore.

The mobile industry is committed to nurturing the development of AI and big data analytics in a sustainable, ethical, and responsible manner while respecting individuals’ privacy. As part of this, the GSMA’s AI for Impact (AI4I) initiative supports members to implement products and services in a fully accountable way that is human-centric and rights-oriented. As an increasingly essential element of the infrastructure on which our society is built, AI needs to be fair, open, transparent, and explainable in its operations and customer interactions to protect customers and employees. Any entrenched inequality must be removed to ensure AI operates reliably for all stakeholders while minimising any environmental impact.

Handshakes can only help our clients do business safely when our AI is properly tested. Joining the AI Verify Foundation demonstrates that resolve.

Hanzo's core principles are security, transparency, and defensibility, empowering legal teams to uncover risks and relevance and establishing a robust evidentiary foundation for efficient and confident decision-making based on AI.

Hewlett Packard Enterprise (HPE) believes that artificial intelligence (AI) holds enormous potential to advance the way people live and work, but we must ensure that we apply these powerful tools ethically and sustainably. By joining the AI Verify Foundation and other like-minded partners, HPE will be able to support and contribute to the ongoing work to promote responsible AI, best practices and standards for AI in Singapore.

The governance of AI is a key issue for Hitachi, which recognises the significant societal impact associated with the use of this technology across its extensive business domains. We believe that the AI Verify Foundation will help businesses become more transparent to all stakeholders in their use of AI. We look forward to co-creating frameworks and ecosystems that contribute to driving broad adoption of AI governance.

The scale & pace of AI Innovation in this new modern technology era requires, at the very core, foundational AI governance frameworks to be made mainstream in ensuring the appropriate guardrails are considered while implementing responsible AI algorithmic systems into applications. The AI Verify Foundation serves this core mission and, as we progress as an advancing tech society, substantiates the need to advocate for the deployment of greater trustworthy AI capabilities.

IFPI is the voice of the recording industry worldwide, representing over 8,000 record company members across the globe. We work to promote the value of recorded music, campaign for the rights of record producers and expand the commercial uses of recorded music around the world. We believe that progress in AI innovation and adequate copyright protection are not mutually exclusive, and that human creative expression and the human artist remain fundamental to the creation of music despite increasing AI capabilities.

impress.ai helps its customers improve the accuracy of their hiring decisions using AI. To make sure that we preserve and enhance the meritocratic nature of such decisions, it is vital that the AI behind the platform is robust, fair, responsible and explainable. AI adoption is growing at an exponential rate. As a company selling AI solutions that touch millions of professionals, we have a responsibility to help shape the industry in a way that is beneficial to humanity. AI Verify Foundation is a step in the right direction and we are glad to support its efforts.

As an AI-focused company, we understand the profound impact our technology can have on society. At iNextLabs, we believe that with great innovation comes great responsibility, and we are committed to responsible and ethical AI deployment. By implementing comprehensive testing protocols, we strive to mitigate biases, enhance fairness, and fortify the robustness of our AI solutions. By joining the AI Verify Foundation, we pledge to make this vision a reality. We look forward to learning, sharing, and contributing to the best practices and standards that enable responsible AI.

The discovery of industrial processes to mass-produce nitrogen compounds resolved a deadly food crisis once faced by humanity. However, the same discovery arguably also led to applications in areas for which it was never originally intended. Learning from history, we must act now, before it is too late, to align beliefs and principles on the ethical and sustainable use of AI, as we witness the autonomous enterprise coming closer to reality. By being part of a foundation that gathers global industry leaders, the implementation of guiding principles will create a bigger impact.

We recognise the critical importance of trustworthy AI in improving patient and customer outcomes. Joining the AI Verify Foundation aligns with our mission to deliver safe and reliable virtual training solutions, and we believe in the power of open collaboration to advance responsible AI practices. We strongly support the mission of the AI Verify Foundation to foster a community dedicated to AI. By ensuring the trustworthy deployment of AI, we can drive innovation, build stakeholder trust, and create a more sustainable future for all.

AI testing enables Invigilo to understand system behaviour and potential edge cases, allowing the team to intervene where the AI system is underperforming before deploying it in real-world conditions. This allows better communication between AI developers and end users on how the AI systems arrive at their decisions and provides explanations when necessary.

As Kenek AI strives to foster connections that matter through the use of responsible AI, our membership in the AI Verify Foundation enables us to collaborate with key stakeholders in the AI community to spread the adoption of ethical AI governance and responsible AI for a fairer future for all.

Joining independent organizations like AI Verify allows us to collaborate and share real-life practical experiences and knowledge with industry leaders, thereby advancing responsible AI practices. It also provides us with a platform to advocate for ethical AI principles to raise awareness among companies and the public.

Lazada has been at the forefront of driving and responding to technical advances, working with AI experts to unlock a new era of eCommerce and retail innovations to offer differentiated experiences and opportunities for our users, sellers and partners. Joining the AI Verify Foundation is an important step in ensuring we continue to develop high-quality AI-powered services and products in a way that safeguards our platform users, and is aligned with our trust and safety policies which include data privacy and the protection of intellectual property rights.

We are focusing on the development of AI for compliance use in the financial industry, so we value governance very much and see responsible and trustworthy AI as important in product development. We would like to join the AI Verify Foundation as it is a community that values responsible and trustworthy AI, where we can have a community with the same purpose to exchange ideas and co-create best practices for AI testing in the market.

We advocate for a world where people and wildlife thrive together. This purpose drives us to actively contribute to the conservation of species, habitats, wildlife science and research. Augmenting this at our destination of the Mandai Wildlife Reserve, we nurture people's connection with the natural world, by harnessing innovative technology to educate and engage. Embedding ethics in our adoption of AI is therefore key to ensuring our technologies respect and enhance the physical environment, while benefitting the animals in our care, our employees, and visitors.

In the digital age, the synergy between people and AI drives progress. At Mastercard, we have been using AI for years as part of transaction processing to protect against fraud and cyber risks, as well as to provide insights to our customers. We highly value ethics, transparency, and reliability in AI practices, and we believe in open dialogue between sectors and diverse viewpoints. We are delighted to join the AI Verify Foundation and eagerly anticipate innovating responsibly together, ensuring ethical AI guides us toward a brighter future.

There is immense potential within the media and broadcasting industry to leverage AI. Mediacorp is exploring the use of AI in areas such as content generation, marketing, and advertising and is honoured to be among the pioneer members of the AI Verify Foundation. We look forward to working with the community of AI practitioners to exchange knowledge, collaborate on initiatives, and drive the development of robust AI governance standards in Singapore.

Joining the AI Verify Foundation signifies our commitment to responsible and trustworthy AI, ensuring that our innovation in the beauty and personal care industry is not only cutting-edge but also ethical and transparent.

Our focus is on ensuring that AI at Meta benefits people and society, and we proactively promote responsible design and operation of AI systems by engaging with a wide range of stakeholders, including subject matter experts, policymakers, and people with lived experiences of our products. To that end, we look forward to participating in the AI Verify Foundation and contributing to this important dialogue in Singapore and across the entire Asia Pacific region.

MAIEI is delighted to be joining the AI Verify Foundation given its focus on operationalizing Responsible AI and making it easier for as many organizations as possible to adopt these practices throughout their design, development, and deployment of AI systems. It aligns with our mission of democratizing AI ethics literacy, ultimately seeking to make Responsible AI the norm rather than the exception.

Music Rights (Singapore) Public Limited, also known as MRSS, is a Not-For-Profit Collective Management Organisation (CMO) that represents the majority of Music Producers and administers the Copyrights for Karaoke, Music Videos, and Sound Recordings on their behalf. MRSS believes that even in this age of rapid AI development, the rights of Creators and Producers should be safeguarded, and the use of copyrighted works should require the full authorisation and licensing from rights holders.

AI testing demonstrates NCS’ commitment to delivering responsible, safe, and equitable AI solutions. We harness technology to provide right-sized cybersecurity solutions that future-proof cyber resiliency and shape the future of AI. Our clients trust us to safeguard their digital transformation journeys, leveraging our expertise and end-to-end capabilities to enhance their security posture, streamline processes, and strengthen governance. Joining the AI Verify Foundation underscores our dedication to ethical AI governance and building a secure and resilient digital future.

As a company developing Python-based rPPG software, AI testing is crucial to demonstrate responsible AI practices, ensuring the accuracy, fairness, and ethical considerations of our algorithms. Joining the AI Verify Foundation is vital as it allows us to contribute to advancing the deployment of responsible and trustworthy AI, aligning our commitment to ethical development with a community dedicated to fostering AI transparency and accountability.

As a pioneer in the AI field, OCBC Bank is committed to ensuring that the future of AI is fair to all. The AI Verify Foundation is a key enabler in achieving the goal of trustworthy AI.

Joining the AI Verify Foundation is a valuable opportunity for our company to contribute to the development of trustworthy AI and collaborate with a diverse network of advocates in the industry. We fully support the mission of the AI Verify Foundation to foster open collaboration, establish standards and best practices, and drive broad adoption of AI testing for responsible and trustworthy AI.

At Qualcomm, we strive to create AI technologies that bring positive change to society. Our vision for on-device AI is grounded in transparency, accountability, fairness, environmental stewardship, and human-centricity. We aim to act as a responsible steward of AI, considering the broader implications of our work and taking steps to mitigate any potential harm. Our on-device AI solutions are designed to enable enhanced privacy and security, which are essential to a robust and trustworthy AI ecosystem. Our hope is that, through the AI Verify Foundation, we can contribute to a broad, collaborative effort to build a common framework in support of internationally recognized AI governance principles, as an effective path towards responsible, human-centric AI.

As a venture capital firm investing in data and AI companies, we believe that AI use must be ethical even as companies innovate and deliver new technologies for the betterment of society. Being part of the Foundation will enable us to work with like-minded members, and to use and contribute to robust, practical AI toolkits and guidelines, with the goal of championing responsible-use principles as the ground from which further AI technology is developed.

The Recording Industry Association Singapore (RIAS) comprises 25 leading major and independent record companies in Singapore, with a mission to promote recorded music and expand its market utilisation, and to safeguard the rights of record producers and their artistes. Our members understand that while AI technology has empowered human expression, human-created works will continue to play an essential role in the music industry, and they believe that copyright should protect the unique value of human intellectual creativity.

RegTank looks forward to contributing to evolving AI standards and testing methodologies through our membership in the AI Verify Foundation, forging greater trust with clients, regulators, and other stakeholders.

As AI's impacts become increasingly widespread, the responsible AI community must have access to clear guidance on context-relevant AI testing methodologies, metrics, and intervals. The Responsible AI Institute is excited to support the AI Verify Foundation, given its proven leadership in AI testing, dedication to making its work accessible, and commitment to international collaboration.

As technologists and practitioners of AI, Responsible AI is a core principle at retrain.ai. From shaping NYC's Law 144 and conducting extensive research on AI risks, to launching the first-ever Responsible HR Forum and embedding explainability, fairness algorithms, and continuous testing that ensure our AI models meet the highest standards of responsible methodology and regulatory compliance, we view Responsible AI as one of our main pillars. Joining the AI Verify Foundation is an extension of our dedication to responsible AI development, deployment, and practices in HR processes.

SAP is one of the first companies in the world to define trustworthy and ethical guiding principles for using AI in our software, and continues to be a leader in responsible business AI. Through our participation in the AI Verify Foundation we look forward to contributing our global expertise to support the development and deployment of responsible AI that will help the world run better and improve people's lives.

As more and more solutions and decisions are developed with the help of AI, there is a greater need to adopt responsible AI, and there is a greater responsibility on our shoulders to help customers to do that effectively and efficiently.

Scantist believes robust AI testing is crucial for responsible AI implementation, especially in cybersecurity. Joining the AI Verify Foundation amplifies our commitment to shaping a future where secure cyber-systems, including AI, are the standard, not the exception.

The AI Verify Foundation will advance the nation's commitment to fostering trustworthy AI as a cornerstone of Singapore's AI ecosystem. At SenseTime International, we look forward to co-creating a future with the Foundation where AI technologies are developed and deployed responsibly, embody international best practices, and are recognised for their positive whole-of-society impact.

As an early user of AI Verify, Singapore Airlines recognised the importance of responsible AI and AI governance as a strong foundation for our AI initiative. The testing framework of AI Verify facilitated this work and enabled us to further strengthen data trust among our stakeholders. Joining the AI Verify Foundation supports our digital transformation journey and places us in a collaborative network promoting ethical AI.

AI carries potential risks that, if not proactively managed, can create a significant negative impact on organizations and society as a whole. SigmaRed is committed to making AI more responsible and secure, and is pleased to join the AI Verify Foundation.

Trust is essential for public acceptance of AI technologies. The community of developers and stakeholders that the AI Verify Foundation will convene promises the development and deployment of more trustworthy AI.

AI testing is paramount to SoftServe because it embodies our commitment to delivering responsible AI solutions. In an era where AI is evolving, we recognize the need to ensure our technologies are transparent, accountable, and beneficial for all stakeholders. By rigorously testing our AI solutions, we guarantee their functionality and ensure they align with ethical standards and values we uphold.


Joining the AI Verify Foundation is a strategic decision. Being part of the Foundation positions us at the forefront of global AI standards and best practices. It would also be a great way to further communicate our commitment to responsible AI and be a part of a community that contributes to regional initiatives in this space.

SPH Media's mission is to be the trusted source of news on Singapore and Asia, to represent the communities that make up Singapore, and to connect them to the world. We recognize the importance of AI and are committed to responsible AI practices. We strive to build AI systems that are human-centric, fair, and free from unintended discrimination. This process will be enhanced by AI testing, which allows us to identify and address potential risks associated with AI and aids us in our mission.

The mission of the AI Verify Foundation resonates with Squirro’s belief in the responsible and transparent development and deployment of AI. We look forward to participating in this vibrant global community of AI professionals to collectively address the challenges and risks associated with AI.

We are heartened that IMDA is leading the way in ensuring AI systems adhere to ethical and principled standards. As a member, ST Engineering will do its part to advance AI solutions and to shape the future of AI in a positive and beneficial way.

The capabilities of AI-driven systems are increasing rapidly, as we have seen with large language models and generative AI. The democratisation of access will lead to the widespread deployment of AI capabilities at scale. Evaluating AI systems for alignment with our internal Responsible AI Standards is a key step in managing emerging risks, and testing is a critical component in the evaluation process.

The pace and scale of change concerning AI systems require risk management and governance to evolve accordingly so users can derive the benefits in a safe manner. This cannot be done independently, and it is better to collaborate with the wider industry and government agencies to advance the deployment of responsible AI. Standard Chartered has partnered with IMDA to launch the AI Verify framework, and joining the AI Verify Foundation is a logical next step to ensure we can collaboratively innovate and manage risks effectively.

StoreWise is on a mission to transform brick-and-mortar retail and create memorable shopping experiences by infusing cutting-edge technology into retailers' operations so they can thrive. Becoming a member of the AI Verify Foundation shows our commitment to using AI technology responsibly and to contributing to building safeguards as the technology evolves, for the benefit of our clients, their customers, and the community.

Strides Digital is excited to join the AI Verify Foundation community to use and develop AI responsibly, as we help companies capture value on their decarbonisation and fleet electrification journey.

AI is seen as a transformative technology that offers opportunities for innovation to improve efficiency and productivity. As the need for AI-powered solutions continues to surge, the active engagement of the community in the development of best practices and standards will be pivotal in shaping the future of responsible AI. Tau Express wholeheartedly supports this initiative by IMDA, and we look forward to leveraging the available toolkits to continue building trust and user confidence in our technology solutions.

AI testing forms the bedrock of TeamSolve's commitment to responsible AI development. It serves as our unwavering assurance to the operational workforce that they can place their complete trust in our AI Co-pilot, Lily, knowing that it relies on trustworthy knowledge sources and provides recommendations firmly rooted in their domain.

The AI Verify Foundation and its members collectively play a pivotal role in advancing AI towards higher standards of accountability and trustworthiness, for greater acceptance in society.

At Tech4Humanity, we believe responsible testing and validation are vital to developing trustworthy AI that uplifts society. By joining the AI Verify Foundation, we aim to collaborate with partners across sectors to create frameworks and methodologies that proactively address algorithmic harms and demonstrate AI's readiness for broad deployment. Our goal is to advance the creation of human-centric AI that augments our collective potential.


AI testing is pivotal for deploying responsible AI, ensuring safety and risk management. Access to valuable resources and regulatory alignment supports transparency and continuous improvement, which in turn ensure reliability and scalability—all essential for building trust with our clients. Joining the AI Verify Foundation is important for Temus as we collaborate with enterprises on their digital transformation journeys. We aim to foster collaboration and mutual accountability, setting high standards of integrity in this frontier of innovation, so that we all might unlock social and economic value sustainably.

As Tictag is focused on producing the highest-quality data for AI and machine learning, the AI Verify Foundation aligns perfectly with our mission of making AI trustworthy not just in purpose but in substance. AI ethics is at the core of our very human-centric work, and the reputation of AI Verify will be important to rely on as we expand overseas.

We see value in networking and exchanging ideas with industry leaders. As advanced AI is no longer a distant prospect, industry leaders are increasingly discussing guardrails, safety measures, and what's next in store. Singapore is at the forefront of AI development, and Singaporean companies should join this conversation as well, so this is a very timely initiative.

Our project stands for governance and transparency - hallmarks of AI Verify's framework that we are proud to adopt ourselves and promote. We encourage testing as a means to achieving the overall mission of the Foundation.


The Foundation's movement to advance responsible and trustworthy AI is the rising tide that will lift all boats. We are inspired by its work and we want to be part of the movement to foster trust whilst advancing AI. We commit to responsible practices of development and deployment.

AI Verify is an important step towards enhancing trustworthiness and transparency in AI systems as we move up the learning curve. In order for AI to live up to its full potential, we need to build and earn this trust. We believe that developing specialised skilled talent and capabilities is the cornerstone of creating AI trust and governance guardrails and toolkits. Making the technology safer is key, and we are glad to support IMDA, who have taken the lead in nurturing future champions of responsible AI.

As a leading provider of testing and monitoring software for AI systems, we are delighted to be part of the AI Verify Foundation community and contribute practically to the Trustworthy AI conversation.

Trusted AI’s mission is to help organisations instill trust in the very DNA of their AI programs, as seen in our logo. We are excited at the opportunity to partner with the AI Verify Foundation as we are aligned with its mission, and together, we can advance the development and deployment of trustworthy AI globally.

UBS is proud to be one of IMDA’s inaugural AI Verify Foundation members and to have participated in the AI Verify pilot test. We will continue to engage leading fintechs, investors, and companies to decode emerging AI trends. Through the AI Verify Foundation, we aim to promote the use of AI in an ethical and trustworthy manner.

UCARE.AI has supported IMDA since participating in the first publication of their AI Governance Framework in 2019 and has continued to align our processes when deploying AI solutions for our customers. We believe that the establishment of the Foundation will foster collaboration, transparency, and accessibility, which is crucial in promoting trustworthy AI.

Ethical use of data is an integral part of our operating DNA, and UOB has been recognised as a champion of AI ethics and governance. By joining the AI Verify Foundation, UOB hopes to contribute to thought leadership in responsible AI.

VFlowTech employs AI in its EMS, as it is the only way to enhance solar and energy storage efficiency. Given cybersecurity concerns, we also believe that an ethical code for responsible AI must be established.

At Vidreous, we employ GenAI models to classify data and provide insights to enhance our user experience. As AI is known to hallucinate on new information, it is necessary to establish a quality management process to ensure output accuracy. Joining AI Verify Foundation allows us to advance the quality management and deployment of our products as part of the larger collective community in practicing responsible AI. Together, we will make a difference in delivering trust to our people.

It is essential to ensure that the plethora of apps using AI models today produce accurate and reliable results. Our vision at Virtusa is to establish ourselves as a strong capability hub for AI testing and to work in collaboration with like-minded communities. Hence, we are keen to work with the AI Verify Foundation in promoting the responsible, ethical, and sustainable use of AI, thereby building trust with our clients and other stakeholders.

Trust remains at the core of everything we do and is the foundation upon which data-driven products and innovations are built. Creating a governance structure that prioritises the responsible stewardship of data and establishing robust measures such as consent management form a core foundation for responsible AI. We are excited and honoured to be a part of the AI Verify Foundation Committee and contribute towards the development and deployment of responsible AI in Singapore.

It is important for Warner Music Singapore to join the AI Verify Foundation because it allows us to support the development and deployment of trustworthy AI. By joining, we can collaborate with developers, contribute to AI testing frameworks, and share ideas on governing AI. The Foundation provides a neutral platform for us to collaborate and aims to promote AI testing through marketing and education. Being part of this diverse network of advocates will help us drive broader adoption of reliable AI practices.

At WeBank, we champion technology's role in inclusive finance and sustainable development. As the world's leading digital bank, we've seamlessly integrated advanced technologies like AI, blockchain, and big data. Our dedication has led to milestones such as innovative distributed system architectures and providing tailored financial solutions to millions. Joining the AI Verify Foundation positions us to shape robust governance and advance human-centric AI innovations. As we share our insights and learn from peers, we remain focused on cultivating an AI ecosystem grounded in trust, accountability, and equitable digital progression.

We see significant value in membership. It allows us to contribute to developing standards for AI governance, shape best practices, and signal our commitment to trustworthy AI. The open-source approach enables continuous progress through collaboration.

Workday welcomes the establishment of the AI Verify Foundation, which will serve as a community for like-minded stakeholders to contribute to the continued development of responsible and trustworthy AI and Machine Learning. We believe that for AI and ML to fully deliver on the possibilities they offer, we need more conversations around the tools and mechanisms that can support the development of responsible AI. Workday is excited to be a member of the Foundation, and we look forward to contributing to the Foundation’s work and initiatives.

Joining the AI Verify Foundation aligns with X0PA’s commitment to responsible AI practices, as we look to harness the power of AI to promote unbiased and equitable practices in hiring and selection.

In 2021, Zoloz proposed a trustworthy AI architecture covering explainability, fairness, robustness, and privacy protection. Trustworthy AI is a core capability for resisting risk in the digital age. By joining the AI Verify Foundation, we hope to continuously refine our AI capabilities and build an open, responsible, and trustworthy AI technology ecosystem that empowers the digital economy and the industry ecosystem. Through continued practice, we hope to promote the adoption of AI and other technologies across the industry and create more value for society.

Many existing Zoom products that customers know and love already incorporate AI. As we continue to invest in AI, Zoom remains committed to ethical and responsible AI development; our AI approach puts user security and trust at the center of what we do. We will continue to build products that help ensure equity, privacy, and reliability.

The Foundation demonstrates the leading stance IMDA is taking to ensure that AI Governance becomes core to all organisations and society, not limiting its availability, but ensuring that all actors using AI can benefit from AI Governance at this pivotal moment in AI's progression. 2021.AI will endeavour to be a core member, contributing its AI Governance offering and expertise.
