AI safety and testing are crucial to the Analytics & AI Association of the Philippines (AAP) and its members because they ensure responsible AI deployment, aligning with our mission to foster ethical and inclusive innovation. Joining the AI Verify Foundation exemplifies our commitment to advancing trustworthy AI, promoting global standards, and enhancing collaboration. This partnership underscores our dedication to creating a robust AI ecosystem that benefits society while upholding integrity and transparency in AI practices.
ACCA is passionate about striking the right balance between harnessing the benefits of AI and doing so in a responsible way that considers the public interest. Our members have expertise in areas such as assurance, internal controls and risk/governance and can bring a business and finance lens to complement the work of technologists. We see joining the AI Verify Foundation as an important way to build this type of partnership – so that the pace of AI development doesn’t leave users and the public behind.
At Accenture, we define Responsible AI as the practice of designing and deploying AI systems that prioritize safety, fairness, and positive impact on people and society. Our aim is to build trust with users affected by AI. When AI is ethically designed and implemented, it enhances the potential for responsible collaborative intelligence.
Our commitment to Responsible AI aligns with the government's broader efforts to harness the power of AI for the greater public good. We take pride in supporting the Foundation to assist organisations in scaling AI with confidence, ensuring compliance, and maintaining robust security measures.
AI represents the future of innovation, unlocking the potential to harness and leverage its transformative capabilities across industries, and redefining our work and lifestyles. Access Partnership understands and values the importance of working with expert groups like the AI Verify Foundation to help navigate areas such as the ethical use of AI, AI principles and standards, data security and privacy, intellectual property, and disinformation.
AI is transforming the way we work and create. At Adobe, AI has been instrumental in helping to further unleash the creativity and efficiency of our customers through our creative, document, and experience cloud solutions.
Adobe is proud to be one of the first to join the AI Verify Foundation to help foster, advance, and build a community to share best practices here in Singapore. Partnering with government bodies such as IMDA is an important opportunity to share ideas and ensure that the full potential of AI is realised responsibly.
As a leading RegTech company relying on cutting-edge AI, we want to be a part of a community with other like-minded companies that equally value building fair and explainable AI. We believe that the AI Verify Foundation will benefit all entities that employ AI through the adoption of a set of world-leading AI ethics principles.
Aicadium is proud to be a member of the AI Verify Foundation. With the rapid growth of AI in business, government, and the daily lives of people, it is vitally important that AI is robust, fair, and safe.
We look forward to working with the Foundation to take AI governance to the next level. We are committed to the development of rigorous, technical algorithmic audits and third-party AI test lab capabilities, which we believe are an essential component of the AI ecosystem to help organisations deliver AI as a benefit to all.
Companies recognise the power of AI to create significant business impact, but they are also cognisant of the need to deploy AI in a responsible manner. We believe the recommended processes and tools developed by the AI Verify Foundation will significantly aid companies seeking to demonstrate compliance with a proper AI design standard, thus lowering the time and cost of getting to market.
The next generation of AI will be responsible AI. Our company is targeting the development of an all-in-one AI model and data diagnosis solution for responsible and trustworthy AI. The AI Verify Foundation provides us with a platform and opportunities to collaborate, learn, and make a meaningful impact in advancing responsible AI practices on a broader scale.
As an AI solution provider, we recognise the incredible power and potential that AI has, as it has started to deeply integrate with our day-to-day lives and transform the world around us. However, we also understand the importance of responsible and ethical adoption of this technology to ensure a safer and more equitable future for all.
Our vision is a world where AI is harnessed for the greater good, where businesses, governments, and individuals equally emphasize and allocate resources to the development and implementation of responsible AI tools, frameworks, and standards as much as for commercial gains. We are committed to being a key member of the AI Verify Foundation, working together to shape a future where technology and humanity can thrive in harmony.
At AIQURIS, ensuring the safety, reliability and ethical use of AI is central to our mission. We empower organisations to fully harness this transformative technology by identifying and managing risks and by ensuring the overall quality of AI systems.
The AI Verify Foundation offers a unique environment for developing and deploying AI responsibly, in collaboration with platform members. By promoting best practices and standards, it supports the entire ecosystem in delivering high-performance, compliant AI solutions that organisations can trust and confidently scale.
Cybersecurity risks to AI can impede innovation, adoption, and digital trust, ultimately hampering the growth of organizations and society. AIShield provides comprehensive and self-service AI security products, serving as crucial tools for AI-first organizations across multiple industries and for AI auditors. These solutions ensure AI systems are secure, responsible, and compliant with global regulations. As part of the AI Verify Foundation, AIShield remains committed to advancing AI Security technology and expertise, while steadfastly pursuing its mission of "Securing AI Systems of the World”.
Amdocs empowers the financial services and banking sectors with solutions to accelerate digital transformation in the digital-first world. It is critical for Amdocs to embed responsible governance and frameworks in our work.
The AI Verify Foundation sets the baseline for AI Verify across industry, aligned with the OECD, EU and Singapore governance frameworks. This sets a benchmark in the marketplace for responsible adoption.
There are many aspects to the world of AI, including gaps in social and ethical considerations. Amdocs sees this Foundation as critical to driving AI governance principles in the marketplace, and will be embedding the AI Verify toolkit into our service offerings. Through this participation, we will strive to roll this out to our counterparts in EMEA and the US, with Singapore at the heart of the operations.
At Armilla, we’re committed to advancing responsible AI by providing risk mitigation solutions backed by rigorous AI assessments. Our comprehensive evaluations identify potential vulnerabilities, ensuring compliance and fairness, while our risk mitigation solutions give businesses the confidence to deploy AI with the assurance that they’re reducing exposure to operational, reputational, and regulatory risks.
Ant Group focuses on building a robust technology governance framework as the fundamental guideline for our technological development. To us, it is crucial to ensure that technology and AI can be used in a way that benefits people in a fair, respectful, trustworthy and responsible manner. There is immense potential for technology to help underprivileged people, but sustainable technological development requires established standards around the basic principles for AI governance and an institutional framework for evaluating governance gaps. We need to ensure that the technology and AI we deploy will be in line with ethical principles and societal values, ultimately bringing positive impacts to our communities.
Asia Verify is committed to leveraging technology to make trust easy when doing business with Asia. Effective governance and shared ethics principles are essential to effective AI, which in the words of Stephen Hawking could be the biggest event in the history of our civilisation. We are delighted to contribute to the AI Verify Foundation.
The Asia Internet Coalition would like to support our member companies in ensuring the ethical and safe development of artificial intelligence technologies and to promote user privacy and trust within the digital ecosystem.
Resilient and Safe AI is a key research area for A*STAR, as we believe that it is key to reap AI's full transformative potential. As a member of the AI Verify Foundation, A*STAR will harness its AI Governance Testing toolkit and its extensive ecosystem to continue developing AI technologies that are trusted by our industry partners and the community.
Asurion is delighted to support the mission of the AI Verify Foundation in promoting trustworthy AI solutions. We recognise the importance of responsible AI development, and our commitment aligns with the efforts of IMDA in Singapore to establish robust AI governance frameworks and toolkits. By actively participating in Singapore's AI Governance Testing Framework and Toolkit, we aim to contribute to the adoption of best practices and accelerate the responsible development of AI technology. Asurion remains dedicated to harnessing the power of AI Verify to drive innovation while upholding ethical standards, ensuring a brighter future powered by trustworthy AI.
Avanade has a Responsible AI policy and governance framework because we believe that an AI-first culture is inherently people-first. We believe joining the AI Verify Foundation will highlight our commitment to be part of a robust platform for responsible AI, building trust and goodwill within communities and with our customers. AI testing is important to Avanade because it demonstrates responsible AI grounded in fundamental values: being ethical, legal and fair. In that way, it respects human rights and values and complies with up-to-date regulations.
BPP is delighted to join the AI Verify Foundation and contribute to the building of responsible AI, which is a key facet of our energy-efficient AI solutions.
At BCG X, rigorous AI safety testing and evaluation is critical to balancing ethical standards with generating lasting business impact. It is our strong belief that AI solutions cannot be built and scaled without a robust Responsible AI program to deliver transformative business value while mitigating financial, reputational, and regulatory risks for our clients and potential harms to individuals and society. As BCG X uses AI to solve some of the biggest challenges our clients face, balancing business impact with strong ethical standards is critical for responsible AI deployment. Joining the AI Verify Foundation allows us to share our experience and underscores our commitment and belief in Singapore as a leading innovation hub.
BDO recognises AI's potential to revolutionise organisations and unlock human capabilities. We focus on responsible AI adoption, translating theory into practical solutions for business challenges. Our collaboration with AI Verify strengthens our approach in three crucial areas: implementing advanced threat detection for AI-generated attacks, ensuring ethical AI use aligned with governance frameworks, and partnering with AI experts to stay ahead of emerging threats. This comprehensive strategy allows BDO to effectively guide clients through AI-driven digital transformation, ensuring safety and innovation both in Singapore and globally.
Our mission at Beamery is to create equal access to meaningful work, skills and careers for all. Ethical, explainable AI powers our Talent Lifecycle Management platform, helping large businesses to reduce bias in hiring, get better recommendations, and stay compliant across all stages of the candidate and employee journey. We believe in transparency and take pride in being the first HR Tech company to undergo a third-party audit to demonstrate the fairness of our algorithms. We are excited to join the AI Verify Foundation as it works to foster greater trust and transparency in AI, which we believe will unlock potential across the global workforce.
One of the major barriers to AI commercialisation is the inability to explain it, and testing AI models through data metrics is one way to facilitate understanding of how they work. Since no AI testing standards exist, the only way forward is to bring together regulators with technology providers, commercial institutions, and academia who can address this challenge in an open-source manner, and that is exactly what the AI Verify Foundation has set out to do.
As an AI cloud services provider, Bitdeer AI aims to make AI accessible to everyone by building robust infrastructure and fostering a vibrant ecosystem for researchers, developers, and consumers. Our commitment to trust, excellence, and responsible AI is exemplified by our partnership with the AI Verify Foundation. We recognise the critical importance of AI testing and look forward to contributing to the Foundation's mission to ensure AI is harnessed responsibly for the betterment of humanity.
At Bosch, it is our vision to take the connected and digitalized world to the next level with the help of AI, making people's lives easier, safer and more comfortable. Being part of the AI Verify Foundation enables Bosch to collaborate and engage with other industry leaders, researchers, and experts in the field of AI. This collaborative environment allows for knowledge sharing, exchanging best practices, and staying up to date with the latest developments, so that we can deploy AI-enabled products that are "Invented For Life".
BGA supports the AI Verify Foundation as a pioneering path forward in bringing together important players to develop trustworthy AI. At BGA, we strive to promote constructive engagement between regulators, our partners, and the overall business community. The Foundation is one such platform that presents an opportunity for companies to shape the way AI technologies, testing, and regulation are co-developed. We hope to work closely with IMDA and our partners through the AI Verify Foundation so that Singapore can reap the full benefits of AI in the years to come.
As a governance tool that helps enterprise organizations document, manage, and monitor their AI models and datasets to ensure compliance with internal and external regulations, BreezeML is a staunch advocate for the responsible and ethical development and use of artificial intelligence. With our values aligning closely with AI Verify Foundation's mission of building trust through ethical AI, we are proud to join and support the AI Verify Foundation to promote governance and compliance in the greater AI community.
BrightRaven.ai's core corporate value is "AI For Good". This includes our supporting the formation of requisite AI Ethics, Regulation, Governance & Enforcement frameworks at the National, Regional & Global levels to ensure AI is used for Good and not Evil, in Singapore and around the world. IMDA's AI Verify platform is a key component of such frameworks in Singapore, our global headquarters.
Realizing the benefits of artificial intelligence requires public trust and confidence that these technologies can be developed and deployed responsibly. BSA | The Software Alliance has for years promoted the responsible development and deployment of AI, including through BSA's Framework to Build Trust in AI, which was published in 2021 and identifies concrete and actionable steps companies can take to identify and mitigate risks of bias in AI systems. BSA also works with governments worldwide toward establishing common rules to address the potential risks of AI while realizing the technology's many benefits. The AI Verify Foundation offers an important forum for industry, government, and other stakeholders to work together toward building trustworthy AI.
The AI Verify Foundation provides an essential platform for bringing safe AI development to fruition, connecting networks of all capabilities to ensure trustworthy AI usage for individuals, companies, and communities. At Calvin, we are proud to contribute our expertise to its core mission.
The dialogue on Responsible AI in all its facets is vital - we are proud to contribute to the AI Verify Foundation's mission and look forward to collaborating with leading innovators in the realm of Trustworthy AI.
AI testing is crucial for our company to showcase responsible AI, offering our customers reassurance that we are dedicated to ensuring our product aligns with responsible AI practices. Joining the AI Verify Foundation is of significant importance to our company as it allows us to contribute to and stay informed about the collective community efforts aimed at advancing the deployment of responsible and trustworthy AI.
Concordia AI aims to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We believe rigorous third-party testing throughout the AI lifecycle is vital to ensure that we can reap the benefits of AI while safeguarding against potential harms. Concordia AI is pleased to join the AI Verify Foundation to contribute to this global effort.
Chartered Software Developer Association believes in promoting cross-cultural, ethical and industry-leading standards and practices for the AI & ESG revolution. As a global professional association for technology professionals, we are confident that by joining the AI Verify Foundation, our synergy will benefit the community's adoption of responsible AI practices.
With today's scale and pace of AI innovation, we are working towards establishing foundational AI governance testing tools for responsible AI applications in society that protect the public interest, along with the frameworks, code base, standards and leading practices for AI.
Citadel AI is proud to be a member of the AI Verify Foundation. Our AI testing and monitoring technology is used by AI auditors and developers globally, and as part of the AI Verify Foundation, we hope to accelerate our shared mission of making the world's AI systems more reliable.
Responsible and ethical AI is the key to the future. CITYDATA.ai applies AI and machine learning to make our cities smarter, safer, more equitable, and more resilient. In joining the AI Verify Foundation, we hope to contribute to AI governance tools and frameworks in a neutral space for the AI ecosystem to thrive and produce outcomes for the betterment of humankind.
AI testing and EAI/XAI are important for any company adopting AI technology; they provide transparency.
Inspection without transparency is pointless. With transparency and accountability in mind, people deploying AI will be more ethical and responsible. Joining the AI Verify Foundation is the responsibility of any AI-capable company; promoting ethical AI should be our CORE VALUE for a better tomorrow, a better Singapore.
Credo AI is thrilled to join the AI Verify Foundation, and we look forward to harnessing the collective power and contributions of the international open-source community to develop AI governance testing tools that can better enable the development and deployment of trustworthy AI.
We strongly believe in the importance of fostering a diverse community of developers who can collectively contribute to the development of AI testing frameworks and best practices, and we look forward to contributing our expertise and thought leadership to this pathfinding community, as we continue to work together to develop and maintain responsible AI tools, frameworks, and standards. This Foundation will nurture a diverse network of advocates for AI testing, which we believe is essential to driving the broad adoption of responsible AI globally.
As AI becomes more pervasive and greatly impacts the way we work, it is our shared responsibility (with the IT community) to align with best practices and standards to enable responsible AI. Importantly, we want to ensure fairness and trust when it comes to AI adoption, and joining the AI Verify Foundation will help CrimsonLogic do exactly that.
One of our priorities as a data science platform provider is ensuring our customers safely, responsibly, and effectively leverage and scale AI. In support of this we launched Govern - a dedicated workspace to govern AI and analytics projects - that sits alongside platform features that enable reliability, accountability, fairness, transparency, and explainability.
Tools like AI Verify can be extremely important to organisations investing in AI and analytics governance and how we work with them: they serve as a foundation that can help to give shape to strong and well-conceived AI governance practices that enable the responsible use of the technology.
AI Verify provides the much-needed gold standard for the responsible use of AI. It provides the yardstick that attests to the trustworthiness of the AI that we build. This is a ray of hope amidst mounting ethical AI concerns!
As organisations worldwide continue to drive increased adoption of AI-based solutions, it is more important than ever to establish guardrails to ensure this is done responsibly. Singapore’s regulators have, for some time now, been at the forefront in ambitiously moving beyond high-level principles and guidelines towards developing frameworks and toolkits, providing organisations with increased capability to better manage and govern their AI-based solutions.
DBS is proud to have been able to work closely with PDPC and IMDA in developing and testing some of their approaches over the years as a trusted partner; being part of the AI Verify Foundation will enhance this collaboration and help shape the emerging initiatives in this space.
Our collaboration with the AI Verify Foundation exemplifies our belief in the transformative power of collective innovation to advance transparent, ethical, and reliable AI solutions. By joining this pivotal initiative, we can proactively shape the future of trustworthy AI, underscoring our commitment to fostering technologies that respect user privacy, fairness, and transparency. We look forward to setting new industry standards, inspiring trust, and encouraging responsible innovation in the AI ecosystem.
DigiFutures is committed to taking the lead in ethical and responsible innovation to create a better world. Partnering with the AI Verify Foundation supports our mission to empower businesses to harness the full potential of AI, while ensuring that AI is safe, trustworthy, and used responsibly.
DXC collaborates with leading technology vendors within the AI domain, enabling us to offer impartial guidance on leveraging AI for expansion while adhering to established best practices for responsible AI implementation. The true potential of AI remains unrealized in the presence of lingering apprehension and unease among certain businesses and consumers. Through our affiliation with the AI Verify Foundation, we aim to proactively formulate and institute a conscientious AI framework in collaboration with our clients from the outset.
AI testing is a process that we welcome and appreciate as a way to showcase the extreme innovation and responsibility we put into our offering. Verification of AI is the key to its growing use and value to individuals, organizations, and society at large.
At ELGO, we take pride in helping businesses design and implement responsible AI systems. By being part of the AI Verify Foundation, we are committed to pioneering and contributing to the advancement of responsible AI use that elevates not just individual businesses, but also enriches the broader AI landscape with accessibility and trust.
At EngageRocket, we believe that joining the AI Verify Foundation enables us to deploy trustworthy and responsible AI in our products. It aligns perfectly with our vision of shaping better workplaces with credible technology.
Envision Digital is delighted to support the launch of IMDA’s AI Verify Foundation. Responsible AI has been our focus, as we recognise the need for responsible practices with the increasing deployment and limitless potential of AI innovation to support our customers. Together with IMDA, the time is now for us to advance responsible AI into action as we harness the power of AI to create a more sustainable world.
EVYD Technology's vision is to build a future where everyone can access better health. Our platforms leverage the power of AI to drive better healthcare decisions, and we equally believe that users need assurance of the secure and responsible use of such data. EVYD believes that utilising a platform such as AI Verify not only supports our vision, but also demonstrates our commitment to trustworthy AI that creates better health outcomes across populations and assures the safety and security of the underlying data.
As a company that specialises in AI governance and risk management, adherence to rigorous standards is critical for our customers to demonstrate credibility, build trust with stakeholders, and ensure their AI systems are ethically developed and deployed. Joining the AI Verify Foundation will help us support that through the development of shared standards, best practices and quality tooling.
As organisations around the world continue to adopt AI solutions at the current pace and scale, they need to put proper controls and guardrails in place to ensure these solutions are safe and compliant with existing and upcoming regulations. Fairly AI is focused on accelerating responsible AI innovation, and our partnership with the AI Verify Foundation hopefully enables even more organisations to accelerate the safe and responsible adoption of AI.
FairNow is proud to join the AI Verify Foundation. We believe that building societal trust is crucial to achieving the positive, transformative potential of AI. FairNow's mission to simplify AI compliance and governance aligns with AI Verify's own goal to advance responsible AI through standards, open source, and public-private partnerships. We look forward to contributing to and harnessing the work of the AI Verify Foundation.
The rapid adoption of AI technologies in the near future is undeniably going to change the contours of the way we work and engage our customers, employees and stakeholders. As such, focusing on working out the governance, ethical, and legal frameworks of how we use this technology is now more important than ever.
FairPrice Group is committed to partnering and working constructively with relevant stakeholders such as the AI Verify Foundation and IMDA. Our aim is to support the development of Singapore’s AI ecosystem and the resultant implementation of fair and practical frameworks and guidelines to regulate the technology appropriately and proportionately.
As disseminators of responsible technology, Fidutam recognizes the pivotal role of young people in advocating for and deploying responsible technology. Fidutam's innovative fin-tech and ed-tech products have been used by over 3,400 individuals in Latin America, Sub-Saharan Africa, and the United States, enabling upward economic and educational mobility. By joining AI Verify, Fidutam aims to amplify the voice of the youth in shaping responsible AI practices globally.
Building a future with AI that is fair, explainable, accountable, and transparent is our collective responsibility. Finbots.AI is delighted to have collaborated with IMDA and PDPC to be one of the pioneering Singapore startups to complete the AI Verify toolkit. We look forward to continuing our partnership through the AI Verify Foundation by innovating on transformative use cases with the AI community and building ethical AI frameworks that are benchmarked to global standards.
The use of data and AI within GCash is focused on how we can work towards financial inclusion for Filipinos. Responsible AI is part of our DNA and we look forward to working together and learning from the AI Verify Foundation's community as we adopt best practices in AI testing.
As the first investment firm dedicated to promoting and supporting generative AI startups in ASEAN, we have witnessed various innovations in this space. We understand the critical importance of building safe AI products for users, which can serve as a competitive advantage for ASEAN startups seeking growth and scalability globally. Therefore, we strongly encourage startups to prioritize responsible AI from day one. Partnering with government agencies such as the AI Verify Foundation and IMDA is essential to staying informed and ensuring the responsible use of AI's full potential.
GovTech leads the Singapore government's efforts to adopt AI and improve delivery of citizen-centric services as well as accelerate digital transformation. In doing so, GovTech is committed to ensuring that AI development is safe and secure to maximise its benefits and instil public trust. We are excited to join the AI Verify Foundation to develop responsible and trustworthy AI that will transform the everyday lives of people in Singapore.
The mobile industry is committed to nurturing the development of AI and big data analytics in a sustainable, ethical, and responsible manner while respecting individuals’ privacy. As part of this, the GSMA’s AI for Impact (AI4I) initiative supports members to implement products and services in a fully accountable way that is human-centric and rights-oriented. As an increasingly essential element of the infrastructure on which our society is built, AI needs to be fair, open, transparent, and explainable in its operations and customer interactions to protect customers and employees. Any entrenched inequality must be removed to ensure AI operates reliably for all stakeholders while minimising any environmental impact.
Handshakes can only help our clients do business safely when our AI is properly tested. Joining the AI Verify Foundation demonstrates that resolve.
Hanzo's core principles are security, transparency, and defensibility. They empower legal teams to uncover risk and relevance, establishing a robust evidentiary foundation for efficient and confident AI-based decision-making.
Hewlett Packard Enterprise (HPE) believes that artificial intelligence (AI) holds enormous potential to advance the way people live and work, but we must ensure that we apply these powerful tools ethically and sustainably. By joining the AI Verify Foundation and other like-minded partners, HPE is able to support and contribute to the ongoing work to promote responsible AI and best practices and standards for AI in Singapore.
The governance of AI is a key issue for Hitachi, which recognises the significant societal impact associated with the use of this technology across its extensive business domains. We believe that the AI Verify Foundation will help businesses become more transparent to all stakeholders in their use of AI. We look forward to working with the Foundation on co-creating frameworks and ecosystems that drive broad adoption of AI governance.
Holistic AI is on a mission to empower organizations to adopt and scale AI with confidence. Our comprehensive AI Governance platform serves as the single source of truth on AI usage by discovering and controlling AI inventory, assessing and mitigating risk of AI systems, and ensuring compliance with the latest legislation. We are proud to be a member of the AI Verify Foundation, and strongly align with their mission to develop best practices and standards that help enable the development and deployment of trustworthy AI.
The scale and pace of AI innovation in this modern technology era require, at the very core, foundational AI governance frameworks to be made mainstream, ensuring appropriate guardrails are considered when implementing responsible AI algorithmic systems in applications. The AI Verify Foundation serves this core mission and, as we progress as an advancing tech society, reinforces the need to advocate for the deployment of more trustworthy AI capabilities.
At H2O.ai, our mission is fundamentally focused on deploying AI responsibly. We are dedicated to ensuring that AI systems comply with applicable regulations and operate with transparency and ethical integrity. By joining the AI Verify Foundation, H2O.ai can collaborate with AIVF to contribute to the creation of AI governance toolkits together. This partnership underscores our commitment to responsible AI practices.
IFPI is the voice of the recording industry worldwide, representing over 8,000 record company members across the globe. We work to promote the value of recorded music, campaign for the rights of record producers and expand the commercial uses of recorded music around the world. We believe that progress in AI innovation and adequate copyright protection are not mutually exclusive, and that human creative expression and the human artist remain fundamental to the creation of music despite increasing AI capabilities.
impress.ai helps its customers improve the accuracy of their hiring decisions using AI. To make sure that we preserve and enhance the meritocratic nature of such decisions, it is vital that the AI behind the platform is robust, fair, responsible and explainable. AI adoption is growing at an exponential rate. As a company selling AI solutions that touch millions of professionals, we have a responsibility to help shape the industry in a way that is beneficial to humanity. AI Verify Foundation is a step in the right direction and we are glad to support its efforts.
As an AI-focused company, we understand the profound impact our technology can have on society. At iNextLabs, we believe that with great innovation comes great responsibility, and we are committed to responsible and ethical AI deployment. By implementing comprehensive testing protocols, we strive to mitigate biases, enhance fairness, and fortify the robustness of our AI solutions. By joining the AI Verify Foundation, we pledge to make this vision a reality. We look forward to learning, sharing and contributing to the best practices and standards that enable responsible AI.
The discovery of industrial processes to mass-produce nitrogen compounds resolved a deadly food crisis once faced by humanity. However, that discovery arguably also led to applications in areas that were not originally intended. Learning from history, it is now, or it will be too late, for us to align our beliefs and principles on the ethical and sustainable use of AI, as we witness the closer realisation of the autonomous enterprise. As part of a foundation that gathers global industry leaders, the implementation of guiding principles will create a bigger impact.
Ensuring that AI systems are safe, reliable, and compliant is at the heart of Intelligible's mission. Partnering with the AI Verify Foundation allows us to both learn from and contribute to a community dedicated to robust AI governance and testing. Together, we aim to drive innovation, establish best practices, and set new benchmarks in AI safety and compliance, ensuring the highest levels of trust and reliability in AI systems for a better tomorrow.
We recognise the critical importance of trustworthy AI in improving patient and customer outcomes. Joining the AI Verify Foundation aligns with our mission to deliver safe and reliable virtual training solutions, and we believe in the power of open collaboration to advance responsible AI practices. We strongly support the mission of the AI Verify Foundation to foster a community dedicated to AI. By ensuring the trustworthy deployment of AI, we can drive innovation, build stakeholder trust, and create a more sustainable future for all.
AI testing enables Invigilo to understand system behaviour and potential edge cases, allowing the team to intervene where an AI system is underperforming before deploying it in real-world conditions. It also allows better communication between AI developers and end users on how AI systems arrive at their decisions, and provides explanations when necessary.
As a General member of AI Verify, JJ Innovation Enterprise Pte Ltd can further align with best practices in AI governance, collaborate with industry peers, enhance our solution credibility, and ensure our AI solutions are developed and deployed responsibly, thus contributing to the broader goal of advancing trustworthy AI as a trustworthy solution provider.
As Kenek AI strives to foster connections that matter through the use of responsible AI, our membership in the AI Verify Foundation enables us to collaborate with key stakeholders in the AI community to spread the adoption of ethical AI governance and responsible AI for a fairer future for all.
Joining independent organizations like AI Verify allows us to collaborate and share real-life practical experiences and knowledge with industry leaders, thereby advancing responsible AI practices. It also provides us with a platform to advocate for ethical AI principles to raise awareness among companies and the public.
KPMG sets standards and benchmarks for AI and digital trust. By collaborating with the AI Verify Foundation, regulators, and industry leaders, we can build a trustworthy AI ecosystem by developing rigorous governance frameworks. This effort promotes trusted AI adoption among Singapore businesses, positioning Singapore as a global AI hub for scalable AI solutions that transform industries with integrity.
Lazada has been at the forefront of driving and responding to technical advances, working with AI experts to unlock a new era of eCommerce and retail innovations to offer differentiated experiences and opportunities for our users, sellers and partners. Joining the AI Verify Foundation is an important step in ensuring we continue to develop high-quality AI-powered services and products in a way that safeguards our platform users, and is aligned with our trust and safety policies which include data privacy and the protection of intellectual property rights.
We are focusing on the development of AI for compliance use in the financial industry, so we value governance very much and see responsible and trustworthy AI as important in product development. We would like to join the AI Verify Foundation as it is a community that values responsible and trustworthy AI, where we can have a community with the same purpose to exchange ideas and co-create best practices for AI testing in the market.
We advocate for a world where people and wildlife thrive together. This purpose drives us to actively contribute to the conservation of species, habitats, wildlife science and research. Augmenting this at our destination of the Mandai Wildlife Reserve, we nurture people's connection with the natural world, by harnessing innovative technology to educate and engage. Embedding ethics in our adoption of AI is therefore key to ensuring our technologies respect and enhance the physical environment, while benefitting the animals in our care, our employees, and visitors.
In the digital age, the synergy between people and AI drives progress. At Mastercard, we have been using AI for years as part of transaction processing to protect against fraud and cyber risks, as well as to provide insights to our customers. We highly value ethics, transparency, and reliability in AI practices, and we believe in open dialogue between sectors and diverse viewpoints. We are delighted to join the AI Verify Foundation and eagerly anticipate innovating responsibly together, ensuring ethical AI guides us toward a brighter future.
There is immense potential within the media and broadcasting industry to leverage AI. Mediacorp is exploring the use of AI in areas such as content generation, marketing, and advertising and is honoured to be among the pioneer members of the AI Verify Foundation. We look forward to working with the community of AI practitioners to exchange knowledge, collaborate on initiatives, and drive the development of robust AI governance standards in Singapore.
Joining the AI Verify Foundation signifies our commitment to responsible and trustworthy AI, ensuring that our innovation in the beauty and personal care industry is not only cutting-edge but also ethical and transparent.
Our focus is on ensuring that AI at Meta benefits people and society, and we proactively promote responsible design and operation of AI systems by engaging with a wide range of stakeholders, including subject matter experts, policymakers, and people with lived experiences of our products. To that end, we look forward to participating in the AI Verify Foundation and contributing to this important dialogue in Singapore and across the entire Asia Pacific region.
MAIEI is delighted to be joining the AI Verify Foundation given its focus on operationalizing Responsible AI and making it easier for as many organizations as possible to adopt these practices throughout their design, development, and deployment of AI systems. It aligns with our mission of democratizing AI ethics literacy, ultimately seeking to make Responsible AI the norm rather than the exception.
MLSecured is a platform dedicated to AI Governance, Risk, and Compliance, designed to assist companies and public sector organizations in responsibly adopting AI, managing AI risks, implementing best governance practices, and adhering to AI regulations.
Music Rights (Singapore) Public Limited, also known as MRSS, is a Not-For-Profit Collective Management Organisation (CMO) that represents the majority of Music Producers and administers the Copyrights for Karaoke, Music Videos, and Sound Recordings on their behalf. MRSS believes that even in this age of rapid AI development, the rights of Creators and Producers should be safeguarded, and the use of copyrighted works should require the full authorisation and licensing from rights holders.
AI testing demonstrates NCS’ commitment to delivering responsible, safe, and equitable AI solutions. We harness technology to provide right-sized cybersecurity solutions that future-proof cyber resiliency and shape the future of AI. Our clients trust us to safeguard their digital transformation journeys, leveraging our expertise and end-to-end capabilities to enhance their security posture, streamline processes, and strengthen governance. Joining the AI Verify Foundation underscores our dedication to ethical AI governance and building a secure and resilient digital future.
As a company developing Python-based rPPG software, AI testing is crucial to demonstrate responsible AI practices, ensuring the accuracy, fairness, and ethical considerations of our algorithms. Joining the AI Verify Foundation is vital as it allows us to contribute to advancing the deployment of responsible and trustworthy AI, aligning our commitment to ethical development with a community dedicated to fostering AI transparency and accountability.
As a pioneer in the AI field, OCBC Bank is committed to ensuring that the future of AI is fair to all. The AI Verify Foundation is a key enabler in achieving the goal of trustworthy AI.
Ensuring AI safety and rigorous testing is paramount to OneDegree Global's commitment to helping enterprises deploy responsible AI technology. Joining the AI Verify Foundation aligns with our mission to advance the development of trustworthy AI, enabling innovation while safeguarding ethical standards and public trust. We are proud to contribute to the Foundation in its work to advance responsible AI adoption and innovation.
At OpenAI, we believe that AI has huge potential to improve people's lives - but only if it is safe and its benefits are broadly shared.
That’s why we’re proud to support AI Verify and the Singapore government’s efforts to promote best practices and standards for safe, beneficial AI.
We look forward to working with the Foundation towards our shared goal of the development and deployment of AI that benefits all of humanity.
Joining the AI Verify Foundation is a valuable opportunity for our company to contribute to the development of trustworthy AI and collaborate with a diverse network of advocates in the industry. We fully support the mission of the AI Verify Foundation to foster open collaboration, establish standards and best practices, and drive broad adoption of AI testing for responsible and trustworthy AI.
With the emergence of AI, Parasoft is proud to be a member of the AI Verify Foundation.
It is important that the AI environment we create is safe, robust, responsible and ethically adopted for all in our digital world today.
We applaud the Singapore Government's efforts in taking on the heavy lifting of collaboration and of building trust and governance in the AI community.
By leveraging rich integrations with the AI Verify toolkit, our customers can now benefit from this partnership and get the most comprehensive, value-driven approach to testing.
We believe there is a need to ensure AI Service development is in line with ethical principles and societal values, ultimately bringing positive impacts to our communities.
At Patsnap, we recognise that ensuring AI safety and rigorous testing are not just technical requirements but a fundamental responsibility – with more than 12,000 global companies across diverse industries trusting us to innovate better and faster. Joining the AI Verify Foundation demonstrates our commitment to advancing the deployment of responsible AI, fostering innovation while prioritising the ethical and safe application of AI technologies. This collaboration also underscores our dedication to leading the development of AI applications for enterprises with integrity and transparency.
At Prudential, we are constantly looking at ways of using data and AI to deliver an exceptional customer experience - while building an insurance landscape that is inclusive and equitable. We apply our responsible AI principles to safeguard our customers' health and financial well-being.
In partnership with the AI Verify Foundation, we’re crafting AI ethics toolkits that align with these core principles. Our customers can trust in our commitment to building robust and secure systems, which are rigorously tested for transparency and accountability.
At Qualcomm we strive to create AI technologies that bring positive change to society. Our vision for on-device AI is based on transparency, accountability, fairness, managing environmental impact and being human-centric. We aim to act as a responsible steward of AI, considering the broader implications of our work and taking steps to mitigate any potential harm. Our on-device AI solutions are designed to enable enhanced privacy and security, essential to a robust and trustworthy AI ecosystem. Our hope is that with the AI Verify Foundation we can contribute to a broad, collaborative effort to build a common framework in support of internationally recognized AI governance principles, as an effective path towards building responsible, human-centric AI.
As a Venture Capital firm investing in data and AI companies, we believe that AI use must be ethical even as companies seek to innovate and deliver new technologies for the betterment of society. Being part of the Foundation will enable us to work with likeminded members, utilise and also contribute to the building of robust and practical AI toolkits and guidelines, with the goal of championing responsible use principles as the ground from which further AI technology is developed.
AI safety and testing are crucial to us and our clients. We're committed to rigorous bias and safety testing to prevent our LLM from suggesting or containing malicious content. By refining our processes, we aim to stay ahead of risks and deliver reliable results. Joining the AI Verify Foundation allows us to contribute to Project Moonshot, which aligns with our focus on responsible AI. Through this collaboration, we help companies navigate the opportunities and risks of generative AI, making their systems innovative and secure.
The Recording Industry Association Singapore (RIAS) comprises 25 leading major and independent record companies in Singapore, with a mission to promote recorded music and expand its market utilisation, and to safeguard the rights of record producers and their artistes. Our members understand that AI technology has empowered human expression but that human-created works will continue to play an essential role in the music industry, and they believe that copyright should protect the unique value of human intellectual creativity.
RegTank looks forward to contributing towards the evolving AI standards and testing methodologies through our participation as a member of the AI Verify Foundation to forge greater trust with clients, regulators, and other stakeholders.
As AI's impacts become increasingly widespread, the responsible AI community must have access to clear guidance on context-relevant AI testing methodologies, metrics, and intervals. The Responsible AI Institute is excited to support the AI Verify Foundation, given its proven leadership in AI testing, dedication to making its work accessible, and commitment to international collaboration.
As technologists and practitioners of AI, Responsible AI is a core principle at retrain.ai. From our involvement in shaping NYC's Law 144 and conducting extensive research on AI risks, to launching the first-ever Responsible HR Forum and embedding explainability, fairness algorithms and continuous testing to ensure our AI models meet the highest standards of responsible methodology and regulatory compliance, we view Responsible AI as one of our main pillars. Joining the AI Verify Foundation is an extension of our dedication to responsible AI development, deployment, and practices in HR processes.
Our commitment to AI security and governance stems from the belief in AI's potential for positive impact. We aim to contribute to a future where AI benefits humanity with minimized risks. Our objective is to empower organizations to achieve their goals through trustworthy and safe AI systems. Joining the AI Verify Foundation allows us to rigorously test our AI Governance framework, promoting the safe adoption of AI.
SAP is one of the first companies in the world to define trustworthy and ethical guiding principles for using AI in our software, and continues to be a leader in responsible business AI. Through our participation in the AI Verify Foundation we look forward to contributing our global expertise to support the development and deployment of responsible AI that will help the world run better and improve people's lives.
As more and more solutions and decisions are developed with the help of AI, there is a greater need to adopt responsible AI, and there is a greater responsibility on our shoulders to help customers to do that effectively and efficiently.
Scantist believes robust AI testing is crucial for responsible AI implementation - especially in cybersecurity. Joining the AI Verify Foundation amplifies our commitment to shaping a secure future where secure cyber-systems - including AI - are the standard, not the exception.
Facticity.AI, a Singaporean-American LLM app, is dedicated to improving AI safety by contributing a localized, multilingual dataset for factuality—an initiative valuable to Singapore and the region. By joining the AI Verify Foundation, we aim to promote trustworthy AI through transparency and accountability. Facticity.AI prioritizes explainability from credible sources and supports a more equitable, accountable, and transparent AI ecosystem for all stakeholders.
Sekuro is committed to offering assurance services to AI companies with a focus on boosting their credibility, managing risks, and supporting their decision-making.
As a seasoned consultancy firm with expertise in NIST CSF, ISO 27001, and ISO 42001, we value the chance to contribute to our partners' Integrated Management Systems (IMS).
Our goal is to help ensure the ethical, responsible, and trustworthy development and deployment of AI as well as ensuring confidentiality, integrity, and availability of the company's information.
The AI Verify Foundation will advance the nation's commitment to fostering trustworthy AI as a cornerstone of Singapore's AI ecosystem. At SenseTime International, we look forward to co-creating with the Foundation a future where AI technologies are developed and deployed responsibly, aligned with international best practices, and recognised for their positive whole-of-society impact.
AI has potential risks that, if not proactively managed, can create a significant negative impact on organizations and society as a whole. SigmaRed is committed to making AI more responsible and secure, and is pleased to join the AI Verify Foundation.
As an early user of AI Verify, Singapore Airlines recognised the importance of responsible AI and AI governance as a strong foundation for our AI initiative. The testing framework of AI Verify facilitated our initiative and enabled us to further strengthen data trust among our stakeholders. Joining the AI Verify Foundation supports our digital transformation journey and enables us to be in a collaborative network promoting ethical AI.
As a leading communications technology company, Singtel is committed to empowering people and businesses and creating a more sustainable future for all. We see AI as a key enabler in the development of new innovations that will transform industries and consumer experiences. Through our collaboration with the AI Verify Foundation, we’re helping to advance the transparent, ethical, and trustworthy deployment of AI so everyone can enjoy the next generation of technologies safely.
AI testing is paramount to SoftServe because it embodies our commitment to delivering responsible AI solutions. In an era where AI is evolving, we recognize the need to ensure our technologies are transparent, accountable, and beneficial for all stakeholders. By rigorously testing our AI solutions, we guarantee their functionality and ensure they align with ethical standards and values we uphold.
Joining the AI Verify Foundation is a strategic decision. Being part of the Foundation positions us at the forefront of global AI standards and best practices. It would also be a great way to further communicate our commitment to responsible AI and be a part of a community that contributes to regional initiatives in this space.
Trust is essential for public acceptance of AI technologies. The community of developers and stakeholders that the AI Verify Foundation will convene promises the development and deployment of more trustworthy AI.
SPH Media's mission is to be the trusted source of news on Singapore and Asia, to represent the communities that make up Singapore, and to connect them to the world. We recognize the importance of AI and are committed to responsible AI practices. We strive to build AI systems that are human-centric, fair, and free from unintended discrimination. This process will be enhanced by AI testing that allows us to identify and address potential risks associated with AI and aids us in our mission.
The mission of the AI Verify Foundation resonates with Squirro’s belief in the responsible and transparent development and deployment of AI. We look forward to participating in this vibrant global community of AI professionals to collectively address the challenges and risks associated with AI.
We are heartened that IMDA is leading the way in ensuring AI systems adhere to ethical and principled standards. As a member, ST Engineering will do its part to advance AI solutions and to shape the future of AI in a positive and beneficial way.
The capabilities of AI-driven systems are increasing rapidly, as we have seen with large language models and generative AI. The democratisation of access will lead to the widespread deployment of AI capabilities at scale. Evaluating AI systems for alignment with our internal Responsible AI Standards is a key step in managing emerging risks, and testing is a critical component in the evaluation process.
The pace and scale of change concerning AI systems require risk management and governance to evolve accordingly so users can derive the benefits in a safe manner. This cannot be done independently, and it is better to collaborate with the wider industry and government agencies to advance the deployment of responsible AI. Standard Chartered has partnered with IMDA to launch the AI Verify framework, and joining the AI Verify Foundation is a logical next step to ensure we can collaboratively innovate and manage risks effectively.
StoreWise is on a mission to transform brick-and-mortar retail, creating memorable shopping experiences by infusing cutting-edge technology into retailers' operations so they can thrive. Becoming a member of the AI Verify Foundation shows our commitment to using AI technology responsibly and to contributing to building safeguards as it evolves, for the benefit of our clients, their customers, and the community.
Strides Digital is excited to join the AI Verify Foundation community to use and develop AI responsibly, as we help companies capture value on their decarbonisation and fleet electrification journey.
AI is seen as a transformative technology that offers opportunities for innovation to improve efficiency and productivity. As the need for AI-powered solutions continues to surge, the active engagement of the community in the development of best practices and standards will be pivotal in shaping the future of responsible AI. Tau Express wholeheartedly supports this initiative by IMDA, and we look forward to leveraging the available toolkits to continue building trust and user confidence in our technology solutions.
AI testing forms the bedrock of TeamSolve's commitment to responsible AI development. It serves as our unwavering assurance to the operational workforce that they can place their complete trust in our AI Co-pilot, Lily, knowing that it relies on trustworthy knowledge sources and provides recommendations firmly rooted in their domain.
The AI Verify Foundation and its members collectively play a pivotal role in advancing AI towards higher standards of accountability and trustworthiness, for greater acceptance in society.
At Tech4Humanity, we believe responsible testing and validation are vital to developing trustworthy AI that uplifts society. By joining the AI Verify Foundation, we aim to collaborate with partners across sectors to create frameworks and methodologies that proactively address algorithmic harms and demonstrate AI's readiness for broad deployment. Our goal is to advance the creation of human-centric AI that augments our collective potential.
At Telenor, we are committed to using AI technologies in a way that is lawful, ethical, trustworthy, and beneficial for our customers, our employees and society in general. Telenor has defined a set of guiding principles to support the responsible development and use of AI in a consistent way across our companies, to ensure it is aligned with our Responsible Business goals.
At Temasek Polytechnic, AI testing isn't solely about functionality; it's about demonstrating our commitment to responsible AI. We understand the imperative of ensuring that our AI systems operate ethically and reliably. Joining the AI Verify Foundation underscores our dedication to advancing the deployment of trustworthy AI. It's not merely about progress; it's about ensuring that progress is rooted in principles of responsibility and trustworthiness through education and implementation.
AI testing is pivotal for deploying responsible AI, ensuring safety and risk management. Access to valuable resources and regulatory alignment supports transparency and continuous improvement, which in turn ensure reliability and scalability—all essential for building trust with our clients. Joining the AI Verify Foundation is important for Temus as we collaborate with enterprises on their digital transformation journeys. We aim to foster collaboration and mutual accountability, setting high standards of integrity in this frontier of innovation, so that we all might unlock social and economic value sustainably.
As Tictag is focused on producing the highest-quality data for AI and machine learning, the AI Verify Foundation aligns perfectly with our mission of making AI trustworthy not just in purpose but in substance. AI ethics is at the core of what we do, as our work is very human-centric, and the reputation of AI Verify will be important to rely on as we expand overseas.
We think there is value in networking and exchanging ideas with industry leaders. As advanced AI is no longer a distant prospect, industry leaders are having more and more discussions about guardrails, safety measures, and what's next in store. Singapore is at the forefront of AI development, and Singaporean companies should join this conversation as well. So it is a very timely initiative.
Our project stands for governance and transparency - hallmarks of AI Verify's framework that we are proud to adopt ourselves and promote. We encourage testing as a means to achieving the overall mission of the Foundation.
The Foundation's movement to advance responsible and trustworthy AI is the rising tide that will lift all boats. We are inspired by its work and we want to be part of the movement to foster trust whilst advancing AI. We commit to responsible practices of development and deployment.
AI Verify is an important step towards enhancing trustworthiness and transparency in AI systems as we move up the learning curve. In order for AI to live up to its full potential, we need to build and earn this trust. We believe that developing specialised skilled talent and capabilities is the cornerstone of creating AI trust and governance guardrails and toolkits. Making the technology safer is key, and we are glad to support IMDA, who have taken the lead in nurturing future champions of responsible AI.
Trusted AI’s mission is to help organisations instill trust in the very DNA of their AI programs, as seen in our logo. We are excited at the opportunity to partner with the AI Verify Foundation as we are aligned with their mission, and together, we can advance the development and deployment of trustworthy AI globally.
UBS is proud to be one of IMDA’s inaugural AI Verify Foundation members and to participate in the AI Verify pilot test. We will continue to engage leading fintechs, investors and companies to decode emerging AI trends. Through the AI Verify Foundation, we aim to promote the use of AI in an ethical and trustworthy manner.
UCARE.AI has supported IMDA since participating in the first publication of their AI Governance Framework in 2019 and has continued to align our processes when deploying AI solutions for our customers. We believe that the establishment of the Foundation will foster collaboration, transparency, and accessibility, which is crucial in promoting trustworthy AI.
AI safety and testing are vital for demonstrating responsible AI by ensuring ethical use, building trust, and complying with regulations. Rigorous testing mitigates risks, enhances user experience, and ensures systems perform reliably and fairly. It supports long-term sustainability and provides a competitive edge by differentiating us in the market. Prioritizing AI safety and testing aligns with our commitment to ethical standards, fostering trust and ensuring our AI solutions benefit society responsibly.
Joining the AI Verify Foundation is important to us because it advances the deployment of responsible and trustworthy AI. It allows us to collaborate on setting industry standards, share best practices, and contribute to the development of tools for AI transparency and accountability. Being part of this foundation reinforces our commitment to ethical AI, fosters innovation, and helps build public trust by ensuring our AI systems are safe, fair, and reliable.
Ethical use of data is an integral part of our operating DNA, and UOB has been recognised as a champion of AI ethics and governance. By joining the AI Verify Foundation, UOB hopes to contribute to thought leadership in responsible AI.
Vectice is excited to join the AI Verify Foundation. This collaboration aligns with our mission to accelerate enterprise AI adoption and value creation with less risk, and marks a significant step in our commitment to promoting safe, responsible, and ethical AI development. At Vectice, we bring a wealth of expertise in data science management, AI system design, and model documentation, which are critical for establishing robust standards and governance practices in AI.
VFlowTech employs AI in its EMS, as it is the only way to enhance solar and energy storage efficiency. We also believe that an ethical code for responsible AI must be established, given cybersecurity concerns.
At Vidreous, we employ GenAI models to classify data and provide insights to enhance our user experience. As AI is known to hallucinate on new information, it is necessary to establish a quality management process to ensure output accuracy. Joining the AI Verify Foundation allows us to advance the quality management and deployment of our products as part of the larger collective community practicing responsible AI. Together, we will make a difference in delivering trust to our people.
It is essential to ensure that the plethora of apps that use AI models today produce accurate and reliable results. Our vision at Virtusa is to establish ourselves as a strong capability hub for AI testing and to work in collaboration with like-minded communities. Hence, we are keen to work with the AI Verify Foundation in promoting responsible, ethical and sustainable use of AI, thereby building trust with our clients and other stakeholders.
Trust remains at the core of everything we do and is the foundation upon which data-driven products and innovations are built. Creating a governance structure that prioritises the responsible stewardship of data and establishing robust measures such as consent management form a core foundation for responsible AI. We are excited and honoured to be a part of the AI Verify Foundation Committee and contribute towards the development and deployment of responsible AI in Singapore.
Walled AI's mission is to make AI controllable and predictable through research-backed governance tools, emphasizing safety and cultural alignment. In collaboration with the AI Verify Foundation, we aim to establish safety benchmarks and responsible AI pipelines for the safe adoption of AI in Singapore. This partnership will allow us to share our expertise in AI safety evaluations and contribute through governance talks, safety tools, and data collection methods to identify potential harms and biases in AI systems.
It is important for Warner Music Singapore to join AI Verify Foundation because it allows us to support the development and deployment of trustworthy AI. By joining, we can collaborate with developers, contribute to AI testing frameworks, and share ideas on governing AI. The Foundation provides a neutral platform for us to collaborate and aims to promote AI testing through marketing and education. Being part of this diverse network of advocates will help us drive broader adoption of reliable AI practices.
At WeBank, we champion technology's role in inclusive finance and sustainable development. As the world's leading digital bank, we've seamlessly integrated advanced technologies like AI, blockchain, and big data. Our dedication has led to milestones such as innovative distributed system architectures and providing tailored financial solutions to millions. Joining the AI Verify Foundation positions us to shape robust governance and advance human-centric AI innovations. As we share our insights and learn from peers, we remain focused on cultivating an AI ecosystem grounded in trust, accountability, and equitable digital progression.
We see significant value in membership. It allows us to contribute to developing standards for AI governance, shape best practices, and signal our commitment to trustworthy AI. The open-source approach enables continuous progress through collaboration.
Workday welcomes the establishment of the AI Verify Foundation, which will serve as a community for like-minded stakeholders to contribute to the continued development of responsible and trustworthy AI and Machine Learning. We believe that for AI and ML to fully deliver on the possibilities they offer, we need more conversations around the tools and mechanisms that can support the development of responsible AI. Workday is excited to be a member of the Foundation, and we look forward to contributing to the Foundation’s work and initiatives.
At WPH Digital, we recognize that building trust in AI systems is crucial for their widespread and responsible deployment. Joining the AI Verify Foundation places us at the forefront of advancing AI governance through the standardized implementation of a recognized framework and testing tools. This partnership reinforces our commitment to ethical AI practices, ensuring that our AI-driven solutions not only meet but exceed industry expectations for integrity, transparency, and societal benefit.
Joining the AI Verify Foundation aligns with X0PA’s commitment to responsible AI practices, as we look to harness the power of AI to promote unbiased and equitable practices in hiring and selection.
As a pioneering analytics consultancy firm in the Philippines since 2013, advising organizations on data and AI strategies, we have a responsibility to continuously seek out best practices and standards, and to contribute to improving the communities of practice. AI safety is a critical piece that we have started to incorporate into our methodology to ensure trustworthy AI for our clients. Joining the AI Verify Foundation equips us with the tools and provides us with a venue to contribute to the bigger community.
In 2021, Zoloz proposed a trustworthy AI architecture covering explainability, fairness, robustness, and privacy protection. Trustworthy AI is the core capability for resisting risks in the digital age. We hope that by joining the AI Verify Foundation, we can continuously polish our AI capabilities and build an open, responsible, and trustworthy AI technology ecosystem to empower the digital economy and the industry ecosystem. Looking ahead, we hope that through continuous practice we can further promote the implementation of AI and other technologies across the industry and create more value for society.
Many existing Zoom products that customers know and love already incorporate AI. As we continue to invest in AI, Zoom remains committed to ethical and responsible AI development; our AI approach puts user security and trust at the center of what we do. We will continue to build products that help ensure equity, privacy, and reliability.
At Zühlke, we work with highly regulated clients to implement data and AI. The power of AI and data-driven insights enables decisions to be turned into valuable actions. We approach this problem space by focusing on the core components of our responsible AI framework: being human-centered, ethical, interpretable and sustainable.
In line with AI Verify's vision of harnessing collective power to build trust through ethical AI, we contribute to and collaborate with organisations to adopt AI safely, backed by our experience in highly regulated industries.
The Foundation shows the leading stance IMDA is taking to ensure that AI governance becomes core to all organisations and society, not limiting its availability, but ensuring that all actors using AI can benefit from AI governance at this pivotal moment in AI's progression. 2021.AI will endeavour to be a core member with its AI Governance offering and expertise.
The next generation of AI will be responsible AI. Our company is targeting the development of an all-in-one AI model and data diagnosis solution for responsible and trustworthy AI. The AI Verify Foundation provides us with a platform and opportunities to collaborate, learn, and make a meaningful impact in advancing responsible AI practices on a broader scale.
As an AI solution provider, we recognise the incredible power and potential that AI has, as it has started to deeply integrate with our day-to-day lives and transform the world around us. However, we also understand the importance of responsible and ethical adoption of this technology to ensure a safer and more equitable future for all.
Our vision is a world where AI is harnessed for the greater good, where businesses, governments, and individuals equally emphasize and allocate resources to the development and implementation of responsible AI tools, frameworks, and standards as much as for commercial gains. We are committed to being a key member of the AI Verify Foundation, working together to shape a future where technology and humanity can thrive in harmony.
At AIQURIS, ensuring the safety, reliability and ethical use of AI is central to our mission. We empower organisations to fully harness this transformative technology by identifying and managing risks and by ensuring the overall quality of AI systems.
The AI Verify Foundation offers a unique environment for developing and deploying AI responsibly, in collaboration with platform members. By promoting best practices and standards, it supports the entire ecosystem in delivering high-performance, compliant AI solutions that organisations can trust and confidently scale.
Cybersecurity risks to AI can impede innovation, adoption, and digital trust, ultimately hampering the growth of organizations and society. AIShield provides comprehensive and self-service AI security products, serving as crucial tools for AI-first organizations across multiple industries and for AI auditors. These solutions ensure AI systems are secure, responsible, and compliant with global regulations. As part of the AI Verify Foundation, AIShield remains committed to advancing AI Security technology and expertise, while steadfastly pursuing its mission of "Securing AI Systems of the World".
Amdocs empowers the financial services and banking sectors with solutions to accelerate digital transformation in the digital-first world. It is critical for Amdocs to embed responsible governance and frameworks in our work.
The AI Verify Foundation sets the baseline for AI Verify across industry, aligned with the OECD, EU and Singapore governance frameworks. This sets a benchmark in the marketplace for responsible adoption.
There are many aspects to the world of AI, including gaps in social and ethical considerations. Amdocs sees this Foundation as critical to driving AI governance principles in the marketplace, and will be embedding the AI Verify toolkit into our service offerings. Through this participation, we will strive to roll this out to our counterparts in EMEA and the US, with Singapore at the heart of the operations.
Ant Group focuses on building a robust technology governance framework as the fundamental guideline for our technological development. To us, it is crucial to ensure that technology and AI can be used in a way that benefits people in a fair, respectful, trustworthy and responsible manner. There is immense potential for technology to help underprivileged people, but sustainable technological development requires established standards around the basic principles for AI governance and an institutional framework for evaluating governance gaps. We need to ensure that the technology and AI we develop and deploy will be in line with ethical principles and societal values, ultimately bringing positive impacts to our communities.
Asia Verify is committed to leveraging technology to make trust easy when doing business with Asia. Effective governance and shared ethics principles are essential to effective AI, which, in the words of Stephen Hawking, could be the biggest event in the history of our civilisation. We are delighted to contribute to the AI Verify Foundation.
The Asia Internet Coalition would like to support our member companies in ensuring the ethical and safe development of artificial intelligence technologies and to promote user privacy and trust within the digital ecosystem.
Resilient and Safe AI is a key research area for A*STAR, as we believe that it is key to reap AI's full transformative potential. As a member of the AI Verify Foundation, A*STAR will harness its AI Governance Testing toolkit and its extensive ecosystem to continue developing AI technologies that are trusted by our industry partners and the community.
Asurion is delighted to support the mission of the AI Verify Foundation in promoting trustworthy AI solutions. We recognise the importance of responsible AI development, and our commitment aligns with the efforts of IMDA in Singapore to establish robust AI governance frameworks and toolkits. By actively participating in Singapore's AI Governance Testing Framework and Toolkit, we aim to contribute to the adoption of best practices and accelerate the responsible development of AI technology. Asurion remains dedicated to harnessing the power of AI Verify to drive innovation while upholding ethical standards, ensuring a brighter future powered by trustworthy AI.
Avanade has a Responsible AI policy and governance framework, as we believe that an AI-first culture is inherently people-first. We believe joining the AI Verify Foundation will highlight our commitment to being part of a robust platform for responsible AI, building trust and goodwill within communities and with our customers. AI testing is important to Avanade as it demonstrates responsible AI through fundamental values that are ethical, legal and fair. In that manner, it respects human rights and values and complies with up-to-date regulations.
BPP is delighted to join the AI Verify Foundation and contribute to the building of responsible AI, which is a key facet of our energy-efficient AI solutions.
At BCG X, rigorous AI safety testing and evaluation is critical to balancing ethical standards with generating lasting business impact. It is our strong belief that AI solutions cannot be built and scaled without a robust Responsible AI program to deliver transformative business value while mitigating financial, reputational, and regulatory risks for our clients and potential harms to individuals and society. With AI, BCG X solves some of the biggest challenges our clients face; balancing business impact with strong ethical standards is critical for responsible AI deployment. Joining the AI Verify Foundation allows us to share our experience and underscores our commitment and belief in Singapore as a leading innovation hub.
BDO recognises AI's potential to revolutionise organisations and unlock human capabilities. We focus on responsible AI adoption, translating theory into practical solutions for business challenges. Our collaboration with AI Verify strengthens our approach in three crucial areas: implementing advanced threat detection for AI-generated attacks, ensuring ethical AI use aligned with governance frameworks, and partnering with AI experts to stay ahead of emerging threats. This comprehensive strategy allows BDO to effectively guide clients through AI-driven digital transformation, ensuring safety and innovation both in Singapore and globally.
Our mission at Beamery is to create equal access to meaningful work, skills and careers for all. Ethical, explainable AI powers our Talent Lifecycle Management platform, helping large businesses to reduce bias in hiring, get better recommendations, and stay compliant across all stages of the candidate and employee journey. We believe in transparency and take pride in being the first HR Tech company to undergo a third-party audit to demonstrate the fairness of our algorithms. We are excited to join the AI Verify Foundation as it works to foster greater trust and transparency in AI, which we believe will unlock potential across the global workforce.
One of the major barriers to AI commercialisation is the inability to explain it, and testing AI models through data metrics is one way to facilitate understanding of how they work. Since no AI testing standards exist, the only way forward is to bring together regulators with technology providers, commercial institutions, and academia to address this challenge in an open-source manner, and that is exactly what the AI Verify Foundation has set out to do.
As an AI cloud services provider, Bitdeer AI aims to make AI accessible to everyone by building robust infrastructure and fostering a vibrant ecosystem for researchers, developers, and consumers. Our commitment to trust, excellence, and responsible AI is exemplified by our partnership with the AI Verify Foundation. We recognise the critical importance of AI testing and look forward to contributing to the Foundation's mission to ensure AI is harnessed responsibly for the betterment of humanity.
At Bosch, it is our vision to take the connected and digitalized world to the next level with the help of AI, making people's lives easier, safer and more comfortable. Being part of the AI Verify Foundation enables Bosch to collaborate and engage with other industry leaders, researchers, and experts in the field of AI. This collaborative environment allows for knowledge sharing, exchanging best practices, and staying up-to-date with the latest developments, so that we can deploy AI-enabled products that are "Invented For Life".
BGA supports the AI Verify Foundation as a pioneering path forward in bringing together important players to develop trustworthy AI. At BGA, we strive to promote constructive engagements between regulators, our partners, and the overall business community. The Foundation is one such platform that presents an opportunity for companies to shape the way AI technologies, testing, and regulation are co-developed. We hope to work closely with IMDA and our partners through the AI Verify Foundation so that Singapore can reap the full benefits of AI in the years to come.
As a governance tool that helps enterprise organizations document, manage, and monitor their AI models and datasets to ensure compliance with internal and external regulations, BreezeML is a staunch advocate for the responsible and ethical development and use of artificial intelligence. With our values aligning closely with AI Verify Foundation's mission of building trust through ethical AI, we are proud to join and support the AI Verify Foundation to promote governance and compliance in the greater AI community.
BrightRaven.ai's core corporate value is "AI For Good". This includes our supporting the formation of requisite AI Ethics, Regulation, Governance & Enforcement frameworks at the National, Regional & Global levels to ensure AI is used for Good and not Evil, in Singapore and around the world. IMDA's AI Verify platform is a key component of such frameworks in Singapore, our global headquarters.
Realizing the benefits of artificial intelligence requires public trust and confidence that these technologies can be developed and deployed responsibly. BSA | The Software Alliance has for years promoted the responsible development and deployment of AI, including through BSA's Framework to Build Trust in AI, which was published in 2021 and identifies concrete and actionable steps companies can take to identify and mitigate risks of bias in AI systems. BSA also works with governments worldwide toward establishing common rules to address the potential risks of AI while realizing the technology's many benefits. The AI Verify Foundation offers an important forum for industry, government, and other stakeholders to work together toward building trustworthy AI.
The AI Verify Foundation provides the essential platform for safe AI development to come to fruition, connecting networks of all capabilities to ensure trustworthy AI usage for individuals, companies, and communities. At Calvin, we are proud to contribute our expertise to its core mission.
The dialogue on Responsible AI in all its facets is vital. We are proud to be a contributor to the AI Verify Foundation's mission and look forward to collaborating with leading innovators in the realm of Trustworthy AI.
AI testing is crucial for our company to showcase responsible AI, offering our customers reassurance that we are dedicated to ensuring our product aligns with responsible AI practice. Joining the AI Verify Foundation is of significant importance to our company as it allows us to contribute to and stay informed about the collective community efforts aimed at advancing the deployment of responsible and trustworthy AI.
Concordia AI aims to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We believe rigorous third-party testing throughout the AI lifecycle is vital to ensure that we can reap the benefits of AI while safeguarding against potential harms. Concordia AI is pleased to join the AI Verify Foundation to contribute to this global effort.
The Chartered Software Developer Association believes in promoting cross-cultural, ethical and industry-leading standards and practices for the AI & ESG revolution. As a global professional association for technology professionals, we are confident that by joining the AI Verify Foundation, our synergy will benefit the community on responsible AI practices.
With today's scale and pace of AI innovation, we are working towards establishing foundational AI governance testing tools for responsible AI applications that protect the public interest, along with the frameworks, code base, standards and leading practices for AI.
Citadel AI is proud to be a member of the AI Verify Foundation. Our AI testing and monitoring technology is used by AI auditors and developers globally, and as part of the AI Verify Foundation, we hope to accelerate our shared mission of making the world's AI systems more reliable.
Responsible and ethical AI is the key to the future. CITYDATA.ai applies AI and machine learning to make our cities smarter, safer, more equitable, and more resilient. In joining the AI Verify Foundation, we hope to be able to contribute to the AI governance tools and frameworks in a neutral space for the AI ecosystem to thrive and produce outcomes for the betterment of humankind.
AI testing and EAI/XAI are important for any company adopting AI technology; they provide transparency.
Inspection without transparency is pointless. With transparency and accountability in mind, people deploying AI will be more ethical and responsible. Joining the AI Verify Foundation is the responsibility of any AI-capable company; promoting Ethical AI should be our CORE VALUE for a better tomorrow, a better Singapore.
Credo AI is thrilled to join the AI Verify Foundation, and we look forward to harnessing the collective power and contributions of the international open-source community to develop AI governance testing tools that can better enable the development and deployment of trustworthy AI.
We strongly believe in the importance of fostering a diverse community of developers who can collectively contribute to the development of AI testing frameworks and best practices, and we look forward to contributing our expertise and thought leadership to this pathfinding community, as we continue to work together to develop and maintain responsible AI tools, frameworks, and standards. This Foundation will nurture a diverse network of advocates for AI testing, which we believe is essential to driving the broad adoption of responsible AI globally.
As AI becomes more pervasive and will greatly impact the way we work, it is our shared responsibility (with the IT community) to align with best practices and standards to enable responsible AI. Importantly, we want to ensure fairness and trust when it comes to AI adoption, and joining the AI Verify Foundation will help CrimsonLogic do exactly that.
One of our priorities as a data science platform provider is ensuring our customers safely, responsibly, and effectively leverage and scale AI. In support of this we launched Govern - a dedicated workspace to govern AI and analytics projects - that sits alongside platform features that enable reliability, accountability, fairness, transparency, and explainability.
Tools like AI Verify can be extremely important to organisations investing in AI and analytics governance and how we work with them: they serve as a foundation that can help to give shape to strong and well-conceived AI governance practices that enable the responsible use of the technology.
AI Verify provides the much-needed gold standard for the responsible use of AI. It provides the yardstick that attests to the trustworthiness of the AI that we build. This is a ray of hope amidst mounting ethical AI concerns!
As organisations worldwide continue to drive increased adoption of AI-based solutions, it is more important than ever to establish the guardrails to ensure this is done responsibly. Singapore’s regulators have, for some time now, been at the forefront in ambitiously moving beyond high-level principles and guidelines towards developing frameworks and toolkits; to provide increased capability to organisations to better manage and govern their AI-based solutions.
DBS is proud to have been able to work closely with PDPC and IMDA in developing and testing some of their approaches over the years as a trusted partner; being part of the AI Verify Foundation will enhance this collaboration and help shape the emerging initiatives in this space.
Our collaboration with the AI Verify Foundation exemplifies our belief in the transformative power of collective innovation to advance transparent, ethical, and reliable AI solutions. By joining this pivotal initiative, we can proactively shape the future of trustworthy AI, underscoring our commitment to fostering technologies that respect user privacy, fairness, and transparency. We look forward to setting new industry standards, inspiring trust, and encouraging responsible innovation in the AI ecosystem.
DigiFutures is committed to taking the lead in ethical and responsible innovation to create a better world. Partnering with the AI Verify Foundation supports our mission to empower businesses to harness the full potential of AI, while ensuring that AI is safe, trustworthy, and used responsibly.
DXC collaborates with leading technology vendors within the AI domain, enabling us to offer impartial guidance on leveraging AI for expansion while adhering to established best practices for responsible AI implementation. The true potential of AI remains unrealized in the presence of lingering apprehension and unease among certain businesses and consumers. Through our affiliation with the AI Verify Foundation, we aim to proactively formulate and institute a conscientious AI framework in collaboration with our clients from the outset.
AI testing is a process that we welcome and appreciate as a way to showcase the extreme innovation and responsibility we put into our offering. Verification of AI is the key to its growing use and value to individuals, organizations, and society at large.
At ELGO, we take pride in helping businesses design and implement responsible AI systems. By being part of the AI Verify Foundation, we are committed to pioneering and contributing to the advancement of responsible AI use that elevates not just individual businesses, but also enriches the broader AI landscape with accessibility and trust.
At EngageRocket, we believe that joining the AI Verify Foundation enables us to deploy trustworthy and responsible AI in our products. It aligns perfectly with our vision of shaping better workplaces with credible technology.
Envision Digital is delighted to support the launch of IMDA’s AI Verify Foundation. Responsible AI has been our focus, as we recognise the need for responsible practices with the increasing deployment and limitless potential of AI innovation to support our customers. Together with IMDA, the time is now for us to advance responsible AI into action as we harness the power of AI to create a more sustainable world.
As a company that specialises in AI governance and risk management, adherence to rigorous standards is critical for our customers to demonstrate credibility, build trust with stakeholders, and ensure their AI systems are ethically developed and deployed. Joining the AI Verify Foundation will help us support that through the development of shared standards, best practices and quality tooling.
As organisations around the world continue to adopt AI solutions at the current pace and scale, they need to put proper controls and guardrails in place to ensure these solutions are safe and compliant with existing and upcoming regulations. Fairly AI is focused on accelerating responsible AI innovation, and our partnership with the AI Verify Foundation hopefully enables even more organisations to accelerate the safe and responsible adoption of AI.
FairNow is proud to join the AI Verify Foundation. We believe that building societal trust is crucial to achieving the positive, transformative potential of AI. FairNow's mission to simplify AI compliance and governance aligns with AI Verify's own goal to advance responsible AI through standards, open source, and public-private partnerships. We look forward to contributing to and harnessing the work of the AI Verify Foundation.
The rapid adoption of AI technologies in the near future is undeniably going to change the contours of the way we work and engage our customers, employees and stakeholders. As such, focusing on working out the governance, ethical, and legal frameworks of how we use this technology is now more important than ever.
FairPrice Group is committed to partnering and working constructively with relevant stakeholders such as the AI Verify Foundation and IMDA. Our aim is to support the development of Singapore’s AI ecosystem and the resultant implementation of fair and practical frameworks and guidelines to regulate the technology appropriately and proportionately.
As disseminators of responsible technology, Fidutam recognizes the pivotal role of young people in advocating for and deploying responsible technology. Fidutam's innovative fin-tech and ed-tech products have been used by over 3,400 individuals in Latin America, Sub-Saharan Africa, and the United States, enabling upward economic and educational mobilization. By joining AI Verify, Fidutam aims to amplify the voice of the youth in shaping responsible AI practices globally.
Building a future with AI that is fair, explainable, accountable, and transparent is our collective responsibility. Finbots.AI is delighted to have collaborated with IMDA and PDPC to be one of the pioneering Singapore startups to complete the AI Verify toolkit. We look forward to continuing our partnership through the AI Verify Foundation by innovating on transformative use cases with the AI community and building ethical AI frameworks that are benchmarked to global standards.
The use of data and AI within GCash is focused on how we can work towards financial inclusion for Filipinos. Responsible AI is part of our DNA, and we look forward to working together and learning from the AI Verify Foundation's community as we adopt best practices in AI testing.
As the first investment firm dedicated to promoting and supporting generative AI startups in ASEAN, we have witnessed various innovations in this space. We understand the critical importance of building safe AI products for users, which can serve as a competitive advantage for ASEAN startups seeking growth and scalability globally. Therefore, we strongly encourage startups to prioritize responsible AI from day one. Partnering with government agencies such as the AI Verify Foundation and IMDA is essential to staying informed and ensuring the responsible use of AI's full potential.
GovTech leads the Singapore government's efforts to adopt AI and improve delivery of citizen-centric services as well as accelerate digital transformation. In doing so, GovTech is committed to ensuring that AI development is safe and secure to maximise its benefits and instil public trust. We are excited to join the AI Verify Foundation to develop responsible and trustworthy AI that will transform the everyday lives of people in Singapore.
The mobile industry is committed to nurturing the development of AI and big data analytics in a sustainable, ethical, and responsible manner while respecting individuals’ privacy. As part of this, the GSMA’s AI for Impact (AI4I) initiative supports members to implement products and services in a fully accountable way that is human-centric and rights-oriented. As an increasingly essential element of the infrastructure on which our society is built, AI needs to be fair, open, transparent, and explainable in its operations and customer interactions to protect customers and employees. Any entrenched inequality must be removed to ensure AI operates reliably for all stakeholders while minimising any environmental impact.
Handshakes can only help our clients do business safely when our AI is properly tested. Joining the AI Verify Foundation demonstrates that resolve.
Hanzo's core principles are security, transparency, and defensibility. They empower legal teams to uncover risks and relevance, establishing a robust evidentiary foundation for efficient and confident AI-based decision-making.
Hewlett Packard Enterprise (HPE) believes that artificial intelligence (AI) holds enormous potential to advance the way people live and work, but we must ensure that we apply these powerful tools ethically and sustainably. By joining the AI Verify Foundation and other like-minded partners, HPE can support and contribute to the ongoing work to promote responsible AI and best practices and standards for AI in Singapore.
The governance of AI is a key issue for Hitachi, which recognises the significant societal impact associated with the use of this technology across its extensive business domains. We believe that the AI Verify Foundation will help businesses become more transparent to all their stakeholders in the use of AI. We look forward to working with the Foundation on co-creating frameworks and ecosystems that contribute to driving broad adoption of AI governance.
Holistic AI is on a mission to empower organizations to adopt and scale AI with confidence. Our comprehensive AI Governance platform serves as the single source of truth on AI usage by discovering and controlling AI inventory, assessing and mitigating risk of AI systems, and ensuring compliance with the latest legislation. We are proud to be a member of the AI Verify Foundation, and strongly align with their mission to develop best practices and standards that help enable the development and deployment of trustworthy AI.
The scale and pace of AI innovation in this era require foundational AI governance frameworks to be made mainstream, ensuring that appropriate guardrails are considered when implementing responsible AI algorithmic systems in applications. The AI Verify Foundation serves this core mission and, as our technology-driven society advances, reinforces the need to advocate for deploying more trustworthy AI capabilities.
At H2O.ai, our mission is fundamentally focused on deploying AI responsibly. We are dedicated to ensuring that AI systems comply with applicable regulations and operate with transparency and ethical integrity. By joining the AI Verify Foundation, H2O.ai can collaborate with AIVF to contribute to the creation of AI governance toolkits. This partnership underscores our commitment to responsible AI practices.
IFPI is the voice of the recording industry worldwide, representing over 8,000 record company members across the globe. We work to promote the value of recorded music, campaign for the rights of record producers and expand the commercial uses of recorded music around the world. We believe that progress in AI innovation and adequate copyright protection are not mutually exclusive, and that human creative expression and the human artist remain fundamental to the creation of music despite increasing AI capabilities.
impress.ai helps its customers improve the accuracy of their hiring decisions using AI. To make sure that we preserve and enhance the meritocratic nature of such decisions, it is vital that the AI behind the platform is robust, fair, responsible and explainable. AI adoption is growing at an exponential rate. As a company selling AI solutions that touch millions of professionals, we have a responsibility to help shape the industry in a way that is beneficial to humanity. AI Verify Foundation is a step in the right direction and we are glad to support its efforts.
As an AI-focused company, we understand the profound impact our technology can have on society. At iNextLabs, we believe that with great innovation comes great responsibility, and we are committed to responsible and ethical AI deployment. By implementing comprehensive testing protocols, we strive to mitigate biases, enhance fairness, and fortify the robustness of our AI solutions. By joining the AI Verify Foundation, we pledge to make this vision a reality. We look forward to learning, sharing and contributing to the best practices and standards that enable responsible AI.
The discovery of industrial processes to mass-produce nitrogen compounds resolved a deadly food crisis once faced by humanity. However, that discovery arguably also led to applications in areas it was never intended for. Learning from history, it is now or never for us to align beliefs and principles on the ethical and sustainable use of AI, as the autonomous enterprise comes closer to reality. As part of a foundation that gathers global industry leaders, implementing guiding principles will create a bigger impact.
Ensuring that AI systems are safe, reliable, and compliant is at the heart of Intelligible's mission. Partnering with the AI Verify Foundation allows us to both learn from and contribute to a community dedicated to robust AI governance and testing. Together, we aim to drive innovation, establish best practices, and set new benchmarks in AI safety and compliance, ensuring the highest levels of trust and reliability in AI systems for a better tomorrow.
We recognise the critical importance of trustworthy AI in improving patient and customer outcomes. Joining the AI Verify Foundation aligns with our mission to deliver safe and reliable virtual training solutions, and we believe in the power of open collaboration to advance responsible AI practices. We strongly support the mission of the AI Verify Foundation to foster a community dedicated to AI. By ensuring the trustworthy deployment of AI, we can drive innovation, build stakeholder trust, and create a more sustainable future for all.
AI testing enables Invigilo to understand system behaviour and potential edge cases, allowing the team to intervene where the AI system is underperforming before it is deployed in real-world conditions. It also enables better communication between AI developers and end users about how the AI systems arrive at their decisions, with explanations provided where necessary.
As a General member of the AI Verify Foundation, JJ Innovation Enterprise Pte Ltd can further align with best practices in AI governance, collaborate with industry peers, enhance the credibility of our solutions, and ensure our AI solutions are developed and deployed responsibly, thus contributing to the broader goal of advancing trustworthy AI as a trustworthy solution provider.
As Kenek AI strives to foster connections that matter through the use of responsible AI, our membership in the AI Verify Foundation enables us to collaborate with key stakeholders in the AI community to spread the adoption of ethical AI governance and responsible AI for a fairer future for all.
Joining independent organizations like AI Verify allows us to collaborate and share real-life practical experiences and knowledge with industry leaders, thereby advancing responsible AI practices. It also provides us with a platform to advocate for ethical AI principles to raise awareness among companies and the public.
KPMG sets standards and benchmarks for AI and digital trust. By collaborating with the AI Verify Foundation, regulators, and industry leaders, we can build a trustworthy AI ecosystem by developing rigorous governance frameworks. This effort promotes trusted AI adoption among Singapore businesses, positioning Singapore as a global AI hub for scalable AI solutions that transform industries with integrity.
Lazada has been at the forefront of driving and responding to technical advances, working with AI experts to unlock a new era of eCommerce and retail innovations to offer differentiated experiences and opportunities for our users, sellers and partners. Joining the AI Verify Foundation is an important step in ensuring we continue to develop high-quality AI-powered services and products in a way that safeguards our platform users, and is aligned with our trust and safety policies which include data privacy and the protection of intellectual property rights.
We are focused on developing AI for compliance use in the financial industry, so we place great value on governance and see responsible and trustworthy AI as important in product development. We would like to join the AI Verify Foundation as it is a community that values responsible and trustworthy AI, where we can exchange ideas with others who share the same purpose and co-create best practices for AI testing in the market.
We advocate for a world where people and wildlife thrive together. This purpose drives us to actively contribute to the conservation of species, habitats, wildlife science and research. Augmenting this at our destination of the Mandai Wildlife Reserve, we nurture people's connection with the natural world, by harnessing innovative technology to educate and engage. Embedding ethics in our adoption of AI is therefore key to ensuring our technologies respect and enhance the physical environment, while benefitting the animals in our care, our employees, and visitors.
In the digital age, the synergy between people and AI drives progress. At Mastercard, we have been using AI for years as part of transaction processing to protect against fraud and cyber risks, as well as to provide insights to our customers. We highly value ethics, transparency, and reliability in AI practices, and we believe in open dialogue between sectors and diverse viewpoints. We are delighted to join the AI Verify Foundation and eagerly anticipate innovating responsibly together, ensuring ethical AI guides us toward a brighter future.
There is immense potential within the media and broadcasting industry to leverage AI. Mediacorp is exploring the use of AI in areas such as content generation, marketing, and advertising and is honoured to be among the pioneer members of the AI Verify Foundation. We look forward to working with the community of AI practitioners to exchange knowledge, collaborate on initiatives, and drive the development of robust AI governance standards in Singapore.
Joining the AI Verify Foundation signifies our commitment to responsible and trustworthy AI, ensuring that our innovation in the beauty and personal care industry is not only cutting-edge but also ethical and transparent.
Our focus is on ensuring that AI at Meta benefits people and society, and we proactively promote responsible design and operation of AI systems by engaging with a wide range of stakeholders, including subject matter experts, policymakers, and people with lived experiences of our products. To that end, we look forward to participating in the AI Verify Foundation and contributing to this important dialogue in Singapore and across the entire Asia Pacific region.
MAIEI is delighted to be joining the AI Verify Foundation given its focus on operationalizing Responsible AI and making it easier for as many organizations as possible to adopt these practices throughout their design, development, and deployment of AI systems. It aligns with our mission of democratizing AI ethics literacy, ultimately seeking to make Responsible AI the norm rather than the exception.
MLSecured is a platform dedicated to AI Governance, Risk, and Compliance, designed to assist companies and public sector organizations in responsibly adopting AI, managing AI risks, implementing best governance practices, and adhering to AI regulations.
Music Rights (Singapore) Public Limited, also known as MRSS, is a Not-For-Profit Collective Management Organisation (CMO) that represents the majority of Music Producers and administers the Copyrights for Karaoke, Music Videos, and Sound Recordings on their behalf. MRSS believes that even in this age of rapid AI development, the rights of Creators and Producers should be safeguarded, and the use of copyrighted works should require the full authorisation and licensing from rights holders.
AI testing demonstrates NCS’ commitment to delivering responsible, safe, and equitable AI solutions. We harness technology to provide right-sized cybersecurity solutions that future-proof cyber resiliency and shape the future of AI. Our clients trust us to safeguard their digital transformation journeys, leveraging our expertise and end-to-end capabilities to enhance their security posture, streamline processes, and strengthen governance. Joining the AI Verify Foundation underscores our dedication to ethical AI governance and building a secure and resilient digital future.
As a company developing Python-based rPPG software, AI testing is crucial to demonstrate responsible AI practices, ensuring the accuracy, fairness, and ethical considerations of our algorithms. Joining the AI Verify Foundation is vital as it allows us to contribute to advancing the deployment of responsible and trustworthy AI, aligning our commitment to ethical development with a community dedicated to fostering AI transparency and accountability.
As a pioneer in the AI field, OCBC Bank is committed to ensuring that the future of AI is fair to all. The AI Verify Foundation is a key enabler in achieving the goal of trustworthy AI.
Ensuring AI safety and rigorous testing is paramount to OneDegree Global's commitment to helping enterprises deploy responsible AI technology. Joining the AI Verify Foundation aligns with our mission to advance the development of trustworthy AI, enabling innovation while safeguarding ethical standards and public trust. We are proud to contribute to the Foundation in its work to advance responsible AI adoption and innovation.
At OpenAI, we believe that AI has huge potential to improve people's lives - but only if it is safe and its benefits are broadly shared.
That’s why we’re proud to support AI Verify and the Singapore government’s efforts to promote best practices and standards for safe, beneficial AI.
We look forward to working with the Foundation towards our shared goal of the development and deployment of AI that benefits all of humanity.
Joining the AI Verify Foundation is a valuable opportunity for our company to contribute to the development of trustworthy AI and collaborate with a diverse network of advocates in the industry. We fully support the mission of the AI Verify Foundation to foster open collaboration, establish standards and best practices, and drive broad adoption of AI testing for responsible and trustworthy AI.
With the emergence of AI, Parasoft is proud to be a member of the AI Verify Foundation.
It is important that the AI environment we create is safe, robust, responsible, and ethically adopted by all in our digital world today.
We applaud the Singapore Government's efforts in taking on the heavy lifting of fostering collaboration, trust, and governance in the AI community.
By leveraging rich integrations with the AI Verify toolkit, our customers can now benefit from this partnership and get the most comprehensive, value-driven approach to testing.
We believe there is a need to ensure AI service development is in line with ethical principles and societal values, ultimately bringing positive impacts to our communities.
At Patsnap, we recognise that ensuring AI safety and rigorous testing are not just technical requirements but a fundamental responsibility – with more than 12,000 global companies across diverse industries trusting us to innovate better and faster. Joining the AI Verify Foundation demonstrates our commitment to advancing the deployment of responsible AI, fostering innovation while prioritising the ethical and safe application of AI technologies. This collaboration also underscores our dedication to leading the development of AI applications for enterprises with integrity and transparency.
At Prudential, we are constantly looking at ways of using data and AI to deliver an exceptional customer experience - while building an insurance landscape that is inclusive and equitable. We apply our responsible AI principles to safeguard our customers' health and financial well-being.
In partnership with the AI Verify Foundation, we’re crafting AI ethics toolkits that align with these core principles. Our customers can trust in our commitment to building robust and secure systems, which are rigorously tested for transparency and accountability.
At Qualcomm we strive to create AI technologies that bring positive change to society. Our vision for on-device AI is based on transparency, accountability, fairness, managing environmental impact and being human-centric. We aim to act as a responsible steward of AI, considering the broader implications of our work and taking steps to mitigate any potential harm. Our on-device AI solutions are designed to enable enhanced privacy and security, essential to a robust and trustworthy AI ecosystem. Our hope is that with the AI Verify Foundation we can contribute to a broad, collaborative effort to build a common framework in support of internationally recognized AI governance principles, as an effective path towards building responsible, human-centric AI.
As a Venture Capital firm investing in data and AI companies, we believe that AI use must be ethical even as companies seek to innovate and deliver new technologies for the betterment of society. Being part of the Foundation will enable us to work with likeminded members, utilise and also contribute to the building of robust and practical AI toolkits and guidelines, with the goal of championing responsible use principles as the ground from which further AI technology is developed.
The Recording Industry Association Singapore (RIAS) comprises 25 leading major and independent record companies in Singapore, with a mission to promote recorded music and expand its market utilisation, and to safeguard the rights of record producers and their artistes. Our members understand that while AI technology has empowered human expression, human-created works will continue to play an essential role in the music industry, and we believe that copyright should protect the unique value of human intellectual creativity.
RegTank looks forward to contributing towards the evolving AI standards and testing methodologies through our participation as a member of the AI Verify Foundation to forge greater trust with clients, regulators, and other stakeholders.
As AI's impacts become increasingly widespread, the responsible AI community must have access to clear guidance on context-relevant AI testing methodologies, metrics, and intervals. The Responsible AI Institute is excited to support the AI Verify Foundation, given its proven leadership in AI testing, dedication to making its work accessible, and commitment to international collaboration.
As technologists and practitioners of AI, we hold Responsible AI as a core principle at retrain.ai. From our involvement in shaping NYC's Law 144 and extensive research on AI risks, to launching the first-ever Responsible HR Forum and embedding explainability, fairness algorithms, and continuous testing to ensure our AI models meet the highest standards for responsible methodology and regulatory compliance, we view Responsible AI as one of our main pillars. Joining the AI Verify Foundation is an extension of our dedication to responsible AI development, deployment, and practices in HR processes.
Our commitment to AI security and governance stems from the belief in AI's potential for positive impact. We aim to contribute to a future where AI benefits humanity with minimized risks. Our objective is to empower organizations to achieve their goals through trustworthy and safe AI systems. Joining the AI Verify Foundation allows us to rigorously test our AI Governance framework, promoting the safe adoption of AI.
SAP is one of the first companies in the world to define trustworthy and ethical guiding principles for using AI in our software, and continues to be a leader in responsible business AI. Through our participation in the AI Verify Foundation we look forward to contributing our global expertise to support the development and deployment of responsible AI that will help the world run better and improve people's lives.
As more and more solutions and decisions are developed with the help of AI, there is a greater need to adopt responsible AI, and there is a greater responsibility on our shoulders to help customers to do that effectively and efficiently.
Scantist believes robust AI testing is crucial for responsible AI implementation - especially in cybersecurity. Joining the AI Verify Foundation amplifies our commitment to shaping a secure future where secure cyber-systems - including AI - are the standard, not the exception.
Facticity.AI, a Singaporean-American LLM app, is dedicated to improving AI safety by contributing a localized, multilingual dataset for factuality—an initiative valuable to Singapore and the region. By joining the AI Verify Foundation, we aim to promote trustworthy AI through transparency and accountability. Facticity.AI prioritizes explainability from credible sources and supports a more equitable, accountable, and transparent AI ecosystem for all stakeholders.
Sekuro is committed to offering assurance services to AI companies with a focus on boosting their credibility, managing risks, and supporting their decision-making.
As a seasoned consultancy firm with expertise in NIST CSF, ISO 27001, and ISO 42001, we value the chance to contribute to our partners' Integrated Management Systems (IMS).
Our goal is to help ensure the ethical, responsible, and trustworthy development and deployment of AI as well as ensuring confidentiality, integrity, and availability of the company's information.
The AI Verify Foundation will advance the nation's commitment to fostering trustworthy AI as a cornerstone of Singapore's AI ecosystem. At SenseTime International, we look forward to co-creating a future with the Foundation where AI technologies are developed and deployed responsibly, align with international best practices, and are recognised for their positive whole-of-society impact.
AI carries potential risks that, if not proactively managed, can have a significant negative impact on organizations and society as a whole. SigmaRed is committed to making AI more responsible and secure, and is pleased to join the AI Verify Foundation.
As an early user of AI Verify, Singapore Airlines recognised the importance of responsible AI and AI governance as a strong foundation for our AI initiative. The testing framework of AI Verify facilitated our initiative and enabled us to further strengthen data trust among our stakeholders. Joining the AI Verify Foundation supports our digital transformation journey and enables us to be in a collaborative network promoting ethical AI.
As a leading communications technology company, Singtel is committed to empowering people and businesses and creating a more sustainable future for all. We see AI as a key enabler in the development of new innovations that will transform industries and consumer experiences. Through our collaboration with the AI Verify Foundation, we're helping to advance the transparent, ethical, and trustworthy deployment of AI so everyone can enjoy the next generation of technologies safely.
AI testing is paramount to SoftServe because it embodies our commitment to delivering responsible AI solutions. In an era where AI is evolving, we recognize the need to ensure our technologies are transparent, accountable, and beneficial for all stakeholders. By rigorously testing our AI solutions, we guarantee their functionality and ensure they align with ethical standards and values we uphold.
Joining the AI Verify Foundation is a strategic decision. Being part of the Foundation positions us at the forefront of global AI standards and best practices. It is also a great way to further communicate our commitment to responsible AI and to be part of a community that contributes to regional initiatives in this space.
Trust is essential for public acceptance of AI technologies. The community of developers and stakeholders that the AI Verify Foundation will convene promises to advance the development and deployment of more trustworthy AI.
SPH Media's mission is to be the trusted source of news on Singapore and Asia, to represent the communities that make up Singapore, and to connect them to the world. We recognize the importance of AI and are committed to responsible AI practices. We strive to build AI systems that are human-centric, fair, and free from unintended discrimination. This process will be enhanced by AI testing, which allows us to identify and address potential risks associated with AI and aids us in our mission.
The mission of the AI Verify Foundation resonates with Squirro’s belief in the responsible and transparent development and deployment of AI. We look forward to participating in this vibrant global community of AI professionals to collectively address the challenges and risks associated with AI.
We are heartened that IMDA is leading the way in ensuring AI systems adhere to ethical and principled standards. As a member, ST Engineering will do its part to advance AI solutions and to shape the future of AI in a positive and beneficial way.
The capabilities of AI-driven systems are increasing rapidly, as we have seen with large language models and generative AI. The democratisation of access will lead to the widespread deployment of AI capabilities at scale. Evaluating AI systems for alignment with our internal Responsible AI Standards is a key step in managing emerging risks, and testing is a critical component in the evaluation process.
The pace and scale of change concerning AI systems require risk management and governance to evolve accordingly so users can derive the benefits in a safe manner. This cannot be done independently, and it is better to collaborate with the wider industry and government agencies to advance the deployment of responsible AI. Standard Chartered has partnered with IMDA to launch the AI Verify framework, and joining the AI Verify Foundation is a logical next step to ensure we can collaboratively innovate and manage risks effectively.
StoreWise is on a mission to transform brick-and-mortar retail and create memorable shopping experiences by infusing cutting-edge technology into retailers' operations so they can thrive. Becoming a member of the AI Verify Foundation shows our commitment to using AI technology responsibly and to contributing to building safeguards as it evolves, for the benefit of our clients, their customers, and the community.
Strides Digital is excited to join the AI Verify Foundation community to use and develop AI responsibly, as we help companies capture value on their decarbonisation and fleet electrification journey.
AI is seen as a transformative technology that offers opportunities for innovation to improve efficiency and productivity. As the need for AI-powered solutions continues to surge, the active engagement of the community in the development of best practices and standards will be pivotal in shaping the future of responsible AI. Tau Express wholeheartedly supports this initiative by IMDA, and we look forward to leveraging the available toolkits to continue building trust and user confidence in our technology solutions.
AI testing forms the bedrock of TeamSolve's commitment to responsible AI development. It serves as our unwavering assurance to the operational workforce that they can place their complete trust in our AI Co-pilot, Lily, knowing that it relies on trustworthy knowledge sources and provides recommendations firmly rooted in their domain.
The AI Verify Foundation and its members collectively play a pivotal role in advancing AI towards higher standards of accountability and trustworthiness, and towards greater acceptance in society.
At Tech4Humanity, we believe responsible testing and validation are vital to developing trustworthy AI that uplifts society. By joining the AI Verify Foundation, we aim to collaborate with partners across sectors to create frameworks and methodologies that proactively address algorithmic harms and demonstrate AI's readiness for broad deployment. Our goal is to advance the creation of human-centric AI that augments our collective potential.
At Telenor, we are committed to using AI technologies in a way that is lawful, ethical, trustworthy, and beneficial for our customers, our employees and society in general. Telenor has defined a set of guiding principles to support the responsible development and use of AI in a consistent way across our companies, to ensure it is aligned with our Responsible Business goals.
At Temasek Polytechnic, AI testing isn't solely about functionality; it's about demonstrating our commitment to responsible AI. We understand the imperative of ensuring that our AI systems operate ethically and reliably. Joining the AI Verify Foundation underscores our dedication to advancing the deployment of trustworthy AI. It's not merely about progress; it's about ensuring that progress is rooted in principles of responsibility and trustworthiness through education and implementation.
AI testing is pivotal for deploying responsible AI, ensuring safety and risk management. Access to valuable resources and regulatory alignment supports transparency and continuous improvement, which in turn ensure reliability and scalability—all essential for building trust with our clients. Joining the AI Verify Foundation is important for Temus as we collaborate with enterprises on their digital transformation journeys. We aim to foster collaboration and mutual accountability, setting high standards of integrity in this frontier of innovation, so that we all might unlock social and economic value sustainably.
As Tictag is focused on producing the highest quality data for AI and machine learning, the AI Verify Foundation aligns perfectly with our mission of making AI trustworthy not just in purpose but in substance. AI ethics is at the core of our very human-centric work, and AI Verify's reputation will be important to rely on as we expand overseas.
We see value in networking and exchanging ideas with industry leaders. As advanced AI is no longer a distant prospect, industry leaders are having more and more discussions about guardrails, safety measures, and what's next in store. Singapore is at the forefront of AI development, and Singaporean companies should join this conversation as well, so this is a very timely initiative.
Our project stands for governance and transparency - hallmarks of AI Verify's framework that we are proud to adopt ourselves and promote. We encourage testing as a means to achieving the overall mission of the Foundation.
The Foundation's movement to advance responsible and trustworthy AI is the rising tide that will lift all boats. We are inspired by its work and we want to be part of the movement to foster trust whilst advancing AI. We commit to responsible practices of development and deployment.
AI Verify is an important step towards enhancing trustworthiness and transparency in AI systems as we move up the learning curve. In order for AI to live up to its full potential, we need to build and earn this trust. We believe that developing specialised skilled talent and capabilities is the cornerstone of creating AI trust and governance guardrails and toolkits. Making the technology safer is key, and we are glad to support IMDA, who have taken the lead in nurturing future champions of responsible AI.
Trusted AI's mission is to help organisations instill trust in the very DNA of their AI programs, as seen in our logo. We are excited at the opportunity to partner with the AI Verify Foundation as we are aligned with their mission, and together we can advance the development and deployment of trustworthy AI globally.
UBS is proud to be one of IMDA's inaugural AI Verify Foundation members and participate in the AI Verify pilot test. We will continue to engage leading fintechs, investors and companies to decode emerging AI trends. Through the AI Verify Foundation, we aim to promote the use of AI in an ethical and trustworthy manner.
UCARE.AI has supported IMDA since participating in the first publication of their AI Governance Framework in 2019 and has continued to align our processes when deploying AI solutions for our customers. We believe that the establishment of the Foundation will foster collaboration, transparency, and accessibility, which is crucial in promoting trustworthy AI.
Ethical use of data is an integral part of our operating DNA, and UOB has been recognised as a champion of AI ethics and governance. By joining the AI Verify Foundation, UOB hopes to contribute to thought leadership in responsible AI.
VFlowTech employs AI in its EMS, as it is the only way to enhance solar and energy storage efficiency. We also believe that an ethical code for responsible AI must be established, given cyber security concerns.
At Vidreous, we employ GenAI models to classify data and provide insights to enhance our user experience. As AI models are known to hallucinate on new information, it is necessary to establish a quality management process to ensure output accuracy. Joining the AI Verify Foundation allows us to advance the quality management and deployment of our products as part of the larger collective community practicing responsible AI. Together, we will make a difference in delivering trust to our people.
It is essential to ensure that the plethora of apps that use AI models today produce accurate and reliable results. Our vision at Virtusa is to establish ourselves as a strong capability hub for AI testing and to work in collaboration with like-minded communities. Hence, we are keen to work with the AI Verify Foundation in promoting responsible, ethical, and sustainable use of AI, thereby building trust with our clients and other stakeholders.
Trust remains at the core of everything we do and is the foundation upon which data-driven products and innovations are built. Creating a governance structure that prioritises the responsible stewardship of data and establishing robust measures such as consent management form a core foundation for responsible AI. We are excited and honoured to be a part of the AI Verify Foundation Committee and contribute towards the development and deployment of responsible AI in Singapore.
Walled AI's mission is to make AI controllable and predictable through research-backed governance tools, emphasizing safety and cultural alignment. In collaboration with the AI Verify Foundation, we aim to establish safety benchmarks and responsible AI pipelines for the safe adoption of AI in Singapore. This partnership will allow us to share our expertise in AI safety evaluations and contribute through governance talks, safety tools, and data collection methods to identify potential harms and biases in AI systems.
It is important for Warner Music Singapore to join the AI Verify Foundation because it allows us to support the development and deployment of trustworthy AI. By joining, we can collaborate with developers, contribute to AI testing frameworks, and share ideas on governing AI. The Foundation provides a neutral platform for us to collaborate and aims to promote AI testing through marketing and education. Being part of this diverse network of advocates will help us drive broader adoption of reliable AI practices.
At WeBank, we champion technology's role in inclusive finance and sustainable development. As the world's leading digital bank, we've seamlessly integrated advanced technologies like AI, blockchain, and big data. Our dedication has led to milestones such as innovative distributed system architectures and providing tailored financial solutions to millions. Joining the AI Verify Foundation positions us to shape robust governance and advance human-centric AI innovations. As we share our insights and learn from peers, we remain focused on cultivating an AI ecosystem grounded in trust, accountability, and equitable digital progression.
We see significant value in membership. It allows us to contribute to developing standards for AI governance, shape best practices, and signal our commitment to trustworthy AI. The open-source approach enables continuous progress through collaboration.
Workday welcomes the establishment of the AI Verify Foundation, which will serve as a community for like-minded stakeholders to contribute to the continued development of responsible and trustworthy AI and Machine Learning. We believe that for AI and ML to fully deliver on the possibilities they offer, we need more conversations around the tools and mechanisms that can support the development of responsible AI. Workday is excited to be a member of the Foundation, and we look forward to contributing to the Foundation's work and initiatives.
At WPH Digital, we recognize that building trust in AI systems is crucial for their widespread and responsible deployment. Joining the AI Verify Foundation places us at the forefront of advancing AI governance through the standardized implementation of a recognized framework and testing tools. This partnership reinforces our commitment to ethical AI practices, ensuring that our AI-driven solutions not only meet but exceed industry expectations for integrity, transparency, and societal benefit.
Joining the AI Verify Foundation aligns with X0PA’s commitment to responsible AI practices, as we look to harness the power of AI to promote unbiased and equitable practices in hiring and selection.
As a pioneering analytics consultancy firm established in the Philippines in 2013, advising organizations on data and AI strategies, we have a responsibility to continuously seek out best practices and standards, as well as to contribute to improving communities of practice. AI safety is a critical piece that we have started to incorporate into our methodology to ensure trustworthy AI for clients. Joining the AI Verify Foundation equips us with the tools and provides us with a venue to contribute to the bigger community.
In 2021, Zoloz proposed a trustworthy AI architecture system, including explainability, fairness, robustness, and privacy protection. Trustworthy AI is the core capability of resisting risks in the digital age. We hope that by joining the AI Verify Foundation, we can continuously polish our AI capabilities and build an open, responsible, and trustworthy AI technology ecosystem to empower the digital economy and the industry ecosystem. In the future, we hope that through continuous practice, we will continue to promote the implementation of AI and other technologies in the industry and create more value for society.
Many existing Zoom products that customers know and love already incorporate AI. As we continue to invest in AI, Zoom remains committed to ethical and responsible AI development; our AI approach puts user security and trust at the center of what we do. We will continue to build products that help ensure equity, privacy, and reliability.
At Zühlke, we work with highly regulated clients to implement data and AI. The power of AI and data-driven insights enables decisions that translate into valuable actions. We approach this problem space by focusing on the core components of our responsible AI framework: being human-centered, ethical, interpretable, and sustainable.
In line with AI Verify's vision of harnessing collective power to build trust through ethical AI, we contribute to and collaborate with organisations to adopt AI safely, backed by our experience in highly regulated industries.
The Foundation demonstrates the leading stance IMDA is taking to ensure that AI Governance becomes core to all organisations and society, not limiting its availability but ensuring that all actors using AI can benefit from AI Governance at this pivotal moment in AI's progression. 2021.AI will endeavour to be a core member with its AI Governance offering and expertise.
Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), AI solution(s) that has/have been developed/used/deployed in your organisation, and what it is used for (e.g. product recommendation, improving operation efficiency)?
Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?
Your experience with AI Verify – Could you share your journey in using AI Verify? For example, preparation work for the testing, any challenges faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?
Your key learnings and insights – Could you share key learnings and insights from the testing process? For example, 2 to 3 key learnings from the testing process? Any actions you have taken after using AI Verify?