Model AI Governance Framework for Generative AI

Fostering a trusted AI ecosystem

The Model AI Governance Framework for Generative AI (MGF for GenAI) outlines 9 dimensions to create a trusted environment – one that enables end-users to use Generative AI confidently and safely, while allowing space for cutting-edge innovation. Recognising that no single intervention is enough to address existing and emerging AI risks, the framework offers a set of practical suggestions that apply as initial steps, expanding on the existing Model AI Governance Framework for Traditional AI.
The framework also aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally. This is the first step towards developing more detailed guidelines and resources under each of the 9 dimensions to enable a systematic and balanced approach to AI governance.

Framework’s 9 dimensions

1. Accountability

Putting in place the right incentive structure for different players in the AI system development life cycle to be responsible to end-users

2. Data

Ensuring data quality and addressing potentially contentious training data in a pragmatic way, as data is core to model development

3. Trusted Development and Deployment

Enhancing transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation and disclosure

4. Incident Reporting

Implementing an incident management system for timely notification, remediation and continuous improvement, as no AI system is foolproof

5. Testing and Assurance

Providing external validation and added trust through third-party testing, and developing common AI testing standards for consistency

6. Security

Addressing new threat vectors that arise through generative AI models

7. Content Provenance

Providing transparency about where content comes from, as a useful signal for end-users

8. Safety and Alignment R&D

Accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values

9. AI for Public Good

Responsible AI includes harnessing AI to benefit the public by democratising access, improving public sector adoption, upskilling workers and developing AI systems sustainably


MGF for GenAI was first developed by the Foundation and IMDA and released for international feedback on 16 January 2024. The Proposed Framework expanded on the existing Model AI Governance Framework for Traditional AI, identifying 9 dimensions to support a comprehensive and trusted AI ecosystem and offering practical suggestions that model developers and policymakers could apply as initial steps.
The feedback received on the Proposed Framework was instructive in finalising MGF for GenAI, which was released on 30 May 2024.

Model AI Governance Framework for Traditional AI

On 23 January 2019, IMDA/PDPC released the first edition of the Model AI Governance Framework for Traditional AI for broader consultation and adoption. The Model Framework provides detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions.
On 21 January 2020, the PDPC released the second edition of the Model Framework. The second edition includes additional considerations (such as robustness and reproducibility) and refines the original framework for greater relevance and usability.

Preview all the questions


Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) your organisation has developed, used or deployed, and what they are used for (e.g. product recommendation, improving operational efficiency)?


Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?


Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?


Your experience with AI Verify – Could you share your journey in using AI Verify? For example, preparation work for the testing, any challenges faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?


Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? What actions have you taken after using AI Verify?


Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to any other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?