Official frameworks, standards, and tools to guide safe, secure, and compliant AI deployment.

The world’s first comprehensive AI law (the EU AI Act), taking a risk-based approach that classifies AI systems by risk level. It imposes obligations (e.g. transparency, safety, human oversight), especially on high-risk AI, to ensure systems are safe, transparent, non-discriminatory, and human-centric.
The UK’s approach to AI governance, empowering existing regulators to apply five cross-cutting principles (safety, transparency, fairness, accountability, contestability) in their sectors. Rather than new rigid laws, it sets a flexible, principles-based framework to foster innovation while managing AI risks.
A White House OSTP framework (the Blueprint for an AI Bill of Rights, Oct 2022) outlining five principles to guide the design, use, and deployment of AI/automated systems in order to protect civil rights, privacy, and democratic values.
A voluntary U.S. framework (the NIST AI Risk Management Framework, Jan 2023) for organisations to manage AI risks and integrate “trustworthiness” considerations into AI design, development, and use. It provides a structured approach built around four core functions (Govern, Map, Measure, Manage) to address issues like bias, explainability, robustness, privacy, and safety in AI systems. (Widely used as a practical guide for enterprise AI risk assessment and controls.)
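As a rough illustration of how those four functions can anchor day-to-day record-keeping, the hypothetical Python sketch below structures an internal risk record around Govern, Map, Measure, and Manage. The RiskRecord class and every field value are invented for illustration; they are not defined by NIST.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way an organisation might structure an internal
# AI risk record around the four NIST AI RMF functions. The class, field
# names, and example entries are illustrative only, not part of the framework.

@dataclass
class RiskRecord:
    system_name: str
    govern: list[str] = field(default_factory=list)   # policies, roles, accountability
    map: list[str] = field(default_factory=list)      # context, intended use, identified risks
    measure: list[str] = field(default_factory=list)  # metrics, evaluations, test results
    manage: list[str] = field(default_factory=list)   # mitigations, monitoring, incident response

record = RiskRecord(
    system_name="resume-screening-model",
    govern=["AI policy v2 approved", "Named model owner and review board"],
    map=["High-impact use case: hiring decisions", "Proxy attributes for protected classes identified"],
    measure=["Selection-rate gap across groups: 4%", "Robustness checked on perturbed inputs"],
    manage=["Human review of all rejections", "Quarterly fairness re-evaluation scheduled"],
)
print(record.system_name, "- open manage actions:", len(record.manage))
```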
An international standard (ISO/IEC 42001) defining requirements for establishing an AI Management System (AIMS) within an organisation. It offers a structured framework for AI governance, helping organisations build trustworthy, compliant AI by covering risk management, lifecycle controls, accountability, and continual improvement. (Aligns with regulatory expectations like the EU AI Act and enables certification of an organisation’s AI governance processes.)
A comprehensive guidance standard (ISO/IEC 23894) for AI-specific risk management. It provides strategic direction on identifying, assessing, and mitigating AI risks across the AI system lifecycle. The standard includes concrete examples and maps AI risk management practices (building on ISO 31000) that organizations can adapt to their context. (Helps enterprises integrate AI risk controls into existing risk frameworks and prepare for upcoming compliance obligations.)
A detailed practical framework from Singapore’s PDPC (the Model AI Governance Framework, first issued 2019, updated 2020) to help private-sector organizations implement AI governance in practice. It provides readily implementable guidance on managing ethical and consumer-protection issues when deploying AI – e.g. explaining AI decision logic, ensuring data accountability, and building transparency and fairness into AI services. (Includes an assessment guide (ISAGO) and a use-case compendium; a widely referenced toolkit in industry for operationalizing AI ethics.)
A World Economic Forum toolkit (2022) offering practical tools and guidance for corporate executives to oversee AI in their organizations. It covers AI’s opportunities and risks across technical, organizational, regulatory, and societal aspects. The toolkit helps C-level leaders and boards ask the right questions and make informed decisions on AI strategy, governance structures, risk mitigation, and ethical AI implementation. (Designed to translate abstract AI principles into actionable business practices and governance checklists for enterprises.)
A knowledge base and taxonomy of AI risks (developed by IBM). It compiles risks associated with generative AI and traditional ML, distilled from research and real-world incidents. The Risk Atlas defines various risk types (e.g. privacy, bias, explainability, safety) and groups them into categories (inputs, inference, outputs, etc.), helping organizations identify and understand potential failure modes in AI systems. (IBM’s watsonx Governance tool integrates this library; the Atlas is a great resource for developing risk registers and mitigation plans for enterprise AI projects.)
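To show how such a taxonomy can be made operational, here is a minimal hypothetical Python sketch that groups risk types by where they arise in the lifecycle, mirroring the inputs/inference/outputs grouping described above. The specific risk names and the helper function are illustrative and not taken from the Risk Atlas.

```python
# Hypothetical sketch: a lifecycle-stage grouping of risk types, in the spirit
# of the Risk Atlas's inputs/inference/outputs categories. Risk names and the
# helper below are illustrative, not copied from the Atlas.

RISK_TAXONOMY = {
    "input": ["data-poisoning", "personal-data-in-training", "unrepresentative-data"],
    "inference": ["prompt-injection", "jailbreaking", "membership-inference"],
    "output": ["hallucination", "toxic-output", "copyright-infringing-output"],
}

def risks_for_stage(stage: str) -> list[str]:
    """Return the risk types recorded for a given lifecycle stage."""
    return RISK_TAXONOMY.get(stage, [])

# e.g. seed an output-risk checklist for a generative AI project
print(risks_for_stage("output"))
```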
A living database of 1600+ AI risks (the MIT AI Risk Repository) drawn from dozens of AI risk frameworks and academic papers. It categorizes risks by cause (how/when/why they occur) and by domain of impact. This shared repository (launched 2024) provides industry, policymakers, and researchers with a common reference to monitor AI risks and develop oversight strategies. (Includes an AI risk database with sources, a causal taxonomy of risk factors, and a domain taxonomy of risk areas. Useful for conducting AI risk assessments, comparing risk taxonomies, and staying current on emerging AI risk concerns.)
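Because the repository tags each risk along two axes (cause and domain), a simple dual-tag filter is often all that is needed to pull a working subset into a risk assessment. The sketch below is hypothetical: the entries, tag values, and helper are illustrative and not drawn from the repository itself.

```python
# Hypothetical sketch: filtering risk entries tagged along the two axes the
# repository uses (causal factor and impact domain). All values are invented.

RISKS = [
    {"name": "training-data-leakage", "cause": "unintentional", "domain": "privacy"},
    {"name": "discriminatory-scoring", "cause": "unintentional", "domain": "fairness"},
    {"name": "disinformation-generation", "cause": "intentional", "domain": "misinformation"},
]

def filter_risks(cause: str | None = None, domain: str | None = None) -> list[dict]:
    """Return risks matching the given cause and/or domain tags."""
    return [
        r for r in RISKS
        if (cause is None or r["cause"] == cause)
        and (domain is None or r["domain"] == domain)
    ]

print(filter_risks(domain="privacy"))  # -> [{'name': 'training-data-leakage', ...}]
```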
A crowd-sourced global database tracking real-world AI incidents where AI systems caused harm or near-misses (e.g. safety failures, fairness issues, wrongful outcomes). It has collected over 1,200 incident reports of AI failures across sectors. As a centralized repository of “when AI goes wrong,” the AIID helps practitioners and researchers learn from past incidents and preempt similar issues. (By studying incident patterns, organizations can better anticipate risks and put guardrails in place. The database is publicly accessible and encourages contributions of new incidents.)
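One practical way to learn from such incident data is to tally reports by category from a local export. The sketch below assumes a CSV file and a "harm_category" column that are purely hypothetical; adapt the names to whatever export or snapshot format you actually work with.

```python
import csv
from collections import Counter

# Hypothetical sketch: counting incident reports per category from a local CSV
# export. The file name and the "harm_category" column are assumptions for
# illustration, not the AIID's actual schema.

def incident_counts(path: str, column: str = "harm_category") -> Counter:
    """Tally non-empty values of `column` across all rows of the CSV at `path`."""
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row[column] for row in csv.DictReader(f) if row.get(column))

# counts = incident_counts("incidents_export.csv")
# print(counts.most_common(5))  # top harm categories to prioritise guardrails for
```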