While the foundational challenges of AI regulation remain constant, the past two years have witnessed a flurry of legislative activity and policy discussions worldwide, revealing a diverse array of approaches to AI governance. From comprehensive legal frameworks to principles-based guidelines, nations are grappling with how best to harness AI's potential while mitigating its risks. This global panorama highlights distinct philosophies and priorities, each offering a different answer to the 'velocity,' 'what to regulate,' and 'who regulates' dilemmas.
European Union: The AI Act (Pioneering Hard Law)
The European Union's AI Act, officially entering into force on August 1, 2024, stands as a landmark achievement and the world's first comprehensive legal framework for AI. Its phased implementation, culminating in full effect by August 2026 for most high-risk systems and August 2027 for certain obligations, demonstrates a deliberate, yet ambitious, approach to managing the "velocity" challenge. The EU's strategy is characterised by:
- Risk-Based Classification: AI systems are categorised into four levels:
  - Unacceptable Risk: Prohibited (e.g., manipulative AI, social scoring, real-time biometric identification in public spaces with narrow exceptions). These prohibitions became enforceable as early as February 2, 2025.
  - High-Risk: Subject to stringent requirements including data quality, transparency, human oversight, safety, and conformity assessments. Examples span critical infrastructure, employment, law enforcement, and justice.
  - Limited Risk: Requires transparency obligations, such as informing users when interacting with an AI system (e.g., chatbots, deepfakes).
  - Minimal Risk: Not subject to specific regulations.
- General Purpose AI (GPAI): The Act uniquely addresses foundation models such as those underlying ChatGPT, imposing transparency, documentation, and copyright compliance requirements, with additional scrutiny for GPAI models posing systemic risk. Codes of Practice for GPAI models are expected by August 2, 2025.
- Governance: The newly established European AI Office and the European Artificial Intelligence Board will oversee implementation and enforcement, addressing the "who regulates" challenge head-on.
- Extraterritorial Reach: A significant feature, the Act applies to providers and deployers of AI systems marketed or used within the EU, regardless of their origin, reflecting the interconnected nature of the digital economy.
- Alignment with Brookings: The EU AI Act directly embodies Brookings' call for risk-based regulation and the creation of a dedicated regulatory body. Its phased approach and flexible "Codes of Practice" for GPAI also attempt to address the "velocity" problem by allowing for iterative adaptation.
United States: Fragmented and Principles-Based Approach
In contrast to the EU's singular, comprehensive legislation, the United States has adopted a more fragmented, agency-led, and principles-based approach to AI regulation:
- Biden Executive Order (October 2023): This executive order served as a cornerstone of US policy under the Biden administration, directing various federal agencies to develop guidelines for "safe, secure, and trustworthy AI." Its scope was broad, covering safety and security, responsible innovation, worker protection, civil rights, privacy, and international collaboration. It mandated crucial steps such as the development of red-teaming guidelines for frontier AI and content authentication requirements.
- NIST AI Risk Management Framework (AI RMF): Finalised in January 2023, this voluntary framework provides organisations with a guide for managing AI risks across the entire lifecycle, aligning with Brookings' emphasis on safety and principles. NIST also updated its Privacy Framework (April 2025 draft) to address AI-specific privacy concerns.
- Legislative Proposals: The 118th Congress has seen numerous AI-related bills, including proposals for an "AI Task Force" to identify regulatory gaps, prohibitions on AI in nuclear weapons launches without human control, and mandates for disclaimers on AI-generated political content. While a comprehensive federal AI law has yet to pass, these proposals signal ongoing congressional interest in addressing specific harms and governance structures (e.g., a "Digital Platform Commission" similar to Brookings' suggestion).
- State-level Legislation: Many US states are proactively introducing their own AI-related legislation, covering areas like critical infrastructure, automated decision-making in government services, deepfakes in elections, and child protection. This decentralised approach can offer agility but also risks a fragmented regulatory landscape.
- Trump Administration Developments (April 2025): Recent directives from the White House OMB on AI use and procurement for federal agencies indicate a policy shift towards "pro-innovation, pro-competition." The Department of Energy has also announced potential sites for AI data centres, and an executive order is anticipated regarding coal-powered AI infrastructure. The GAO has highlighted the environmental and human impacts of generative AI, including energy and water consumption, carbon emissions, and concerns around accountability, bias, privacy, and cybersecurity, prompting further policy consideration. The National Science Foundation and OSTP are developing a 2025 National AI R&D Strategic Plan, and Kansas has banned AI models from "platforms of concern" linked to certain countries.
- Alignment with Brookings: The US approach embraces risk management (NIST), principles (Biden EO, Blueprint for an AI Bill of Rights), and a distributed form of agile regulation through agency-specific directives. However, it currently lacks a single, dedicated AI regulatory agency, making its "who regulates" response more diffused than the EU's.
United Kingdom: Principles-Based and Sectoral
The United Kingdom has opted for a less prescriptive, more agile approach, initially favouring existing regulators and a principles-based framework:
- Approach: Principles-based, non-statutory, and sector-specific. There is no dedicated AI law currently in force.
- Principles: Five core principles guide AI development and deployment: safety and robustness, transparency, fairness, accountability, and contestability and redress. These are intended to be applied by existing regulators like the Information Commissioner's Office (ICO), Ofcom, and the Financial Conduct Authority (FCA).
- Governance: No new, central AI regulator has been established. The Digital Regulation Cooperation Forum (DRCF) facilitates coordination among existing regulatory bodies.
- Legislation: A Private Member's Bill, the "Artificial Intelligence (Regulation) Bill," was reintroduced in March 2025, proposing an "AI Authority" and binding duties. The government is also preparing its own "AI Bill" (expected Summer 2025) to target advanced AI models and make voluntary commitments legally binding, reflecting a desire to align with the US's pro-innovation stance.
- Alignment with Brookings: The UK's principles align with Brookings' oversight recommendations. While it initially differed on creating a new central AI agency, recent legislative proposals suggest a potential shift towards a more centralised governance structure, even if it aims for a lighter touch than the EU. The delay in the government's AI Bill to align with the US highlights the tension between regulatory certainty and the "velocity" challenge.
Canada: Federal Stalls, Provincial Action
Canada's federal AI regulatory efforts have faced setbacks, leading to a more fragmented landscape driven by provincial initiatives:
- Approach: Currently uncertain at the federal level, with focus shifting to provincial laws and non-binding guidance.
- Previous Intent (AIDA): Bill C-27, which included the Artificial Intelligence and Data Act (AIDA), aimed to establish a federal framework for AI systems, enforced by an "AI and Data Commissioner." However, the bill "died" with the prorogation of Parliament in January 2025.
- Current Status: No new federal privacy or AI legislation is immediately anticipated. Regulatory efforts are increasingly seen at the provincial level (e.g., Quebec's Law 25 on privacy, Ontario's recent acts on public sector cybersecurity and AI use, and employer disclosure of AI in hiring).
- Guidance: The Office of the Privacy Commissioner of Canada (OPC) and provincial regulators have released non-binding principles for generative AI, and the Competition Bureau monitors AI's impact on competition.
- Alignment with Brookings: The failure of AIDA means Canada currently lacks a dedicated federal AI regulatory body, contrasting with Brookings' call for a new agency. The reliance on provincial efforts and non-binding guidance reflects a more fragmented and "soft law" approach, which may struggle with comprehensive oversight and the "velocity" challenge.
China: Comprehensive and State-Controlled
China's approach to AI regulation is distinct, characterised by its comprehensive, rapidly evolving, and state-controlled nature, prioritising national security and social stability:
- Key Regulations: China has been remarkably proactive, enacting a series of targeted regulations:
  - Generative AI Measures (effective August 15, 2023): Applies specifically to generative AI services provided to the public within China.
  - Deep Synthesis Provisions (January 2023): Regulates deepfakes and synthetic media.
  - Recommendation Algorithms Provisions (March 2022): Regulates algorithmic recommendations used by platforms.
  - Data-related laws: Supported by foundational laws like the Cybersecurity Law, Personal Information Protection Law (PIPL), and Data Security Law.
  - Scientific and Technological Ethics Reviews (December 2023): Mandates ethical reviews for AI activities.
  - New "Labelling Rules" (effective September 1, 2025): Requires explicit and implicit labelling for AI-generated content, with obligations for both providers and distribution platforms to implement detection mechanisms.
- Governance: The Cyberspace Administration of China (CAC) is a key enforcement body, focusing on content moderation, misinformation, and AI-related applications.
- Alignment with Brookings: China's framework directly addresses "old-fashioned abuses" (e.g., deepfakes, misinformation) and "ongoing digital abuses" (data privacy, algorithmic control) with specific, prescriptive rules. The mandatory labelling rules strongly align with Brookings' principle of transparency. China's rapid legislative pace demonstrates an attempt to keep up with "velocity," but through a top-down, state-controlled approach rather than a multi-stakeholder model.
Japan: Pro-Innovation, Soft-Law Approach
Japan has adopted a "soft-law" approach, emphasising innovation, voluntary governance, and international interoperability:
- Approach: Primarily relies on non-binding guidelines and promotes voluntary efforts by businesses. Influenced by the Hiroshima AI Process, which aims for international cooperation on responsible AI.
- Guidelines: The government issues guidelines, such as the "AI Guidelines for Business," revised in March 2025.
- Governance: The AI Strategy Council, established in May 2023, guides policy.
- Legislation: While the ruling party is considering legislation, potentially informed by EU/US trends, the current focus remains on soft law. A proposed bill aims to promote AI R&D and ethical use, incorporating a "name and shame" approach for infringements rather than direct penalties.
- Alignment with Brookings: Japan's "soft-law" approach contrasts with Brookings' call for enforceable regulation and a new agency. It prioritises innovation and agility but might be less effective in establishing strong "Duty of Care" or "Responsibility" without binding mechanisms, particularly for smaller companies or malicious actors.
In summary, the global AI regulatory landscape is a dynamic mosaic of distinct philosophies. While the EU leads with a comprehensive, legally binding framework, the US prefers a fragmented, principles-based approach. The UK and Canada navigate their paths with varying degrees of legislative ambition, while China and Japan represent, respectively, a centralised, prescriptive model and a pro-innovation "soft law" model. Each approach represents a unique attempt to address the core challenges of velocity, scope, and governance in the rapidly evolving world of AI.