AI Governance

The Foundational Challenges of AI Regulation: A Brookings Perspective

Gurpreet Dhindsa
|
August 7, 2025

The journey towards effective AI governance is fraught with complexities, often likened to building an aircraft while in flight. As articulated by the Brookings Institution in their seminal June 2023 commentary, "The three challenges of AI regulation," three core hurdles consistently impede progress: the breakneck speed of technological advancement, the inherent difficulty in defining the scope of what to regulate, and the critical question of who should regulate and how. Understanding these foundational challenges is paramount for crafting adaptable and impactful regulatory frameworks.

The Velocity Problem: Keeping Pace with the Red Queen

The first and perhaps most daunting challenge is the sheer velocity of AI development, often dubbed the "Red Queen Problem." In Lewis Carroll's Through the Looking-Glass, the Red Queen tells Alice, "It takes all the running you can do, to keep in the same place." This aptly describes the regulatory dilemma for AI.

Traditional legislative processes, characterised by their deliberate pace, struggle to keep up with AI's exponential growth. By the time a law is drafted, debated, and enacted, the technology it seeks to govern may have already evolved significantly, rendering the regulation obsolete or inadequate. This constant race highlights the urgent need for regulatory approaches that are not only comprehensive but also agile and adaptive, capable of evolving alongside the technology itself.

The Brookings piece explicitly warns against relying on outdated "industrial management assumptions" for regulation, advocating instead for "agile digital management techniques" to reflect the dynamism of the AI era.

What to Regulate? Navigating the Nuances of Risk

The second challenge revolves around precisely what to regulate, urging a shift towards risk-based and targeted interventions rather than a broad, undifferentiated approach. AI's pervasive nature means its potential for both immense benefit and significant harm spans various domains. The Brookings analysis categorises these potential harms, providing a useful framework for regulatory focus:

  • Old-fashioned abuses: These are existing illegal or unethical activities (e.g., fraud, discrimination, surveillance) that AI can amplify or accelerate. Regulation here might involve extending existing laws to AI contexts or developing specific prohibitions against AI-enabled versions of these harms.
  • Ongoing digital abuses: This category encompasses issues that have become more prominent with digital platforms and are now exacerbated by AI, such as privacy invasion, algorithmic bias, market concentration, and the spread of misinformation. Regulation needs to address how AI perpetuates or intensifies these concerns.
  • Unknowns: Perhaps the most challenging category, this acknowledges the unforeseen risks and societal impacts that powerful AI systems may present in the future. Regulatory frameworks must incorporate mechanisms to anticipate and respond to these emergent issues, building in flexibility and foresight.

To navigate this complexity, the article underscores the importance of foundational principles for oversight. These include:

  • Duty of Care: Imposing a responsibility on AI developers and deployers to ensure their systems do not cause harm.
  • Transparency: Requiring clarity on how AI systems function, make decisions, and interact with users, especially when impacting fundamental rights.
  • Safety: Ensuring AI systems are robust, reliable, and secure, drawing parallels with frameworks like the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
  • Responsibility: Clearly assigning accountability for AI-generated outcomes, aligning with principles seen in the White House's Blueprint for an AI Bill of Rights.

Who Regulates and How? Forging a New Governance Paradigm

The third critical challenge addresses the institutional and methodological questions of who should regulate and how. The existing patchwork of sectoral regulators may not be sufficient for a cross-cutting technology like AI. Brookings' commentary advocates for:

  • A new, dedicated federal agency: This would centralise expertise, streamline oversight, and provide a focal point for AI governance, potentially overcoming the fragmentation seen in existing regulatory landscapes.
  • Agile, risk-based regulation: Beyond just identifying what to regulate, this speaks to how regulations are designed and implemented. It implies a departure from rigid, prescriptive rules towards more adaptable, outcome-oriented frameworks.
  • Consideration of licensing mechanisms: While acknowledging the debate and potential for reinforcing market dominance by larger players, the idea of licensing AI systems or capabilities above a certain scale (as suggested by OpenAI CEO Sam Altman) was presented as a potential tool for ensuring compliance with safety standards.
  • Enforceable behavioural standards: The article proposes that multi-stakeholder groups of experts could develop these standards, which would then be given legal teeth by a new regulatory agency. This approach blends the agility and domain-specific knowledge of industry and civil society with the enforcement power of government, representing a form of co-regulation.

In essence, the Brookings perspective provides a crucial blueprint for understanding the fundamental hurdles in AI regulation. It calls for a pragmatic, principles-driven approach that champions agility, targeted intervention, and dedicated governance structures to ensure AI's responsible development and deployment. The subsequent global regulatory responses, while diverse, largely reflect attempts to address these very challenges.


