AI Governance

Governing the AI Triad: Why Modern AI Oversight Must Evolve with the Technology Itself

Gurpreet Dhindsa | September 11, 2025

Last month, I watched a room full of executives freeze when their chief compliance officer asked a simple question: "If our AI system makes a biased decision, how do we fix it?"

The silence stretched uncomfortably. Finally, someone offered the usual answer: "We'll add more training on bias detection."

But that misses the point entirely. By the time you're detecting bias in AI outputs, you're already playing defence. The real question isn't how to catch problems after they happen—it's how to prevent them from taking root in the first place.

This is why I've become obsessed with something researchers call the "AI Triad"—a framework that's revolutionising how we think about governing artificial intelligence. But before I explain what it is, let me tell you why traditional AI oversight is failing.

Most companies approach AI governance like food safety inspection: they check the final product and hope for the best. But AI isn't like manufacturing a widget. It's more like raising a child—every input shapes the outcome, and by the time you see problematic behaviour, it's already deeply ingrained in the system's "thinking."

The Three Pillars That Shape Every AI System

In 2020, researchers at Georgetown's Center for Security and Emerging Technology identified what they call the "AI Triad"—three fundamental components that determine how every AI system behaves:

1. Algorithms – The "brain" of the AI system

2. Data – The "experience" the AI learns from

3. Compute – The "body" that makes it all possible

Think of it like this: if AI were a person, algorithms would be their reasoning ability, data would be their life experiences, and compute would be their physical capacity to think and act.

Here's what most people don't realise: these three components don't just work together—they create a feedback loop that amplifies both strengths and weaknesses. Get one wrong, and the whole system can go sideways in ways you never anticipated.

Let me show you what I mean by walking through each component and why governing them requires a completely different approach than traditional technology oversight.

Algorithms: The Black Box Problem

When I explain AI algorithms to non-technical executives, I often use this analogy: imagine hiring someone incredibly smart who can solve complex problems but can never explain how they reached their conclusions. They just say, "Trust me, this is the right answer."

That's essentially what modern AI algorithms are like. They don't follow pre-programmed rules the way traditional software does. Instead, they develop their own internal logic by studying patterns in data—patterns that are often too complex for humans to understand.

A real example: A major bank's AI system started approving loans at different rates for different zip codes. When regulators investigated, they discovered the AI had learned to associate certain neighbourhood characteristics with loan defaults. The problem? Some of those patterns were proxies for racial and economic discrimination. But the AI couldn't explain its reasoning—it had just found correlations in the data and acted on them.

This creates what I call the "explainability paradox": the most powerful AI systems are often the least explainable. But we can't govern what we can't understand.

What this means for governance:

Instead of waiting to audit AI decisions after they're made, we need to build accountability into the algorithms themselves:

  • Design for interpretability: When possible, choose AI approaches that can explain their reasoning, even if they're slightly less accurate
  • Require explanations for high-stakes decisions: Hiring, lending, healthcare, and justice decisions should come with clear reasoning that humans can evaluate
  • Build in safety switches: give AI systems mechanisms to defer or reject decisions they're uncertain about (a minimal sketch of this pattern follows below)
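
To make the safety-switch idea concrete, here is a minimal Python sketch of an uncertainty gate wrapped around a model's decision. The confidence threshold, the Decision fields, and the escalation path are illustrative assumptions, not a prescribed standard:

```python
# Minimal sketch of a "safety switch": the system abstains from decisions it is
# uncertain about and routes them to a human reviewer. The threshold value and the
# escalation wording are illustrative assumptions.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; tune per use case and risk appetite

@dataclass
class Decision:
    outcome: str        # e.g. "approve", "decline", or "escalate"
    confidence: float   # the model's own probability estimate for its outcome
    explanation: str    # human-readable reasoning attached to high-stakes decisions

def decide_with_safety_switch(model_outcome: str, confidence: float, reasoning: str) -> Decision:
    """Return the model's decision only when it is confident enough; otherwise escalate."""
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision(
            outcome="escalate",
            confidence=confidence,
            explanation=f"Confidence {confidence:.2f} below {CONFIDENCE_THRESHOLD}; "
                        f"routed to human review. Model reasoning: {reasoning}",
        )
    return Decision(outcome=model_outcome, confidence=confidence, explanation=reasoning)

# Example: a lending model is only 72% sure, so the case is escalated rather than auto-declined.
print(decide_with_safety_switch("decline", 0.72, "high debt-to-income ratio"))
```

The design choice worth noting: escalation is the default whenever confidence is low. The system has to earn the right to act autonomously.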

The key insight? You can't retrofit transparency into an opaque system. You have to design for it from the beginning.

Data: The Foundation That Determines Everything

If algorithms are the brain, data is the life experience that shapes how that brain thinks. And just like human experience, the quality and bias of that data determines everything about how the AI system will behave.

But here's where things get complicated: modern AI systems learn from massive datasets scraped from across the internet, containing billions of examples of human text, images, and behaviour. Imagine trying to raise a child by exposing them to every conversation, book, and video ever created—including all the worst examples of human behaviour.

A story that illustrates the problem: I recently spoke with a healthcare AI company that discovered their diagnostic system was less accurate for women than men. After months of investigation, they traced the problem back to their training data: historical medical research had systematically under-represented women in clinical trials. The AI had learned medicine from a dataset that was fundamentally biased, and it replicated those biases in its diagnoses.

But the data problem goes deeper than historical bias. Today's AI systems often learn from data generated by other AI systems, creating what researchers call "synthetic feedback loops." It's like a game of telephone where each AI system learns from the mistakes and biases of the previous one, potentially amplifying problems with each generation.

What effective data governance looks like:

  • Know your data's history: Every AI system should come with a clear record of what data it learned from, where that data came from, and whether it was ethically obtained
  • Audit for bias continuously: use automated tools to continuously scan training data for patterns that could lead to unfair outcomes (see the sketch after this list)
  • Set boundaries on synthetic data: When AI systems learn from other AI-generated content, establish limits to prevent error amplification
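
As one way to start, here is a minimal Python sketch of a recurring data audit that compares outcome rates across groups in a tabular training set. The column names, the toy data, and the 10-point disparity tolerance are illustrative assumptions; a real audit would cover far more (representation, label quality, proxy features):

```python
# Minimal sketch of a continuous data audit, assuming a tabular training set with a
# protected-attribute column ("group") and an outcome label ("approved").

import pandas as pd

DISPARITY_TOLERANCE = 0.10  # assumed: flag groups whose positive-label rate deviates >10 points

def audit_outcome_rates(df: pd.DataFrame, group_col: str = "group",
                        label_col: str = "approved") -> dict[str, float]:
    """Return per-group positive-label rates and print any that breach the tolerance."""
    overall = df[label_col].mean()
    rates = df.groupby(group_col)[label_col].mean()
    for group, rate in rates.items():
        if abs(rate - overall) > DISPARITY_TOLERANCE:
            print(f"FLAG: group '{group}' label rate {rate:.2f} vs overall {overall:.2f}")
    return rates.to_dict()

# Example with a deliberately skewed toy dataset: both groups get flagged.
toy = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   0,   1],
})
print(audit_outcome_rates(toy))
```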

The uncomfortable truth is that most organisations have no idea what their AI systems learned from—and that's a governance disaster waiting to happen.

Compute: The Hidden Power Behind AI

Of the three components in the AI Triad, computing power is the one most people overlook. But it might be the most important from a governance perspective.

Here's why: compute isn't just about speed or efficiency. It determines who gets to build advanced AI systems, how quickly they can improve them, and ultimately what capabilities are even possible.

Think about it this way: Training a state-of-the-art AI model can cost millions of dollars in computing resources and require access to thousands of specialised chips. This means that the most advanced AI capabilities are concentrated in the hands of a few tech giants and well-funded research labs.

From a governance standpoint, this creates both challenges and opportunities:

The challenge: If only a few organisations can afford to build the most powerful AI systems, how do we ensure those systems serve everyone's interests fairly?

The opportunity: Because compute is a physical requirement, it's something governments and organisations can actually control and regulate.

A practical example: The European Union's AI Act ties oversight to compute. Providers of general-purpose AI models trained above a defined computational threshold must notify regulators and meet additional safety, testing, and documentation obligations. This approach uses compute as a measurable proxy for determining when an AI system is powerful enough to warrant special oversight.
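
To show how a compute threshold can act as a concrete trigger, here is a minimal Python sketch. The 6 × parameters × tokens formula is the standard back-of-the-envelope estimate for dense transformer training compute, and the 10^25 FLOP figure mirrors the threshold used in the EU AI Act for general-purpose models; treat both as illustrative assumptions rather than compliance guidance:

```python
# Minimal sketch of a compute-threshold check for pre-training oversight.

OVERSIGHT_THRESHOLD_FLOPS = 1e25  # assumed regulatory trigger for extra review

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training compute for a dense transformer: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def requires_registration(parameters: float, training_tokens: float) -> bool:
    """True if the planned training run crosses the oversight threshold."""
    return estimated_training_flops(parameters, training_tokens) >= OVERSIGHT_THRESHOLD_FLOPS

# Example: a 400B-parameter model trained on 15T tokens (~3.6e25 FLOPs) crosses the threshold.
print(requires_registration(parameters=400e9, training_tokens=15e12))  # True
```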

What compute governance means in practice:

  • Set thresholds for oversight: AI systems that require massive computing resources should trigger additional safety and ethics reviews
  • Invest in distributed compute: Instead of concentrating AI power in a few companies, governments and organisations should invest in shared computing infrastructure
  • Consider environmental impact: Training large AI models can consume as much energy as small cities—this should factor into approval processes

The key insight is that compute is both a bottleneck and a control point. Used wisely, it can help ensure that the most powerful AI systems develop responsibly.

Why You Need to Govern All Three Together

Here's the crucial point that most organisations miss: you can't govern algorithms, data, and compute separately. They're interconnected in ways that create emergent risks and opportunities.

A real-world example that illustrates this: A financial services company I worked with thought they had solved their AI bias problem by cleaning up their training data. But they hadn't considered how their algorithm was designed to optimise for profit maximisation, or how their computing infrastructure was set up to process certain types of customer data faster than others. The result? Their "unbiased" data was being processed by a profit-focused algorithm running on infrastructure that created systematic delays for certain customer segments. Same bias, different source.

The three-way interaction creates compound effects:

  • Algorithm + Data: The way an AI system processes information can amplify subtle biases hidden in training data
  • Data + Compute: The speed and scale at which AI systems can process information can turn small data problems into massive systematic issues
  • Algorithm + Compute: Powerful computing resources can enable AI systems to find complex patterns that humans never intended and can't easily detect

This is why I tell my clients that effective AI governance isn't about choosing between technical controls and policy controls—it's about designing systems where all three components of the triad work together to produce trustworthy outcomes.

From Compliance Theatre to Constitutional AI

Most AI governance today feels like security theatre at airports—lots of visible procedures that make people feel safer without actually addressing the underlying risks.

Companies create AI ethics boards that review AI projects quarterly. They implement bias testing that happens after models are already deployed. They write policies that sound impressive but don't actually change how AI systems are built or used.

But governing the AI Triad requires a fundamentally different approach. Instead of bolt-on compliance, we need what I call "constitutional AI"—embedding governance principles directly into how AI systems learn and operate.

What this looks like in practice:

Real-time monitoring: Instead of quarterly reviews, AI systems continuously monitor their own performance across fairness, accuracy, and safety metrics

Embedded controls: Governance rules are built into the AI system's decision-making process, not layered on top after the fact

Adaptive oversight: As AI systems learn and change, the governance systems learn and adapt alongside them

Here's a concrete example: One of our clients in healthcare built an AI diagnostic system with embedded fairness constraints. Instead of hoping the system would be fair, they programmed it to continuously monitor its accuracy across different demographic groups and automatically flag cases where it detected performance disparities. When disparities appeared, the system would route those cases to human specialists and update its training to address the gaps.
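
To illustrate that pattern, here is a minimal Python sketch of an embedded fairness monitor. The group labels, the 5-point accuracy-gap limit, and the route_to_specialist() helper are illustrative assumptions, not the client's actual implementation:

```python
# Minimal sketch of embedded fairness monitoring: track accuracy by demographic group
# and escalate when any group lags the best-performing group by more than a set gap.

from collections import defaultdict

ACCURACY_GAP_LIMIT = 0.05  # assumed: flag groups trailing the best group by >5 points

class FairnessMonitor:
    def __init__(self) -> None:
        self._correct = defaultdict(int)
        self._total = defaultdict(int)

    def record(self, group: str, prediction_correct: bool) -> None:
        """Log one decision outcome for a demographic group."""
        self._total[group] += 1
        self._correct[group] += int(prediction_correct)

    def disparities(self) -> list[str]:
        """Return groups whose accuracy lags the best-performing group beyond the limit."""
        accuracy = {g: self._correct[g] / self._total[g] for g in self._total}
        if not accuracy:
            return []
        best = max(accuracy.values())
        return [g for g, acc in accuracy.items() if best - acc > ACCURACY_GAP_LIMIT]

def route_to_specialist(case_id: str) -> None:
    print(f"Case {case_id}: performance disparity detected, routed to a human specialist")

# Example: simulated outcomes show one group lagging, so its next case is escalated.
monitor = FairnessMonitor()
for correct in [True, True, True, True]:
    monitor.record("group_a", correct)
for correct in [True, False, False, True]:
    monitor.record("group_b", correct)
for group in monitor.disparities():
    route_to_specialist(case_id=f"next-{group}")
```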

This isn't just better governance—it's governance that can keep up with the speed at which AI systems evolve.

The Stakes Are Higher Than We Think

I started this article with a story about executives who couldn't answer how they'd fix a biased AI system. But here's what really concerns me: in most cases, they wouldn't even know the bias existed until it became a legal or PR crisis.

We're rapidly moving toward a world where AI systems make thousands of decisions that affect people's lives—who gets hired, who qualifies for loans, what medical treatments are recommended, how resources are allocated. These systems are becoming more powerful and more autonomous while remaining largely opaque to the people they affect.

The window for getting governance right is closing. Once AI systems become deeply embedded in critical infrastructure—healthcare, finance, education, criminal justice—the cost of fixing governance failures becomes exponentially higher.

But I'm not pessimistic. I think we have the tools and knowledge to govern AI responsibly. The AI Triad framework gives us a roadmap. We just need the will to implement it before it's too late.

The choice is ours: We can continue treating AI governance as a compliance afterthought, or we can recognise it as one of the defining challenges of our generation.

I know which path I'm choosing. The question is: what path will you choose?

Gurpreet Dhindsa

CEO, Altrum AI

Building governance systems for the age of autonomous intelligence

What questions does the AI Triad framework raise for your organisation? I'd love to continue this conversation.
