Building an AI governance framework for the financial sector

Most institutions have AI policy. Few have AI governance. The difference is the difference between a document and an infrastructure.

Policy is not governance

Every financial institution we encounter has AI policy. A responsible AI statement. An ethics framework. Perhaps an AI committee that meets quarterly.

None of that is governance.

Governance means that every regulated AI system is registered with clear ownership. That regulatory obligations are classified per system. That controls are enforced during operation — not described in a document. And that evidence of all of this is generated continuously, not assembled when a supervisor asks.

Policy describes intent. Governance demonstrates control.

Governance maturity levels

Where does your institution stand?

1 — Ad hoc

No formal register. AI systems are deployed without structured oversight. Evidence is assembled reactively when requested.

2 — Documented

Policy exists. A register is maintained manually. Periodic reviews occur. But controls are not enforced during operation.

3 — Structural

Systems are registered, classified, and governed with enforceable controls. Evidence is generated continuously as a byproduct of execution.

Supervisors expect Level 3. Most institutions operate at Level 1 or 2.

Four structural layers

A governance framework is not a document. It is an infrastructure with four operational layers.

01

System register

Every AI system is registered with purpose, owner, operational status, and regulatory scope. The register is living — not a spreadsheet updated quarterly.
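A register entry of this kind can be sketched as a small data structure. The field and class names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a living register entry; field names are
# illustrative, not a mandated schema.
@dataclass
class AISystemRecord:
    system_id: str
    purpose: str
    owner: str                       # accountable person in the risk function
    status: str                      # e.g. "development", "production", "retired"
    regulatory_scope: list = field(default_factory=list)  # e.g. ["AI Act", "DORA"]
    last_updated: str = ""

    def touch(self):
        # A living register records when each entry last changed.
        self.last_updated = datetime.now(timezone.utc).isoformat()

record = AISystemRecord(
    system_id="txn-monitoring-01",
    purpose="Transaction anomaly detection",
    owner="Head of Model Risk",
    status="production",
    regulatory_scope=["AI Act", "AML"],
)
record.touch()
```

The point is not the code but the discipline: every change to a system's status or scope updates the record at the moment it happens, not at the next quarterly review.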

02

Regulatory classification

Each system is classified under AI Act, DORA, and AML frameworks. Classification determines which controls apply. It is not a one-time assessment but a continuous process.
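Because classification determines which controls apply, it can be expressed as a mapping from frameworks to control sets. The framework and control names here are assumptions for the sketch, not a regulatory taxonomy:

```python
# Illustrative mapping from regulatory classification to applicable
# controls; the control names are placeholders, not a legal taxonomy.
CONTROLS_BY_FRAMEWORK = {
    "AI Act": ["human_oversight", "logging", "risk_management"],
    "DORA": ["ict_resilience_testing", "incident_reporting"],
    "AML": ["transaction_screening", "audit_trail"],
}

def applicable_controls(frameworks):
    """Union of controls across every framework a system falls under."""
    controls = set()
    for fw in frameworks:
        controls.update(CONTROLS_BY_FRAMEWORK.get(fw, []))
    return sorted(controls)

# A system under both the AI Act and AML inherits controls from both.
print(applicable_controls(["AI Act", "AML"]))
```

A system falling under several frameworks simply inherits the union of their controls, which is why one integrated classification beats parallel per-regulation tracks.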

03

Control enforcement

Policy obligations are translated into executable conditions: identity verification, operational authorisation, regulatory boundary enforcement. Controls operate during execution, not during periodic reviews.
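"Executable conditions" can be pictured as a gate evaluated at execution time. This is a minimal sketch under assumed condition names; a real control plane would draw these checks from the system's classification:

```python
# Sketch of runtime control enforcement: an action runs only if every
# enforced condition passes. Condition logic is illustrative.
def verify_identity(request):
    return request.get("caller_verified", False)

def check_authorisation(request):
    return request.get("role") in {"analyst", "operator"}

def within_regulatory_boundary(request):
    return request.get("jurisdiction") == "EU"

CONTROLS = [verify_identity, check_authorisation, within_regulatory_boundary]

def enforce(request):
    """Return True only when all controls pass at execution time."""
    return all(control(request) for control in CONTROLS)

request = {"caller_verified": True, "role": "analyst", "jurisdiction": "EU"}
print(enforce(request))  # a single failing condition blocks the action
```

The contrast with a periodic review is that a failing condition blocks the action now, rather than being noted in a findings report months later.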

04

Evidence generation

Every control action produces a timestamped, immutable evidence record. Evidence is a byproduct of governance execution — never manually assembled.
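One common way to make such records tamper-evident is to chain each record to the hash of its predecessor. The sketch below assumes hypothetical field names and is one possible construction, not the only one:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch: each control action emits a timestamped record chained to the
# previous one, so any alteration of history is detectable.
def append_evidence(chain, action, outcome):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "outcome": outcome,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

chain = []
append_evidence(chain, "identity_check", "pass")
append_evidence(chain, "boundary_check", "pass")
# Each record references the hash of its predecessor.
```

Because every record is produced at the moment a control fires, the evidence trail is a byproduct of execution rather than something assembled for a supervisor after the fact.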

Common mistakes

Assigning AI governance to IT

IT operates systems. Risk and compliance own governance. Placing governance with IT ensures technical controls but misses regulatory accountability.

Building governance per regulation

Separate DORA, AI Act, and AML governance tracks create duplication and blind spots. One system may fall under all three — it needs one integrated framework.

Treating the register as the framework

A register lists systems. A framework governs them. Knowing what exists is necessary but not sufficient.

Waiting for external guidance

The AI Act is in force. DORA is applicable. Supervisors are assessing. Waiting for final regulatory technical standards is not a defensible position.

Confusing compliance documentation with evidence

A compliance report written for a supervisor is not evidence of control. Evidence is the continuous, deterministic output of an operational governance system.

Frequently asked questions

What should an AI governance framework include?
Four structural layers: a system register, regulatory classification, governance control enforcement, and continuous evidence generation. Policy documents alone do not constitute a framework.
Should AI governance sit with IT or with Risk?
AI governance is a risk and compliance responsibility. IT implements and operates, but ownership, classification, and accountability belong with the risk function.
How long does it take to implement?
An institution with fewer than 20 AI systems and a mature risk function: 8-12 weeks. Larger institutions with complex landscapes: 4-6 months. The first step is always a complete system inventory.
Can we use existing GRC tooling?
Existing GRC tools manage policy and risk registers. They do not enforce governance conditions during AI system operation or generate evidence by execution. A control plane operates at a different layer.

From policy to infrastructure

Discuss how a control plane transforms AI governance from documentation into operational infrastructure at your institution.

Request an Executive Session

Download Whitepaper