Insight · AI Governance
Building an AI governance framework for the financial sector
Most institutions have AI policy. Few have AI governance. The difference is the difference between a document and an infrastructure.
Policy is not governance
Every financial institution we encounter has an AI policy. A responsible AI statement. An ethics framework. Perhaps an AI committee that meets quarterly.
None of that is governance.
Governance means that every regulated AI system is registered with clear ownership. That regulatory obligations are classified per system. That controls are enforced during operation — not described in a document. And that evidence of all of this is generated continuously, not assembled when a supervisor asks.
Policy describes intent. Governance demonstrates control.
Governance maturity levels
Where does your institution stand?
Level 1: No formal register. AI systems are deployed without structured oversight. Evidence is assembled reactively when requested.
Level 2: Policy exists. A register is maintained manually. Periodic reviews occur. But controls are not enforced during operation.
Level 3: Systems are registered, classified, and governed with enforceable controls. Evidence is generated continuously as a byproduct of execution.
Supervisors expect Level 3. Most institutions operate at Level 1 or 2.
Four structural layers
A governance framework is not a document. It is an infrastructure with four operational layers.
System register
Every AI system is registered with purpose, owner, operational status, and regulatory scope. The register is living — not a spreadsheet updated quarterly.
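A living register is, at minimum, a structured record per system plus a timestamp on every change. A minimal sketch in Python — the field names and class names here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical register entry; field names are illustrative.
@dataclass
class AISystemRecord:
    system_id: str
    purpose: str
    owner: str                  # accountable business owner, not the IT operator
    status: str                 # e.g. "development", "production", "retired"
    regulatory_scope: list = field(default_factory=list)
    last_updated: str = ""

class SystemRegister:
    """A living register: every registration is stamped at write time."""
    def __init__(self):
        self._records = {}

    def register(self, record: AISystemRecord):
        record.last_updated = datetime.now(timezone.utc).isoformat()
        self._records[record.system_id] = record

    def get(self, system_id: str) -> AISystemRecord:
        return self._records[system_id]

register = SystemRegister()
register.register(AISystemRecord(
    system_id="tx-monitoring-01",
    purpose="Transaction anomaly detection",
    owner="Head of Financial Crime",
    status="production",
    regulatory_scope=["AI Act", "DORA", "AML"],
))
```

The point of the sketch is that ownership, status, and regulatory scope live in one queryable structure with an update timestamp — not in a quarterly spreadsheet.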
Regulatory classification
Each system is classified under the AI Act, DORA, and AML frameworks. Classification determines which controls apply. It is not a one-time assessment but a continuous process.
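One way to make "classification determines which controls apply" concrete is a mapping from framework to control set, with a system's controls derived as the union across its scope. The control names below are assumptions for illustration, not a standard taxonomy:

```python
# Illustrative mapping from regulatory framework to control sets;
# the control names are placeholders, not a regulatory taxonomy.
CONTROLS_BY_FRAMEWORK = {
    "AI Act": {"human_oversight", "transparency_logging", "risk_assessment"},
    "DORA":   {"ict_resilience_testing", "incident_reporting"},
    "AML":    {"transaction_screening", "audit_trail"},
}

def applicable_controls(regulatory_scope):
    """Union of controls across every framework a system falls under."""
    controls = set()
    for framework in regulatory_scope:
        controls |= CONTROLS_BY_FRAMEWORK.get(framework, set())
    return controls

# A system under all three frameworks gets one integrated control set,
# not three parallel governance tracks.
integrated = applicable_controls(["AI Act", "DORA", "AML"])
```

Deriving controls from classification, rather than assigning them by hand, is also what keeps the per-regulation tracks from drifting apart.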
Control enforcement
Policy obligations are translated into executable conditions: identity verification, operational authorisation, regulatory boundary enforcement. Controls operate during execution, not during periodic reviews.
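"Executable conditions" means the checks run in the request path and fail closed. A minimal sketch, assuming three placeholder checks standing in for real identity, authorisation, and boundary controls:

```python
# Sketch of runtime enforcement: a request proceeds only if every
# applicable control passes. The check functions are illustrative
# placeholders for real identity, authorisation, and boundary controls.
class ControlViolation(Exception):
    pass

def verify_identity(request):
    return request.get("caller_id") is not None

def check_authorisation(request):
    return request.get("caller_id") in {"approved-service-a", "approved-service-b"}

def within_regulatory_boundary(request):
    return request.get("jurisdiction") in {"EU"}

CONTROLS = [verify_identity, check_authorisation, within_regulatory_boundary]

def enforce(request):
    """Run every control before execution; fail closed on any violation."""
    for control in CONTROLS:
        if not control(request):
            raise ControlViolation(control.__name__)
    return True  # request may proceed

enforce({"caller_id": "approved-service-a", "jurisdiction": "EU"})
```

The design choice that matters is the exception on failure: a blocked request is the default outcome, which is what distinguishes enforcement during execution from detection during a periodic review.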
Evidence generation
Every control action produces a timestamped, immutable evidence record. Evidence is a byproduct of governance execution — never manually assembled.
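One common way to make evidence records timestamped and tamper-evident is to chain them by hash, so each record commits to its predecessor. A sketch of the principle — an illustration, not a production audit log:

```python
import hashlib
import json
from datetime import datetime, timezone

# Append-only evidence log: each record carries a UTC timestamp and the
# hash of its predecessor, so editing any earlier record breaks the chain.
class EvidenceLog:
    def __init__(self):
        self._records = []

    def append(self, control: str, outcome: str):
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "control": control,
            "outcome": outcome,
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier record is detected."""
        prev = "0" * 64
        for r in self._records:
            body = {k: v for k, v in r.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev_hash"] != prev or r["hash"] != expected:
                return False
            prev = r["hash"]
        return True

log = EvidenceLog()
log.append("identity_verification", "pass")
log.append("regulatory_boundary", "pass")
print(log.verify())  # True for an untampered log
```

Because records are emitted by `append` at the moment a control acts, the evidence is a byproduct of execution — there is nothing to assemble later, only a chain to verify.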
Common mistakes
Assigning AI governance to IT
IT operates systems. Risk and compliance own governance. Placing governance with IT ensures technical controls but misses regulatory accountability.
Building governance per regulation
Separate DORA, AI Act, and AML governance tracks create duplication and blind spots. One system may fall under all three — it needs one integrated framework.
Treating the register as the framework
A register lists systems. A framework governs them. Knowing what exists is necessary but not sufficient.
Waiting for external guidance
The AI Act is in force. DORA is applicable. Supervisors are assessing. Waiting for final regulatory technical standards is not a defensible position.
Confusing compliance documentation with evidence
A compliance report written for a supervisor is not evidence of control. Evidence is the continuous, deterministic output of an operational governance system.
Frequently asked questions
What should an AI governance framework include?
Should AI governance sit with IT or with Risk?
How long does it take to implement?
Can we use existing GRC tooling?
From policy to infrastructure
Discuss how a control plane transforms AI governance from documentation into operational infrastructure at your institution.