Insight · Financial Sector
What does DNB expect from AI governance at financial institutions in 2026?
For CROs, Heads of Risk, and Compliance Leaders in the financial sector.
The supervisor has already started
DNB has explicitly identified AI governance as a supervisory priority for 2026. Banks and insurers will be assessed more rigorously on how they deploy AI: not only on prudential risks, but also on compliance with the European AI Act.
This is not an announcement for the future. This is the current supervisory plan.
And the bar is higher than most institutions expect.
What DNB concretely assesses
DNB does not assess intentions. DNB assesses structure.
For insurers, DNB already conducted a sector-wide investigation into AI usage in 2024. The conclusion was clear: AI is widely deployed, but governance and risk management are insufficiently developed at many institutions.
DNB expects institutions to clearly establish in advance:
- Which responsibilities apply to AI systems
- How risks are mitigated
- How decisions remain reproducible
The phrase "in advance" is crucial. Retroactive reconstruction is not sufficient.
Why documentation alone no longer works
The traditional compliance approach works as follows: write policy, maintain a register, audit periodically, collect evidence when requested.
The problem is that AI systems do not behave like static processes. A credit model evolves. A transaction monitoring algorithm receives new training data. An AML system responds differently as the environment changes.
Every time an AI system changes, the control conditions change. And every time control conditions change without being recorded, supervisory exposure emerges.
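To make this concrete: a minimal sketch of what recording changes at the moment they happen could look like. The structure and the names (ChangeEvent, ChangeLog) are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ChangeEvent:
    system_id: str    # e.g. "credit-model-v3" (hypothetical identifier)
    kind: str         # "retrain", "data-update", "threshold-change", ...
    description: str
    recorded_at: str  # written when the change occurs, not reconstructed later

@dataclass
class ChangeLog:
    events: list[ChangeEvent] = field(default_factory=list)

    def record(self, system_id: str, kind: str, description: str) -> ChangeEvent:
        # The timestamp is set at recording time, which is the whole point:
        # the log reflects when the control conditions actually changed.
        event = ChangeEvent(
            system_id=system_id,
            kind=kind,
            description=description,
            recorded_at=datetime.now(timezone.utc).isoformat(),
        )
        self.events.append(event)
        return event

log = ChangeLog()
log.record("credit-model-v3", "retrain", "quarterly retraining on new data")
```

A static policy document cannot do this; only a structure that sits in the path of the change can.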
The five questions DNB asks
When DNB contacts your institution about AI governance, these are the five questions you must have structural answers to.
1. Which AI systems are in use and who owns them?
Not a list sitting with IT. A formal register linked to the governance structure, with designated ownership per system.
2. How are these systems classified?
Under the AI Act, classification is mandatory. DNB expects you to know which systems are 'high risk' under the regulation and which additional requirements apply.
3. Which controls are active and under what conditions?
Not: 'we have a policy.' Rather: which specific measures apply to this system, who is responsible, and what are the criteria for incident reporting?
4. How are relevant events recorded?
Status changes, reviews, incidents, decision context — is this systematically documented or reconstructed after the fact?
5. Is the evidence reproducible?
Evidence assembled after the fact is not the same as evidence deterministically generated from a defined governance state.
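Taken together, the five questions describe a data structure as much as a process. As an illustration only, with every field name an assumption rather than a standard, a register entry could be sketched like this:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    system_id: str            # Q1: which system is in use
    owner: str                # Q1: designated ownership per system
    risk_class: str           # Q2: AI Act classification, e.g. "high-risk"
    controls: dict[str, str]  # Q3: control -> responsible party
    incident_criteria: list[str]  # Q3: when an incident must be reported
    events: list[dict] = field(default_factory=list)  # Q4: recorded events

    def evidence(self) -> str:
        # Q5: evidence derived deterministically from the governance state.
        state = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(state.encode()).hexdigest()
```

The point of the evidence() method: the same governance state always produces the same fingerprint. That is what separates reproducible evidence from evidence assembled after the fact.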
Which AI systems count as high risk in the financial sector
Credit decisions
AI systems that assess the creditworthiness of natural persons fall under Annex III of the AI Act.
AML and transaction monitoring
Algorithms that detect suspicious transaction patterns and generate reports directly touch AML obligations.
Risk classification and underwriting
Automated systems that make risk assessments for insurance products fall within scope.
Audit analytics
AI-assisted audit processes fall under the responsibility of the institution.
DORA and AI Act: two regulatory tracks, one governance problem
A common misconception: DORA covers ICT risks and the AI Act covers AI systems — so they are two separate tracks.
That is incorrect.
DORA requires institutions to manage ICT risks, including the risks of outsourced ICT services. AI models procured from external vendors fall directly under this.
The result: an institution deploying AI systems for credit decisions or transaction monitoring simultaneously faces:
- DORA requirements for ICT risk management and third-party management
- AI Act requirements for registration, classification and human oversight
- AML obligations for decision accountability
A separate approach per regulation leads to duplicate work and blind spots at the intersections. Structural governance covers all three from one framework.
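As a rough illustration of what "one framework" means in practice: each control is tagged with the regulations it serves, so coverage and blind spots per regulation can be read off directly. The control names and mappings below are invented for the example:

```python
# Illustrative controls, each tagged with the regulatory regimes it serves.
CONTROLS = {
    "third-party-model-review":   {"DORA", "AI Act"},
    "human-oversight-checkpoint": {"AI Act"},
    "decision-audit-trail":       {"AI Act", "AML"},
    "ict-incident-reporting":     {"DORA"},
}

def coverage(regulation: str) -> set[str]:
    """All controls that contribute to one regulation."""
    return {name for name, regs in CONTROLS.items() if regulation in regs}

for regulation in ("DORA", "AI Act", "AML"):
    print(f"{regulation}: {sorted(coverage(regulation))}")
```

One control, three regulatory answers. The duplicate work disappears because the mapping, not the control, is what varies per regulation.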
What structural AI governance looks like
The difference between documentation compliance and structural governance is the difference between a photograph and a surveillance camera.
Register as gatekeeper
No AI system is deployed without formal incorporation into the governance framework.
Classification as control condition
The outcome of classification determines which controls apply.
Control as embedded structure
Activation conditions, supervisory responsibilities, and incident criteria are part of the governance structure.
Evidence as byproduct
When governance is correctly embedded, evidence is an automatically generated output of the governance state.
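Building on the register sketch above, the four principles can be read as a single deployment gate. Again a hedged sketch under assumed names, not a definitive implementation:

```python
# Illustrative rule set: which controls each risk class requires.
REQUIRED_CONTROLS = {
    "high-risk":    {"human-oversight-checkpoint", "decision-audit-trail"},
    "limited-risk": {"decision-audit-trail"},
    "minimal-risk": set(),
}

def deploy(entry: "RegisterEntry") -> str:
    # Register as gatekeeper: a system without a designated owner never ships.
    if not entry.owner:
        raise ValueError(f"{entry.system_id}: no designated owner")
    # Classification as control condition: the risk class selects the controls.
    missing = REQUIRED_CONTROLS[entry.risk_class] - set(entry.controls)
    if missing:
        raise ValueError(f"{entry.system_id}: missing controls {sorted(missing)}")
    # Control as embedded structure: approval records itself as an event.
    entry.events.append({"event": "deployment-approved"})
    # Evidence as byproduct: the approved state yields a reproducible fingerprint.
    return entry.evidence()
```

Nothing in this flow produces evidence as a separate activity. The evidence exists because the gate was passed.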
Frequently asked questions
Does the AI Act already apply to my institution?
Yes. The AI Act entered into force in August 2024 and its obligations phase in through 2026 and 2027. DNB has made AI governance part of its supervisory plan for 2026, so the question is no longer whether it applies but whether your governance is ready.
What is the difference between an AI register and a governance framework?
A register is an inventory. A governance framework connects that inventory to classification, ownership, active controls and reproducible evidence. The register is necessary, but on its own it is not sufficient.
How do I know which systems are high risk?
Annex III of the AI Act defines the high-risk categories. For financial institutions, creditworthiness assessment of natural persons and automated risk assessment for insurance products are the most prominent entries, but every system in the register must be classified individually.
What if an AI system is procured from an external vendor?
Outsourcing does not transfer responsibility. Under DORA the institution remains accountable for third-party ICT risk, and under the AI Act the deploying institution carries its own obligations regardless of who built the model.
How quickly can a governance structure be set up?
That depends on the number of systems in use and the maturity of the existing risk framework. The Executive Regulatory Architecture Session described below is designed as the structured starting point.
The next step
An Executive Regulatory Architecture Session provides CROs, Heads of Risk, and Compliance Leaders with a structured analysis of your institution's supervisory exposure on AI governance.