
AI Act implementation for financial institutions

Checklist for Compliance Leaders

For Heads of Compliance and Regulatory Affairs at financial institutions.

The deadline is closer than you think

2 August 2026. That is when the requirements for high-risk AI systems under the AI Act become fully applicable.

Sounds far away. It isn't.

Building governance structures, classifying systems, establishing controls, making evidence reproducible — this takes time. Institutions starting in Q2 2026 will not meet the deadline.

The question is not whether you should start. The question is where you stand now and what is still missing.

Step 1

Inventory your AI systems completely

Before you can classify anything, you need to know what exists.

  • Overview of all systems making automated or assisted decisions
  • Including systems from external vendors
  • Including internally developed systems
  • For each system: purpose, owner, using department, decisions it influences
  • Including systems in pilot or test

An incomplete register is the most common reason institutions have governance gaps.
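What a register entry can look like, sketched in Python. The field names are our own; the AI Act does not prescribe a register format:

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        name: str
        purpose: str                     # what it does, in business terms
        owner: str                       # accountable individual, not a team
        using_department: str
        decisions_influenced: list[str]  # e.g. ["credit limit", "pricing tier"]
        vendor: str | None = None        # None for internally developed systems
        stage: str = "production"        # pilot and test systems belong here too

One record per system, including pilots and vendor systems. An entry you cannot fill in completely is itself a finding.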

Step 2

Classify each system

Classification determines which obligations apply. For financial institutions, these categories warrant scrutiny first:

  • Credit assessment systems
  • Risk assessment and pricing for insurance
  • AML analysis and transaction monitoring
  • Critical infrastructure systems
  • Personnel management systems (internal)

Classification is not a one-time event. After any significant change to a system, its classification must be reviewed.
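A classification is a dated decision with a rationale, not just a label. A minimal sketch in Python; the field names and the re-assessment mechanism are our own choices:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Classification:
        system: str
        category: str    # e.g. "high-risk" or "not high-risk"
        rationale: str   # which category applies, and why
        assessed_on: date
        superseded_by: "Classification | None" = None

    def reclassify(old: Classification, new: Classification) -> Classification:
        # A significant system change produces a new dated record; the old
        # record is kept, so the classification history stays traceable.
        old.superseded_by = new
        return new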

Step 3

Establish a risk management system for each high-risk system

The AI Act requires a continuous risk management system — not a one-time risk assessment.

  • Identification of known and foreseeable risks
  • Analysis of use in line with the intended purpose and of reasonably foreseeable misuse
  • Mitigation measures per identified risk
  • Residual risks evaluated after mitigation
  • Review cycle established
  • Owner designated per system

The risk management system is not a document. It is a process.
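What "process, not document" can look like in practice: each risk is a record with an owner and a review clock. A Python sketch, assuming a 90-day cycle of our own choosing:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class RiskEntry:
        risk: str                    # known or foreseeable risk
        scenario: str                # intended use or foreseeable misuse
        mitigation: str
        residual_risk: str           # evaluated after mitigation
        owner: str
        last_reviewed: date
        review_cycle_days: int = 90  # assumption; set per internal policy

        def review_due(self, today: date) -> bool:
            return today >= self.last_reviewed + timedelta(days=self.review_cycle_days)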

Step 4

Ensure technical documentation before deployment

For each high-risk AI system, documentation must be available before use.

  • General description and intended purpose
  • Design and architecture description
  • Training data: origin, scope, quality measures
  • Validation and test results
  • Description of human oversight
  • Cybersecurity measures
  • For external systems: documentation from the provider

For externally procured models, obtaining sufficient documentation is a practical bottleneck.
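A deployment gate can be as simple as a completeness check against the required sections. A sketch; the section names paraphrase the list above and are not official labels:

    REQUIRED_SECTIONS = {
        "general_description", "design_and_architecture", "training_data",
        "validation_and_test_results", "human_oversight", "cybersecurity",
    }

    def missing_documentation(provided: set[str]) -> set[str]:
        # Deployment stays blocked while this set is non-empty.
        return REQUIRED_SECTIONS - provided

    # For an externally procured model, 'provided' is what the vendor delivers.
    print(missing_documentation({"general_description", "training_data"}))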

Step 5

Establish human oversight

The AI Act requires effective human oversight — not a checkbox, but a structural arrangement.

  • For each system, it is documented who fulfills the oversight role
  • Criteria for when human intervention is required
  • Overseers have sufficient understanding of the system and its limitations
  • Override capability is guaranteed
  • Overseers are demonstrably trained
  • Escalation path documented

'Human oversight' does not mean an employee merely looks at the output. It means that employee can understand the output, assess it, and intervene.
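Intervention criteria only work if they are explicit enough to route decisions automatically, even when the review itself is human. A sketch with illustrative thresholds, not regulatory values:

    from dataclasses import dataclass

    @dataclass
    class OversightPolicy:
        overseer_role: str               # who fulfills the oversight role
        confidence_floor: float = 0.80   # below this, route to human review
        review_all_adverse: bool = True  # every adverse decision is reviewed

    def requires_human_review(p: OversightPolicy,
                              confidence: float, adverse: bool) -> bool:
        return confidence < p.confidence_floor or (adverse and p.review_all_adverse)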

Step 6

Ensure automatic logging

High-risk AI systems must automatically log events.

  • The system logs the period of each use
  • Reference databases against which input data is checked are logged
  • Input data is stored
  • Identification of involved persons is recorded
  • Logs are tamper-proof
  • Retention period is established

For external systems: verify the provider offers logging that meets requirements.
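'Tamper-proof' does not require exotic tooling: a hash chain, where each entry commits to the previous entry's hash, makes any later edit detectable. A minimal sketch:

    import hashlib, json
    from datetime import datetime, timezone

    def append_entry(log: list[dict], event: dict) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,  # input data, involved persons, etc.
            "prev_hash": log[-1]["hash"] if log else "0" * 64,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)

    def verify(log: list[dict]) -> bool:
        prev = "0" * 64
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

Editing or deleting any entry after the fact breaks verify() from that point onward.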

Step 7

Register high-risk systems in the EU database

The registration obligation applies to providers and, in certain categories, to deployers.

  • Determine for each system whether the registration obligation applies
  • Required information is available
  • Registration planned before deployment
  • Changes are communicated promptly

Step 8

Ensure reproducible evidence

Reproducible evidence is generated from a defined governance state — not assembled from loose documents.

  • The governance state is retrievable for any point in time
  • Changes are dated and traceable
  • Reviews are documented with date and outcome
  • Incidents are logged including context
  • Evidence is demonstrably not assembled after the fact

The difference between 'we have this arranged' and 'we can prove we have this arranged' is precisely what supervisors assess.
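One way to make 'generated from a defined governance state' demonstrable: fingerprint the state at generation time and stamp every evidence report with it. A sketch:

    import hashlib, json
    from datetime import date

    def state_fingerprint(governance_state: dict) -> str:
        # Deterministic hash of the full state: register, classifications,
        # risk entries, reviews.
        canonical = json.dumps(governance_state, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def evidence_header(governance_state: dict) -> dict:
        return {
            "generated_on": date.today().isoformat(),
            "generated_from_state": state_fingerprint(governance_state),
        }

The question 'which state was this report based on?' then has a checkable answer.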

Most common mistakes

Treating classification as a one-time project

Systems evolve. Classification must keep pace.

Placing governance with IT

AI governance is a risk and compliance responsibility.

Treating external vendor systems as 'their problem'

As the deployer, you remain responsible.

Waiting for definitive guidance

The AI Act is in force. Core requirements are fixed.

Building evidence after the fact

Supervisors assess whether evidence is reproducible.

Frequently asked questions

Do we need a conformity assessment?

For most high-risk systems, an internal conformity assessment is sufficient. An external assessment by a notified body is required for a limited category of systems.

What is the difference between provider and deployer?

A provider developed the AI system. A deployer uses it. Financial institutions are often deployers for externally procured systems and providers for internally developed systems.

What if we are unsure whether a system is high risk?

When in doubt: treat the system as high risk until classification is completed. Over-governance is more defensible than under-governance during supervision.

Where do you stand?

Knowing where you stand is the first step. A structured analysis of your supervisory exposure is the second.

Request an Executive Session

Download Whitepaper