EU AI Act High-Risk Categories
The EU AI Act defines two routes to a high-risk classification: standalone systems listed in Annex III (covering biometrics, critical infrastructure, employment, education, law enforcement, and other named uses), and AI embedded in regulated products listed in Annex I (covering medical devices, vehicles, machinery, and other regulated product categories).
The EU AI Act treats certain AI systems as high-risk because of their potential to affect fundamental rights, safety, or critical societal functions. Annex III lists eight standalone categories where the system itself is classified as high-risk based on its intended purpose, while Annex I covers AI components embedded in products already subject to existing EU sectoral safety legislation. Following the Digital Omnibus deal of 7 May 2026, Annex III obligations apply from 2 December 2027 and Annex I obligations from 2 August 2028. Both categories require conformity assessment, technical documentation, post-market monitoring, and adherence to the AI Act's risk management and human oversight requirements.
A retail bank's CV-screening AI is classified as Annex III high-risk under the employment and worker management category. The same bank's AI-driven fraud-detection model embedded in payment-processing infrastructure may also qualify as Annex I high-risk if the underlying payment system is regulated under existing EU financial-services safety law. Each system requires its own Article 6 classification and a separate conformity assessment.
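The two-track classification described above can be sketched as a simple lookup: one check against the system's intended purpose (Annex III) and one against the sectoral law governing the product it is embedded in (Annex I), with both able to apply at once. This is an illustrative sketch only — the category and product-law names below are shorthand invented for the example, not the legal text, and a real Article 6 assessment turns on the Act's full definitions and exemptions.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shorthand for the Annex III standalone high-risk use areas.
ANNEX_III_CATEGORIES = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border",
    "justice_democracy",
}

# Illustrative shorthand for Annex I sectoral product-safety legislation.
ANNEX_I_PRODUCT_LAWS = {
    "medical_devices",
    "machinery",
    "vehicles",
    "toys",
}

@dataclass
class AISystem:
    name: str
    intended_purpose: str                    # mapped to an Annex III area, if any
    embedded_product_law: Optional[str] = None  # governing EU safety law, if any

def classify(system: AISystem) -> list:
    """Return every high-risk classification that applies (possibly both)."""
    result = []
    if system.intended_purpose in ANNEX_III_CATEGORIES:
        result.append("Annex III high-risk")
    if system.embedded_product_law in ANNEX_I_PRODUCT_LAWS:
        result.append("Annex I high-risk")
    return result

# The CV-screening example from the text: high-risk by intended purpose alone.
cv_screener = AISystem("CV screening", "employment")
print(classify(cv_screener))  # ['Annex III high-risk']
```

Note that the function returns a list rather than a single label: as the bank example shows, an organisation can operate systems falling under either annex, and each one needs its own classification and conformity assessment.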
