A decision-tree guide to EU AI Act risk classification - four risk tiers, Annex III categories, edge cases, and the classification process.
Every AI system deployed in or affecting the European Union now sits somewhere on a four-tier risk spectrum. That placement is not a theoretical exercise. It dictates whether an organisation faces no regulatory burden at all, a narrow set of transparency obligations, a comprehensive conformity regime costing hundreds of thousands of euros, or an outright ban. Getting the classification wrong in either direction carries material consequences: over-classify and resources drain into compliance infrastructure that was never required; under-classify and the organisation faces enforcement action, fines of up to 35 million euros or 7% of global annual turnover (whichever is higher), and reputational damage that no remediation plan can quickly undo [1].
The EU AI Act risk classification framework is, on its surface, straightforward. In practice, the boundaries between tiers are less crisp than the legislative text suggests. Systems that appear to sit neatly in one category can, upon closer inspection, straddle two. This guide provides a structured path through that complexity.
The Four Risk Tiers
The AI Act establishes four tiers of risk, each carrying a distinct set of obligations. Understanding what each tier demands is a prerequisite to accurate classification.
Unacceptable Risk (Prohibited)
Article 5 of the AI Act identifies AI practices considered so fundamentally threatening to safety, livelihoods, and rights that they are banned outright. These include social scoring systems that evaluate or classify people based on social behaviour or personal characteristics and lead to detrimental or disproportionate treatment (the final text covers private operators as well as public authorities), real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions), AI systems that exploit vulnerabilities of specific groups based on age, disability, or social or economic situation, and systems deploying subliminal techniques beyond a person's consciousness to materially distort behaviour in a way that causes harm [2].
No compliance pathway exists for prohibited systems. The only lawful response is to cease deployment.
High Risk
High-risk systems constitute the regulatory centre of gravity. They are subject to the most detailed obligations: risk management systems, data governance requirements, technical documentation, record-keeping, transparency provisions, human oversight mechanisms, and requirements for accuracy, robustness, and cybersecurity [3]. Two routes lead to a high-risk classification, explored in detail below.
This tier is where most enterprise compliance effort concentrates, and where classification errors carry the greatest cost.
Limited Risk
Systems classified as limited risk face transparency obligations only. The primary requirement is disclosure: users must be informed that they are interacting with an AI system. This tier captures chatbots, emotion recognition systems not falling under the prohibited category, deepfake generators, and AI systems that generate or manipulate text, audio, or image content [4].
Limited risk is the Act's mechanism for managing deception without imposing the full conformity apparatus.
Minimal Risk
The vast majority of AI systems fall here. Spam filters, AI-enabled video games, inventory management algorithms - these carry no specific obligations under the Act, though voluntary codes of conduct are encouraged [5].
Minimal risk is the default. A system only moves up the spectrum if it meets defined criteria for one of the higher tiers.
The Decision Tree: Classifying Step by Step
EU AI Act risk classification requires a sequential analysis. The following decision tree mirrors the logic of Articles 5, 6, and 7, and provides the same repeatable process that Enzai's classification workflows automate for enterprise teams working through dozens or hundreds of systems.
Step 1: Does the system fall under Article 5 prohibitions?
Review the system's purpose and mechanism against each prohibited practice. If the system performs real-time biometric identification in public spaces for law enforcement, manipulates persons through subliminal techniques causing harm, exploits specific vulnerabilities, or enables social scoring, it is prohibited.
If yes: the system is Unacceptable Risk. Stop here.
If no: proceed to Step 2.
Step 2: Is the system a safety component of a product covered by EU harmonisation legislation listed in Annex I, or is the system itself such a product?
Annex I lists existing EU product safety directives and regulations, including those covering machinery, toys, medical devices, civil aviation, motor vehicles, and railway systems [6]. If the AI system serves as a safety component within any of these regulated products, or if the system itself is the regulated product, it qualifies as high-risk under Article 6(1). Critically, these systems also require a third-party conformity assessment under the relevant sectoral legislation.
If yes: the system is High Risk via Article 6(1). Proceed to post-classification obligations.
If no: proceed to Step 3.
Step 3: Does the system fall within one of the use-case categories listed in Annex III?
Annex III enumerates eight areas of high-risk application. If the system's intended purpose maps to any of these categories, it is provisionally high-risk under Article 6(2). This is where the analysis demands the most precision, as Annex III categories are broad and fact-specific.
If yes: proceed to Step 4.
If no: proceed to Step 5.
Step 4: Does the Article 6(3) exception apply?
Article 6(3) introduced a significant qualification in the final text. Even if a system falls within Annex III, it is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Specifically, a system is exempt if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task to an assessment relevant to the Annex III use cases [7]. One carve-out to the carve-out: an Annex III system that performs profiling of natural persons is always considered high-risk, whatever else it does.
The provider must document why the exception applies and register the system in the EU database before placing it on the market. National authorities can request that documentation, and if they disagree with the assessment, the system reverts to high-risk status.
If Article 6(3) applies: the system is not high-risk, but documentation and registration obligations remain. Classify as limited or minimal risk based on transparency criteria.
If Article 6(3) does not apply: the system is High Risk via Article 6(2). Proceed to post-classification obligations.
Step 5: Does the system require transparency disclosures?
If the system interacts directly with natural persons (chatbots), generates or manipulates image, audio, or video content (deepfakes, generative AI), or performs emotion recognition or biometric categorisation outside prohibited contexts, it falls under limited risk transparency obligations [4].
If yes: the system is Limited Risk.
If no: the system is Minimal Risk. No specific obligations apply.
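The sequence above can also be expressed as a short piece of code, which some teams find useful as a sanity check when triaging an inventory. The sketch below is illustrative only: the `RiskTier` enum, the `SystemFacts` fields, and the `classify` function are hypothetical names, and each input stands in for a judgment that in practice requires the multi-disciplinary review described later in this guide.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class SystemFacts:
    """Answers to the five decision-tree questions for one AI system."""
    article_5_prohibited: bool       # Step 1: any Article 5 practice?
    annex_i_safety_component: bool   # Step 2: safety component of, or itself, an Annex I product
    annex_iii_category: str | None   # Step 3: matching Annex III category, if any
    article_6_3_exemption: bool      # Step 4: documented Article 6(3) exception claimed?
    transparency_trigger: bool       # Step 5: chatbot, generated content, emotion recognition?

def classify(facts: SystemFacts) -> RiskTier:
    """Walk Steps 1-5 in order; a later step is only reached when the
    earlier ones have not settled the classification."""
    if facts.article_5_prohibited:            # Step 1: Article 5
        return RiskTier.PROHIBITED
    if facts.annex_i_safety_component:        # Step 2: Article 6(1)
        return RiskTier.HIGH
    if facts.annex_iii_category is not None:  # Step 3: Article 6(2)
        if not facts.article_6_3_exemption:   # Step 4
            return RiskTier.HIGH
        # Exempt systems still carry documentation and registration duties.
    if facts.transparency_trigger:            # Step 5: Article 50
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a candidate-ranking tool in the employment category, no exemption.
facts = SystemFacts(
    article_5_prohibited=False,
    annex_i_safety_component=False,
    annex_iii_category="Employment (Annex III, point 4)",
    article_6_3_exemption=False,
    transparency_trigger=False,
)
assert classify(facts) is RiskTier.HIGH
```

The early returns mirror the Act's own ordering: a prohibition finding makes every later question moot, and an Annex I match settles the tier before Annex III is ever consulted.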
The classification is only as reliable as the rigour applied at each node. For teams working through multiple systems, the following worksheet can be copied and completed for each AI system in the inventory:
| Step | Question | Your answer | Resulting tier | Owner |
|---|---|---|---|---|
| 1 | Does the system fall under Article 5 prohibitions? | Yes / No | If yes: Prohibited | |
| 2 | Is it a safety component of an Annex I product? | Yes / No | If yes: High risk (Article 6(1)) | |
| 3 | Does it fall within an Annex III use-case category? | Yes / No / Category: ___ | If yes: Provisionally high risk | |
| 4 | Does the Article 6(3) exception apply? | Yes / No / Rationale: ___ | If yes: Not high risk | |
| 5 | Does it require transparency disclosures? | Yes / No | If yes: Limited risk | |
| Final | Classification result | ___ | ___ | |
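A completed row for a hypothetical CV screening tool that ranks candidates might look like the sketch below; every value, including the owner, is illustrative rather than prescribed by the Act.

```python
# One completed worksheet row for a hypothetical candidate-ranking tool.
worksheet_row = {
    "system": "CV screening tool (ranks candidates)",
    "step_1_article_5": "No",
    "step_2_annex_i": "No",
    "step_3_annex_iii": "Yes - Category 4 (employment)",
    "step_4_article_6_3": "No - ranking influences the human assessment",
    "step_5_transparency": "n/a (already high risk)",
    "classification": "High risk (Article 6(2))",
    "owner": "Head of Talent Acquisition",  # hypothetical accountable owner
}
```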
Annex III Categories: Concrete Enterprise Examples
Annex III is the gateway to high-risk classification for most enterprise AI systems. Each of the eight areas deserves examination with concrete examples, as the legislative text uses broad language that benefits from practical illustration.
1. Biometric Identification and Categorisation
This covers remote biometric identification systems (excluding real-time use in publicly accessible spaces for law enforcement, which Article 5 prohibits) and biometric categorisation systems that assign natural persons to categories based on biometric data. An enterprise example: an airport deploying facial recognition for automated boarding verification, or a retailer using biometric categorisation to infer demographic attributes of shoppers.
2. Management and Operation of Critical Infrastructure
AI systems used as safety components in the management and operation of road traffic, water, gas, heating, and electricity supply, as well as digital infrastructure. Examples include an AI system managing electricity grid load balancing, a predictive maintenance algorithm for water treatment facilities, or a traffic signal optimisation system deployed by a municipal authority.
3. Education and Vocational Training
Systems that determine access to or assignment within educational institutions, or that evaluate learning outcomes. A university using an AI-powered admissions scoring tool, an automated essay grading system, or a platform that determines student placement into academic tracks all qualify.
4. Employment, Workers Management, and Access to Self-Employment
AI systems used in recruitment, selection, hiring decisions, task allocation, performance monitoring, or termination decisions. This is one of the most frequently triggered categories in enterprise contexts. An AI-powered CV screening tool that ranks or filters candidates, a workforce scheduling algorithm that allocates shifts based on predicted productivity, or a performance evaluation system that flags employees for potential dismissal all fall within scope.
5. Access to and Enjoyment of Essential Private and Public Services
This encompasses AI systems used to evaluate eligibility for public benefits, credit scoring, risk assessment in life and health insurance, and emergency services dispatch prioritisation. A credit scoring algorithm used by a bank to determine loan eligibility, an AI system that triages applications for social housing, or an insurance underwriting model that sets premiums based on individual risk profiles are all high-risk under this category.
6. Law Enforcement
AI systems used by law enforcement for individual risk assessment, polygraph analysis, evaluation of evidence reliability, crime prediction concerning natural persons, and profiling in criminal investigations. A predictive policing tool that identifies persons likely to commit offences, or an AI system that assesses the reliability of witness testimony, are covered here.
7. Migration, Asylum, and Border Control
Systems used to assess security risks posed by persons entering the EU, to assist in examination of asylum applications, or to detect, recognise, or identify persons in the context of migration. A risk-scoring tool used at border control to flag travellers for additional screening, or an AI system that analyses asylum seekers' claims for consistency, qualify.
8. Administration of Justice and Democratic Processes
AI systems intended to assist judicial authorities in researching and interpreting facts and law, or systems used to influence the outcome of elections. A legal research tool that recommends case outcomes to judges, or a system used to micro-target political advertising based on voter profiling, fall within this category.
The specificity of the intended purpose, not the underlying technology, determines classification.
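For triage at Step 3, it can help to keep the eight areas in a simple lookup keyed by illustrative purposes. The sketch below paraphrases the examples above rather than quoting the regulation, and the keyword matcher is a deliberately crude first pass, never a substitute for reading Annex III itself.

```python
# Illustrative mapping of Annex III areas to example intended purposes.
# Paraphrased for triage only; always verify against the regulation's text.
ANNEX_III_AREAS: dict[int, tuple[str, list[str]]] = {
    1: ("Biometric identification and categorisation",
        ["facial recognition boarding", "demographic categorisation of shoppers"]),
    2: ("Critical infrastructure",
        ["grid load balancing", "water treatment predictive maintenance"]),
    3: ("Education and vocational training",
        ["admissions scoring", "automated essay grading", "academic tracking"]),
    4: ("Employment and workers management",
        ["CV screening and ranking", "shift allocation", "dismissal flagging"]),
    5: ("Essential private and public services",
        ["credit scoring", "insurance risk pricing", "benefits triage"]),
    6: ("Law enforcement",
        ["predictive policing", "evidence reliability assessment"]),
    7: ("Migration, asylum and border control",
        ["traveller risk scoring", "asylum claim consistency analysis"]),
    8: ("Administration of justice and democratic processes",
        ["case outcome recommendation", "voter micro-targeting"]),
}

def candidate_areas(purpose_keywords: set[str]) -> list[int]:
    """Return Annex III areas whose examples mention any keyword (crude triage)."""
    return [n for n, (_, examples) in ANNEX_III_AREAS.items()
            if any(k in e for e in examples for k in purpose_keywords)]

assert 4 in candidate_areas({"CV screening"})
```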
Edge Cases and Grey Areas
The boundaries of the AI Act's risk tiers are tested most sharply in several recurring scenarios that organisations using platforms like Enzai for AI governance consistently encounter.
Emotion Detection: Context Determines Everything
Emotion recognition in the workplace and educational settings triggers specific prohibitions or high-risk classification. But emotion detection deployed in a customer service context, such as an AI tool that analyses caller sentiment to route support tickets, does not fall under the workplace prohibition. It may still qualify as limited risk under the transparency provisions, requiring disclosure to the caller. The determining factor is not the technology but the context of deployment and the power asymmetry between the parties involved [8].
AI Recommendations Versus AI Decisions
A system that recommends a decision for human review sits in a different position than one that autonomously executes that decision. An AI tool that ranks job candidates and presents a shortlist to a human recruiter may still be high-risk under Annex III (employment category), but the nature of human oversight affects the specific obligations that apply. Conversely, a system that automatically rejects loan applications without meaningful human intervention faces the full weight of high-risk requirements. The critical question is whether the human can realistically and routinely override the AI output, or whether the system's recommendation functions as a de facto decision [9].
The Article 6(3) Self-Determination Path
Article 6(3) offers a route out of high-risk classification, but it is narrower than many organisations initially assume. A CV screening tool that merely reformats applications into a standardised layout might qualify for the exception, as it performs a preparatory task. The same tool configured to rank candidates does not. Organisations must resist the temptation to characterise their system's function in the most favourable light. National authorities retain the power to reclassify, and the burden of documentation falls on the provider. Enzai's classification workflows are designed to stress-test Article 6(3) claims against the regulation's criteria before an organisation commits to a position.
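One way to stress-test a 6(3) position before committing to it is to force the claim through the four statutory grounds and require a written rationale for each. A minimal sketch, with hypothetical names and the grounds paraphrased from Article 6(3):

```python
from dataclasses import dataclass, field

# The four Article 6(3) grounds, paraphrased. An exemption claim should rest
# on at least one ground, each backed by a written rationale the provider
# can produce for a national authority.
GROUNDS = (
    "performs a narrow procedural task",
    "improves the result of a previously completed human activity",
    "detects decision-making patterns without replacing or influencing human assessment",
    "performs a preparatory task to an assessment under Annex III",
)

@dataclass
class ExemptionClaim:
    rationales: dict[str, str] = field(default_factory=dict)  # ground -> written rationale

    def is_supportable(self) -> bool:
        """Supportable only if every cited ground is a real Article 6(3)
        ground and carries a non-empty written rationale."""
        return bool(self.rationales) and all(
            ground in GROUNDS and rationale.strip()
            for ground, rationale in self.rationales.items()
        )

# A ranking CV screener fails: no rationale survives scrutiny, because
# ranking influences the human assessment rather than merely preparing it.
claim = ExemptionClaim({"performs a narrow procedural task": ""})
assert not claim.is_supportable()
```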
General-Purpose AI Models in High-Risk Systems
When a general-purpose AI model is integrated into a high-risk system, the obligations attach to the deployer who configures the system for the specific high-risk use case, not solely to the model provider. An organisation that fine-tunes a foundation model for credit scoring inherits the high-risk obligations, even if the underlying model was developed by a third party [10].
Where a system sits on the risk spectrum is rarely as obvious as it first appears.
The Classification Process: Governance and Documentation
EU AI Act risk classification is not a task for a single function. It requires structured input from multiple disciplines and a documentation trail that can withstand regulatory scrutiny.
Who Should Be Involved
At minimum, the classification process should involve legal counsel with regulatory expertise, the technical team responsible for the system's design and intended purpose, a domain expert who understands the operational context of deployment, a risk or compliance officer, and, where the system affects employees, worker representatives or human resources leadership.
A common failure mode is delegating classification entirely to legal or entirely to engineering. Legal teams may lack the technical understanding to assess whether a system truly makes autonomous decisions; engineering teams may underestimate the regulatory significance of a system's operational context.
How to Document
The classification rationale must be recorded in a form that can be presented to a national supervisory authority. Documentation should include a clear description of the system's intended purpose, the specific articles and annexes considered, the reasoning for the tier assigned, an analysis of Article 6(3) where applicable, identification of the individuals and roles involved in the assessment, and the date of the classification and any planned review triggers.
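Captured as a structured record, those elements are harder to omit. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    """One classification decision, in a form presentable to a supervisory authority."""
    system_name: str
    intended_purpose: str             # clear description of the system's intended purpose
    provisions_considered: list[str]  # e.g. ["Article 5", "Article 6(2)", "Annex III(4)"]
    tier_assigned: str                # prohibited / high / limited / minimal
    tier_rationale: str               # reasoning for the tier assigned
    article_6_3_analysis: str | None  # analysis where an exemption was considered
    assessors: list[str]              # individuals and roles involved in the assessment
    assessed_on: date = field(default_factory=date.today)
    review_triggers: list[str] = field(default_factory=list)  # e.g. ["purpose change", "annual review"]
```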
Handling Disagreements
Where internal stakeholders disagree on classification, the conservative position should prevail pending further analysis. An organisation that provisionally classifies a system as high-risk and later downgrades it faces far less risk than one that classifies low and is subsequently found non-compliant. Disagreements should be recorded in the documentation, as they demonstrate the rigour of the process and protect the organisation in the event of a regulatory inquiry.
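If tiers are ordered from least to most restrictive, the conservative-position rule reduces to taking the maximum. A minimal sketch, assuming a hypothetical ordering:

```python
from enum import IntEnum

class Tier(IntEnum):
    # Ordered so a higher value means a more restrictive provisional stance.
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    PROHIBITED = 3

def provisional_tier(assessments: list[Tier]) -> Tier:
    """Pending resolution of a disagreement, adopt the most restrictive view."""
    return max(assessments)

# Legal says high risk, engineering says limited: treat as high risk for now.
assert provisional_tier([Tier.HIGH, Tier.LIMITED]) == Tier.HIGH
```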
Classification is a governance act, not an administrative formality.
What Happens After Classification
The risk tier assigned to a system determines the compliance obligations that follow. The gap between tiers is substantial.
Prohibited Systems
The obligation is absolute: do not develop, deploy, or make available on the EU market. Existing systems must be decommissioned. There is no transition period for prohibited practices [2].
High-Risk Systems
Providers must implement a risk management system that operates throughout the system's lifecycle. Data governance practices must ensure training, validation, and testing datasets are relevant, representative, and free from errors. Comprehensive technical documentation must be prepared before the system is placed on the market. The system must be designed to enable automatic logging of events. Transparency requirements demand clear instructions for deployers. Human oversight measures must allow natural persons to understand, monitor, and override the system. Requirements for accuracy, robustness, and cybersecurity must be met and maintained [3].
Deployers of high-risk systems carry their own obligations: conducting fundamental rights impact assessments, ensuring human oversight is operationally effective, monitoring the system in accordance with the provider's instructions, and reporting serious incidents.
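Some teams track these requirement areas as a per-system checklist, split by role. The labels below paraphrase Articles 8-15 and the deployer duties above; they are a working aid, not the regulation's wording.

```python
# High-risk requirement areas, paraphrased. Each item maps to done / not done
# in a given system's compliance plan.
HIGH_RISK_CHECKLIST = {
    "provider": [
        "risk management system across the lifecycle",
        "data governance for training, validation and test sets",
        "technical documentation before market placement",
        "automatic event logging",
        "instructions for deployers (transparency)",
        "human oversight measures",
        "accuracy, robustness and cybersecurity",
    ],
    "deployer": [
        "fundamental rights impact assessment",
        "operationally effective human oversight",
        "monitoring per provider instructions",
        "serious incident reporting",
    ],
}

def open_items(done: set[str], role: str) -> list[str]:
    """Checklist items for the given role not yet marked complete."""
    return [item for item in HIGH_RISK_CHECKLIST[role] if item not in done]
```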
Limited-Risk Systems
The primary obligation is transparency. Users must be informed they are interacting with AI. Content generated or manipulated by AI must be labelled as such. Emotion recognition systems must disclose their operation to affected persons [4].
Minimal-Risk Systems
No binding obligations apply. Organisations are encouraged to adopt voluntary codes of conduct addressing, among other topics, environmental sustainability and AI literacy [5].
The obligations at each tier are not aspirational. They are enforceable requirements with defined penalties for non-compliance.
Moving From Classification to Compliance
Determining a system's risk tier is the first act in a longer compliance programme, but it is the act upon which everything else depends. An inaccurate classification cascades through every subsequent decision, from the resources allocated to documentation through to the conformity assessment pathway selected.
Organisations managing portfolios of AI systems across multiple jurisdictions and business units face this challenge at scale. Tracking classification decisions, monitoring for regulatory updates that might shift a system's tier, and maintaining an auditable record of the reasoning behind each determination requires purpose-built infrastructure.
For organisations seeking to operationalise EU AI Act risk classification across their AI inventory, Enzai provides the governance framework to classify, document, and monitor AI systems against evolving regulatory requirements. Request a demo to see how structured classification workflows reduce both compliance risk and wasted effort.
References
[1] Regulation (EU) 2024/1689, Article 99 - Penalties.
[2] Regulation (EU) 2024/1689, Article 5 - Prohibited artificial intelligence practices.
[3] Regulation (EU) 2024/1689, Articles 8-15 - Requirements for high-risk AI systems.
[4] Regulation (EU) 2024/1689, Article 50 - Transparency obligations for providers and deployers of certain AI systems.
[5] Regulation (EU) 2024/1689, Article 95 - Codes of conduct for voluntary application of specific requirements.
[6] Regulation (EU) 2024/1689, Annex I - Union harmonisation legislation.
[7] Regulation (EU) 2024/1689, Article 6(3) - Classification rules for high-risk AI systems.
[8] Regulation (EU) 2024/1689, Recitals 44-46 - Scope of emotion recognition prohibitions.
[9] Regulation (EU) 2024/1689, Article 14 - Human oversight.
[10] Regulation (EU) 2024/1689, Articles 51-56 - Obligations for providers of general-purpose AI models.