EU AI Act Compliance: The Enterprise Implementation Guide

A practical enterprise guide to EU AI Act compliance - risk classification, high-risk obligations, enforcement timeline, and what to do before August 2026.

Belfast

14 min read time

Topics

AI Governance
EU AI Act
Compliance
Risk Classification


The EU AI Act is no longer forthcoming. It is here. The prohibition on unacceptable-risk AI systems has been enforceable since 2 February 2025. General-purpose AI model obligations took effect on 2 August 2025. And the full suite of high-risk system requirements - risk management, technical documentation, human oversight, conformity assessment - takes effect on 2 August 2026.[1] That is four months away.

For enterprise compliance teams, the challenge is not understanding the regulation in the abstract. It is operationalising it across dozens or hundreds of AI systems, each with different risk profiles, different providers, and different deployment contexts. The volume of legal commentary on the Act is substantial; practical implementation guidance is scarce.

This guide bridges that gap. It provides a structured approach to EU AI Act compliance for enterprise teams - covering risk classification, the obligations that attach at each tier, the enforcement timeline, and a sequenced action plan for the months ahead.

The Enforcement Timeline

The Act entered into force on 1 August 2024, but obligations are phased across multiple tranches. Understanding which obligations are already active and which are approaching is the starting point for any compliance programme.


Date - What applies

2 February 2025 - Prohibited AI practices (Article 5). AI literacy obligation (Article 4). Already in force.

2 August 2025 - GPAI model obligations (Articles 51-56). Penalty and enforcement framework (Article 99). Governance structures (Chapter VII). Already in force.

2 August 2026 - Full high-risk AI system obligations (Articles 9-15). Conformity assessment (Article 43). Transparency obligations (Article 50). Registration in EU database (Article 71).

2 August 2027 - GPAI models placed on the market before August 2025 must achieve compliance.

The Digital Omnibus proposed in November 2025 would extend the Annex III high-risk deadline to December 2027 and the Annex I embedded-product deadline to August 2028.[2] However, these proposals are not yet law - trilogue negotiations between Parliament, Council and Commission are ongoing, and the outcome is uncertain. Enterprises should plan for the August 2026 date and treat any extension as contingency, not baseline.

Risk Classification: Where Does Each AI System Fall?

The Act's regulatory architecture is built on a four-tier risk classification system. Every AI system an enterprise builds, buys, or deploys must be classified against these tiers.

Unacceptable risk: prohibited practices (Article 5)

Eight categories of AI use are prohibited outright, with fines up to 35 million euros or 7% of global annual turnover.[3] The most relevant for enterprises:

  • Emotion recognition in the workplace or education - AI systems that infer emotional states of employees or students, except where used for safety or medical purposes

  • Biometric categorisation by sensitive attributes - Systems that categorise individuals by race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation using biometric data

  • Social scoring - Systems used by public authorities to evaluate individuals over time based on social behaviour, leading to disproportionate treatment

  • Subliminal manipulation - AI using techniques below the threshold of consciousness to distort behaviour in ways likely to cause significant harm

Enterprise action: audit all AI systems immediately for proximity to these categories. HR tools using emotion analysis, biometric systems, and behavioural scoring tools require particular scrutiny. If a system cannot be clearly distinguished from a prohibited practice, discontinue or redesign it. This obligation has been enforceable since February 2025.

High risk: the core compliance obligation

High-risk classification is triggered through two routes.[4]

Route 1 - Safety components (Article 6(1)): AI systems that are a safety component of products already regulated under EU sectoral legislation (medical devices, machinery, aviation, automotive, pressure equipment, and others listed in Annex I), where those products require third-party conformity assessment.

Route 2 - Annex III standalone systems (Article 6(2)): AI systems deployed in eight sensitive use-case categories:


Category - Examples

Biometrics - Remote biometric identification; emotion recognition

Critical infrastructure - Safety components in water, gas, electricity, digital infrastructure, road traffic management

Education - Determining admissions; evaluating learning outcomes; monitoring cheating

Employment - CV screening; interview assessment; performance monitoring; task allocation; termination decisions

Essential services - Credit scoring; insurance risk assessment; social benefits eligibility; emergency dispatch

Law enforcement - Risk assessment of individuals; evidence evaluation; reoffending prediction

Migration and border management - Risk assessment of irregular migration; visa and asylum application examination

Justice and democracy - Assisting judicial fact-finding; systems that could influence elections

An important nuance: under Article 6(3), a provider may determine that a system falling within an Annex III category does not in fact pose a significant risk, provided the system is used for a narrow procedural purpose, does not influence substantive decisions, or the risk is demonstrably negligible given context. The provider must document that assessment before placing the system on the market or putting it into service, register the system in the EU database, and provide the documentation to national competent authorities on request. This is not a risk exemption - it is a documented, auditable claim that must withstand regulatory scrutiny.

Limited risk: transparency obligations (Article 50)

Systems that interact directly with individuals but do not fall into the high-risk categories must meet transparency requirements from August 2026:

  • Chatbots and conversational AI must disclose that the user is interacting with an AI system

  • Deepfake content must be labelled as AI-generated or manipulated

  • AI-generated text on matters of public interest requires disclosure

  • Emotion recognition and biometric categorisation systems must notify exposed individuals
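
To make the first of these concrete: a chatbot deployer might surface the disclosure at session start. A minimal sketch, assuming a hypothetical ChatSession wrapper - the Act requires the disclosure, not any particular wording or placement:

```python
# Minimal sketch: surfacing an Article 50 AI-interaction disclosure before
# the first reply in a chat session. The wrapper class and disclosure text
# are illustrative; the Act does not prescribe wording or placement.

AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically."
)

class ChatSession:
    def __init__(self, backend):
        self.backend = backend      # any callable: prompt -> reply
        self.disclosed = False

    def send(self, user_message: str) -> list[str]:
        messages = []
        if not self.disclosed:
            messages.append(AI_DISCLOSURE)  # disclose before first reply
            self.disclosed = True
        messages.append(self.backend(user_message))
        return messages

# Usage with a stub backend:
session = ChatSession(backend=lambda prompt: f"Echo: {prompt}")
print(session.send("Hello"))
```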

Minimal risk: no mandatory obligations

AI systems that do not fall into any of the above categories - spam filters, recommendation engines, AI-assisted grammar tools, AI in video games - carry no mandatory obligations. Voluntary codes of conduct under Article 95 are encouraged but not required.

High-Risk Obligations: What Compliance Actually Requires

For AI systems classified as high-risk, the Act imposes seven categories of mandatory requirements through Articles 9-15. These are not abstract principles; they are specific, auditable obligations with documentation requirements.

Risk management (Article 9)

Establish a continuous risk management process - not a one-time assessment but an ongoing cycle throughout the AI system's lifecycle. The process must identify and analyse foreseeable risks to health, safety, and fundamental rights; estimate and evaluate those risks; adopt mitigation measures (with design changes taking priority over operational controls); and test the system against the risk management plan before deployment. Residual risks must be documented and communicated to deployers.

Data governance (Article 10)

Training, validation, and test datasets must be relevant, representative, and, to the best extent possible, free of errors and complete for the intended purpose. Data governance practices must cover collection, labelling, processing, and retention. Bias detection and correction procedures are required, and personal data must be processed in accordance with GDPR.

Technical documentation (Article 11 and Annex IV)

Prepare comprehensive technical documentation before placing the system on the market. Annex IV specifies what must be included: system description and purpose, design specifications, training methodology and data characteristics, performance metrics, testing procedures, known limitations, cybersecurity measures, and a post-market monitoring plan. This documentation must be kept current and available to authorities on request.

Record-keeping (Article 12)

High-risk systems must have automatic logging capabilities built in, recording events relevant to identifying risks and substantial modifications throughout the system's lifetime. Deployers must retain logs for a minimum of six months. For agentic AI systems operating across multi-step reasoning chains, the logging requirement is particularly demanding - and particularly important.
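
What "automatic logging" can look like in practice: a minimal sketch of structured, append-only event logging with a retention horizon. The JSON-lines format and field names are our illustrative choices; Article 12 mandates the capability, not a format.

```python
# Minimal sketch: structured event logging for a high-risk AI system.
# Field names and JSON-lines output are illustrative; Article 12 requires
# automatic recording of lifecycle-relevant events, and deployers must
# retain logs for at least six months (Article 26(6)).
import json
import uuid
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months

def log_event(path: str, system_id: str, event_type: str, payload: dict) -> None:
    now = datetime.now(timezone.utc)
    record = {
        "event_id": str(uuid.uuid4()),
        "system_id": system_id,     # ties the event to the inventory entry
        "timestamp": now.isoformat(),
        "event_type": event_type,   # e.g. "inference", "override", "modification"
        "payload": payload,
        "retain_until": (now + RETENTION).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a human override of a model recommendation
log_event("audit.jsonl", "cv-screener-01", "override",
          {"user": "reviewer-7", "reason": "model output disregarded"})
```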

Transparency (Article 13)

Providers must supply instructions for use that enable deployers to understand the system's capabilities, limitations, accuracy metrics, intended purpose, and required human oversight measures. The instructions must be comprehensible to someone without specialist AI knowledge.

Human oversight (Article 14)

Systems must be designed to allow effective human oversight - meaning humans can understand the system's outputs, intervene or interrupt operation, disregard or override outputs, and prevent the system from overriding human decisions without prior authorisation. Providers build the capability; deployers assign and train qualified personnel to exercise it.

Accuracy, robustness, and cybersecurity (Article 15)

Systems must maintain declared accuracy levels throughout their lifecycle, resist errors and inconsistencies in inputs, withstand adversarial manipulation, and meet applicable cybersecurity standards.

Beyond Articles 9-15, providers must also establish a quality management system (Article 17), undergo conformity assessment (Article 43), issue an EU Declaration of Conformity (Article 47), affix CE marking (Article 48), register the system in the EU database (Article 71), implement post-market monitoring (Article 72), and report serious incidents within 15 days (Article 73).

The Provider-Deployer Split

The Act assigns obligations differently depending on whether an organisation is a provider (develops or commissions the AI system) or a deployer (uses it in a professional context).[5]

Providers bear the heavier burden: full compliance with Articles 9-15, conformity assessment, documentation, and post-market monitoring. Deployers must use systems according to the provider's instructions, assign human oversight to qualified personnel, retain logs, inform affected individuals, and report issues.

The critical boundary: an enterprise that takes a third-party AI system and substantially modifies it, changes its intended purpose, or places it on the market under its own brand becomes a provider under Article 25 and assumes all provider obligations. Fine-tuning a foundation model for a new use case, for instance, may cross this threshold. Organisations should map each AI system to the provider-deployer framework and document the classification.
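
The Article 25 boundary lends itself to a simple checklist. A hedged sketch of the three triggers named above - in practice, whether a modification is "substantial" is a legal judgement, not a boolean:

```python
# Minimal sketch: the three Article 25 triggers described above, expressed
# as a checklist. The inputs are simplifications; real determinations need
# legal review of each system and contract.
from dataclasses import dataclass

@dataclass
class ThirdPartySystemUse:
    rebranded_under_own_name: bool   # placed on market under own brand
    substantially_modified: bool     # substantial modification to the system
    intended_purpose_changed: bool   # repurposed for a new use case

def becomes_provider(use: ThirdPartySystemUse) -> bool:
    """True if any Article 25 trigger applies and the organisation
    assumes full provider obligations for the system."""
    return (use.rebranded_under_own_name
            or use.substantially_modified
            or use.intended_purpose_changed)

# Example: fine-tuning a foundation model for a new use case
print(becomes_provider(ThirdPartySystemUse(False, True, True)))  # True
```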

For third-party AI systems, deployer due diligence is essential. Request documentation confirming the system's risk classification and conformity status. Update vendor contracts to address incident reporting, log retention, human oversight specifications, and responsibility allocation. If a vendor cannot provide adequate documentation, that is a material compliance risk.

A common and underappreciated scenario: large SaaS providers (Salesforce, ServiceNow, Workday and others) increasingly embed AI capabilities that may qualify as high-risk under employment or essential services categories, yet may decline to provide conformity documentation. The deployer cannot outsource its Article 26 obligations in this situation. Deployers must either conduct their own assessment of the AI feature, restrict use to non-high-risk contexts, or cease use until adequate documentation is available. This is a material procurement risk that should be evaluated before contract renewal. Enzai maps provider-deployer obligations across your AI stack, including layered GPAI and high-risk system configurations.

GPAI Model Obligations

Many enterprise AI systems are built on general-purpose AI models - foundation models from Anthropic, OpenAI, Google, Meta and others. The Act imposes distinct obligations on GPAI model providers under Articles 51-56, effective since August 2025.[6]

All GPAI providers must maintain technical documentation, provide downstream integrators with information sufficient to comply with their own obligations, implement a copyright compliance policy, and publish a training data summary.

Models above the systemic risk threshold (cumulative training compute exceeding 10^25 FLOPs, or designated by the Commission based on demonstrated capabilities) carry additional obligations: adversarial testing, risk assessment and mitigation, serious incident reporting to the European AI Office, and cybersecurity protections.
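
For teams that train or fine-tune models, a rough sense of where that threshold sits can come from the widely used ~6 × parameters × tokens approximation for dense transformer training compute. This heuristic is not part of the Act, and actual accounting should follow Commission guidance; a minimal sketch:

```python
# Minimal sketch: rough training-compute estimate against the systemic-risk
# threshold. The 6 * params * tokens rule of thumb for dense transformers is
# a community heuristic, not a method prescribed by the Act.
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, threshold "
      f"{'exceeded' if flops > SYSTEMIC_RISK_THRESHOLD else 'not exceeded'}")
# ~6.3e24 FLOPs: below the 1e25 threshold on this heuristic
```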

The GPAI Code of Practice, endorsed in August 2025, provides a compliance pathway and creates a presumption of conformity for signatories.[7] The European AI Office has exclusive supervisory authority over GPAI models, separate from national market surveillance authorities.

For enterprises deploying agents and applications built on third-party GPAI models, the practical implication is layered compliance: the model provider bears GPAI obligations, while the deploying organisation bears high-risk system obligations for the application layer. Clarity on where one set of obligations ends and the other begins is essential.

Penalties and Enforcement

The penalty framework is substantial and tiered.[8]


Violation - Maximum fine

Prohibited practices (Article 5) - 35 million euros or 7% of global annual turnover, whichever is higher

High-risk and transparency obligations - 15 million euros or 3% of global annual turnover, whichever is higher

Supplying incorrect information to authorities - 7.5 million euros or 1% of global annual turnover, whichever is higher

Enforcement is decentralised. National market surveillance authorities handle high-risk system compliance. The European AI Office handles GPAI model compliance. Fundamental rights protection authorities (which may be the data protection authority in some Member States) handle cases involving public authority use of high-risk AI.

Factors in fine determination include the nature and gravity of the infringement, whether it was intentional or negligent, the organisation's size, cooperation with authorities, and steps taken to mitigate harm.
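
For undertakings, Article 99 frames each cap as the fixed amount or the turnover percentage, whichever is higher - a distinction that matters at enterprise scale. A minimal sketch of the arithmetic (the CAPS mapping is our own illustrative structure):

```python
# Minimal sketch: maximum-fine arithmetic under Article 99 for undertakings,
# where the cap is the fixed amount or the turnover percentage, whichever
# is higher. (Different rules apply to SMEs and start-ups.)
CAPS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    fixed, pct = CAPS[violation]
    return max(fixed, pct * global_annual_turnover)

# Example: a company with 2 billion euros global annual turnover
print(max_fine("prohibited_practices", 2_000_000_000))  # 140,000,000 euros
```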

A Sequenced Action Plan

With the August 2026 deadline approaching, enterprises need a structured approach. Enzai recommends the following sequencing, based on our work with enterprise compliance teams navigating the Act.

Phase 1: Immediate (already-applicable obligations)

Designate an interim AI compliance lead to own Phase 1 actions and begin scoping the full programme. The immediate actions below require someone with authority and bandwidth to drive them.

Audit for prohibited practices. Article 5 has been enforceable since February 2025. Review all AI systems for proximity to prohibited categories - particularly emotion recognition in the workplace, biometric categorisation, and behavioural scoring.

Establish AI literacy. Article 4 requires personnel involved in AI operations to have sufficient AI literacy. Document training programmes and retain evidence.

Assess GPAI exposure. If your organisation develops or fine-tunes foundation models, ensure compliance with Articles 51-56 obligations active since August 2025.

Phase 2: Foundation-building (now through Q2 2026)

Establish governance structure. Designate an executive AI compliance owner, assign product-level accountability for high-risk systems, and convene a cross-functional AI governance group spanning legal, technology, security, procurement, and HR. Without this authority structure in place, the inventory and classification exercises that follow will lack the organisational mandate to compel disclosure from business units. Budget and tooling decisions should be made at this stage - inventory and classification at enterprise scale typically requires dedicated tooling or a managed review process.

Build a comprehensive AI inventory. Catalogue every AI system the organisation builds, buys, or deploys - including shadow AI, embedded AI in vendor platforms, and SaaS tools with AI capabilities. Record purpose, data inputs, deployment context, and system owner. Enzai's AI inventory module provides automated discovery across cloud environments and SaaS integrations, reducing the manual effort that typically consumes the largest share of this phase - see how it works.

Classify every system. Map each inventoried system against the risk tiers. Document the classification reasoning for each - this is itself a compliance artefact.
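
A minimal sketch of what such an inventory-plus-classification record might look like, assuming an internal schema - the field names are illustrative, not taken from the Act:

```python
# Minimal sketch: an inventory record combining the cataloguing fields from
# Phase 2 with the risk tier and documented classification reasoning.
# The schema is illustrative; the Act mandates the content, not the format.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # Article 5 - prohibited
    HIGH = "high"                   # Article 6 / Annex III
    LIMITED = "limited"             # Article 50 transparency
    MINIMAL = "minimal"             # no mandatory obligations

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

@dataclass
class AISystemRecord:
    system_id: str
    purpose: str
    data_inputs: list[str]
    deployment_context: str
    owner: str
    role: Role
    risk_tier: RiskTier
    classification_reasoning: str   # itself a compliance artefact
    annex_iii_category: str | None = None

record = AISystemRecord(
    system_id="cv-screener-01",
    purpose="Rank inbound job applications",
    data_inputs=["CV text", "application form"],
    deployment_context="EU-wide recruitment",
    owner="HR Technology",
    role=Role.DEPLOYER,
    risk_tier=RiskTier.HIGH,
    classification_reasoning="Annex III employment: CV screening",
    annex_iii_category="Employment",
)
```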

Phase 3: Compliance build (Q2-Q3 2026)

For each high-risk system, build the compliance artefact set:

  • Risk management documentation (Article 9)

  • Data governance procedures (Article 10)

  • Technical documentation per Annex IV (Article 11)

  • Automatic logging implementation (Article 12)

  • Instructions for use (Article 13)

  • Human oversight framework - who oversees, how they intervene, how interventions are logged (Article 14)

  • Conformity assessment (Article 43) - internal assessment for Annex III points 2-8 where harmonised standards apply; notified body assessment for biometric remote identification (point 1(a)) and for any category where no harmonised standard has been published. Check the Commission's harmonised standards register before finalising your conformity route

Update vendor contracts. For third-party AI systems, ensure contracts address documentation rights, incident reporting, log retention, and responsibility allocation under Article 25.

Prepare transparency disclosures. Audit customer-facing and employee-facing AI systems for Article 50 requirements - chatbot disclosures, deepfake labelling, emotion recognition notifications.

Phase 4: Validation and readiness (Q3 2026)

Conduct a fundamental rights impact assessment where required under Article 27 - mandatory for bodies governed by public law, private entities providing public services, and deployers of certain high-risk systems such as credit scoring and insurance risk pricing; recommended for other private deployers using high-risk AI in high-impact contexts.

Establish incident response procedures. Extend existing security incident response to cover AI-specific incidents: 15-day reporting for serious incidents under the AI Act (shorter deadlines apply for deaths and widespread infringements), 72-hour reporting for personal data breaches under GDPR, and identification of the correct national authority in each Member State.
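
The reporting clocks can be encoded directly into the incident workflow. A minimal sketch using the deadlines summarised above; the incident-type mapping is illustrative, and each real incident needs assessment against Article 73's full tiering:

```python
# Minimal sketch: computing report-by deadlines from the moment of
# awareness, using the two headline deadlines above (Article 73 of the
# AI Act; Article 33 GDPR). The mapping is illustrative, not a workflow.
from datetime import datetime, timedelta, timezone

DEADLINES = {
    "serious_incident": timedelta(days=15),       # AI Act, Article 73
    "personal_data_breach": timedelta(hours=72),  # GDPR, Article 33
}

def report_by(incident_type: str, became_aware: datetime) -> datetime:
    return became_aware + DEADLINES[incident_type]

aware = datetime(2026, 9, 1, 9, 0, tzinfo=timezone.utc)
print(report_by("serious_incident", aware))       # 2026-09-16 09:00 UTC
print(report_by("personal_data_breach", aware))   # 2026-09-04 09:00 UTC
```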

Register high-risk systems in the EU database before deployment (Article 71).

Complete conformity assessment and issue EU Declaration of Conformity (Article 47) with CE marking (Article 48) for systems being placed on the market.

Looking Ahead

The EU AI Act is the most comprehensive AI regulation in force anywhere in the world. Its phased implementation gives enterprises a defined timeline, but the compliance surface is broad - spanning technical documentation, risk management, data governance, human oversight, transparency, and post-market monitoring across potentially hundreds of AI systems.

The organisations best positioned are those building governance infrastructure now rather than approaching compliance as a point-in-time exercise. The Act's obligations are continuous: risk management must be ongoing, monitoring must be active, documentation must be current. A management system approach - where compliance is embedded in how AI systems are developed, deployed, and monitored - is the only sustainable model.

At Enzai, our platform provides the operational infrastructure for EU AI Act compliance: centralised AI inventory, automated risk classification against the Act's framework, structured documentation for Articles 9-15, continuous monitoring, and audit-ready evidence management. For enterprises preparing for August 2026, book a demo to see how the platform maps to the Act's requirements.

References

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council, Articles 113-114 (entry into force and application dates). Official Journal of the European Union, L series, 12 July 2024.

[2] European Commission, "Digital Omnibus on AI" (COM(2025) 871), 26 November 2025. Proposed extensions to Annex III and Annex I deadlines, conditional on harmonised standard availability.

[3] Regulation (EU) 2024/1689, Article 5 (Prohibited AI practices) and Article 99 (Penalties).

[4] Regulation (EU) 2024/1689, Article 6 (Classification rules for high-risk AI systems) and Annex III.

[5] Regulation (EU) 2024/1689, Articles 16 (Provider obligations), 26 (Deployer obligations), and 25 (Other parties along the AI value chain).

[6] Regulation (EU) 2024/1689, Chapter V, Articles 51-56 (GPAI model obligations).

[7] GPAI Code of Practice, endorsed by the European Commission and AI Board, 1 August 2025.

[8] Regulation (EU) 2024/1689, Article 99 (Penalties).
