The EU AI Act bends for agentic AI. It need not break.
In two years, agentic AI has moved from research curiosity to production reality. By late 2025, eighty per cent of Fortune 500 companies were running at least one AI agent in daily operations.[1] These systems are qualitatively different from the AI that dominated discussion when the EU AI Act (the "Act") was drafted between 2021 and 2023. Where a chatbot generates text and a classifier assigns labels, an agentic system pursues goals: planning, selecting tools, executing actions, observing results and iterating - often across dozens of steps with minimal human intervention.
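To make that loop concrete, here is a minimal sketch in Python - purely illustrative, with hypothetical interfaces (an `llm.plan` method, a `tools` registry, a `finish` action) standing in for whatever a real agent framework would provide:

```python
# Illustrative sketch of the agentic loop: plan, select a tool, act,
# observe, iterate. All names (llm, tools, AgentState) are hypothetical
# stand-ins, not any particular framework's API.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    observations: list = field(default_factory=list)
    done: bool = False

def run_agent(state: AgentState, llm, tools: dict, max_steps: int = 30):
    """Pursue a goal across many steps with no human in the loop."""
    for _ in range(max_steps):
        # 1. Plan: the model infers the next action from goal + history.
        action = llm.plan(state.goal, state.observations)
        if action.name == "finish":
            state.done = True
            break
        # 2. Act: invoke a tool - possibly a third-party service the
        #    original provider never anticipated.
        result = tools[action.name](**action.arguments)
        # 3. Observe and iterate: feed the result into the next plan.
        state.observations.append((action, result))
    return state
```

Note what is absent: no human appears inside the loop, and the `tools` registry can contain anything the agent has been wired to, including third-party services. Both features drive the regulatory questions that follow.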
A growing body of commentary argues the Act is unfit for this era. Jones has argued that the compliance framework is "structurally inadequate" for agents that autonomously invoke third-party tools across jurisdictions, coining the concept of "Agentic Tool Sovereignty."[2] The ACM's Europe Technology Policy Committee has called for a "fundamental rethinking", arguing that the Act assumes AI behaves like traditional software - "predictable, bounded and under human command".[3] The Future Society has concluded that technical standards under development "will likely fail to fully address risks from agents".[4]
These critiques are serious, and the operational difficulties they identify are real. But they tend to treat the Act's adequacy as a single question, when it is better understood as two. This article puts forward a two-layer analysis. At the definitional layer - whether agentic AI falls within the Act's regulatory perimeter - the Act proves remarkably resilient. At the operational layer - the obligations, assessment procedures and monitoring requirements - the critics have the stronger case.
This distinction matters more than it might first appear. If the definitional layer were broken, the Act would need wholesale revision. Because it holds, the challenge becomes one of operational adaptation - and the Act's own secondary instruments give us a good deal to work with.
The Definitional Layer: Article 3(1) and Its Resilience
Article 3(1) defines an "AI system" as:

"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".[5]
This definition was the product of considerable negotiation, reflecting deliberate choices about scope and abstraction.[6] To see how it extends to agentic AI, it helps to take each element in turn.

"Varying levels of autonomy" means the definition was never confined to systems that require step-by-step human direction; Recital 12 glosses autonomy as "some degree of independence of actions from human involvement", and agentic systems simply sit at the high end of that spectrum.[7] "May exhibit adaptiveness after deployment" is permissive rather than mandatory, so agents that learn from their tool use are covered without non-adaptive agents falling out of scope. "Explicit or implicit objectives" reaches the sub-goals an agent formulates for itself in the course of a task, not only the objectives its deployer specifies. The system must "infer, from the input it receives, how to generate outputs" - and an agent's selection of its next action is exactly such an inference - with "decisions" listed among the qualifying outputs. Finally, those outputs need only be capable of influencing "physical or virtual environments", which is a fair description of what a tool call does.
Taken together, the definition describes what AI systems do (infer, decide, influence) and how they operate (autonomously, adaptively, towards objectives), rather than how they are built. That functional, technology-neutral approach is what gives it its resilience. A definition built around specific techniques - machine learning, neural networks - would have been chained to a technological moment. A definition focused on "automated decision-making" (as in the GDPR's Article 22) would have missed the autonomous, multi-step and environment-shaping character that sets agents apart.
Was this resilience deliberate? Or was it a fortunate by-product of negotiated abstraction? Probably a bit of both. The shift from the Commission's more technically specific original proposal reflected pressure from multiple directions: member states seeking flexibility, Parliament seeking breadth and the OECD framework providing a high-level template.[8] The outcome may owe more to the logic of compromise than to prescient design. But whatever its origins, a regulatory definition that holds up across a paradigm shift is no small thing.
The Operational Layer: Where the Framework Strains
If the definitional layer holds, the operational layer is where the real difficulties lie. The Act's obligations were designed for more static and bounded systems, and three problems stand out.

The first is responsibility allocation. The Act's conformity assessment machinery is inherited from the EU's product-safety framework, which presumes an identifiable provider placing a finished product on the market.[9] Agentic systems that discover and invoke third-party tools at runtime strain that presumption. A deployer whose agent composes new capabilities mid-operation may cross the threshold of "substantial modification" under Article 3(23), taking on provider obligations it never anticipated,[10] whilst the provider duties in Articles 16-27 and the deployer duties in Article 26 assume a far more stable division of labour.[11] Article 25(4) does require written agreements with third parties supplying tools or components to high-risk systems,[12] but it presumes those third parties are known in advance - and for tool providers encountered only at runtime, Recital 88 merely "encourages" value-chain cooperation without creating binding obligations.[13]

The second is human oversight. Article 14 requires high-risk systems to be designed so that natural persons can effectively oversee them, including properly understanding the system's capacities and limitations, duly monitoring its operation and being able to disregard, override or reverse its outputs.[14] For an agent executing dozens of interdependent steps across multiple tools, per-step human review is impractical and post-hoc review may come too late - a tension Fink has examined in detail.[15]

The third is temporal. Conformity assessment is a point-in-time exercise conducted before a system is placed on the market, yet an agentic system's effective capability set can change at runtime whenever it gains access to a new tool, so the assessed system and the operating system may quickly diverge.
These are not the only operational challenges - post-market monitoring under Article 72(2) faces similar difficulties with scope, access and temporality[16] - but they illustrate the core tension: an operational framework built for predictable, bounded systems now confronting technology that is neither.
The Secondary Instruments: A Credible Path Forward
The operational challenges are real, but they are not all equally intractable. The Act's secondary instruments offer a credible, if incomplete, pathway to adaptation.
Harmonised standards
CEN and CENELEC are developing standards through Joint Technical Committee 21, under a mandate amended in June 2025, with both organisations adopting exceptional acceleration measures in October 2025.[17] This is arguably the most promising avenue. Article 14's proportionality language is, in effect, an open invitation to spell out what adequate oversight looks like for highly autonomous systems - bounded action spaces, structured checkpoints, audit trails and intervention mechanisms - without demanding a human in the loop at every turn.
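What those mechanisms might amount to can be sketched in code. The following is our own illustration under stated assumptions, not a draft standard: an allow-listed action space, a checkpoint requiring human approval for designated high-impact actions, and an append-only audit trail. All identifiers (`ALLOWED_ACTIONS`, `gated_execute`, the `approve` callable) are hypothetical.

```python
import json
import time

ALLOWED_ACTIONS = {"search", "read_file", "send_invoice"}  # bounded action space
CHECKPOINT_ACTIONS = {"send_invoice"}                      # require human sign-off

def audit(entry: dict, path: str = "agent_audit.jsonl"):
    """Append-only audit trail of every attempted action."""
    entry = {**entry, "ts": time.time()}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def gated_execute(action, tools: dict, approve):
    """Run an action only if it is allow-listed and, where flagged,
    approved at a structured checkpoint. `approve` is any callable
    that asks a human (UI prompt, ticket queue, etc.)."""
    if action.name not in ALLOWED_ACTIONS:
        audit({"action": action.name, "outcome": "blocked_out_of_scope"})
        raise PermissionError(f"'{action.name}' is outside the agent's action space")
    if action.name in CHECKPOINT_ACTIONS and not approve(action):
        audit({"action": action.name, "outcome": "rejected_at_checkpoint"})
        return None  # intervention mechanism: the human vetoed the step
    result = tools[action.name](**action.arguments)
    audit({"action": action.name, "outcome": "executed"})
    return result
```

The design point is that oversight is concentrated at structured checkpoints rather than requiring a human to approve each of dozens of steps - exactly the kind of proportionate arrangement a harmonised standard could legitimise.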
Delegated acts
Article 7 empowers the Commission to amend Annex III - the high-risk use-case list - through delegated acts, subject to Parliament and Council objection.[18] This means specific categories of agentic system can be folded into the high-risk framework as risks emerge, without legislative amendment. Article 6(3) allows calibration in the other direction too, carving Annex III systems out of the high-risk category where they pose no significant risk of harm.[19]
Commission guidance and codes of practice
The Commission has already issued interpretive guidance on the AI system definition,[20] and further guidance could tackle how the provider-deployer framework maps onto agentic value chains, and when runtime tool invocation counts as "substantial modification". Codes of practice under Article 56 offer another lever, addressing agent-specific risks at the GPAI model layer - controllability features, tool-use logging and action-space constraints - and targeting risks at a natural choke point in the value chain.[21]
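To make "tool-use logging" concrete, a code of practice might specify a minimum record to be kept for every tool invocation. A sketch of what such a record could contain follows - the field set is our own illustration, not drawn from any draft code:

```python
# Hypothetical minimum log record for one agent tool invocation.
from dataclasses import dataclass
import hashlib
import json

@dataclass
class ToolCallRecord:
    agent_id: str        # which deployed agent acted
    model_version: str   # which GPAI model planned the action
    tool_name: str       # which tool was invoked
    arguments_hash: str  # tamper-evident digest of the arguments
    outcome: str         # executed / blocked / failed

def record_call(agent_id, model_version, tool_name, arguments,
                outcome="executed") -> ToolCallRecord:
    digest = hashlib.sha256(
        json.dumps(arguments, sort_keys=True).encode()
    ).hexdigest()
    return ToolCallRecord(agent_id, model_version, tool_name, digest, outcome)
```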
The Commission's Digital Omnibus on AI, proposed in November 2025, is early evidence that this adaptive machinery works in practice.[22] The Omnibus proposes deferring high-risk obligations until harmonised standards are actually available, extending timelines for generative AI transparency requirements, simplifying conformity for SMEs and centralising enforcement of GPAI-based systems at the AI Office level. It does all of this without touching the Article 3(1) definition - the definitional layer stays intact whilst the operational layer is adjusted. Just as telling, though, the Omnibus does not address agentic AI specifically, which suggests the secondary instruments discussed above still have significant work ahead of them.
Two structural features will likely need legislative amendment in time. The provider-deployer binary cannot be stretched through secondary instruments to cover runtime tool providers who may not even know they are part of an agentic system - that requires a new legislative basis. And a genuine shift from point-in-time to continuous conformity assessment goes beyond what standards or guidance can deliver on their own. These are areas where the Act will need to evolve, and its own review mechanisms - including Article 112 - provide a pathway for doing so.[23]
A Framework That Bends
The dominant narrative positions the EU AI Act as a relic of the pre-agentic era. This article has argued that narrative is importantly incomplete.
At the definitional layer, the Act shows genuine resilience. Article 3(1)'s references to varying autonomy, implicit objectives, decision-making and environmental influence draw a regulatory perimeter that takes in agentic AI without strain. At the operational layer, the critics are right that there are real gaps. But those gaps sit within a framework that was deliberately built with adaptability in mind. The Digital Omnibus proposal already shows the EU's willingness to adjust the operational layer whilst leaving the definitional foundation alone, and the Act's broader secondary instruments can tackle a good portion of the agentic challenge without full legislative amendment.
The EU should resist two temptations: (i) complacency, in assuming the framework will hold without active adaptation; and (ii) panic, in concluding it needs tearing up. The AI Act was not designed for agentic AI. But it was designed well enough to accommodate it - and that distinction matters enormously for the future of AI governance in Europe. The framework bends. It need not break.
For organisations deploying agentic systems today, the practical implication is clear: compliance is not a future problem to defer until the law catches up. The definitional perimeter already captures these systems, and the operational obligations are arriving. The task now is to build governance into agentic workflows from the outset: inventorying AI systems as they are deployed, mapping them against evolving risk classifications and maintaining the kind of continuous oversight that the Act's secondary instruments will increasingly demand. At Enzai, this is the challenge our platform is built to help organisations navigate. To learn more, get in touch here.
References
[1] Microsoft Security Blog, "80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier" (February 2026), available at https://www.microsoft.com/en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/.
[2] L. Jones, "Agentic Tool Sovereignty," European Law Blog (2025), available at https://www.europeanlawblog.eu/pub/dq249o3c.
[3] ACM Europe Technology Policy Committee, "Systemic Risks Associated with Agentic AI: A Policy Brief" (October 2025), available at https://www.acm.org/binaries/content/assets/public-policy/europe-tpc/systemic_risks_agentic_ai_policy-brief_final.pdf.
[4] M.L. Miller Nguyen, "How AI Agents Are Governed Under the EU AI Act," The Future Society (June 2025), available at https://thefuturesociety.org/aiagentsintheeu/.
[5] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Art. 3(1).
[6] The AI Act definition builds on, but diverges from, the OECD's definition of AI systems adopted in 2023. See the European Commission's guidelines on the definition of an AI system (February 2025).
[7] AI Act, Recital 12.
[8] See the European Commission's original proposal, COM(2021) 206 final, and subsequent Council and Parliament positions during trilogue negotiations (2022-2023).
[9] The conformity assessment framework draws on the EU's "New Legislative Framework" for product safety, including Decision No 768/2008/EC.
[10] AI Act, Art. 3(23).
[11] AI Act, Arts. 16-27 (provider obligations) and Art. 26 (deployer obligations).
[12] AI Act, Art. 25(4).
[13] Jones (n 2); AI Act, Recital 88, which merely "encourages" value chain cooperation without creating binding obligations.
[14] AI Act, Art. 14(1); Art. 14(4)(a) and (d).
[15] M. Fink, "Human Oversight under Article 14 of the EU AI Act," SSRN (2025), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5147196.
[16] AI Act, Art. 72(2). See Jones (n 2), discussing the enforcement model's temporal mismatch with agentic operations.
[17] European Commission, Standardisation Request M/593, as amended by M/613 (June 2025); CEN-CENELEC, "Update on CEN and CENELEC's Decision to Accelerate the Development of Standards for Artificial Intelligence" (October 2025).
[18] AI Act, Arts. 7 and 97. The delegation is for five years from 1 August 2024, with a three-month objection period.
[19] AI Act, Art. 6(3).
[20] See Orrick, "EU Commission Clarifies Definition of AI Systems" (April 2025). MEP Lagodinsky formally asked the Commission to clarify the regulation of agents in September 2025, signalling political appetite for further guidance: see Jones (n 2).
[21] AI Act, Art. 56.
[22] European Commission, Proposal for a Regulation amending Regulation (EU) 2024/1689 (Digital Omnibus on AI), 19 November 2025. See also European Parliament Think Tank, "Digital Omnibus on AI: EU Legislation in Progress" (February 2026).
[23] AI Act, Art. 112.
