
AI Regulations

EU AI Act: Breaking Down the First Draft of the GPAI Code of Practice


The first draft is useful and well-structured, though light on detail in some areas.

Belfast


3 min read time

Topic

GPAI Code

Focus

EU AI Act Art. 53

Topics

EU AI Office
Regulatory Capacity
Self-Governance
Institutional Design


The EU AI Act (AIA), which came into effect this summer, outlines principles for providers of General Purpose AI (GPAI) models. Specific, actionable interpretations of these principles will become available in May 2025, when the European AI Office publishes its GPAI Code of Practice. Aligning with the GPAI Code of Practice may give GPAI providers a “presumption of conformity” with parts of the AIA until the European AI Office publishes harmonized standards.

So, it was informative for GPAI providers to access the first draft of the GPAI Code of Practice, published on the European Commission’s website on November 14. The draft is the product of collaborative work within and among four Working Groups since September 30:

1: Transparency and copyright-related rules

2: Risk identification and assessment for systemic risk

3: Technical risk mitigation for systemic risk

4: Governance risk mitigation for systemic risk

With the caveat that this is just the first of four drafts in an iterative process leading up to the final text in May 2025, the 36-page draft Code of Practice is remarkable in at least three ways.

1. Provider Determination

First, it acknowledges the significance of the GPAI provider determination. While OpenAI, Google, Anthropic, Meta, Cohere, and Mistral are the most obvious examples of GPAI providers, the AIA also describes situations in which organizations using GPAI models can be considered providers (for example, if they put their own name or trademark on such a model). A gray area in the AIA is whether an organization that fine-tunes such models becomes a provider. While the draft Code of Practice notes that its final version will provide guidance on this topic, it shares that even if an organization is considered a provider due to fine-tuning, its provider obligations will be limited to the effects of the fine-tuning.

2. Systemic Risks Taxonomy

Second, it outlines a high-level taxonomy of systemic risks from GPAI systems, covering their types, nature, and sources. The AIA classifies certain AI models as posing systemic risks (for example, if “the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25”). The draft Code of Practice’s taxonomy includes a) risk type fields ranging from ‘cyber offence’ to ‘large scale discrimination,’ b) risk nature fields such as origin, the actor(s) driving the risk, and intent, and c) risk source fields such as ‘dangerous model capabilities.’ Though the taxonomy is outlined at a high level, it provides insight into the systemic risk considerations that the European AI Office is likely to be concerned about.
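As a rough illustration of the compute threshold above (this sketch is not part of the AIA or the draft Code, and the model figures are hypothetical), training compute is often approximated as 6 × parameters × training tokens, which can then be compared against the 10^25 FLOP line:

```python
# Sketch: does a model's estimated training compute exceed the EU AI Act's
# 10^25 FLOP systemic-risk threshold? Uses the common approximation
# training FLOPs ~= 6 * N_params * N_tokens. Model sizes below are
# hypothetical and for illustration only.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # threshold stated in the AIA


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in floating point operations."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute crosses the 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical 70B-parameter model trained on 15T tokens: ~6.3e24 FLOPs
print(presumed_systemic_risk(7e10, 1.5e13))   # below threshold
# Hypothetical 400B-parameter model trained on 15T tokens: ~3.6e25 FLOPs
print(presumed_systemic_risk(4e11, 1.5e13))   # above threshold
```

The 6ND approximation is a widely used heuristic for dense transformer training, not a definition from the Act; a provider's actual compute accounting would need to follow whatever methodology the European AI Office specifies.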

3. Transparency

Third, it provides a measure of clarity on the kinds of “technical documentation” and “information” that GPAI providers need to share with deployers and with the European AI Office under Article 53 of the AIA. These range from ‘Acceptable Use Policies’ to ‘Architecture and number of parameters.’ While it is helpful that these elements are fleshed out and referenced against Annexes of the AIA, the level of detail provided for each element varies significantly.

The document also delves into copyright matters and into risk assessments and mitigations for GPAI systems posing systemic risks.

It is also noteworthy that the European AI Office made this first draft public, since it is likely to evolve significantly. This was potentially done to help organizations keep up with the drafting process as they develop their AIA compliance and to stress the public, collaborative, and iterative nature of the drafting approach. To this end, the document notes that “although the first draft is light in detail, this approach aims to provide stakeholders with a clear sense of direction of the final Code's potential form and content.”

Enzai is here to help

Enzai’s AI GRC platform can help your company deploy AI in accordance with best practices and emerging regulations, standards, and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF, and ISO/IEC 42001. To learn more, get in touch here.

