
AI Regulations

European Data Protection Board adopts opinion on AI Models


EDPB Opinion 28/2024 addresses key topics at the intersection of privacy and AI.

Belfast

3 min read time

By Var Shankar

GDPR and AI Training

EDPB Opinion 28/2024 clarifies that companies can use "legitimate interest" as a legal basis to train AI models on personal data, provided they pass a strict three-factor test of purpose, necessity, and balancing.

Defining Model Anonymity

The guidance establishes a high threshold for anonymity, requiring a low likelihood of identifying individuals or extracting personal data through queries, shaping how member states will prioritize future enforcement.


On December 17, 2024, the European Data Protection Board (EDPB) adopted EDPB Opinion 28/2024 on the use of personal data to train and deploy AI models. The opinion responds to a request from Ireland’s data protection authority (DPA).

Large model developers, including Meta and OpenAI, that provide AI models within the EU have been subject to scrutiny by EU policymakers and regulators. These developers, companies using their products, and regulatory authorities in EU countries have all sought clear guidance on how to train and deploy AI models on personal data without violating the GDPR.

EDPB Opinion 28/2024 provides guidance on these matters. Though it is not binding on companies, DPAs will likely align with its guidance when interpreting the GDPR and prioritizing enforcement actions.

When is a model considered anonymous – and therefore not subject to GDPR?

Though anonymous AI models are not subject to the GDPR, analysts to date have had varying opinions on whether a trained foundation AI model is anonymous.

EDPB Opinion 28/2024 states that such a model can be considered anonymous if there is a low likelihood of being able “(1) to directly or indirectly identify individuals whose data was used to create the model, and (2) to extract such personal data from the model through queries.” The opinion notes that this analysis must be conducted on a case-by-case basis and includes a non-prescriptive and non-exclusive list of methods that can help achieve anonymity.
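The second prong, extraction of personal data through queries, is something developers can probe empirically. The sketch below is not part of the EDPB opinion; it is an illustrative check (with a hypothetical helper name) that scans model outputs for verbatim reappearance of known training-set identifiers and for email-shaped strings, the kind of signal a case-by-case anonymity assessment might collect:

```python
import re

# Rough pattern for email-shaped strings; a real assessment would cover
# many more personal-data categories (names, addresses, IDs, etc.).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def extraction_hits(outputs, known_training_pii):
    """Return personal-data strings that reappear verbatim in model outputs.

    `outputs` is a list of model responses to probing queries;
    `known_training_pii` is a seeded list of identifiers known to be in
    the training data. This only illustrates the 'extraction through
    queries' prong -- it is not a complete anonymity test.
    """
    hits = set()
    for text in outputs:
        for pii in known_training_pii:
            if pii in text:
                hits.add(pii)
        hits.update(EMAIL_RE.findall(text))
    return hits
```

A low rate of hits across a large, adversarially designed query set would support (but not by itself establish) the "low likelihood" threshold the opinion describes.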

When can an AI developer process personal data?

Per the GDPR, companies need a legal basis to process personal data. Analysts have noted that of the available legal bases, only “legitimate interest” seems relevant to how leading developers train foundation AI models. EDPB Opinion 28/2024 confirms that a company can use “legitimate interest” as a basis to train AI models using personal data, after an analysis of three factors: 

Purpose: whether the interest is lawful, clearly and precisely articulated, and real and present;

Necessity: whether personal data is necessary to achieve the purpose;

Balancing: whether, weighing the benefits, drawbacks, and reasonable expectations of the individuals whose data is processed, their interests and rights do not outweigh the legitimate interest of the company.
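For teams embedding this test into a governance workflow, the three factors can be recorded as a simple checklist. The structure below is our own illustrative shorthand, not official EDPB terminology; the field names are assumptions chosen for readability:

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Illustrative record of the EDPB's three-factor test."""
    purpose_lawful_and_specific: bool  # Purpose: lawful, clearly articulated, real and present
    data_necessary_for_purpose: bool   # Necessity: personal data needed to achieve the purpose
    individuals_rights_outweigh: bool  # Balancing: True means individuals' rights prevail

    def passes(self) -> bool:
        # "Legitimate interest" is available only if all three factors resolve
        # in the company's favour.
        return (self.purpose_lawful_and_specific
                and self.data_necessary_for_purpose
                and not self.individuals_rights_outweigh)
```

Encoding the test this way makes each factor an explicit, auditable field rather than an implicit judgment buried in prose.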

EDPB Opinion 28/2024 also describes how an AI model developed on personal data in violation of the GDPR may either not be deployed or be deployed in a limited way subject to assessments and safeguards.

What’s next?

The EU will continue to provide interpretation and guidance on how the GDPR, the EU AI Act (AIA), and other EU laws will apply to foundation AI models, even as large model developers continue to release impressive new models.

EDPB Opinion 28/2024 does not provide clarity on many other intersections of privacy and AI, including privacy by design or the processing of other sensitive information (like the user’s state of mind or political views).

Companies using AI systems should carefully monitor regulatory developments in the EU and across the world and incorporate relevant interpretation and guidance into their AI governance programs.

Enzai is here to help

Enzai’s AI GRC platform can help your company deploy AI in accordance with best practices and emerging regulations, standards, and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF, and ISO/IEC 42001. To learn more, get in touch here.

