What You Need To Know

The EU AI Act, approved by the European Parliament in February 2024, is the world’s first comprehensive artificial intelligence legislation. It will come into force 20 days after its publication in the Official Journal of the European Union and will be fully applicable 24 months later. Some provisions take effect sooner: the prohibitions will be enforced after only 6 months, and the rules on General Purpose AI systems (“GPAI”) after 12 months.

The Act applies to providers, deployers, importers, distributors and product manufacturers of AI systems with a link to the EU market (note, there are some exceptions). 

When the Act comes into force, businesses will need to comply with all of its legal requirements. This is not just to avoid fines, which can run to tens of millions of euros or up to 7% of worldwide annual turnover, but also to maintain the trust of customers and users.

Risk Criteria

The EU AI Act classifies AI systems into four categories according to the risk they pose to users for specific applications:
  1. Low/minimal risk: permitted with no restrictions.
  2. Limited risk: permitted, but subject to information/transparency obligations.
  3. High risk: permitted, subject to compliance with AI requirements and an ex-ante conformity assessment.
  4. Unacceptable risk: prohibited (e.g., social scoring).
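The four-tier structure above can be sketched as a simple lookup table. This is purely illustrative: the tier names and obligation labels below are our own shorthand, not terms defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the Act's four risk categories."""
    MINIMAL = "low/minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# What each tier implies for deployment, per the summary above.
TIER_RULES = {
    RiskTier.MINIMAL: {"permitted": True, "obligations": []},
    RiskTier.LIMITED: {"permitted": True, "obligations": ["transparency"]},
    RiskTier.HIGH: {
        "permitted": True,
        "obligations": ["ai_requirements", "ex_ante_conformity_assessment"],
    },
    RiskTier.UNACCEPTABLE: {"permitted": False, "obligations": []},
}

def is_permitted(tier: RiskTier) -> bool:
    """True if systems in this tier may be placed on the EU market at all."""
    return TIER_RULES[tier]["permitted"]
```

The key design point the table captures: permission and obligations are independent axes, so a high-risk system is still permitted, but only after its conformity obligations are met.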

AI systems classified as high risk may be exempt from the full compliance obligations if they meet certain additional criteria. Separate rules apply to general purpose AI systems (GPAI), and GPAI models that meet the threshold for posing a systemic risk are subject to further requirements on top of those.

What Is High Risk?

AI systems falling into these areas will be considered high risk and subject to the Act:
Permissible biometric systems
Management and operation of critical infrastructure
Education and vocational training
Employment and workers management, access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control
Administration of justice and democratic processes
Systems already covered by existing European product safety legislation (and subject to assessment under those regimes) can also be high risk:
Machinery
Toys
Recreational craft and personal watercraft
Lifts (and their safety components)
Equipment and protective systems for potentially explosive atmospheres
Radio equipment
Pressure equipment
Cableway installations
Personal protective equipment
Appliances burning gaseous fuels
Medical devices (including in vitro diagnostics)

“Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation” - EU AI Act, Article 17.1

In accordance with Article 17, compliance must be documented in a “systematic and orderly manner in the form of written policies, procedures and instructions”. Organisations can be providers and/or users of AI systems, and both are covered by this legislation.
User obligations
  • Operate AI systems in accordance with the instructions of use
  • Ensure human oversight when using an AI system
  • Monitor operation for possible risks
  • Inform the provider or distributor about any serious incident or malfunctioning
  • Comply with existing legal obligations (e.g., GDPR)
Provider obligations
  • Establish and implement a quality AI management system internally, allowing for traceability and auditability
  • Draw up and keep up-to-date technical documentation, and use high-quality training, validation and testing data that is relevant and representative
  • Keep logs that enable users to monitor the operation of the high-risk AI system
  • Undergo conformity assessments of the system
  • Register the system in an EU-wide database
  • Affix the CE marking and sign a declaration of conformity
  • Conduct post-market monitoring
  • Collaborate with market surveillance authorities
  • Ensure robustness, accuracy and cybersecurity
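A provider's obligations read naturally as a checklist, which compliance tooling can track. A minimal sketch, assuming nothing beyond the list above; the item names are our own paraphrases, not official legal terms.

```python
# Paraphrased from the provider obligations listed above (illustrative labels only).
PROVIDER_OBLIGATIONS = [
    "quality_management_system",
    "technical_documentation",
    "logging",
    "conformity_assessment",
    "eu_database_registration",
    "ce_marking_and_declaration",
    "post_market_monitoring",
    "authority_collaboration",
    "robustness_accuracy_cybersecurity",
]

def outstanding(completed: set) -> list:
    """Return the obligations not yet evidenced, preserving list order."""
    return [item for item in PROVIDER_OBLIGATIONS if item not in completed]
```

For example, an organisation that has so far only evidenced its logging would see the remaining eight items returned, in order, from `outstanding({"logging"})`.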

Manage Your Risks with Enzai

The Enzai platform is the best way to manage AI governance risk. This is the only solution with a strong legal foundation to ensure compliance with the nuanced requirements of emerging legislation and industry codes of conduct.

Enzai’s platform provides compliance by design. Ready-made policy packs enable organisations to stay on top of, and comply with, emerging regulations and standards efficiently. Companies can also create their own policies for maximum flexibility.

Audits are made easy with the platform’s custom assessments, providing increased assurance to help resolve regulatory inquiries.

