
AI Regulations

The World’s First Multilateral AI Treaty


The EU, UK, US and several other countries signed on to the world's first AI safety treaty.

Belfast


Regulation

Council of Europe

AI Safety Treaty

Topics

AI Quality Management
EU AI Act
Compliance Operations
Continuous Oversight

On September 5, the EU, UK, US and several other countries signed on to an AI safety treaty developed by the Council of Europe (CoE). Since treaties are legally binding, this is a significant development in global AI policy. At the same time, the treaty’s application is flexible, since it is up to signatory countries to “adopt or maintain appropriate legislative, administrative or other measures” to give effect to its provisions.

About the Council of Europe

The CoE is an international organization based in France with 46 member states. Its mission is to “promote democracy, human rights and the rule of law across Europe and beyond.” Though the CoE cannot pass laws, it has a history of developing significant conventions and treaties that address global challenges, such as combating cybercrime and human trafficking. The CoE’s achievements include the creation of the European Court of Human Rights.

About the Treaty

Formally known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the treaty is “the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law,” according to the CoE.

The treaty applies to the use of AI by governments, while also requiring them to “address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors.” Like the EU AI Act, the treaty uses a variation of the OECD definition of ‘AI system.’ It also outlines a risk-based approach to AI use throughout an AI system’s lifecycle, including a consideration of transparency and oversight, accountability and responsibility, equality and non-discrimination, privacy and data protection, and reliability.

Limitations of the Treaty

The treaty is flexible, likely by design, to attract signatories from around the world with different approaches to governance. Since the treaty does not provide specific rules, signatories can interpret it in different ways. Remedies, too, are left to signatory countries.

The treaty includes significant carveouts for national security uses of AI.

Though any country in the world can sign the treaty, major global players such as China, India and Russia have not signed on.

Next Steps

The next step for the treaty is for signatories to ratify it. According to the CoE, “the treaty will enter into force on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it.”

Reflecting on the treaty, Secretary General Marija Pejčinović Burić shared that she hopes that “these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible.”

Due to the treaty’s flexibility, many analysts do not expect that ratification would require significant changes to the existing regulatory approaches of EU countries, the UK or the US.

Enzai is here to help

Enzai’s product can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.

