
AI Regulations

What does a new UK government mean for AI regulation in the UK?

The UK got a new government on Friday 5 July, and this could signal big changes in the UK's approach to regulating AI.

Belfast

3 min read time

Region: United Kingdom

Topic: Policy Shift


New government = new AI law?

On the morning of Friday 5 July, during an audience at Buckingham Palace, the King requested that Keir Starmer form a new administration to govern the UK. The change of power from the previous Conservative government was both swift and peaceful. Later that day, a new cabinet was formed and they got straight down to work implementing a new programme for government.

For those of us working in AI, the big question is whether this new programme for government will involve a change in approach to AI regulation and innovation in the UK. Peter Kyle MP has been appointed Secretary of State for Science, Innovation and Technology, and he will be tasked with reviewing how the UK approaches AI. Although digital regulation wasn't a key battleground during the election campaign, we do have some clues as to how Labour will approach this area, and we could see a real step change.

Quick refresher - under the previous government, the UK had already established a voluntary code on AI safety, as part of its pro-innovation approach, along with an AI Safety Institute, led by Ian Hogarth, to evaluate frontier models. Find out more from our blog on the pro-innovation approach here.

What have Labour said so far?

Labour do seem more willing than their predecessors to implement new digital policy regulations, and to enhance existing ones. At the 2023 Labour party conference, delegates carried a motion put forward by the trade union Unite to develop "a comprehensive package of legislative, regulatory and workplace protections" to ensure that the "positive potential of technology is realised for all". This includes amendments to the UK GDPR and safeguards against discriminatory algorithms.

Peter Kyle MP has also previously been quite vocal about addressing some of the societal harms that AI could pose, such as deepfakes used to generate non-consensual pornography or electoral misinformation. Speaking to the Guardian in March 2024, Mr Kyle noted that he had already raised these issues with leaders in the tech industry and that he was carefully considering proposals to create regulatory protections here. Then in June 2024, Mr Kyle told London Tech Week that his party would look to keep the UK AI Safety Institute and put some aspects of its work on a statutory footing.

Could we end up with something like the EU AI Act in the UK?

There are some reports emerging anecdotally from the corridors of Whitehall that the new government will look to pass its own version of the EU AI Act. Certainly, some of the soundings on generative AI coming from Labour MPs and think tanks are starting to bear the same horizontal characteristics as the Article 53 obligations that the EU AI Act places on general-purpose AI models.

If Labour do continue down this road, we think international alignment is going to be key to giving businesses the certainty they need. Under the pro-innovation approach, it was unclear whether total alignment with the EU AI Act would be sufficient for UK regulators to deem that an organisation had also met all of the principles put forward in the voluntary code. This issue has yet to surface in practice, given the EU AI Act is not in force and UK regulators are still getting to grips with the principles, but divergence questions like this could prove problematic (and burdensome) in future.

We will know more on 17 July when, as part of the State Opening of Parliament, the new administration will set out their programme for government in the King's Speech. Stay tuned…

