UK update on AI regulation: what you need to know

The UK government shared its response to the public consultation on its "pro-innovation approach to regulating AI". We analyse what's changed and what it means for UK businesses.
Karen Waserstein
7 Feb 2024

Yesterday the UK government shared its response to the public consultation on its AI regulation white paper. The UK is not introducing any AI legislation just yet - the Department for Science, Innovation and Technology (DSIT) has reinforced its commitment to a pro-innovation approach - but this update does signal a step change. This is the first time the government has recognised that “binding measures for overseeing cutting-edge AI development” will be needed at some point.

The government has fleshed out its proposed approach for using existing regulatory structures in the UK to ensure that AI development adheres to its cross-sectoral principles. This will have a direct impact on how businesses build, deploy and use AI in the UK.

Enzai has been directly engaged in shaping this approach - from our initial response to the white paper, to liaising with the government through meetings and roundtables at Downing Street. Here's what you need to know.

What is the “pro-innovation approach to regulating AI”?

In contrast to the EU’s push to regulate AI from the outset (through the recently approved EU AI Act), the UK has chosen an ‘agile’ approach. By empowering existing regulators to oversee AI risks within their remit, the government retains the flexibility to adapt its strategy as the technology continues to develop. The UK wants to avoid rushing legislation and later implementing ‘quick fixes’. Whether this approach works remains to be seen, but the government has noted that supporting innovation, and avoiding over-regulation that could hurt business, is the UK’s number one priority.

There are five cross-sectoral principles (the “Cross Sectoral Principles”) that make up the pro-innovation approach. In the AI regulation white paper, the government defines them as follows:

  1. Safety, security and robustness: AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed.
  2. Appropriate transparency and explainability: AI systems should be appropriately transparent and explainable.
  3. Fairness: AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes.
  4. Accountability and governance: Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.
  5. Contestability and redress: Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

Regulators must take account of these principles when reviewing AI systems under their remit. In our response to the government’s initial draft of the white paper in 2022, we stressed the need for these requirements to have some form of legislative bite, with consequences for non-adherence. While the UK still lacks a full piece of AI regulation (which we continue to advocate for), a legal obligation will be placed on regulators to adhere to the principles, and this will filter down to the organisations those regulators oversee.

The other distinguishing feature of the pro-innovation approach is its focus on AI safety research and evaluation. The government has invested £100 million in building the world’s first AI Safety Institute and is assembling a multidisciplinary team for cross-sector risk assessments and monitoring. The UK hosted the first global AI Safety Summit at Bletchley Park in November 2023. It has also committed £9 million through the International Science Partnerships Fund to build a partnership with the US focused on responsible and trustworthy AI.

Note that the UK also investigated the possibility of establishing a copyright code of practice for AI, and Enzai was part of the working group looking at this issue. There was no overall consensus on the way forward, so the government decided not to publish the code of practice.

What this change means for businesses operating in the UK

If your business operates in the UK, the relevant regulator for your industry - for example, Ofcom, the Competition and Markets Authority (CMA) or the Financial Conduct Authority (FCA) - has been tasked with enforcing the Cross Sectoral Principles. The government has promised an additional £10 million to help regulators with this new remit, and has asked them to set out their plans for compliance by April 2024. We expect regulators across the UK to start applying these principles in short order, and organisations should start thinking immediately about how they can demonstrate compliance with the Cross Sectoral Principles.

Even though the UK, unlike the EU, has not passed AI-specific legislation, the government has made clear that existing laws on product safety, privacy, discrimination and so on apply to AI systems. Your business needs to demonstrate compliance with these existing laws to avoid regulatory action. For companies that offer their services to the government, there may soon be procurement guidelines for AI systems (following the US model of President Biden’s Executive Order on AI), increasing the need to prove robust guardrails against AI risks.

In the medium term, we can expect the UK to start drafting legislation on general-purpose AI (GPAI) systems, and possibly baseline obligations for all AI systems and the various stakeholders involved.

Finally, it’s worth noting that if your business operations extend beyond the UK, you need to ensure compliance with AI regulations in other jurisdictions around the world. The EU has approved the EU AI Act (“EU AIA”), and the US Executive Order on AI has tasked 50 different federal entities with 150 new directives (actions, reports and more) to be implemented over the next year.

How to respond to regulatory scrutiny and manage AI risks

The best way to navigate an increasingly complex regulatory landscape, and to demonstrate adherence to the Cross Sectoral Principles, is to adopt a quality management system (QMS) so you can manage AI risks and build, use and deploy AI with confidence. But such systems can be painful to set up and maintain using generic incumbent tools - the bigger an organisation, the harder it is to ensure that every AI system is accounted for and that nothing falls through the cracks. To add another layer of complexity, AI systems themselves are not static, so yearly audits are not enough to ensure continued compliance with key metrics - for example, the accuracy or robustness of a model and its data.
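To make the monitoring point concrete, here is a minimal, hypothetical sketch (in Python) of the kind of continuous check a QMS automates: measuring a model’s accuracy on recent labelled outcomes and raising an alert when it drifts below an agreed threshold. The names and the 90% threshold are illustrative assumptions, not a reference to any particular platform or regulatory requirement.

```python
# Hypothetical sketch of a continuous accuracy check on a deployed model.
# All names and thresholds are illustrative, not tied to any real product.
from dataclasses import dataclass

@dataclass
class LabelledPrediction:
    predicted: str  # what the model said
    actual: str     # the ground-truth outcome, once known

ACCURACY_THRESHOLD = 0.90  # example target agreed in your governance policy

def batch_accuracy(batch: list[LabelledPrediction]) -> float:
    """Share of predictions in the batch that matched the ground truth."""
    if not batch:
        return 1.0
    correct = sum(1 for p in batch if p.predicted == p.actual)
    return correct / len(batch)

def check_batch(batch: list[LabelledPrediction]) -> None:
    accuracy = batch_accuracy(batch)
    if accuracy < ACCURACY_THRESHOLD:
        # In practice this would raise a ticket or alert the system owner,
        # creating the audit trail a regulator may ask to see.
        print(f"ALERT: accuracy {accuracy:.2%} below threshold {ACCURACY_THRESHOLD:.0%}")
    else:
        print(f"OK: accuracy {accuracy:.2%}")

if __name__ == "__main__":
    recent = [
        LabelledPrediction("approve", "approve"),
        LabelledPrediction("decline", "approve"),
        LabelledPrediction("approve", "approve"),
    ]
    check_batch(recent)  # prints an ALERT: 66.67% is below 90%
```

Run on a schedule against each deployed system, a check like this turns a once-a-year audit question into an ongoing, evidenced control.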

Enzai can help organisations adopt the UK’s Cross Sectoral Principles efficiently through our AI governance platform. It can also enable compliance with the EU AIA, the US Executive Order, ISO 42001 and other regulations and standards being introduced around the world.

Looking ahead: what we can expect from UK regulation

Enzai continues to engage with the UK government on AI safety and to advocate for high standards throughout the AI lifecycle. We must ensure these powerful technologies can fulfil their true potential. We look forward to continuing our discussion with the UK government (and regulators around the world) to make AI safe and trustworthy.

Learn more about Enzai’s AI governance solutions - request a demo today.

Build and deploy AI with confidence

Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.