Our response to the UK's 'pro-innovation' AI framework

Enzai's response to the UK's pro-innovation approach to AI regulation and governance.
Ryan Donnelly
2 Jan 2024

We are pleased to share our response to the recent policy paper from the UK government's Office for Artificial Intelligence entitled "Establishing a pro-innovation approach to regulating AI". We would like to thank the Office for Artificial Intelligence for their work on this incredibly important topic to date, and we are thrilled to be part of the discussion going forward. See how Enzai can help you navigate evolving AI regulations.


VIA EMAIL: evidence@officeforai.gov.uk

Response to the policy paper entitled “Establishing a pro-innovation approach to regulating AI” dated 20 July 2022 (the “Policy Paper”)

By way of introduction, Enzai is a new startup company incorporated in Belfast, Northern Ireland. Founded by Ryan Donnelly (a leading corporate and regulatory lawyer) and Jack Carlisle (an experienced software engineer) in August 2021, we are on a mission to ensure that powerful artificial intelligence (“AI”) technologies are used for the benefit of society as a whole in future.

We are particularly focused on protecting against the substantial risks that come with AI. Through our software platform, our customers are able to benchmark their AI systems against the gold standard of AI development practices, understand the aggregate AI governance position across their organisation and impose the necessary governance checks and balances on their systems to ensure their success. By doing this, we enable our customers to build safer, more robust systems that benefit not only the individual organisations using the technologies, but society as a whole.

We are firm believers in the power of regulation as an enabling force, with the potential to drive forward innovation in the AI space. Technologies this powerful need sensible guide rails in place to ensure that future developments benefit business and society. This presents a tremendous opportunity for the United Kingdom. Given our founder’s legal background, we are acutely aware of the United Kingdom’s global reputation for its regulatory regimes and rule of law. We firmly believe that, with an intelligent regulatory framework, the United Kingdom is uniquely placed to attract substantial talent and investment in the AI space. To maximise this opportunity, any regulatory framework should take account of the following principles. It should be: (1) targeted, so that the regulation has the greatest impact with the least amount of intervention; (2) context-driven, taking account of the unique circumstances of the technology and the domain(s) it is applied to; and (3) clear, so that market participants can have certainty around what is expected of them.

To this end, we welcome the proposals put forward in this Policy Paper as an important step towards establishing the United Kingdom as a pro-innovation AI jurisdiction. We are pleased to present our responses to the individual questions raised in the Policy Paper below.


What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?

As a general comment, the risk-based approach set out in the Policy Paper is a sensible starting point for this proposal – we would stress the need to ensure that any regulatory intervention conforms with the principles set out above.

We note the government’s desire to regulate the use of AI, rather than the AI itself. In our view, focusing solely on the downstream impacts of how AI is used is too simplistic. The regulatory framework should be holistic, taking account of all aspects of the AI lifecycle, to ensure that systems are safe and robust. A failure to provide sensible guide rails for other aspects of the development lifecycle (such as the upstream tasks of dataset curation and preparation, and the midstream tasks of modelling, training, testing and evaluation) leaves substantial room for ambiguity and confusion. Imposing reasonable expectations on market participants throughout the development lifecycle, for systems with high-risk impact potential, would be a welcome addition to the approach. This could take the form of requiring a level of technical documentation and the creation of a risk management system for these forms of AI; the software platform we are building would greatly help ease any administrative burden these requirements could be perceived to place on market participants.

The approach put forward in the Policy Paper around the definition of AI is interesting, and the concept of a ‘core characteristics’-based interpretation is appealing. However, we think this could go further: it would be possible to draw up an exhaustive list of technologies that are clearly considered AI to aid interpretation. The core characteristics analysis could then be applied to edge cases, and new technologies could be added to the list as they emerge. Having said this, we recognise that this would impose an administrative burden on the regulatory bodies, so we would welcome further discussion of this particular proposal.

We were pleased to read the recognition that, at present, this space has evolved in an organic and haphazard way (at a rapid pace), which makes it extremely difficult for market participants to navigate. Many government bodies are publishing different frameworks, roadmaps and guidelines, and it has become difficult to keep track. We see this as a significant challenge, and would like to see a substantial simplification of the United Kingdom’s AI bodies and policy makers, with a single source of truth for market participants to interact with and refer to. This suggestion is not intended to contradict the sectoral approach advocated in the Policy Paper. Rather, this single source of truth should guide participants through to sector-specific frameworks and regulatory bodies as necessary.

Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?

The context-based approach to regulating AI is sensible in principle. To ensure the approach is truly effective, we would stress that context is all-encompassing: it includes all aspects of the technology itself, along with the specific domain in which it is being used. From our experience engaging with market participants, different industry sectors (healthcare, finance, automotive, etc.) do take different approaches to building and implementing these technologies. Further, the context-based approach needs to be targeted (to ensure the maximum impact with the least intervention) and clear (so that market participants can easily understand their obligations). To fulfil its potential, the context-based approach will need to be complemented by additional policy and guidance.


Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?

Carefully constructed principles provide a strong bedrock for a well-functioning AI ecosystem. However, in our experience, principles alone rarely go far enough. They tend to be nebulous concepts that are difficult to translate into actionable, real-world requirements. For example, most would agree that AI systems should be fair and safe, but a much wider divergence of views emerges on what fair and safe actually mean in any given practical scenario. Market participants would benefit greatly from a well-defined, clear set of obligations that can be applied consistently and with certainty. Principles therefore need to be supported by more concrete requirements to ensure that they can be effectively adopted in practice.

There is a balance to be struck over where the appropriate level of specificity should lie between principles and those concrete requirements. Establishing a detailed set of requirements that applies across all sectors could lead to a rigid piece of regulation that is not fit for purpose in certain situations; recognising this limitation, we support the context-driven approach. Conversely, leaving too much of the detail to be defined at the sectoral level opens the proposal to the risk of inconsistent application, which would undermine trust in the regime and damage the United Kingdom’s reputation for the rule of law with regard to AI. This approach is therefore not ideal either.

It is important to flag that the latter approach has significant practical implications. Despite the nuances between industry sectors, there are no clear dividing lines on a sectoral basis; in fact, there is often substantial overlap between the use of these technologies in different industries. Each sectoral regulator would have to adopt its own interpretation of the core principles, and this would quickly become fragmented, inconsistent and difficult to navigate. An approach which recognises this complexity, and seeks to add clarity at a holistic level to ensure consistency, would be a welcome addition to the proposal.

Do you have any early views on how we best implement our approach? In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?

The suggested approach of putting the proposal on a non-statutory footing is understandable. Given how new the space is, any framework will need to evolve, and a non-statutory footing allows for the quick feedback loop that rapid evolution requires. The downside of a non-statutory footing is that the proposal will not gain the same level of engagement and market participation as a statutory initiative would. There is a chance that non-statutory guidance, which carries no ‘bite’ for non-compliance, would largely be ignored by the market. The landscape of voluntary AI ethics principles and frameworks is already very diverse and noisy; with limited resources and many competing demands, we suspect market participants would focus instead on ensuring compliance with other global regimes (such as the EU AI Act, or frameworks already established within their industry sector).

The benefit of starting on a statutory footing is that the proposal would be taken more seriously and receive stronger engagement. The feedback loop necessary to ensure the proposal evolves quickly and effectively could be achieved through sandboxing initiatives. Whether or not this should go on a statutory footing is ultimately a political decision; however, if it were to do so, we would support such a move. It is our view that there is space to establish the UK as a pro-innovation leader in safe and robust AI. An Act of Parliament addressing many of the issues raised in the Policy Paper, and responded to here, would go some way towards achieving that goal.


Do you anticipate any challenges for businesses operating across multiple jurisdictions? Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?

The Policy Paper correctly indicates that, in this digital age, due care and consideration should be given to how any regulatory initiatives in this space would interact with other national and international frameworks. As noted, many other jurisdictions are now implementing their own AI regulations and policies.

Market participants across the world would generally benefit from a harmonised approach to regulating AI across jurisdictions, and we would expect any regime here in the United Kingdom to complement those emerging global standards to the benefit of citizens in the United Kingdom. Any wider considerations around cross-border trade and international cooperation are likely better dealt with outside of the issues considered in this Policy Paper.

Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

Yes – the software platform that we are building will allow regulators to monitor the effectiveness of their approach at both the overall system level and at the individual regulator level. We are already working with some key design partners who are using our platform to implement detailed AI principles, measure compliance with these principles across their organisation and establish effective governance gates to ensure their systems are as robust and safe as possible. We would be happy to engage further with the Office for Artificial Intelligence to share our learnings here.

You can learn more about US and European approaches to AI regulation on Enzai's blog.
