The Future of AI Governance: Insights from the AI experts

Our panel of AI Governance experts weighs in on regulations and standards, accountability, the three lines of defence and more.
Karen Waserstein
2 Jan 2024

We are excited to share the insights from our recent webinar’s expert panel featuring Connor Dunlop (CD), European Public Policy Lead at Ada Lovelace Institute; Martin Koder (MK), AI Governance Lead at SWIFT; Chloe Autio (CA), AI Policy and Governance Advisor; and Ryan Donnelly (RD), CEO at Enzai, an AI governance solution. 

Panellist responses have been edited for length and clarity. 

After the latest round of EU AI Act discussions last week, what do you see as the key sticking points and what developments are you hoping to see from the EU next year? 

CD: This is a huge moment for Brussels and the EU - there is political agreement on the AI Act, which would mark the first horizontal regulation of AI anywhere in the world. There hopefully won’t be many sticking points, but we’ll see some in the technical drafting to come. Before speaking about sticking points and developments for next year, though, I want to briefly mention why the AI Act is important and why we should care that there’s a political agreement. Beyond being the first horizontal AI regulation anywhere in the world, it will actually ban the use of some AI systems. Real-time remote biometric identification in public spaces would be completely prohibited, for example, with some exemptions for use by law enforcement. The focus this week was on how wide that exemption would be.

High-risk AI systems - according to the European Commission - might make up 5 to 15% of systems. This would capture a lot of public sector use cases, for example in the legal system or law enforcement, along with other high-impact sectors such as education and healthcare. The Act will set rules around risk management, data governance and human oversight. There is also a new category, agreed last week: the draft already classifies systems as prohibited, high-risk or low-risk, and now a ‘systemic risk’ category has been introduced for general-purpose AI models. This is where there could be a sticking point - specifically, the compute threshold used to classify an AI system as posing systemic risk. Right now that threshold is, in the opinion of our institute, quite high: the proposed threshold of 10^25 FLOPs captures only one model on the market, GPT-4. Gemini from Google DeepMind may be captured as well, but that’s not clear yet.
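To make the compute presumption concrete, here is a toy sketch in Python of the threshold test Connor describes. It is purely illustrative: the 10^25 FLOPs figure comes from the discussion above, the model names and compute estimates are hypothetical, and the final legal text will determine how the presumption actually applies.

```python
# Illustrative only: the systemic-risk presumption discussed above is a simple
# compute threshold. The 10^25 FLOPs figure is taken from the interview; the
# model names and compute estimates below are hypothetical placeholders.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # total training compute, in floating-point operations


def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """Return True if a general-purpose model meets the compute presumption."""
    return training_compute_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical training-compute figures, for illustration only.
models = {
    "frontier_model": 2e25,   # above the threshold -> presumed systemic risk
    "smaller_model": 3e24,    # below the threshold
}
for name, flops in models.items():
    label = "systemic risk" if presumed_systemic_risk(flops) else "below threshold"
    print(f"{name}: {label}")
```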

On the question of what is to come in 2024: in the first quarter of the year the focus will still be on drafting the technical language of the legislation. Stakeholders will still be trying to influence that process - the recitals of the AI Act and clarifications of what the legal text says will be crucial. That work cannot be overlooked in the early part of next year. A potentially interesting development is the agreement on a code of practice for systemic general-purpose AI models. Initially this will take the form of non-binding guidance for general-purpose providers. This is an avenue for civil society to get involved, which is really important. Starting probably in Q2 next year, there will be a lot of focus on the code of practice. If this is done well, it could address the democratic deficit of the standard-setting process. The last thing to highlight would be the EU’s work on an AI liability directive - even though it will get interrupted by the EU elections next year.

We’ve been discussing the topic of AI safety in the European context. For some time the US did not take a leading role in regulation, but that has changed with President Biden’s executive order on AI. What is the mood in the US? How do you see the executive order being implemented this coming year?

CA: As a quick summary, the EO came out on 30 October, right before the UK AI Summit. At 111 pages, it is the longest and most comprehensive executive order of the Biden administration to date, and the most comprehensive EO related to tech or digital policy ever. So it really demonstrates how intentional and proactive the administration wants to be about this issue.

The EO activates 50 different entities, with the Commerce Department leading a lot of the implementation, particularly through NIST, the National Institute of Standards and Technology, which you may have heard of in the context of its AI Risk Management Framework - a leading voluntary standards framework for AI governance. The EO also created over 150 new directives - actions, reports, guidance, rules, policies and agency reports - to be implemented within the next 30 to 365 days. The bottom line is that there's a lot of work to be done here. And the reception has been really, really positive.

In my discussions with folks on Capitol Hill - and the administration and industry as well - there's a lot of excitement about the roadmap and path that the executive order sets out, and a lot of curiosity about how it will be implemented. 

This is different from previous executive orders, particularly the Trump administration's EO on trustworthy AI. There were a lot of directives in that executive order that weren't quite carried across the finish line - cataloguing AI use cases, developing different sorts of risk management guidance for different agencies, the Office of Management and Budget creating a memo with guidance on how to adopt and implement AI responsibly. None of this got finalised. So with this executive order, given its tone and tenor, we'll see a lot more focus on implementation, and a whole-of-government approach to making that happen.

It is important to understand that before the EO came out, the US government was already doing a lot of work on AI governance. While it's true that Congress hasn't passed a broad omnibus or horizontal law on AI governance like we have seen in the EU, a lot of agencies within and across the government (the EEOC, the Equal Employment Opportunity Commission; NIST itself with the AI Risk Management Framework; the CFPB, the Consumer Financial Protection Bureau) have put out different types of guidance to address well-known AI harms. The executive order builds on top of the great work a lot of these agencies were doing, and that work helped to inform its direction.

The EO really hits on a broad swath of issues - it focuses on national security as it relates to model access and protecting against future risks. There's a lot of focus on privacy and consumer protection and IP. 

One requirement for agencies in this context is to catalogue different data sources, including from data brokers, to really understand the privacy implications - that is, where the data for these models is coming from. A lot of civil society folks and advocates, myself included, are very excited to see where that goes. The US Patent and Trademark Office will also be issuing new rules and guidance on IP and AI inventorship. That'll be really interesting as well.

The executive order also focuses a lot on equity and non-discrimination issues. It tasks leading regulatory agencies in the US government - the DOJ, the EEOC, parts of DHS and others - with coming together to decide how they are going to invest in, and perhaps issue new guidance on, enforcement against AI harms. We haven't really seen a coordinated effort to do that yet; the EO formalises it. We're looking at issues like non-discrimination in housing and loan availability - things we have all known for a long time are well-documented AI harms. It'll be really great to see how the government comes up with an enforcement strategy around these issues.

The EO also covers labour and worker rights. It asks agencies to provide guidance on how they, and federal contractors, might be using AI systems in hiring processes. We have all heard about the UK AI Safety Institute; NIST has also set up a US AI Safety Institute, where it will be doing a lot of work to interrogate the security and robustness of foundation models, developing red-teaming processes and building on existing risk management processes to really improve the security of these models.

And finally, the executive order really focuses on bolstering AI talent in the US government. This is a sad statistic, but it's a real one: only 1% of graduating computer science and AI PhDs in the US join the federal workforce. We have a huge number of very talented AI professionals going to the private sector, academia or the large labs, but only a small chunk of those individuals end up working in the public sector and contributing to public sector goals and needs, making public services better. The executive order creates some new rules and changes - small amendments to immigration policies to allow foreign-born, US-trained individuals in AI to stay in the US and contribute to the US ecosystem. That's a very high-level overview of the key themes sprinkled through this executive order.

What is happening in the UK? I know you’ve been closely involved in the discussions here, with both senior AI leaders and the government at Downing Street. Has there been any progress since the AI Safety Summit?

RD: To quickly summarise the developments in the UK, the government started out with a pro-innovation approach to regulating artificial intelligence. The idea was to establish the UK as a world leader in artificial intelligence. It touches on a lot of the points raised around promoting talent and making sure that organisations are not overburdened - how the government can set up an infrastructure that really promotes innovation around these incredible technologies. The first white paper on that was published a couple of years back, and we as a company fed directly into those proposals. Early on it started out as guidance, and as time went on, some of the feedback the government received was that the pro-innovation approach to regulating artificial intelligence didn't have any teeth. They took this feedback on board, and the government had a choice to make: do they regulate AI or do they not?

The second draft of the pro-innovation approach was quite creative. I said there were two options; there may have been a third: the plan for regulating AI in the UK is to push the burden of regulation onto existing sector regulators and say, “You are the Financial Conduct Authority or the competition authority; you are already regulating your area, so you should also look at how AI is used within your existing remit.” That will involve making sure that AI systems comply with cross-sectoral principles. This approach puts an obligation on industry.

There was talk of going even further and introducing an AI bill, and I think the government definitely considered this in detail. It was an open question going into the Safety Summit. On the back of the Safety Summit, the decision was not to introduce any additional regulation in the UK. There was a separate bill on self-driving vehicles, but nothing like the EU AI Act, which I think some people were expecting at the time. There will be a new draft of the pro-innovation approach coming out in the new year, and I've been speaking quite closely with some of the people in DSIT about that. It's very unlikely that a horizontal piece of legislation like we've seen in Europe will come out in the UK.

The approach is different in many ways. One of the biggest criticisms levelled against the pro-innovation approach, and at putting the burden on sectoral regulators, is that there is no coordinating function. With no central function pulling everything together behind the scenes, one regulator may take one interpretation of an aspect of the value chain while another regulator takes the opposite view on the same question. That creates a lot of uncertainty in the market, so it's a piece of feedback the government is working to address in the latest version of the white paper. The criticism of the EU AI Act is almost the reverse: some say it's too horizontal, and that there is a need to consider specific sectors and the nuances that come with them.

We've now discussed different regulatory approaches in the US, the EU and the UK, and we also know that there have been similar efforts in Brazil, China and elsewhere around the world. Why would international cohesion be so valuable, and what do you see as the benefits of setting global standards?

MK: I am happy to talk about SWIFT and the approach we are taking to grapple with some of these issues. As an introduction, SWIFT is the Society for Worldwide Interbank Financial Telecommunication - a neutral, not-for-profit organisation which runs the infrastructure for international payments messaging. We are very heavy on governance. The way we're talking to our customers, the big financial institutions, is to point to the extremely extensive and mature risk management capacities that are already in place in established financial service providers. Under the three lines of defence model, we have vendor management, product governance, data governance, security controls, privacy assessments, and so on. We have umpteen committees and processes and legions of people that look at this stuff. This already gets us about 70% aligned with what policymakers are hoping to see.

There may be one or two gaps in some areas, but it may largely be a question of translating into the vocabulary of AI governance regulation - explainability, for example, which in legal terms comes down to questions of risk and liability, while principles like fairness already exist in law as non-discrimination, and so on.

So it might be a case of extracting, out of all the many processes you already have - risk assessment, identification, impact monitoring, and so on - the management information against some defined metrics that regulators are looking for on responsible AI and AI risk management. These metrics span different dimensions: fairness, auditability, accountability, accuracy. Then present that information in a standardised way so that senior decision-makers in the organisation (at the executive level, the board, the Chief Risk Officer, the General Counsel, and so on), who don't necessarily have much data or AI expertise, can understand it. It should be done in a way which is repeatable, gives leadership assurances they can take decisions on, and provides assurances which can be made available to whoever wants them - whether that is the regulator, customers or other stakeholders.
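As a rough illustration of what that standardised presentation could look like, here is a minimal Python sketch. It is not SWIFT's tooling or any prescribed format; the dimensions, scoring scale, risk appetites and source processes are assumptions chosen purely for the example.

```python
# A minimal sketch (not SWIFT's actual tooling) of the idea described above:
# pull metrics out of existing risk processes and present them to senior
# decision-makers in a standardised, repeatable format. The dimensions,
# scoring scale, risk appetites and source processes here are all hypothetical.
from dataclasses import dataclass


@dataclass
class RiskMetric:
    dimension: str        # e.g. fairness, auditability, accountability, accuracy
    score: float          # 0.0 (poor) to 1.0 (strong), per internally defined criteria
    appetite: float       # minimum acceptable score set by the risk function
    source_process: str   # the existing control or assessment that produced the score


def board_summary(system_name: str, metrics: list[RiskMetric]) -> str:
    """Render a plain-language summary suitable for non-technical leadership."""
    lines = [f"AI risk summary: {system_name}"]
    for m in metrics:
        status = "within appetite" if m.score >= m.appetite else "OUTSIDE appetite"
        lines.append(
            f"- {m.dimension}: {m.score:.2f} vs appetite {m.appetite:.2f} "
            f"({status}; source: {m.source_process})"
        )
    return "\n".join(lines)


print(board_summary("payments anomaly detection", [
    RiskMetric("fairness", 0.92, 0.85, "model validation review"),
    RiskMetric("accuracy", 0.88, 0.90, "quarterly performance monitoring"),
]))
```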

What I would say to customers out there is: don't panic. If you're a mature institution, you've already got a large amount of risk management capacity to deal with AI governance.

On global standards: the culture of the company, its capacity to run generic risk management processes, and the infrastructure it has in place will determine the assurance outputs that are available. Set your own standards in your own context. You can then repurpose them to the requirements of local jurisdictions through various strategies: build partnerships with people in different parts of the world, for example amongst your customer base, your vendor group, academics and regulators. You can sit on working groups looking at new areas like privacy-enhancing technologies, to understand what standards they follow, to learn the local context for certain areas, and to repackage the assurances you can provide to suit local regulations.

I think Connor was talking about categories and thresholds. I'm afraid this is just the way regulation works. Regulators always create buckets, and then people complain that things are in the wrong bucket or that the threshold is at the wrong level, but these do change over time. I think regulators have done a pretty good job globally, bearing in mind that most policy officers don't have much expertise in machine learning. NIST is a key organisation to follow. There is so much noise about what constitutes best practice in AI governance and how to deal with all the different emerging regulatory standards. The signal-to-noise ratio has actually got a lot worse over the last year; it's much more difficult to see which organisations you should trust. The best thing you can do is leverage what you already have - for example, the three lines of defence.

On Ryan’s points about the UK, I think the UK has been rather smart in three dimensions. First, it set up the AI Safety Institute: the UK will be able to see all the cutting-edge foundation models from US companies because they will be checked by the US and UK agencies. Second, the UK is still a member of the standard-setting body that is empowered to set the standards for the EU regulation, so in a certain respect it still has a seat at the table when it comes to the standard-setting phase of the EU Act. Third, the EU Act’s requirement for companies to demonstrate that they have a risk management system in place will be a bonanza for the professional advisory firms, many of which are based in London, so we might see quite a few of them getting decent business out of the Act.

You mention the three lines of defence. This is a concept that's really familiar in the financial services and insurance sectors, but not so much in other industries. How applicable are the three lines of defence in trying to manage AI risks and do you think this is something that other organisations should look into?

MK: It does depend which sector you're in. In general, the message coming out of the AI Safety Summit was that throughout history there have been certain sectors considered a bit unsafe - sectors whose use made people nervous but no longer does. It is through safety standards that we have managed those risks and earned society’s trust. If we look at aviation, or transport in general, nobody worries about getting on a plane because they know there are strict safety checks and regulations. If we look at pharmaceuticals and the phased approach to clinical trials, this is an approach some people are suggesting we should take for foundation models. Part of the mission of the safety institutes is to look at the transferability of standards from these highly regulated sectors into the AI universe.

The three lines of defence model is quite specific to the financial services sector, but its essence still makes sense for other industries. The first line is the business - the product owners. They own the risk and bear the responsibility to identify the risks, identify the controls, put the mitigations in place and monitor the outcomes. Accountability sits with the business. The second line is all your support partners - the risk managers, the lawyers, the data governance team, the cyber security team, and so on. They bring domain expertise, for example through interdisciplinary workshops, to identify issues with the business's response, helping the business convene, register and quantitatively rate all the risks against the defined risk appetite for a particular project. The second line also helps you with monitoring. The third line is the auditors, who come in and periodically check that all the work you've done to identify and manage risks is as you've said it is. In general, it comes back to this point about looking at sectors that already have advanced safety measures in place and asking what transferability they have to this one.

It's crucial for the measures to be specific to what you're doing. If you're running a business-to-business (B2B), back-office anomaly detection service built on logistic regression over tabular data, then fundamental human rights issues aren't necessarily the biggest risk of your AI use.

We've touched on a really interesting point there about ownership of the risk. This leads us straight into questions about accountability. Where do you think that ultimate accountability in managing AI risk lies across the entire value chain?

CD: We've been thinking a lot about this question at the Ada Lovelace Institute. As Martin said, the multiple lines of defence concept maps really well onto the AI value chain. One of the challenges we've had with the EU AI Act is its focus on the deployment phase, because it took the product safety lens. In our research at the institute we're constantly finding that it's much more complex than that. Risk can originate at any point in the value chain, so there's no one-size-fits-all answer on where to assign accountability.

What is clear is that accountability should be allocated along the value chain. For example, decisions made at the design and development phase, or about which data you feed into the model when training it, will definitely have some impact on outputs even at the application layer when the system is deployed. Thinking about where the risk originates is one useful concept to pin accountability to. Risk proliferation is another key one for regulators to hone in on: risk may proliferate, for example, once a model is uploaded to a cloud hosting service or provided via API access - that's when something potentially harmful can be amplified through wider accessibility. These can be useful points of intervention for regulators, who can ask where the risk originates and how it can be mitigated.

We cannot say definitively where accountability lies - it needs to be shared, and it needs to be targeted very closely around where the risk originates. There are some interesting governance models we can look to in order to learn from other sectors - the three lines of defence, for example. The regulation of life sciences in the US is one we've been looking at recently: essentially, what an FDA for AI could look like. We are going to publish research on this in the next few days. The key thing is getting access to high-value information, to address the information asymmetry we see between developers and regulators. The Food and Drug Administration (FDA) has access to the information it needs to regulate medical devices, for example; there is an ongoing relationship between the developer and the regulator throughout the product’s life cycle, and there are options for targeted scrutiny based on specific standards. This is a good way to address value chains where risk origination is very unclear, and it's also a good way for a regulator to learn and upskill over time, by finding checkpoints where it can get access to high-value information.

There's significant accountability pre-market on developers and deployers. Then, at the post-market stage, there is a need for a strong regulator that can look at risk throughout the life cycle. This maps roughly onto the second and third lines of defence, and it should be supported by what we call an ecosystem of inspection - meaning access for auditors, as Martin mentioned, or access for vetted red teamers.

The main challenge is a question of incentives. The AI Act probably won’t set obligations to have independent audits or adversarial testing with vetted red teamers. So the question is: what are the incentives for providers, who might not necessarily open up to that type of scrutiny without a regulatory hook? That's what we are going to be thinking about moving forward.

We have a question from the audience that says: are you working on a mapping document which maps the NIST risk management framework to the EU AI Act?

CA: There have been so many different mapping exercises with the Risk Management Framework (RMF), with the EU AI Act, with various other proposals and I'm happy to pull up a few resources and share them with you all so you can have them. 

MK: Even if such a document existed, there will never be a point in the future when we get total legal certainty about the requirements everyone has in a particular jurisdiction. It's always going to be dynamic; we're always going to be operating with a certain amount of uncertainty and need to get comfortable with that. Part of the solution is developing one's own standards, specific to your own context. Go back to the beginning and, instead of leaving it to the policymakers, work it out yourself in your own context. Define some of your own standards and create an audit trail against them. You can use that audit trail when you want to give transparency for accountability, for every stage of the life cycle.
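To make that concrete, here is a minimal sketch of what an audit trail against internally defined standards might look like. The record fields and the example standard ("DATA-01") are hypothetical, chosen only to illustrate the idea rather than to reflect any real framework.

```python
# A minimal sketch of the suggestion above: define your own standards in your
# own context and keep an audit trail of assessments against them. The record
# fields and the example standard ("DATA-01") are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    system: str
    lifecycle_stage: str   # e.g. design, training, deployment, monitoring
    standard: str          # an internally defined standard
    assessment: str        # evidence or rationale for the judgement
    compliant: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


audit_trail: list[AuditRecord] = []
audit_trail.append(AuditRecord(
    system="credit scoring model",
    lifecycle_stage="training",
    standard="DATA-01: training data provenance documented",
    assessment="Provenance register reviewed and signed off by the data governance team",
    compliant=True,
))

# The same trail can later be shared with whoever needs assurance: a regulator,
# a customer, or an internal (third-line) audit review.
for record in audit_trail:
    print(record)
```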

But I would caution anyone away from the idea that they could simply chuck the different jurisdictions’ standards and requirements into an LLM, or into a platform that fully maps the documents, and then all an organisation needs to do is check all the boxes and they're good. That's not realistic, because this is going to be a dynamic environment.

RD: I completely agree. We're putting out a free AI policy guide very shortly touching on exactly these points where you need to tailor some of these frameworks to your organisation and go through and understand what matters to you. It's not a case of just rolling things off the shelf. 

There is a question around intellectual property and how it is going to play out in this area.

RD: Incentives are also a key consideration in all this, and the incentives are not always what you might think. To give an example of how this is currently playing out, there are deep intellectual property concerns with a lot of these large foundation models because they've pulled their data from absolutely everywhere. What you're seeing some of the larger providers do - those that already own big datasets - is put out intellectual property indemnities, saying: “We're going to indemnify you against any claim that anyone may ever bring that you've breached intellectual property laws anywhere in the world, because we are so certain that we own all of the data this model was trained on that we don't think that's ever going to happen, so we can confidently give you this indemnity.” That is inspiring quite a lot of trust in those types of products.

Rather than waiting for specific rules to do with intellectual property and these big models, organisations are taking the initiative and saying: we're going to take this risk out of the customer's hands entirely; we're going to make our tools trustworthy. I think it's really compelling, and it shows how sometimes the incentives are a bit different from what you might initially expect. Similarly, you can look at cybersecurity, where there are high standards that SaaS (Software as a Service) businesses need to meet, such as ISO/IEC 27001 and SOC 2, that are not required by regulators.

But there are market incentives for organisations to comply with standards because no one will buy your software if you don’t comply with the highest possible levels of cybersecurity standards to protect their data. That market incentive is sometimes overlooked.

The executive order is going to have a huge impact, but there are a lot of initiatives at the state level in the US as well, aren't there? I'm thinking, for example, of New York City's Local Law 144 - these initiatives seem to sit at the state or city level and to focus on specific industries. How do you think all of those moves are going to interact with the wider federal piece?

CA: It's a really good point to bring up. We're going to see a ton of state activity next year, particularly in the absence of congressional action. There's been a lot of focus on Senate Majority Leader Chuck Schumer's AI Insight Forums: he's convened about ten different roundtables with leading AI experts, with discussions on discrete topics like transparency and explainability, national security, bias and civil rights. But the political realities in Washington are what they are. We have a big, very consequential election coming up next year. Even if we make some great progress in the Senate on legislation informed by the Insight Forums and by the executive order's implementation, it's really unlikely that we will get a comprehensive bill across the finish line, or that we will manage to address a lot of the AI harms we're concerned about.

In that void, a lot of states have already begun to step in. New York City passed Local Law 144, which requires bias audits for any use of AI hiring technology - both in hiring and in broader employment decisions. If your business is doing any kind of worker monitoring, interviews or check-ins facilitated by AI technology, you're going to have to perform a bias audit; implementation is still being sorted out. We’ll see a lot of sector-specific bills, as well as bills bolstering AI talent and efficiency in state government. On the sector-specific side, Colorado has an insurance law that looks at how data sources are used and how data is analysed through AI for insurance decisions - that law is on the books now, with implementation happening as we speak. In Connecticut, State Senator James Maroney has set up an interstate AI working group, bringing together lawmakers at the state level who are really concerned about these issues, whether in the context of bias and discrimination or election integrity.
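As a rough illustration of the kind of calculation such a bias audit involves, here is a minimal Python sketch of selection rates and impact ratios relative to the most-selected category. It is not a compliant Local Law 144 audit procedure, and the group names and figures are hypothetical.

```python
# A rough sketch of the core calculation in a Local Law 144-style bias audit:
# selection rates per demographic category and impact ratios relative to the
# most-selected category. Illustrative only; the groups and numbers are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (number selected, total applicants)."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}


def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio = category selection rate / highest category selection rate."""
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}


hypothetical_outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
rates = selection_rates(hypothetical_outcomes)
print(impact_ratios(rates))  # {'group_a': 1.0, 'group_b': 0.625}
```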

And could you tell us, how might the 2024 presidential election impact the execution of Biden’s Executive Order on AI? 

CA: I'm well aware that the Biden administration is really focused on implementing as much of this executive order as it possibly can as we go into the election year, particularly given that the last executive order on AI didn't quite hit the mark when it came to implementation. It'll be really interesting to see what happens next year. Trump has already come out on the campaign trail and said that on day one he will repeal President Biden's “Woke AI executive order” to protect free speech. Regardless of your political leanings, I think these are very predictable talking points and positions, and my takeaway is that AI governance is a political issue in the US and will continue to be, particularly as we head into this election. There is lots to watch at the federal level. More importantly, pay attention to what's going on at the state level, because there are many up-and-coming elected officials who are really interested in working on these issues.

There are elections in Europe next year as well. How is that going to affect the timeline for the EU AI Act, particularly for it coming into effect?

CD: I'm hoping it won't affect it at all, though there's a small chance it could. To avoid that, they need to finish the technical drafting of the text by March at the latest so that it isn't impacted by the election - so that work will essentially run until March. Right now it's looking like they have more than enough time, and I don't anticipate hiccups. There might be if there were significant pushback - France, for example, may intervene in some destructive manner to try and block the Act. If that were to happen, you run the risk of no agreement by March. In practice that would mean work is postponed until 2025, which would be messy, to say the least, because you would then have newly elected parliamentarians - not the same people picking up the file, and with different priorities.

Assuming they do go ahead, there is a two-year period before entry into application after publication in the Official Journal of the EU. They will probably publish the text in early 2024, so 2026 will be the year by which most AI applications must become compliant. For general-purpose AI models with systemic risk, the aim is for the rules to come into application one year after publication.

That concludes a very insightful discussion on the responsible development and governance of AI systems. I want to sincerely thank our panellists, Connor, Martin, Chloe and Ryan, for sharing their expertise and perspectives. A big thank you to all our attendees for joining us - I invite you to continue this dialogue on our event page and social media channels. 

The full webinar recording is available on YouTube.
