UK Financial Sector Weighs in on AI/ML Framework

The Discussion Deepens on How AI will Affect Financial Services Regulation in the UK
Karen Waserstein
2 Jan

In October 2022, the Bank of England, the Prudential Regulation Authority, and the Financial Conduct Authority (collectively known as the supervisory authorities) published a discussion paper, DP5/22, on Artificial Intelligence and Machine Learning. The purpose of the paper was to deepen their understanding of, and facilitate a dialogue on, how AI could impact their objectives in the prudential and conduct supervision of financial institutions. The discussion paper was part of a broader initiative related to AI, which included the AI Public Private Forum.

The Bank published a feedback statement on 26 October 2023, which summarises the responses the supervisory authorities received and identifies common themes. It does not propose specific policies or indicate how the authorities intend to clarify, design, or implement regulatory proposals related to AI.

DP5/22 received 54 responses from various stakeholders in the financial sector. These came from a diverse range of institutions, with industry bodies accounting for nearly a quarter and banks representing an additional fifth. The feedback statement highlights that there was no significant divergence of opinion among these different sectors.

Below is a summary of the key points:

  1. A regulatory definition of AI would not be useful. Instead, respondents preferred alternative, principles-based or risk-based approaches
  2. Regulatory guidance should be ‘live’. In response to rapidly changing AI capabilities, regulators could periodically update guidance and examples of best practice
  3. Ongoing industry engagement is crucial. Respondents supported continued dialogue between regulators and industry as AI capabilities evolve
  4. We need more coordination between regulators. The current landscape is too complex and fragmented, both domestically and internationally
  5. To address data risks, more alignment is necessary. Especially risks related to fairness, bias, and management of protected characteristics
  6. Consumer outcomes are key. Especially with respect to ensuring fairness and other ethical dimensions
  7. The use of third-party models is a concern. More regulatory guidance would be helpful, and respondents noted the relevance of the discussion paper to this issue
  8. A joined-up approach could help to mitigate risks. Closer collaboration between data management and model risk management teams would be beneficial
  9. Areas of CP6/22 could be clarified or strengthened. There is still a need to address issues particularly relevant to models with AI characteristics
  10. Existing firm governance structures are sufficient to address AI risks

Read more about the feedback statement here.

Build and deploy AI with confidence

Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.