UK finance regulators balancing risk, reward in pro-AI stance

Amanda Khatri

Editorial Manager

The UK’s financial regulators have published their strategies for supervising AI in financial services, indicating the need for a balanced “pro innovation” and “pro safety” approach.


In their responses to the UK government’s AI Regulation Policy Paper of July 2022, the Financial Conduct Authority (FCA), the Prudential Regulation Authority (PRA), and the Bank of England (BoE) acknowledged that the pace and complexity of AI’s development made it important to monitor.


However, all three stopped short of prescribing rules for the use of AI within financial services anytime soon.


The UK government has said it has no plans to rush new laws into place to deal with AI or to legislate as a “quick fix”, as hastily drafted rules may date quickly. Instead, ministers are content to empower existing regulators to address AI as they see fit.


Industry reaction was overall positive, with calls for the regulators to push ahead with their agendas and begin consulting City firms.


“The AI paper is comprehensive and helpful to the UK financial sector,” said Etay Katz, co-head of the banks industry team at law firm Ashurst.


“Now the regulators need to move forward with an ambitious and proactive programme to foster AI adoption rather than merely assess the challenges and risks that need to be managed,” Katz said.


“Given the huge impact AI will have on the future delivery of financial services, this will be a real global differentiator and critical to the UK maintaining its edge over other global financial centres.”


Which aspects of AI will the UK financial regulators monitor?


The FCA warned that it will be closely observing the processes and systems in place at financial institutions to ensure AI remains safe.


The City watchdog noted that AI can “make a significant contribution to economic growth” but will require “a strong regulatory framework that adapts, evolves and responds to the new challenges and risks that technology brings”.


It has previously described itself as a “technology-agnostic, principles-based and outcomes-focused regulator”, and will remain relatively hands-off unless it identifies egregious uses of machine learning.


It said it will primarily focus on mitigating risks and assessing the implications for financial markets and consumers, rather than seeking to outlaw specific use cases of AI.


This outcomes-based approach to AI regulation is intended to give businesses the freedom to innovate and boost the competitiveness of markets while ensuring consumers are protected, it said.


The FCA did note that its regulatory strategy must evolve along with the speed, scale, and complexity of AI developments. Over time, it will place greater focus on the testing, validation, and understanding of AI models, and will develop robust accountability principles.


Continued adoption of AI in financial services may pose a risk to financial stability, the PRA noted, indicating it will carry out a fuller probe on this through the rest of 2024. The findings will be reviewed by the Financial Policy Committee (FPC) of the Bank of England.


Leaders in finance, leaders in AI?


The government’s ambition is to help the UK become “a global leader in safe AI development and deployment”, according to technology secretary Michelle Donelan.


It aims to do this by providing guidance for regulators to follow, based around five key principles that balance innovation with the need to keep the public safe:


  • Ensuring safety, security, and robustness
  • Appropriate transparency and explainability
  • Ensuring fairness
  • Accountability and governance
  • Contestability and redress


“AI can and must remain a force for the public good, and we will ensure that is the case as we develop our policy approach in this area,” said Donelan.

As a major global financial centre, London is expected to play a central role in the development of AI within banking.


CUBE comment


The UK’s finance regulators have recognised the need for a regulatory framework that can evolve and address emerging challenges and risks.


As such, businesses will need to be attuned to changes that may occur quickly given the pace at which machine learning is developing.


Diverging international approaches also require vigilance from multinational banks, which may be subject to differing standards across jurisdictions.


CUBE is uniquely positioned to solve these problems for global financial institutions and has done so for over a decade.


We are leaders in AI and have developed industry-leading solutions that take the complexity out of compliance.


Talk to us today.


Book a demo today