Maria Fritzsche
AI solutions – a strong ally under global review
The term Artificial Intelligence (AI) is everywhere in the news and across the internet, especially as OpenAI’s ChatGPT, a chatbot-style technology, has become the poster child of generative AI.
“As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” said Thierry Breton, the European Commissioner for the Internal Market.
With AI rapidly developing, the European Commission proposed the first-ever legal framework on AI.
EU lawmakers are scheduled to vote on the draft AI Act in March, which means the final Act could be adopted as early as the end of this year. For now, lawmakers continue to work on fundamental aspects of the legislation, including the definition of AI, its scope, prohibited practices and high-risk categories.
The EU AI Act
The proposed Act will regulate the use of AI across all sectors, not only financial services. It sets out a uniform, cross-cutting legal framework for AI that aims to safeguard the safety and fundamental rights of people and businesses. The framework seeks to encourage innovation in AI, strengthen public trust in the technology and set an international standard, much as the GDPR did for data protection.
The draft law, presented by the European Commission in April 2021, follows a risk-based approach, sorting AI systems into four levels: unacceptable risk, high risk, limited risk and minimal risk.
“Our approach has been to make this regulation truly human-centric,” AI Act co-rapporteur Brando Benifei told EURACTIV. “We haven’t agreed on everything, but we have made an important step forward.”
A major task will be the definition of AI itself, as it will determine the scope of the framework’s application. Lawmakers have discussed mirroring, though not copying, the definition used by the US National Institute of Standards and Technology: “an engineered or machine-based system that can, for a given set of objectives, generate output such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”.
The initial definition proposed by the European Commission included statistical approaches and, as law firm Clifford Chance observed, “this definition could capture almost any business software, even if it does not involve any recognizable form of artificial intelligence.” It is worth noting that the definition has been moved from an Annex of the legislation into the main body of the text, meaning the European Commission will not be able to amend it later on.
The impact on financial services remains to be seen as the draft works its way through the machinery of the European Union’s institutions. In the draft presented by the European Commission, financial services are described as “high impact” rather than “high risk”, the label applied to sectors such as aviation and healthcare. Nor is finance included in the Annexes listing high-risk systems. However, credit institutions and banks are referenced throughout, and credit scoring is named as a high-risk use case. Once passed, the Act will operate alongside existing and proposed data-focused legislation, such as the GDPR, the Digital Operational Resilience Act (DORA), the Digital Services Act, the proposed Data Act and the Cyber Resilience Act (CRA).
AI and the United Kingdom
The UK’s financial regulator, the Financial Conduct Authority (FCA), is optimistic about the use of AI in financial services, according to its Chief Data, Information and Intelligence Officer, Jessica Rusu:
“AI has the potential to enable firms to offer better products and services to consumers, improve operational efficiency, increase revenue, and drive innovation.”
Organisations using manual processes to manage their compliance requirements risk non-compliance through human error and inefficient resource allocation. A recent survey conducted by the FCA with the Bank of England found that UK financial services firms recognise these benefits of AI: 72% of responding firms are using or developing machine learning (ML). By comparison, 64% of European banks were reported to have adopted AI technologies in 2020; now, over 90% of EU banks are said to be using, developing or testing AI technology.
The survey also found that almost half of the responding firms believe certain Prudential Regulation Authority (PRA) and FCA regulations limit ML implementation, with one-quarter citing “a lack of clarity within existing regulations” as the reason.
Earlier this month, the FCA closed its call for feedback on how artificial intelligence may affect the objectives of the FCA, the Bank of England and the PRA. It will be interesting to see the outcome of this consultation.
US National AI Initiative Act
The National AI Initiative Act of 2020, passed in January 2021, focuses above all on fostering innovation to maintain the US’s position as a global leader in AI. Its introduction met with greater resistance over legal restrictions than the EU AI Act has. Another notable difference concerns governance: the National AI Initiative Act of 2020 introduced only limited governance, whereas the EU is considering very specific governance requirements.
The US and the EU approach AI regulation from different perspectives: the US with a view to preserving its place in a competitive market, the EU with a focus on protecting fundamental rights. Neither is guaranteed a successful outcome, nor holds a golden ticket to achieving what it has set out to do.
CUBE comment
The implementation of the AI Act is likely to shake up many sectors and industries. The financial sector will certainly be affected, and further guidance should show how the new legislation interacts with existing data-focused legislation. For now, the EU faces major challenges, including pressure from US firms: Facebook stated in its consultation response on the AI Act that many of the requirements are not achievable. Because the proposed Act has a wide territorial reach, it can apply to AI systems supplied by non-EU organisations. Another element to watch will be the UK’s own plans to regulate AI, and how the FCA approaches AI regulation going forward.
“There are days when I’m optimistic and moments when I’m pessimistic about how humanity will put AI to use,” said Breton.
At CUBE, we remain optimistic about the use of AI, and specifically about how it can transform the way regulatory change management activities are performed. AI is a crucial part of our existing products and will have an ever-increasing impact on our future roadmap. We believe in transparent and ethical technology; find out more about how we approach the use of AI.
Keep ahead of emerging AI regulations by speaking to CUBE.