CUBE RegNews: 7th June

Greg Kilminster

Head of Product - Content

US regulators address AI issues 


Two regulators in the United States have addressed artificial intelligence (AI) in separate speeches at the Financial Stability Oversight Council (FSOC) Conference on Artificial Intelligence and Financial Stability. 


Janet Yellen, US Secretary of the Treasury, began her remarks by highlighting the transformative potential of AI within financial services. She noted that AI’s predictive capabilities have already enhanced areas such as forecasting and portfolio management, while its anomaly detection has bolstered efforts against fraud and illicit finance. Additionally, the automation of customer support services has improved efficiency and accessibility in financial products. 


Advances in natural language processing, image recognition, and generative AI promise further enhancements, potentially lowering costs and increasing access to financial services. However, Yellen emphasised that these benefits are accompanied by considerable risks. The FSOC’s 2023 annual report identified the widespread adoption of AI in financial services as a new vulnerability, requiring a deeper understanding of the associated financial stability risks. 


Yellen outlined specific vulnerabilities, including the complexity and opacity of AI models, inadequate risk management frameworks, and risks arising from vendor concentration. Additionally, issues such as faulty data can perpetuate or introduce new biases into financial decision-making. 


Treasury’s actions on AI-related risks 

Yellen detailed several initiatives aimed at mitigating AI-related risks. Following President Biden’s Executive Order on AI, the Treasury released a comprehensive report in March addressing AI use cases and best practices for cybersecurity and fraud prevention in the financial sector. This report also identified key steps to tackle immediate AI-related operational risks. 


Treasury has maintained regular communication with federal financial regulators regarding AI-related efforts. One of the key priorities in Treasury’s 2024 National Illicit Finance Strategy is leveraging technology to combat illicit finance risks. This includes employing AI to detect money laundering, terrorist financing, and sanctions evasion. Furthermore, the Internal Revenue Service has been utilising AI to enhance fraud detection. 


Recognising the global nature of AI, Yellen noted that Treasury’s work extends beyond domestic borders. Engagements with international partners, including financial regulators through the Financial Stability Board, are crucial in addressing AI’s impact on the global financial system. 


Future initiatives and continued efforts 

Yellen announced several initiatives to further understand and manage AI’s role in financial services. The Treasury is launching a formal public request for information to gather insights from financial institutions, consumers, advocates, academics, and other stakeholders on AI’s current uses, opportunities, and risks in the financial sector. 


Additionally, Treasury’s Federal Insurance Office will convene a roundtable on AI and insurance, aimed at discussing the benefits and challenges of AI use by insurers. This initiative seeks to identify best practices and potential consumer protections to prevent discrimination. 


The FSOC will continue its efforts to monitor AI’s impact on financial stability, facilitate information exchange, and promote dialogue among financial regulators. This includes building supervisory capacity to better understand associated risks and utilising scenario analysis to identify potential future vulnerabilities. 


Meanwhile, Acting Comptroller of the Currency Michael J Hsu centred his remarks on AI’s potential to act both as a tool and a weapon in the financial sector, highlighting the urgent need for robust accountability measures to mitigate systemic risks. 


AI: tool and weapon 

Much like Yellen, Hsu commenced by acknowledging AI's dual nature. As a tool, AI can enhance efficiency and innovation across financial services. Applications he noted include improved customer service, fraud detection, and portfolio management. However, he similarly cautioned that the rapid, uncontrolled adoption of AI could replicate past financial crises, drawing parallels to the unchecked growth of derivatives pre-2008 and cryptocurrencies leading up to the 2022 crypto winter. 


Hsu proposed a phased approach to AI implementation in banking, akin to the evolution of electronic trading. 

  1. Initially, AI acts as an input, providing information for human decision-making. 
  2. It then progresses to a co-pilot, aiding and enhancing human actions. 
  3. Finally, AI becomes an agent, executing decisions autonomously. 


The risks and necessary controls escalate significantly at each stage. Implementing clear control gates between these phases, with rigorous checks before advancing, can help ensure AI remains beneficial without becoming hazardous, said Hsu. 


Financial stability risks of AI tools 

Hsu also highlighted the systemic risks when AI is utilised as a tool without adequate controls. The competitive drive for rapid adoption often leads to a neglect of risk management, creating further vulnerabilities. Hsu stressed the importance of establishing predefined checkpoints where growth pauses to ensure control measures are effective and innovation remains responsible. 


He also stressed the need for balance: promoting innovation while safeguarding against runaway risks. This balance, Hsu argued, can be achieved through robust risk management and common sense, ensuring that technological advancements do not outpace the establishment of necessary safeguards. 


AI as a weapon: accountability and trust 

Hsu discussed the dilution of accountability that AI enables. Unlike other technologies, AI’s learning capabilities make it easier to disclaim responsibility for negative outcomes, undermining trust—a cornerstone of banking. To maintain trust while fostering AI innovation, Hsu advocated for a robust accountability model, essential for the responsible use of AI in finance. 


He proposed a “shared responsibility” model to address fraud, scams, and ransomware attacks, where responsibilities are clearly defined and distributed among various stakeholders. This model aims to enhance transparency and accountability, ensuring that AI’s powerful capabilities are not exploited maliciously. 


Hsu concluded by outlining steps to ensure responsible AI innovation. Emphasising the role of well-designed control gates in managing AI’s evolution from input to agent, he called for clear, effective controls and accountability at each phase. This approach, he suggested, could prevent AI from transitioning from a beneficial tool to a dangerous weapon. 


Like Yellen, he also highlighted the importance of international collaboration, noting that financial stability risks associated with AI are not confined to national borders. Engaging with global regulators and stakeholders is crucial in developing comprehensive frameworks to manage these risks effectively. 


Click here to read the full RegInsight on CUBE’s RegPlatform   

 

EBA launches consultation on operational risk loss under CRR 

 

The European Banking Authority (EBA) has launched a consultation on three sets of draft Regulatory Technical Standards (RTS) on data collection and governance of the loss data set as outlined in articles 317(9), 316(3), and 321(2) of the Capital Requirements Regulation (CRR). 

The aim is to standardise the collection and recording of operational risk losses, to clarify the exemptions from the calculation of the annual operational risk loss, and to specify the adjustments to the loss data set that banks must make in the case of merged or acquired entities or activities. 

 

Some context 

The banking package, which implements the Basel III framework in the EU, includes amendments to the Capital Requirements Regulation (CRR) and the Capital Requirements Directive (CRD). This package introduces several innovations in the prudential framework for credit institutions. One significant change is the adoption of a revised framework for own funds requirements for operational risk, which replaces all existing calculation approaches with a single, non-model-based approach called the business indicator component (BIC). 
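As a rough illustration of how a business indicator component works, the sketch below slices the business indicator into buckets and applies marginal coefficients to each slice. The bucket thresholds (EUR 1 billion and EUR 30 billion) and coefficients (12%, 15%, 18%) are taken from the Basel framework's standardised approach and are assumptions here; the final EU calibration is set out in the CRR itself, not in this article.

```python
# Illustrative sketch of a business indicator component (BIC) calculation.
# Thresholds and coefficients below are the Basel-framework figures
# (assumptions for illustration, not the final EU calibration).
BUCKETS = [
    (1e9, 0.12),            # BI slice up to EUR 1bn
    (30e9, 0.15),           # BI slice between EUR 1bn and EUR 30bn
    (float("inf"), 0.18),   # BI slice above EUR 30bn
]

def business_indicator_component(bi: float) -> float:
    """Apply marginal coefficients to successive slices of the business indicator."""
    bic, lower = 0.0, 0.0
    for upper, coefficient in BUCKETS:
        slice_amount = max(0.0, min(bi, upper) - lower)
        bic += coefficient * slice_amount
        lower = upper
    return bic

# A bank with a BI of EUR 2bn:
# roughly EUR 270 million (0.12 * 1bn + 0.15 * 1bn)
print(business_indicator_component(2e9))
```

Because the coefficients are marginal, a small increase in the BI never causes a jump in the capital requirement; only the slice above each threshold attracts the higher coefficient.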


The EBA mandates cover the elements necessary for calculating capital requirements, specifically concerning the business indicator (BI), the establishment and maintenance of the operational risk loss database, and the requirements related to the governance and risk management framework for operational risk. These draft RTS address the EBA mandates for data collection and governance of the loss data set as outlined in Articles 317(9), 316(3), and 321(2) of the CRR. 

 

Key takeaways   

This package includes draft RTS in three areas: 


  • Draft RTS on establishing a risk taxonomy for operational risk: This draft RTS aims to establish a risk taxonomy that aligns with the event types outlined in the CRR2. It includes Level 1 event types, Level 2 categories that provide more detailed specifications for each event type, and a list of attributes that enhance the flexibility of the framework and the amount of information available to supervisors. The Level 1 event types and Level 2 categories are mutually exclusive and collectively exhaustive, while only some attributes retain this feature.  


  • Draft RTS specifying the condition of ‘unduly burdensome’ for the calculation of the annual operational risk loss: The CRR3 allows the competent authority to grant a derogation to an institution whose BI is between EUR 750 million and EUR 1 billion, where the institution proves that such calculation would be unduly burdensome. The draft RTS specifies that the annual operational risk loss calculation should be deemed unduly burdensome for up to three years when an institution has a BI higher than EUR 750 million following a merger or acquisition. In addition, institutions whose BI temporarily exceeds EUR 750 million should be exempt from calculating the annual operational risk loss. Finally, bridge institutions set up in accordance with Article 40 of the Bank Recovery and Resolution Directive (BRRD) should also be exempt from this requirement.   


  • Draft RTS specifying how institutions shall determine the adjustments to their loss data set following the inclusion of losses from merged or acquired entities or activities: The draft RTS requires institutions that undergo a merger or acquisition, or that start a new activity, to incorporate the loss data set of the acquired or merged entity or activity in the currency of the reporting institution. The loss data set should also be incorporated in a way that reflects the risk taxonomy used by the reporting institution. Finally, the draft RTS provides a formula for calculating, on a temporary basis, the annual operational risk loss when the institution is unable to promptly include the loss data set of the acquired or merged entity or activity in its own loss data set.    


Next steps 

The consultation runs until 6 September 2024. Depending on the feedback received, the EBA intends to finalise the draft RTS by the end of 2024. 


Click here to read the full RegInsight on CUBE’s RegPlatform   


EBA issues new set of regulatory products under MiCAR 


The European Banking Authority (EBA) has published another set of final regulatory products on governance under the Markets in Cryptoassets Regulation (MiCAR). 


Key takeaways 

The package includes: 


  • Final draft Regulatory Technical Standards (RTS) on the minimum content of the governance arrangements on the remuneration policy: This draft RTS applies to issuers of significant asset-referenced tokens (ARTs) and electronic money institutions issuing significant e-money tokens (EMTs). It also applies to issuers of non-significant EMTs if Member States require the application of Article 45(1) of MiCAR. The aim of the RTS is to ensure that remuneration policies encourage sound risk management, discourage the reduction of risk standards, and maintain cross-sectoral consistency. The framework is similar to the remuneration framework for investment firms and shares the same regulatory objectives. 


  • Guidelines on the minimum content of the governance arrangements for issuers of ARTs: These guidelines further specify the various governance provisions in MiCAR, taking into account the principle of proportionality. In addition, they clarify the tasks, responsibilities and organisation of the management body, as well as the organisational arrangements of issuers, including the sound management of risks across all three lines of defence. 


  • Final draft RTS on conflicts of interest for issuers of ARTs: This draft RTS specifies the requirements for policies and procedures regarding conflicts of interest (CoI). Issuers of ARTs must establish and maintain effective policies and procedures to identify, prevent, manage, and disclose conflicts of interest. Adequate resources should be allocated to effectively manage CoI. The final draft RTS emphasises the importance of addressing conflicts of interest related to the reserve of assets. If the issuer of ARTs is part of a group, the policies and procedures must also consider any circumstances that may result in a CoI due to the structure and business activities of other entities within the group. 


Some context 

MiCAR establishes a regime for the regulation and supervision of cryptoasset issuance and cryptoasset service provision in the European Union (EU). Among the activities within the scope of MiCAR are the activities of offering to the public or seeking admission to trading ARTs and EMTs and issuing such tokens (ART/EMT activities). 

These activities will be subject to, respectively, Title III and Title IV of MiCAR from 30 June 2024.