CUBE RegNews: 22nd May

Greg Kilminster

Head of Product - Content

FCA and PRA fine CGML £27.8 million and £33.9 million for trading systems failures

The Financial Conduct Authority (FCA) has imposed a fine of £27,766,200 on Citigroup Global Markets Limited (CGML) for significant failures in its trading systems and controls, which led to the erroneous sale of US$1.4 billion of equities across European markets. The Prudential Regulation Authority (PRA) has also fined CGML almost £34 million following a related investigation.


By agreeing to resolve the matter, CGML qualified for a 30% discount, without which the total fine from the regulators would have been in excess of £88 million.
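As a rough check of the discount arithmetic, a minimal sketch (using the rounded £33.9 million PRA figure from the headline rather than the exact penalty) divides the combined discounted fines by 0.7 to recover the pre-discount total:

```python
# Illustrative only: verify that a 30% settlement discount on both fines
# implies a combined pre-discount penalty of just over £88 million.
fca_fine = 27_766_200      # FCA fine after the 30% discount (GBP)
pra_fine = 33_900_000      # PRA fine after the 30% discount (GBP, rounded)
discount = 0.30

# Each discounted fine is 70% of its pre-discount amount.
pre_discount_total = (fca_fine + pra_fine) / (1 - discount)
print(f"Pre-discount total: ~£{pre_discount_total:,.0f}")  # ~£88,094,571
```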


What happened?

On 2 May 2022, a CGML trader intended to sell a basket of equities valued at US$58 million. However, an input error resulted in a basket worth US$444 billion being created. Although CGML's controls blocked US$255 billion of this erroneous order, the remaining US$189 billion proceeded to a trading algorithm. This algorithm began selling portions of the order throughout the day, ultimately executing US$1.4 billion in trades before the order was cancelled. This incident contributed to a brief but significant drop in European stock indices.


Failures identified

The FCA investigation revealed several critical deficiencies in CGML's trading controls:

  • Lack of hard block: There was no mechanism to completely prevent the erroneous order from being processed.
  • Pop-up alert override: The trader could bypass alerts without reviewing the necessary information.
  • Ineffective real-time monitoring: Internal alerts about the erroneous trades were not escalated promptly.
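As a purely illustrative sketch of the first two control gaps (hypothetical thresholds and names, not a description of CGML's actual systems), a pre-trade check with a genuine hard block rejects an oversized basket no matter how many warnings the trader acknowledges:

```python
from dataclasses import dataclass

# Hypothetical pre-trade check contrasting a "soft" alert a trader can
# acknowledge and bypass with a hard block that rejects the order outright.
SOFT_LIMIT_USD = 100_000_000     # hypothetical level that triggers a warning
HARD_LIMIT_USD = 1_000_000_000   # hypothetical absolute cap per basket

@dataclass
class Order:
    notional_usd: float

def pre_trade_check(order: Order, trader_acknowledged: bool) -> str:
    if order.notional_usd > HARD_LIMIT_USD:
        return "REJECTED"            # hard block: cannot be overridden
    if order.notional_usd > SOFT_LIMIT_USD and not trader_acknowledged:
        return "HELD_FOR_REVIEW"     # soft alert: requires explicit review
    return "ACCEPTED"

# A fat-finger basket far above the hard limit is rejected even if the
# trader clicks through every pop-up.
print(pre_trade_check(Order(notional_usd=444e9), trader_acknowledged=True))  # REJECTED
```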


Regulatory breaches

The FCA found CGML in breach of:

  • Principle 2: Conducting business with due skill, care, and diligence.
  • Principle 3: Organising and controlling affairs responsibly with adequate risk management systems.
  • Rule 7A.3.2 of MAR: Ensuring effective systems and controls for algorithmic trading.


CGML has since undertaken remediation work to strengthen its trading controls.

 

Click here to read the full RegInsight




ESMA consults on three standards as part of MiFIR review 

The European Securities and Markets Authority (ESMA) has published a consultation paper on non-equity trade transparency, reasonable commercial basis (RCB) and reference data under the Markets in Financial Instruments Regulation (MiFIR) review. ESMA’s proposals aim to enhance the information available to stakeholders by improving, simplifying and further harmonising transparency in capital markets. The consultation seeks input on three topics:

 

  • Pre- and post-trade transparency for non-equity instruments: Ensure trade information for bonds, structured finance products, and emissions allowances is accessible to stakeholders. 
  • Data availability on a reasonable commercial basis: Guarantee that market data is accessible, fair, and non-discriminatory for data users. 
  • Provision of instrument reference data: Ensure data is suitable for both transaction reporting and transparency purposes. 


The deadline for comments is 28 August 2024. 


Click here to read the full RegInsight




Gensler reminds markets of T+1 benefits 

SEC Chair Gary Gensler has issued a brief statement regarding the US securities market's transition to a T+1 settlement cycle, set to take effect on 28 May 2024. The change will reduce the standard settlement time for most broker-dealer transactions from two business days (T+2) to one business day (T+1). 

The SEC's rule amendments, adopted on 15 February 2023, aim to facilitate this transition by improving the processing of institutional trades and establishing new requirements for broker-dealers, investment advisers, and central matching service providers.


Gensler highlighted the benefits of this transition for investors, stating that it allows for faster access to funds and enhances market efficiency by reducing time-related risks. 


While the previous reduction in the settlement cycle, from T+3 to T+2 in 2017, was successful, the SEC anticipates some initial challenges. Nonetheless, the overall benefits should include reduced credit, market, and liquidity risks.
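As a minimal illustration of what the shorter cycle means in calendar terms (weekends skipped; US market holidays ignored for simplicity, and the dates below are arbitrary examples):

```python
from datetime import date, timedelta

# Add business days to a trade date, skipping weekends.
def settlement_date(trade_date: date, business_days: int) -> date:
    d = trade_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:          # Monday to Friday only
            remaining -= 1
    return d

trade = date(2024, 5, 31)            # a Friday
print(settlement_date(trade, 2))     # T+2 settles Tuesday 2024-06-04
print(settlement_date(trade, 1))     # T+1 settles Monday 2024-06-03
```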


Click here to read the full RegInsight





UK Finance publishes annual fraud report 

UK trade association UK Finance, which represents around 300 firms, has issued its Annual Fraud Report 2023. 

The report notes that member firms reported fraud losses of £1.17 billion, a 4% reduction on the figure reported for the previous period. 

Unauthorised fraud – where the account holder does not authorise the payment and the transaction is carried out by a third party – fell by 3% from £726.9 million in 2022 to £708.7 million in 2023. 

 

  • £551 million of this was from card fraud, 
  • £5.6 million from cheque fraud, and 
  • £152 million from remote banking. 

 

Authorised fraud – where the account holder is tricked into sending money to a fraudster posing as a genuine payee – fell by 5% from £485.2 million in 2022 to £459.7 million in 2023. The report notes that all case types decreased except purchase scams (highest recorded), romance scams (highest recorded), and invoice and mandate scams. 

 

  • £85.9 million was from purchase scams, 
  • £107.8 million from investment scams, 
  • £36.5 million from romance scams, 
  • £31.3 million from advance fee scams, 
  • £50.3 million from invoice and mandate scams, 
  • £11.6 million from CEO scams, 
  • £78.9 million from police/bank staff impersonation, and 
  • £57.3 million from other impersonation. 
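As a quick arithmetic check, the itemised figures above sum to approximately the reported totals; the small gaps reflect rounding in the published breakdowns:

```python
# Figures in £ millions, taken from the bullet lists above.
unauthorised = {"card": 551.0, "cheque": 5.6, "remote banking": 152.0}
authorised = {
    "purchase": 85.9, "investment": 107.8, "romance": 36.5,
    "advance fee": 31.3, "invoice and mandate": 50.3, "CEO": 11.6,
    "police/bank staff impersonation": 78.9, "other impersonation": 57.3,
}

print(round(sum(unauthorised.values()), 1))  # 708.6 vs reported £708.7m
print(round(sum(authorised.values()), 1))    # 459.6 vs reported £459.7m
```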

 

Click here to read the full RegInsight




APRA speech: no new regulation to deal with AI 

In a speech at the Australian Finance Industry Association Risk Summit, Therese McCarthy Hockey, Executive Board Member of the Australian Prudential Regulation Authority (APRA), discussed the impact of generative artificial intelligence (AI) on the financial services sector and confirmed that APRA has no plans for additional regulation to address AI-related concerns. 


Generative AI: prospects and concerns  

McCarthy Hockey discussed the rise of AI, noting its rapid development and potential for both positive and negative impacts. Proponents believe AI can cut costs, improve decision-making, and automate mundane tasks, while critics warn of job losses, criminal misuse, and existential threats. She emphasised the need for a balanced view, recognising both the opportunities and risks associated with AI. 


AI in the financial sector  

Generative AI, with its ability to produce realistic content, presents significant benefits and risks. The financial services industry, a major investor in AI, stands to gain from improved efficiency, lower costs, and enhanced decision-making; examples include using AI for document review, customer service, product testing, and fraud detection. McCarthy Hockey noted that AI could lead to savings for customers and higher returns for investors.

She also highlighted the potential for AI-related crime and financial instability, elaborating on risks including AI's potential misuse in scams and fraud, the danger of flawed decision-making, and ethical concerns such as bias and privacy. She further pointed out that complex AI systems can be opaque, making it difficult to identify and correct errors, which could undermine public trust and financial stability. 


Regulatory approach 

Despite the significant risks, APRA is currently not planning to introduce new AI-specific regulations. McCarthy Hockey explained that existing prudential standards already cover many AI-related risks, such as cybersecurity and data protection. She maintained that APRA’s high-level, principles-based, and technology-neutral approach remains adequate for now. 


Guidance for regulated entities 

McCarthy Hockey advised regulated entities to tread carefully with AI: conduct due diligence, put appropriate monitoring in place, test the board’s risk appetite and ensure there is adequate board oversight. She set out several considerations for boards: 

  • “Board capability – how does the board ensure it is sufficiently capable to challenge management and make sound decisions on AI strategy and risk management? What learning and development, outside advice or skills might be needed? 
  • Risk culture – how does the board ensure all employees across the three lines of defence understand their role and responsibilities in protecting the business? And how can management monitor the potential for the unauthorised use of AI by employees? 
  • Data quality and reliability – the best AI in the world can’t create good output if your company hasn’t got its house in order on the inputs. Our observation across the financial services industry is that many institutions have a long way to go on data risk management generally.” 


APRA’s use of AI 

APRA itself is exploring AI to improve regulatory efficiency. Initiatives include using machine learning to analyse risk culture surveys and natural language processing to assess incident reports. These projects aim to enhance supervision and reduce regulatory burdens. APRA is also collaborating with other government agencies and regulatory bodies to explore AI applications. 

McCarthy Hockey concluded by encouraging financial institutions to innovate with AI, provided they have the necessary risk management frameworks in place. She stressed that while AI can be a valuable tool, it should not operate independently without human oversight. The goal is to harness AI’s benefits while mitigating its risks to ensure financial stability and protect the community. 


Click here to read the full RegInsight