Greg Kilminster
Head of Product - Content
SFC report on adoption of regtech for AML/CFT purposes
Hong Kong’s Securities and Futures Commission (SFC) has issued a new report detailing advancements in regulatory technology (Regtech) adoption within anti-money laundering (AML) and counter-financing of terrorism (CFT) compliance among licensed corporations (LCs). The report, based on a survey and in-depth discussions with 50 LCs, highlights significant growth in Regtech adoption, driven by its potential to enhance efficiency, reduce human error, and improve the accuracy of risk management in AML/CFT processes.
Some context
The SFC has been monitoring Regtech adoption to understand how technology can support AML/CFT compliance. Increasingly complex financial crimes require efficient detection and prevention measures, as traditional, manual methods are often inadequate. The report reveals that most surveyed LCs have adopted Regtech in at least one core AML/CFT area, including customer due diligence (CDD), name screening, transaction monitoring, and management information reporting.
Key takeaways
- High adoption rates and process-specific solutions: The report finds that name screening leads in Regtech adoption at 92%, followed by CDD at 71%, and transaction monitoring at 69%. LCs are using Regtech to automate name screening with tools that reduce false positives, allowing staff to focus on higher-risk cases. CDD solutions enable more accurate risk assessments by streamlining data collection and incorporating real-time adjustments for changes in customer profiles.
- Efficiency gains and error reduction: More than 85% of LCs reported that Regtech enhanced their capacity to detect and manage money laundering and terrorism financing (ML/TF) risks, while 80% indicated reduced human errors. Transaction monitoring solutions incorporating AI, for example, use machine learning to identify suspicious patterns, providing risk scores that help LCs prioritise high-risk cases.
- Challenges and barriers: Despite the benefits, LCs face adoption barriers, including high implementation costs and data privacy concerns. Around half of the surveyed LCs were concerned about the readiness of their infrastructure and expertise. Furthermore, approximately 35% noted data privacy and cybersecurity concerns, especially for cloud-based solutions. The SFC recommends a phased approach to mitigate these challenges, starting with targeted Regtech solutions before scaling more complex applications.
- Responsible adoption principles: The report underlines the importance of responsible Regtech adoption, urging LCs to implement strong governance, cybersecurity measures, and ongoing monitoring. Ensuring that senior management actively supports and oversees these technologies is critical. The SFC also advises LCs to carefully assess external vendors’ data security practices when adopting third-party Regtech solutions.
Next steps
The SFC intends to continue engaging with the financial sector to address challenges in Regtech adoption and to foster collaboration. LCs are encouraged to adopt a gradual approach, leveraging targeted, cost-effective Regtech applications that align with their unique operational needs and AML/CFT obligations.
By establishing robust governance and data protection frameworks, the financial sector can uphold compliance standards and combat evolving ML/TF threats.
Click here to read the full RegInsight on CUBE's RegPlatform.
FCA increases scrutiny of SIPP operators, urging compliance with Consumer Duty
The Financial Conduct Authority (FCA) has issued a new letter to Self-Invested Personal Pension (SIPP) operators outlining its supervisory priorities for the sector. The letter emphasises the regulator's expectations for enhanced governance, client asset protection, and redress practices, with a particular focus on firms’ adherence to the Consumer Duty requirements.
Persistent concerns over redress and data integrity
Following a comprehensive 2024 data review, the FCA noted a significant increase in total assets under SIPP administration, rising to £184 billion in the SIPP Operator portfolio, with a combined £567 billion across all firms, covering an estimated 5.3 million consumers. While the data indicated growth in secure, non-standard investments such as fixed-term deposits and National Savings, the FCA remains concerned about ongoing deficiencies in data management and oversight, especially around record-keeping and asset valuations.
The FCA's analysis also found that some SIPP operators continue to lag in their Consumer Duty responsibilities, particularly regarding accurate and timely resolution of complaints. Despite a reduction in open due diligence cases since their 2019-2020 peak, around 800 complaints remain unresolved, some more than two years old. The FCA expects firms to address these issues promptly, noting that failing to do so undermines both consumer trust and regulatory standards.
Asset safeguarding and control failings
The FCA's letter highlights growing unease about how certain firms manage pension scheme money and assets. The regulator has identified weaknesses in control systems and record-keeping, which could lead to consumers receiving incorrect valuations or facing delays in accessing funds. These issues may be exacerbated if a firm experiences financial distress and must transfer its SIPP business. The FCA reminded firms of their duty under Principle 10 to ensure client asset protection and has committed to conducting on-site reviews to monitor improvements.
Consumer Duty implementation: gaps in compliance
The regulator also outlined concerns regarding SIPP operators’ progress in implementing the Consumer Duty, particularly in the areas of product governance and fair value. The FCA noted that some firms have not defined target markets precisely or have relied heavily on external advisors to communicate with retail clients. The FCA has also observed inconsistencies in how firms approach fair value assessments, with some relying solely on market comparisons without accounting for production and distribution costs, which the FCA views as essential.
Ongoing engagement and proactive measures
In light of these findings, the FCA has committed to continued engagement with SIPP operators through in-person assessments and individual firm consultations. The regulator intends to hold firms accountable for addressing these issues, particularly in areas of asset protection, complaint resolution, and alignment with Consumer Duty standards. The FCA’s proactive supervisory strategy reflects its aim to ensure SIPP operators deliver consistent, high-quality services to consumers.
Click here to read the full RegInsight on CUBE's RegPlatform.
Eddie Yue speech on opportunities and challenges of technology
In a speech at the HKMA-Bank for International Settlements Joint Conference on "Opportunities and Challenges of Emerging Technologies in the Financial Ecosystem", Eddie Yue, Chief Executive of the Hong Kong Monetary Authority (HKMA), emphasised the dual role of regulators as facilitators and risk managers in overseeing the rise of new technologies, especially artificial intelligence (AI). Highlighting the rapid evolution of these technologies, Yue discussed the potential benefits of emerging AI applications, the risks associated with their adoption, and the responsibility of regulators and banks alike to balance innovation with robust safeguards.
Addressing AI’s growing impact on banking operations
Yue noted the increasing use of AI across various banking functions, from enhancing customer services to managing operational and fraud risks. AI, he said, is being deployed in “remote account onboarding, customer chatbots, risk management, fraud detection and automation of work processes.” However, he cautioned that these technologies are not without risks, particularly when implemented without adequate risk management strategies.
The recent excitement around Generative AI (GenAI) was a key part of Yue’s remarks. GenAI’s ability to create content in a human-like manner, he explained, holds the potential to transform numerous areas of financial services. Banks, he acknowledged, are “keen to explore its potential to step up their game.” However, he stressed the importance of “ensuring any systemic risks arising from emerging technologies are effectively managed to safeguard the stability and resilience of the banking system.”
A regulatory ‘guardrail’ approach to innovation
Yue likened the financial ecosystem to a city where emerging technologies act as “highways” enabling faster operations and innovation. In his analogy, regulatory frameworks represent the “guardrails” that keep banks on course, allowing them to explore new technological avenues while mitigating risks. He outlined the HKMA’s “iterative approach” to fostering innovation, whereby the regulator provides “enough flexibility to allow trial and error” while ensuring that firms thoroughly assess risks before full-scale deployment. This approach, Yue remarked, is embodied in the HKMA’s new GenAI sandbox, launched in partnership with Hong Kong’s Cyberport, where banks can experiment within a “risk-managed framework” with supervisory guidance.
The sandbox, Yue said, provides banks with “timely supervisory feedback and essential technical assistance,” enabling them to test AI-driven use cases before committing to wider implementation. “I strongly encourage banks to make good use of the sandbox to pilot their novel GenAI use cases,” he stated, emphasising the platform’s potential to enable safe, incremental adoption of GenAI across the industry.
Key risks in focus: concentration, fraud, customer protection, and workforce readiness
Yue identified four major areas of risk in AI adoption, beginning with operational risk. He drew attention, much like his regulatory colleague at the Bank of England in a related speech at the same event, to the “concentration risk of third-party service providers,” noting that reliance on a small group of AI providers could be detrimental if any single provider suffers from “IT failures or cyberattacks.” He urged banks to “conduct holistic risk assessment of AI service providers” and to establish contingency plans to mitigate potential disruptions.
Fraud was cited as another pressing concern. GenAI’s content generation abilities, Yue warned, have already been exploited by bad actors. Such misuse, he explained, has led to fraudulent activities and can exacerbate “public panic, especially during times of market stress.” He called for concerted efforts among financial stakeholders to address these emerging fraud risks, including public education to raise awareness.
Turning to customer protection, Yue acknowledged the complexities introduced by AI decision-making processes. Many AI systems operate as “black boxes,” he said, producing outputs that can appear credible but may be biased. To counteract this, Yue emphasised the importance of “responsible innovation” with an ethical approach to AI development. The HKMA’s guidance on GenAI, he noted, stresses the value of “human-in-the-loop” oversight, ensuring that “critical judgement and decision-making remain in the hands of humans” rather than relying solely on automated systems.
Finally, Yue discussed the changing skills landscape in the banking industry as AI-driven tasks alter traditional job roles. Noting the “increasing demand for labour with AI knowledge,” he urged banks to proactively plan for workforce training and development. As part of this, the HKMA is conducting a capacity-building study to assess skill gaps and project future needs for banking professionals. “We hope this study will help banks better understand their future training needs, enabling them to adjust their talent development strategies in a timely manner,” he said.
Collaboration and transparency as AI adoption scales
In closing, Yue highlighted the need for collaborative partnerships across sectors to navigate the complexities and unknowns surrounding GenAI’s adoption. “AI has no boundaries; it is advancing at an unprecedented speed; and its impact is revolutionary,” he stated. Addressing the audience, he encouraged a shared approach to tackling these challenges, noting that “no single party – be it a public or private entity – can deal with all the challenges alone.”
Click here to read the full RegInsight on CUBE's RegPlatform.
BoE speech on artificial intelligence
In a speech at the HKMA-Bank for International Settlements Joint Conference, Sarah Breeden, Deputy Governor for Financial Stability at the Bank of England, discussed the power and use of artificial intelligence (AI) in financial services. She identified two primary areas of concern: the potential for AI to undermine microprudential safety at individual firms and the broader macroprudential implications that could affect the financial system as a whole.
Microprudential risks and a technology-agnostic approach
Breeden highlighted that central banks and regulators must determine whether existing “technology-agnostic regulatory frameworks” are capable of mitigating financial stability risks as AI models become increasingly sophisticated and autonomous. She noted that, while the sector has yet to reach the point of needing drastic regulatory change, the rapid adoption of AI necessitates constant vigilance.
One of Breeden’s core concerns is the potential for AI models to develop unintended capabilities over time. She noted, “managers of financial firms [must be] able to understand and manage what their AI models are doing as they evolve autonomously.” This, she argued, is essential to prevent models from taking unpredictable actions that could endanger firms' soundness. She noted that although AI brings efficiency, it also raises “model risk management” issues that are unique due to AI’s evolving nature and sometimes opaque decision-making processes.
A survey conducted by the Bank of England and the FCA, yet to be published, found that 75% of financial firms were already using AI, with a growing number employing foundation models such as GPT-4 for credit risk assessments and algorithmic trading. Although initial AI use cases were lower risk, such as customer support and operational optimisation, Breeden warned that as AI systems are increasingly deployed for trading or risk assessment, it may be necessary to reconsider whether a tech-agnostic approach is enough.
Challenges with model transparency and governance
Breeden cited several unique characteristics of AI that make it particularly challenging for the financial sector, including its dynamic nature, limited transparency, and reliance on extensive data. She warned of the dangers of an oligopoly in foundation models, given that 44% of firms surveyed used third-party AI from only three major providers. This concentration could lead to systemic fragilities if widespread model failures or other disruptions occur.
She also stressed that AI could pose governance issues, with only a third of surveyed firms reporting a comprehensive understanding of their own AI implementations. “As firms consider AI in higher impact areas like credit risk assessment and algorithmic trading,” she said, “we should expect a stronger, more rigorous degree of oversight and challenge by management and Boards.” She suggested that firms might need further guidance on their regulatory responsibilities, particularly around model explainability and data quality standards.
Macroprudential risks and sector-wide resilience
On a systemic level, Breeden expressed concern about the potential for AI to create new financial stability risks. Interconnectedness within the financial system could be exacerbated as firms adopt similar AI models, potentially leading to “correlated responses by market participants” in times of stress. Such scenarios, she cautioned, could amplify volatility and contribute to market instability.
To address these risks, the Bank of England’s Financial Policy Committee (FPC) will publish its initial assessment of AI’s impact on financial stability. Breeden raised the possibility of using stress tests to examine how AI models might interact under extreme market conditions. She highlighted the particular risks posed by automated trading models that might amplify market shocks, stating that the actions of one firm’s AI model could influence others’ responses in ways that are difficult to predict, creating a feedback loop that destabilises the market.
International collaboration and future-proofing regulation
Recognising the global nature of the challenges posed by AI, Breeden called for continued international cooperation by organisations such as the G7 and the Financial Stability Board to establish a unified approach to AI risks. She warned that “policy work might well be premature now, but this part of finance is moving quickly,” encouraging the financial community to stay proactive in understanding AI’s risks before they escalate.
As part of the Bank of England’s commitment to safe AI adoption, Breeden announced the formation of an AI Consortium involving private sector experts and financial firms. This initiative aims to deepen understanding of AI’s implications for financial stability and help spread best practices across the sector. She noted, “we will consider what we can do to spread best practices and whether further regulatory guidelines and guardrails are needed.”
In conclusion, Breeden’s address was a call for vigilance. While the industry is in the early stages of AI adoption, her message was clear: regulators, firms, and international bodies must monitor AI developments closely to ensure its safe integration into the financial ecosystem.
Click here to read the full RegInsight on CUBE's RegPlatform.
Dubai Financial Services Authority imposes fine for misleading financial promotions
The Dubai Financial Services Authority (DFSA) has issued a $100,000 (AED 367,000) fine against Vedas International Marketing Management for promoting financial products without authorisation and making misleading claims regarding regulatory oversight.
The DFSA’s Decision Notice asserts that Vedas International, operating as Vedas Marketing, conducted unauthorised financial promotions on behalf of the Multibank Group, a provider of trading platforms, targeting individuals within the Dubai International Financial Centre (DIFC). Additionally, the DFSA found that Vedas Marketing had falsely represented that certain entities in the Multibank Group were regulated by the DFSA, although none of the promoted entities held such authorisation.
Vedas Marketing contested the DFSA’s findings and sought a review by the Financial Markets Tribunal (FMT). However, the FMT dismissed Vedas’ referral after the firm failed to pay the necessary filing fee.
Click here to read the full RegInsight on CUBE's RegPlatform.