Artificial Intelligence: Implications for risk and compliance professionals
Sylvia Yarbough


Regulatory expert and former Head of Compliance

Having worked in financial services since the introduction of laptops, I have witnessed some enormous changes over time, including the rapid rise of Artificial Intelligence (AI).  


Given how often the term is tossed around in the media, I often feel like Rip Van Winkle, as if I had gone to sleep and woken up to a transformed world. 


After spending some time speaking to peers in compliance to get a sense of what’s real and what’s just hype, I found that both a lot and a little have changed, depending on the business line. Some organizations are further along than others, but probably not as far apart as either side feels.


My path of inquiry focused on:  


  • AI uses and their possible regulatory implications. 
  • Regulators’ positions on AI. 
  • AI impact on risk and compliance professionals. 


AI uses and their possible regulatory implications 


Chatbots 


Financial institutions (FIs) are leveraging AI primarily in customer services, where online or voice response chatbots have become extremely savvy at engaging with customers.  


Many organizations have purchased this technology from a third party, while some of the largest FIs have partnered with large tech firms to design their own.  


In the chatbot arena, AI refers to anything from complex decision trees that generate basic responses to large language models (LLMs) that can understand and generate human-like responses. 
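To make that spectrum concrete, here is a minimal, purely illustrative sketch in Python. The intents, canned answers, and the model stub are invented for this example and are not any vendor’s API: a scripted decision-tree bot is predictable and easy to audit, while an LLM-backed bot is far more flexible but must be constrained and monitored for accuracy.

```python
# Illustrative only: a scripted "decision tree" bot versus an LLM-backed bot.
# The intents, answers, and the model stub below are invented for this sketch.

SCRIPTED_ANSWERS = {
    "balance": "You can view your balance under Accounts > Summary.",
    "dispute": "To dispute a charge, open the transaction and choose 'Report a problem'.",
}

def scripted_reply(message: str) -> str:
    """Decision-tree style: match a keyword to a pre-approved answer, else escalate."""
    for keyword, answer in SCRIPTED_ANSWERS.items():
        if keyword in message.lower():
            return answer
    return "I'm not sure I understand. Would you like to speak with an agent?"

def generate_with_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned line so the sketch runs."""
    return "I can help with that. Let me check the approved policy guidance for you."

def llm_reply(message: str) -> str:
    """LLM style: free-form generation, constrained by a policy-grounded prompt."""
    prompt = (
        "You are a bank customer-service assistant. Answer only from approved "
        "policy content; if unsure, offer to transfer the customer to a human agent.\n"
        f"Customer: {message}"
    )
    return generate_with_llm(prompt)  # stand-in for whichever model the FI licenses

print(scripted_reply("How do I dispute a charge?"))
print(llm_reply("My card was charged twice at the same store. What do I do?"))
```

The compliance trade-off is visible even in this toy version: the scripted bot can only say what has been pre-approved, while the LLM-backed reply is whatever the model generates.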


Regardless of its sophistication, a chatbot must strike a balance between delivering good customer service and avoiding inaccurate information that leaves customers frustrated. 


CFPB tackles rising chatbot complaints 


In June 2023, the Consumer Financial Protection Bureau (CFPB) published an article in which it identified a rising number of complaints caused by this type of AI. 


The CFPB’s concerns were categorized into three major areas: 


  • Limited ability to solve complex problems. 
  • Hindering access to timely human intervention. 
  • Technical limitations and associated security risk.  


The CFPB cites concerns such as misrepresentation, inaccurate information, and mishandling of requests. These issues can bring a variety of regulatory violations into play, resulting in significant fines, penalties, and potential lawsuits, especially where customer disputes are mishandled. 


In February 2024, Air Canada found itself on the wrong side of a lawsuit when the British Columbia Civil Resolution Tribunal ruled in favor of a customer due to a chatbot’s misrepresentation of information.  


Although this wasn’t a big dollar amount ($812), the ruling indicated that the company using the chatbot was liable for the error, not the third-party vendor providing the chatbot.  


We all understand the challenge of ensuring our business partners recognize that risk and compliance responsibility cannot be contracted out to a third party. 


Marketing 


The second biggest routine use of AI is in marketing, as firms leverage intelligent tools to identify prime target markets and potential customers for certain products and services.  


AI allows for big data analysis of existing customer transaction patterns overlaid with external data sets. The goal is to increase penetration and conversion rates for every marketing dollar spent, as well as to aid in designing product and service offerings with greater appeal. 
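As a purely illustrative sketch (the data sources, column names, and scoring rule below are invented, not drawn from any FI’s actual program), the overlay approach often boils down to joining internal transaction summaries with a purchased external data set and ranking prospects by a propensity score:

```python
import pandas as pd

# Internal view: summarized customer transaction patterns (invented columns).
internal = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "avg_monthly_deposits": [2500, 7200, 1400],
    "card_spend_growth": [0.05, 0.22, -0.10],
})

# External overlay: purchased third-party attributes (invented, and possibly
# inaccurate -- which is exactly where the privacy and UDAAP concerns arise).
external = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "estimated_household_income": [60000, 145000, 38000],
})

prospects = internal.merge(external, on="customer_id", how="left")

# Toy propensity score for a hypothetical premium-card offer.
prospects["propensity"] = (
    0.4 * (prospects["avg_monthly_deposits"] / 10000)
    + 0.4 * prospects["card_spend_growth"]
    + 0.2 * (prospects["estimated_household_income"] / 200000)
)

target_list = prospects.sort_values("propensity", ascending=False).head(2)
print(target_list[["customer_id", "propensity"]])
```

Every input in that join is a potential compliance question: where the external attributes came from, how accurate they are, and whether the resulting target list skews against protected classes.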


The risks that have always existed in marketing are unfair, deceptive, or abusive acts or practices (UDAAP), along with privacy and data security concerns. 


It is easy to imagine how AI-generated marketing programs or products could run afoul of the expansive regulatory interpretation of UDAAP. 


AI-embedded biases can lead to an FI being challenged on fair banking practices if the demographics targeted for solicitation are not transparent. 


When customers are accepted into or rejected from products or services without a clear rationale, this can lead to UDAAP claims. 


In addition, marketing to customers using external data that may or may not be accurate can raise privacy concerns. I have heard from some of my peers that customer inquiries about privacy are on the rise.


Credit decisioning 


Another area in which AI can provide an advantage is credit decisioning, though some businesses are finding this a bit more challenging. 


To compete with fintech and non-traditional lenders, FIs are trying to expand their credit approval criteria while finding a balance between growing the balance sheet and managing charge-offs.  


AI credit decisioning tools are being sold by third parties and are often termed “black box” because the vendor either won’t or can’t share exactly how the algorithm works. Regulators are not fans of black box solutions and, when investigating an issue, will want to know how a decision was reached.


The senior leaders I have spoken with all believe that integrating this technology into their business model can give them an advantage. 


The risk here is fair lending non-compliance. The law requires FIs to clearly identify and demonstrate how credit decisions are made, without disparate treatment of or disparate impact on customers. 


Black box designs may get more loans onto the balance sheet. However, when an FI cannot clearly demonstrate how credit decisions (approvals or denials) are made, it will have issues with fair lending and adverse action notice requirements.
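To make the explainability point concrete, here is a minimal sketch of a transparent, scorecard-style decision. The features, weights, and cutoff are invented for illustration and are not any FI’s or regulator’s model. Because the score decomposes into per-factor contributions, the principal reasons for a denial can be identified, which is the kind of traceability adverse action notices and fair lending reviews demand; a black box tool typically returns only the final score.

```python
# Illustrative scorecard-style credit decision: every score can be decomposed
# into per-factor contributions, so a denial can be explained with specific reasons.
# Features, weights, and the cutoff are invented for illustration only.

WEIGHTS = {
    "credit_utilization": -35.0,    # higher utilization lowers the score
    "months_since_delinquency": 0.8,
    "income_to_debt_ratio": 20.0,
}
BASE_SCORE = 600
CUTOFF = 640

def score_applicant(applicant: dict) -> tuple[float, dict]:
    """Return the total score and each factor's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return BASE_SCORE + sum(contributions.values()), contributions

def adverse_action_reasons(contributions: dict, top_n: int = 2) -> list[str]:
    """Rank the factors that hurt the score most -- the basis for reason codes."""
    worst = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [factor for factor, value in worst if value < 0]

applicant = {"credit_utilization": 0.92, "months_since_delinquency": 4, "income_to_debt_ratio": 1.1}
total, parts = score_applicant(applicant)
if total < CUTOFF:
    print("Denied. Principal reasons:", adverse_action_reasons(parts))
```

The point is not this particular model; it is that whatever an FI deploys must be able to surface this kind of reasoning on demand.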


Regulators’ positions on AI 


Regulators, unsurprisingly, have a lot to say about the use of AI.  


One of the more interesting speeches was given recently by the Acting Comptroller of the Currency, Michael J. Hsu, at the 2024 Conference on Artificial Intelligence and Financial Stability.


His points included: 


  • How AI can be used as a tool or a weapon. 
  • Basic risk management and common sense are still part of the answer. 
  • The risks and negative consequences of weak controls increase steeply when AI acts as an agent (e.g., executing activities on behalf of FI personnel). 
  • Those most able to affect outcomes are the ones on the hook for them. 


He emphasized how AI in the wrong hands could even cause global market instability. To prevent this, he encouraged good governance and risk practices as organizations work together to figure out the rules of the road. 


The CFPB has also been shining a spotlight on compliance with regulatory requirements, regardless of how AI integrates into decision-making. It pays close attention to customer complaints and cautions organizations to own and manage their risk. 


AI impact on risk and compliance professionals  


Knowledge is power. The world of technology moves very fast, and it is often difficult for risk and compliance professionals to understand the technology deeply enough to identify the risk.  


In addition, compliance still suffers from the age-old problem of being the last to be invited to the table – sometimes after decisions have already been made. When it comes to AI, we cannot afford to wait to be invited; we need to be proactive in getting our business partners to engage us from the moment discussions kick off. 


Regardless of the risk or compliance function you are responsible for, AI will touch it eventually. We need to bring to the table the necessary understanding of the technology so we can add value. 


My suggestions: 


  1. Take the time to learn about AI through webinars and online courses. Even if your organization won’t pay for the education, the worst case is that you will need it at your next job. 
  2. Reach out to your peers to see how they are impacted by AI in their organizations. FIs are in various states of progress along their AI roadmaps, so your peers may have insights to share. 
  3. If you are recruiting, consider filling a position with someone who has AI in their skill set and can serve as a subject matter expert for your team. 
  4. Engage with your business partners at the highest and lowest levels of the organization so that you know what is coming down the pike in this arena and are prepared to support your business. 
  5. Know your regulations and how they apply to your business area and processes. AI will move us out of our coverage areas and comfort zones, so build a good foundational understanding of regulatory requirements upstream and downstream of your coverage area. 


There is no shortcut on this learning curve. Acting Comptroller Hsu compared the advent of AI to the early days of the internet. The technology will evolve, but so must risk and compliance professionals to stay in the game.

