Generative AI: From innovation dream to compliance nightmare

Organisations using generative Artificial Intelligence (AI) could be subject to conflicting AI regulatory frameworks following an international split over how to supervise its use.

Amanda Khatri

Editorial Manager

As it did with data privacy, the European Union has drafted the AI Act as legislation intended to serve as a global standard.


However, US lawmakers believe the framework is “vague and undefined”, according to an analysis obtained by Bloomberg, and that it would only benefit large tech firms with huge compliance budgets.


Overregulation could also hinder the EU’s competitiveness in financial markets, US legislators believe.


Whilst the EU AI policy focuses on how AI models are developed, Washington’s favoured approach would assess the risks associated with how AI models are used.


Aaron Cooper, head of global policy at BSA The Software Alliance, a trade group that has spoken with US and EU officials about AI laws, has said that first and foremost, nations need to agree on the basics, such as definitions.


“The most important thing that the Biden administration can do is continue to have a good candid conversation with their European counterparts about what the objectives are for AI policy”, said Cooper.


The UK has adopted a ‘pro-innovation’ approach and will only regulate where necessary, which is more in line with the US’ light-touch standard.


The generative AI boom and a compliance challenge


Over the last 12 months, generative AI has gone mainstream thanks to the arrival of tools built on large language models (LLMs), such as ChatGPT and Google Bard, alongside image generators like DALL-E.


Generative AI uses patterns learned from data to create text, images, videos, music, and computer applications. From producing an entirely new Beatles-style song that replicates the band’s voices to developing cancer immunotherapy, the use cases for generative AI appear endless.


Swiss bank UBS asserts that ChatGPT is the fastest-growing app of all time, while the global generative AI market is predicted to reach $51.8bn by 2028.


Whilst it presents seemingly limitless opportunities, an abundance of ethical, data privacy, and regulatory issues is cropping up, presenting a new challenge for compliance functions.


AI governance is being discussed by various US regulators, but there is no consensus on how to balance innovation and economic growth with consumer protection.


US Senate majority leader Chuck Schumer has met with tech industry luminaries Elon Musk and Mark Zuckerberg, as well as other AI thought leaders, to discuss future regulations. However, the US is still far from producing federal-level guidance.


Yaron Dori, a partner at Covington & Burling, warns that if Congress fails to act, “the states will fill the gap with their own laws, leading to a patchwork of AI requirements that will make it more difficult for businesses to know the requirements and more costly and complicated for them to comply”.


Addressing the dark side of AI


Fraud risks are also growing in relation to generative AI use. Criminals have used the technology to fake voices and scam individuals and businesses out of large sums of money.


Many employers are also worried about employees sharing sensitive company information with generative AI tools, which in the wrong hands could be detrimental.
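
To illustrate the kind of control employers are reaching for, below is a minimal, purely hypothetical Python sketch (not taken from the article) of a pre-submission screen that checks outbound prompts for obvious sensitive patterns before they reach any third-party AI service. The patterns and names here are illustrative assumptions; a real programme would rely on dedicated data-loss-prevention tooling.

    import re

    # Illustrative patterns only; real deployments would use dedicated
    # data-loss-prevention tooling tuned to the organisation's data.
    SENSITIVE_PATTERNS = {
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical scheme
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    prompt = "Summarise the PROJ-1234 contract for jane.doe@example.com"
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked: prompt appears to contain", ", ".join(findings))
    else:
        print("Prompt passed screening")  # only then forward it to the AI service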


Regulators are also concerned that AI tools can generate biased or discriminatory content; if staff use them to screen new hires or formulate answers for customers, that could undermine an organisation’s diversity and inclusion commitments or ethical practices.


Another concern with generative AI is its potential to produce fabricated or inaccurate answers, which, if relied upon by compliance and legal functions, could lead to significant problems for the firm.


The use of generative AI in the workplace could also lead to major data leaks, according to Neil Thacker, chief information security officer (CISO) at Netskope EMEA. He told Computer Weekly that should hackers breach OpenAI’s systems, they could have access to “confidential and sensitive data”.


The regulation of generative AI tools has been addressed by laws like the EU’s AI Act and the UK’s Data Protection and Digital Information Bill. However, there are “currently few assurances about the way companies whose products use generative AI will process and store data”, said Thacker.


During a live-streamed cyber risk conference, the Federal Reserve’s Vice Chair for Supervision, Michael S. Barr, advised that financial organisations must invest in generative AI to safeguard against cyberattacks.


“There’s a real risk that we have a cyber arms race using generative AI with defenders and attackers in a constant struggle,” Barr said. “So, we do need to make sure that we are, and banks are, investing in the kind of technology that is useful, not only today, but in the near future.”


A data breach at one firm, even a smaller bank, could have catastrophic effects across the entire financial services industry, as well as impacting a firm’s payment systems and liquidity provision channels, said Barr.


Good tech gone bad


The most notable example of the dangers of using immature technology for sensitive purposes came from New York lawyer Steven Schwartz, who used ChatGPT to produce a brief for a case.


During the proceedings, when the defence started asking questions, it emerged the brief was based on fabricated case citations. Schwartz said he had not been aware that ChatGPT could invent its own opinions and citations.


“AI hallucinations are still a challenge”, said Katie DeBord, vice president at Disco, which sells AI products to law firms.


Generative AI in highly regulated industries


Using generative AI in highly regulated sectors carries a much greater risk of compliance issues arising.


The EU is seeking to categorise AI use cases by risk, ranking them from prohibited practices down to low-risk uses where supervision is unnecessary, such as customer service chatbots. Financial services and healthcare are among the highest-risk areas, the EU has indicated.


Using AI “to make employment or credit decisions can be fraught and potentially implicate anti-discrimination and fair lending laws”, said Covington & Burling’s Dori.


Some organisations may choose to ban generative AI use altogether. CISO Thacker believes this “will not alleviate the problem as it would likely cause ‘shadow AI’ – the unapproved use of third-party AI services outside of company control”.


CUBE comment


Amid all the excitement about what generative AI can do, it’s important to understand that the responsibility for policing AI doesn’t lie with regulators alone.

The C-suite, CISOs, and risk and security professionals at an organisation will likely need to ensure the safe use of AI software through clear policies and employee training, and by knowing “where sensitive information is being stored once fed into third-party systems, who are able to access it, and how long it will be retained”.


A combination of regulatory frameworks and workplace policies will likely be the way forward to a generative AI safe haven. To navigate the wave of AI-focussed regulations, organisations could leverage CUBE’s Automated Regulatory Intelligence (ARI) to set up alerts for relevant regulations and eliminate compliance blind spots.


To ensure your firm complies with AI and privacy regulations, speak to CUBE.