Who will regulate AI?

Mark Taylor

Senior Editorial Manager

Barely on the radar of governments 12 months ago, generative AI is now the only story in town. 

The public’s imagination has been captured by large language model (LLM) chatbots such as ChatGPT and Google’s Bard, and image generators like DALL-E, but the speed and scale at which they are being incorporated into everyday products is triggering as much alarm as wonder. 

What are large language models (LLMs)?

LLMs are effectively large-scale probability machines: trained on vast quantities of human-generated text, they predict which word should appear next, and their sophisticated (if not always accurate) responses to prompts are based on how a human might answer. 
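As a loose illustration of that next-word prediction idea, the toy Python sketch below builds a simple word-frequency table and picks the most likely follower. Real LLMs use transformer neural networks with billions of parameters rather than anything this crude, and the corpus and function names here are purely hypothetical.

  # Toy illustration only: real LLMs use transformer neural networks,
  # not a bigram table, but the core idea of choosing the most probable
  # next word is the same.
  from collections import Counter, defaultdict

  def train_bigrams(corpus: str) -> dict:
      """Count which word tends to follow each word in the corpus."""
      counts = defaultdict(Counter)
      words = corpus.lower().split()
      for current_word, next_word in zip(words, words[1:]):
          counts[current_word][next_word] += 1
      return counts

  def predict_next(counts: dict, word: str):
      """Return the most frequently observed next word, or None if unseen."""
      following = counts.get(word.lower())
      if not following:
          return None
      return following.most_common(1)[0][0]

  model = train_bigrams("the cat sat on the mat and the cat slept")
  print(predict_next(model, "the"))  # -> "cat" (seen twice after "the", vs. "mat" once)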

Given that the models are largely trained on data generated by humans, often including the very people using the tools, privacy is a central issue. 

These concerns prompted Italy to block ChatGPT on the grounds it was non-compliant with the General Data Protection Regulation (GDPR).

The developer, OpenAI, was forced to meet regulator demands over data collection before it could resume service. The episode served as a sharp reminder that, for all its promise, unchecked use of the technology by an enthralled population could spell trouble. 

The growing accessibility of the tools, privacy worries, and the inherent risk of discrimination and bias within the training models have all fuelled regulatory concerns that chatbots can increase toxicity and spread misinformation. 

Even the developers of ChatGPT have warned the technology is moving so fast it needs guardrails, with OpenAI chief executive Sam Altman telling Congress it’s “time to start setting limits on powerful AI systems”. 

“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” Altman said in prepared remarks, having floated the idea of a global watchdog for AI in a similar vein to the internationally coordinated oversight of nuclear technology. 

With the genie out of the bottle, and dystopian existential scenarios being aired by the very creators of the tools, the question now is: who will regulate AI?

The international approach to AI regulation

At the supranational level, AI governance is being discussed in multiple forums, including:

  • The US-EU Trade and Technology Council (TTC) 
  • United Nations Educational, Scientific and Cultural Organization (UNESCO) 
  • Global Partnership on AI (GPAI) 
  • Organization for Economic Co-operation and Development (OECD) 
  • Institute of Electrical and Electronics Engineers (IEEE), along with many other stakeholders 

Aspects like trust, transparency and ethics are high on the regulatory agenda as these groups formulate their approach to harnessing the disruptive potential of generative AI. 

A G7 Leaders’ Communiqué earlier in 2023 also highlighted the need for cooperation between governments, with the note making clear that top nations are effectively stumped as to how to proceed. The main problem facing lawmakers is that “there is very little consensus about how to go about regulating it,” said Dan Reavill, Head of Technology at law firm Travers Smith. 

“Governments across the globe are grappling with how to balance promoting innovation and economic growth with protecting citizens’ privacy, safety and other human rights,” he said. 

Which US agency will regulate AI?

As the home of companies like Microsoft and Google, which are doing so much to bring AI to the masses, the US would seem the obvious choice to lead regulation internationally. However, experts have warned that its fragmented approach may hinder its desire to be the top cop. 

Individual states have their own views on how businesses within their borders comply with consumer protection guidelines, and the picture is similarly fractured at the federal level. The Biden Administration, Congress, the Department of Commerce, the National Telecommunications and Information Administration (NTIA) and others are all jostling for center stage, with various AI governance initiatives in the works. 

Guidelines are emerging from the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Blueprint for an AI Bill of Rights, while regulators at the Federal Trade Commission and elsewhere are mulling how existing laws and regulations could apply to AI systems, in much the same way crypto asset regulation has evolved. 

“Despite the media headlines that may lead one to believe that there are no laws applicable to AI in the US, existing federal and state laws apply, along with a series of frameworks issued by various federal agencies,” said Ron Del Sesto, tech expert and partner at Morgan Lewis.

The European Union’s AI Act

As it did with data privacy, the European Union (EU) has emerged as a frontrunner in the race to build rules for AI. In June 2023, the bloc advanced the AI Act, the world’s first “comprehensive AI law”, but it still has some way to go before entering into force. 

Brussels believes it has sketched out guidelines that can also serve as a global benchmark to allay broader privacy and safety concerns. 

The EU has taken a risk-based, product-safety-style approach to regulating the technology through a four-tiered model, with the top tier being ‘unacceptable risk’ systems that are banned outright. 

This includes: 

  • social scoring 
  • harmful behavioral manipulation 
  • real-time biometric identification systems in public spaces 
  • predictive policing 
  • emotion recognition systems in law enforcement, border management, the workplace, and educational institutions 
  • scraping of biometric data from social media or CCTV footage to create facial recognition databases 

The second category is ‘high risk’, covering systems that could have a negative impact on safety or fundamental rights, while chatbots fall into the third category of ‘limited risk’, which means they will be subject to transparency requirements. The bottom tier, ‘minimal risk’, covers applications such as gaming and spam filters and is not subject to restrictions. 
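To make that tiered structure easier to scan, here is a minimal Python sketch, purely illustrative and not drawn from the Act’s legal text, that maps each tier described above to the article’s one-line summary of it:

  # Illustrative mapping of the AI Act's four risk tiers as described in
  # this article; the wording is a simplification, not the Act's legal text.
  AI_ACT_TIERS = {
      "unacceptable risk": "banned outright (e.g. social scoring, predictive policing)",
      "high risk": "systems that could negatively affect safety or fundamental rights",
      "limited risk": "subject to transparency requirements (e.g. chatbots)",
      "minimal risk": "no restrictions (e.g. gaming, spam filters)",
  }

  def tier_summary(tier: str) -> str:
      """Return the one-line summary for a given risk tier."""
      return AI_ACT_TIERS[tier.lower()]

  print(tier_summary("Limited risk"))  # -> "subject to transparency requirements (e.g. chatbots)"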

Negotiations on the final wording will continue, and as if to underline the difficulty of the task ahead, OpenAI has already threatened to pull ChatGPT from the EU entirely in response to what it perceived as over-reach from regulators. 

Altman’s firm pushed back against proposals that would make developers liable for how the tech is used, even where the likes of Google and OpenAI have no control over the applications the AI is eventually built into. 

The UK and light-touch regulation

The UK’s approach was outlined in the white paper, A pro-innovation approach to AI regulation, published in March 2023. 

Driving growth is the main objective, along with “ensuring that the UK remains attractive to investors”, said Reavill. The white paper recognizes that trust is a critical driver for adoption, but the government’s approach is hands-off and it will only legislate where absolutely necessary. 

But as in the US, bureaucratic quandaries and regulatory arbitrage are evident. 

“If the co-ordination and cooperation aspects of the government’s plans in the White Paper don’t come together, it is likely to lead to inconsistency and uncertainty,” said Louisa Chambers, partner at Travers Smith.

The UK’s plan also relies on regulators having the knowledge and funding to help each other, and while some supervisors do coordinate (the Information Commissioner’s Office (ICO), the Competition and Markets Authority, the Financial Conduct Authority and Ofcom frequently work together), not all will have the resources to deal with the size of the task ahead. 

In France, for example, the Defender of Rights, which usually handles discrimination cases, has said it doesn’t have the capacity to deal with cases involving AI and has handed priority to the data protection regulator. 

Can self-policing work?

Self-regulation is another path forward, with both the developers of AI tools and the businesses that see commercial opportunities in AI recognizing the need to get ahead of the problem. 

Frameworks are being drafted by industry associations across various sectors like financial services, automotive, healthcare and gaming to ensure the ethical use of AI is adopted as good practice. 

The Biden Administration recently announced that seven leading US AI companies had agreed to implement voluntary limitations around their products to help mitigate the societal risks associated with AI, such as algorithmic bias and privacy breaches. 

Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection promised their products will meet safety requirements before release. They will also use outside experts to test their systems, report vulnerabilities, and develop mechanisms for users to identify AI-generated content. 

While it continues to formulate a more formal set of guidelines, the EU is aligned with this stance and wants to see the developers take the initiative and sign a voluntary “AI code of conduct”. 

CUBE comment

World leaders have signaled their intentions, and it is now a matter not of how or when rules to govern AI will appear, but of who will set and enforce them first. 

During a recent meeting, European Commission executive vice president Margrethe Vestager made it clear there is no time to waste. “We’re talking about technology that develops by the month,” she said. 

With regulators bearing down, any organization intent on implementing AI can stay ahead of the storm with CUBE’s AI-driven RegPlatform. 

RegPlatform enables businesses to streamline complex compliance change management processes with the world’s most comprehensive live single source of regulatory data. 

Using machine learning and natural language processing, CUBE captures and classifies all regulatory content, in every jurisdiction and in every language, and maps it to customers’ compliance frameworks, from AI itself to financial crime, cyber, privacy, tech risk and beyond.