Mark Taylor
Senior Editorial Manager
US President Joe Biden issued an executive order on 30 October 2023 that aims to rein in some of the threats posed by artificial intelligence (AI) by introducing elements of regulation and direction for regulatory agencies.
The order establishes safety standards for AI, requiring developers of the most powerful machine learning systems, such as ChatGPT, to share their safety tests and other critical information with authorities.
Industry standards like watermarks for identifying AI-powered products, among other regulations, are also addressed.
Biden also called on Congress to pass data privacy legislation, as lawmakers have failed for several years to reach an agreement despite multiple attempts.
The reforms amount to “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust,” White House deputy chief of staff Bruce Reed said in a statement.
“We’re going to see more technological change in the next 10, maybe the next five years, than we’ve seen in the last 50 years,” Biden said. “And that’s a fact. And the most consequential technology of our time, artificial intelligence, is accelerating that change. It’s going to accelerate it at warp speed. AI is all around us.”
AI presents incredible opportunities, but comes with significant risks, Biden added.
“One thing is clear — to realize the promise of AI and avoid the risk, we need to govern this technology,” he said. “There’s no other way around it, in my view. It must be governed.”
What does Biden’s AI executive order mean for business?
One of the major rules established is a requirement for AI companies to conduct tests of some of their products and report the results to government officials before introducing the functionality to consumers.
The tests, known as “red teaming”, aim to prevent users or the public from being exposed to potential risks.
A concerning result may trigger government intervention to either improve the product’s safety or force it to be shelved.
The new powers are permitted under the Defense Production Act, which grants the White House a broad role in overseeing industries tied to national security, the Biden administration said.
“These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the White House added.
What rules will the order introduce?
The order introduces a broad set of industry standards officials hope will result in more transparent products, avoiding dangerous developments like AI-engineered biological material or cyberattacks.
A new standard would bring in the use of watermarks that inform consumers when they encounter a product enabled by AI, which aims to limit the threat of so-called deepfakes, in which AI is used to manipulate images and audio with the intent to deceive.
Biotech firms would also have more stringent regulations to ensure they take precautions when using AI to develop or alter biological materials.
However, the guidance will not be binding, serving only as a suggestion.
As a significant funder of scientific research, the federal government will use its influence to push for compliance, the White House said.
Government agencies will be required to adopt watermarks in their own AI products, setting an example the administration hopes the private sector will follow.
The Department of Energy and Department of Homeland Security will take action to reduce the threat that AI poses to critical infrastructure, whilst federal benefits programs and contractors must ensure that AI does not worsen racial bias in their activities.
Likewise, the Department of Justice will establish rules around how best to investigate AI-related civil rights abuses.
The order also prioritizes support for the development of “privacy-preserving techniques”. This includes those that enable AI systems to be trained yet “preserve the privacy of the training data.”
It will also fund a Research Coordination Network to develop cryptographic tools.
In calling on Congress to go further, Biden hinted at more action to come around data privacy and AI.
Recently, Democratic Senate Majority Leader Chuck Schumer convened a series of “AI Insight Forums” aimed at developing a broader set of AI and privacy-governing laws.
What was the reaction to Biden’s AI order?
Whilst a headline-grabbing bid to own the regulation of AI, the order is unlikely to force a major industry-wide shift, said Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University.
“The new executive order strikes the right tone by recognizing both the promise and perils of AI,” Kreps said. “What’s missing is an enforcement and implementation mechanism. It’s calling for a lot of action that’s not likely to receive a response.”
Robert Weissman, the president of Washington D.C.-based consumer advocacy group Public Citizen, similarly praised the executive order whilst noting it has limits.
“[The] executive order is a vital step by the Biden administration to begin the long process of regulating rapidly advancing AI technology,” he said. “But it’s only a first step.”
Tech industry lobby group NetChoice, meanwhile, billed the order as an “AI Red Tape Wishlist” that will stifle “new companies and competitors from entering the marketplace” while “significantly expanding the power of the federal government over American innovation.”
“The truth is the United States is already far behind Europe,” added Max Tegmark, president of tech policy think tank the Future of Life Institute. “Policymakers, including those in Congress, need to look out for their citizens by enacting laws with teeth that tackle threats and safeguard progress,” he said.
The Group of Seven industrial nations (G7) is gearing up to introduce a code of conduct for companies developing advanced artificial intelligence systems, according to a document seen by Reuters.
It was reported recently that the US was critical of Europe’s “aggressive” approach to AI regulation, but White House officials briefed the media that this was not the case, as their view is that legislation “is necessary”.
CUBE comment
As generative AI develops at a breakneck pace, legislation is not far behind.
The US has now officially entered the race to regulate AI, with President Biden’s order positioning the United States as a supervisor by focusing on governance and introducing standards that would have a wide-ranging impact.
It appears in part a response to the developing narrative that the European Union is leading on AI regulation, with the US not wishing to be a rule-taker given it is home to many of the world’s largest AI developers.
The EU’s dedicated AI regulation is likely to enter into force in 2025, whilst the UK, not far behind, has taken a more hands-off approach, positioning itself as a champion of innovation.
With authorities around the world from Brazil to China now in the process of enacting rules to limit the risks, there has never been a more important time for businesses in every sector developing AI products and services to stay ahead of the regulatory curve.