Fragmented AI regulation emerging across US states

Amanda Khatri

Editorial Manager

Businesses are facing a patchwork of AI regulations across the US, as several states establish their own rules while federal action remains uncertain.


Since 2019, at least 40 states have considered AI legislation, and 17 states have enacted 29 AI-related bills addressing data privacy and accountability concerns.


Congress held committee hearings and proposed several AI bills in 2023; however, these have yet to pass, leaving states to take matters into their own hands.


Legal experts warn that, just as varied data privacy regulations once led to fragmented compliance approaches, the emerging landscape of AI regulation is shaping up to be equally complex for compliance, risk, and legal professionals.


“In the absence of comprehensive federal legislation on AI, there is now a growing patchwork of various current and proposed AI regulatory frameworks at the state and local level,” said lawyers from Bryan Cave Leighton Paisner in a note to clients.


Harmonization warning sounded 


The rapid adoption of AI has prompted many governments and regulators to address the accompanying risks, with the European Union (EU) leading the charge through its groundbreaking AI Act.


Some US states are following in the footsteps of the EU and using elements of the AI Act in their own AI regulatory frameworks, whilst others are using existing data privacy rules to govern AI algorithms.


California, Colorado and Virginia, however, are developing regulatory frameworks specific to AI.


The White House and industry lobbyists are pushing states to consider several guiding principles that ensure AI systems protect civil rights, liberties, and privacy.


What guiding AI principles has the White House identified?


To ensure AI systems are designed and developed to protect civil rights, civil liberties and privacy, and to uphold democratic values, the White House has asked states to consider the following guiding principles:


  • Ensure diverse stakeholders are consulted on the design, development and use of AI in order to gather different perspectives.
  • Safeguard individuals from the negative effects of unsafe or ineffective AI systems.
  • Protect individuals from harmful data practices and give them control over how an AI system collects and uses data about them.
  • Ensure people are aware when AI is in use and offer them the choice to opt for human assistance if needed.
  • Prevent discrimination by ensuring AI systems are designed to be fair and equitable.
  • Hold developers and users of AI accountable for not following rules and standards governing AI systems.


Colorado AI Act


Passed in 2024, the Colorado AI Act will take effect on 1 February 2026.


The Colorado AI Act is the first comprehensive US state framework to govern AI, and shares similarities with the EU AI Act.


Key takeaways


The Colorado AI Act defines a high-risk AI system as one that affects, denies or alters the cost or terms of services in education, employment, financial services, government services, healthcare, housing, insurance or legal services.


Those developing high-risk AI systems must prevent algorithmic discrimination on the basis of age, colour, disability, ethnicity, genetic information, race, religion or veteran status.


Unlike the EU AI Act, the Colorado AI Act does not explicitly cover general-purpose artificial intelligence, including systems that process audio, video, text and physical data.

 

The Colorado AI Act also excludes generative AI from its framework, unless the technology is used to generate content or to make decisions, predictions or recommendations relating to important decisions.

 

Violating Colorado’s AI Act would also mean a violation of Colorado’s Unfair and Deceptive Trade Practices Act.

 

AI developers in Colorado must meet the following requirements (an illustrative sketch of how these obligations might be recorded follows the list):


  • Document the anticipated harmful uses of AI systems.
  • Explain the type and lineage of the training data used in the system.
  • Report on the logic of the algorithms and the measures implemented to mitigate algorithmic discrimination.
  • Provide the necessary information for deployers to conduct an impact assessment.
  • Publish a public statement detailing how the system was developed and how it manages known or foreseeable risks of discrimination.
  • Quickly report to the attorney general if there are any instances of algorithmic discrimination.
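

For teams building compliance tooling, these obligations lend themselves to a machine-readable record. Below is a minimal Python sketch of one way to capture them; it is purely illustrative, and every field name is our own assumption rather than a statutory term.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeveloperDisclosureRecord:
    """Illustrative record of the documentation a developer of a high-risk
    AI system might maintain under the Colorado AI Act. Field names are
    assumptions for this sketch, not statutory terms."""
    system_name: str
    anticipated_harmful_uses: List[str]    # documented foreseeable misuse
    training_data_types: List[str]         # type of training data used
    training_data_lineage: str             # provenance of that data
    algorithm_logic_summary: str           # how the system reaches its outputs
    discrimination_mitigations: List[str]  # measures against algorithmic bias
    impact_assessment_inputs: str          # information deployers need for assessments
    public_statement_url: str              # published risk-management statement
    incidents_reported_to_ag: bool         # algorithmic discrimination reported?

# Hypothetical example for a lending model (all values invented).
record = DeveloperDisclosureRecord(
    system_name="loan-screening-model-v2",
    anticipated_harmful_uses=["use outside approved credit-decision workflows"],
    training_data_types=["credit bureau records", "application forms"],
    training_data_lineage="licensed bureau feed, 2019-2023",
    algorithm_logic_summary="gradient-boosted classifier over 40 credit features",
    discrimination_mitigations=["disparate-impact testing before each release"],
    impact_assessment_inputs="model card v2.3 shared with deployers",
    public_statement_url="https://example.com/ai-risk-statement",
    incidents_reported_to_ag=False,
)
```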

 

Utah AI Policy Act


Utah is narrowly focused on the use of generative AI, which it incorporates into its existing consumer protection frameworks.


Utah’s AI Policy Act defines two categories of disclosure obligations:


  1. The first applies to those leveraging generative AI in connection with a business regulated by the Utah Division of Consumer Protection: they must disclose whether the user is interacting with a machine (a minimal sketch of such a disclosure follows this list).
  2. The second disclosure category relates to those who provide licensed services, such as healthcare professionals, who must inform consumers in advance that they are interacting with AI.
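

As a minimal sketch of the first disclosure category, a consumer-facing chatbot might prepend a plain-language notice to any response produced by generative AI. The function and wording below are our own assumptions for illustration; the Act does not prescribe specific language.

```python
def with_ai_disclosure(reply: str, uses_generative_ai: bool) -> str:
    """Prepend a disclosure when a response comes from generative AI.
    Illustrative only; the wording is an assumption, not statutory text."""
    if uses_generative_ai:
        return ("You are interacting with an AI system, not a human "
                "representative.\n\n" + reply)
    return reply

# A regulated business's chatbot would disclose before answering.
print(with_ai_disclosure("Your refund has been processed.", uses_generative_ai=True))
```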


Those found to be non-compliant with the Utah AI Policy Act face fines of up to $2,500 per violation from the Utah Division of Consumer Protection.


California’s AI regulatory efforts


California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) aims to promote the safe and responsible use of AI.


The bill mandates transparency, accountability and ethical standards for organisations deploying AI systems and addresses AI risks in relation to discrimination, privacy and consumer protection.


Governor Gavin Newsom vetoed the bill, believing it could hamper innovation and the competitiveness of California’s AI industry; however, he has signed several other AI bills.


“Very powerful technology brings both benefits and risks, and I want to make sure that the benefits of AI profoundly outweigh the risks”, State Senator Scott Wiener told the Los Angeles Times.


Among the signed legislation:


  • AB 2013: This requires developers of generative AI to disclose details of their training data, effective 1 January 2026 for AI systems released after 1 January 2022.
  • SB 942: This requires developers to provide a free tool for identifying AI-generated content, ensuring transparency about the content's origin.
  • AB 3030: This sets disclaimers for AI-generated healthcare communications, making sure patients know when AI is involved and offering guidance on how to reach human providers.
  • AB 1008: This modifies the California Consumer Privacy Act to specify that "personal information" includes AI outputs, affecting how AI systems handle sensitive data.


CUBE comment


It is a familiar story: firms once had to navigate a patchwork of state data privacy regulations, and now a similar landscape is emerging in state-level AI laws.


Other states are on the way to passing bills to manage AI risks, and federal legislation could also be on the horizon, creating further challenges for compliance teams navigating the world of AI regulation.


At CUBE, our AI-powered solution tracks, analyses and enriches every regulatory update related to AI, ensuring our customers are always one step ahead on compliance.


Find out how CUBE can deliver an always-current regulatory profile for your business.


Get in touch today.