California Makes History With First AI Safety and Transparency Law

California sets a precedent with its new AI law. Will other states follow suit?

California has made history by becoming the first state to enact a law targeting the safety and transparency of advanced artificial intelligence models. Governor Gavin Newsom signed SB 53, known as the 'Transparency in Frontier Artificial Intelligence Act (TFAIA)', into law in late September 2025. The legislation aims to prevent catastrophic risks that could arise from these powerful models.

SB 53 focuses on preventing catastrophic risks, defined as foreseeable events that could lead to the death or serious injury of 50 or more people or cause at least $1 billion in damages. The law authorizes the Attorney General to bring civil actions for violations, with penalties of up to $1 million per violation. It also empowers the California Department of Technology to recommend updates to key statutory definitions.

The law establishes four major obligations for large frontier developers: publishing an annual frontier AI framework; publishing a transparency report before deploying a new frontier model, detailing the model's capabilities, potential risks, and mitigation strategies; reporting critical safety incidents to the Office of Emergency Services (OES); and extending whistleblower protections to employees and contractors.

Newsom described SB 53 as a blueprint for other states, arguing that California should shape 'well-balanced AI policies' in the absence of a comprehensive federal framework. Meanwhile, New York is considering its own frontier AI bill, A 6953, the Responsible AI Safety and Education (RAISE) Act, which could become the second major state law in this space.

In short, SB 53 requires advanced AI developers to enhance transparency, establish safety protocols, and protect whistleblowers. Supporters view it as a critical step toward promoting safety and reducing serious risks, while critics argue that the requirements could unduly burden AI developers and inhibit innovation. As California leads the way, other states and the federal government may follow suit, shaping the future of AI regulation.
