When it comes to AI, as California goes, so goes the nation. The most populous state in the US is the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection regulations, and more recently on AI as well.
Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the nation’s AI laws. In the early hours of Saturday morning, the California State Assembly voted in favor of SB 53, a bill that would require transparency reports from the developers of extremely powerful, “frontier” AI models. The bill, which has passed both houses of the state legislature, now goes to Gov. Gavin Newsom to either be vetoed or signed into law.
The models targeted represent the cutting edge of AI: extremely adept generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude.
AI can offer tremendous benefits, but as the bill is meant to address, it’s not without risks. And while there is no shortage of present-day risks from issues like job displacement and bias, SB 53 focuses on possible “catastrophic risks” from AI. These include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. Such catastrophic risks represent widespread disasters that could plausibly threaten human civilization at the local, national, and global levels. They are the kinds of AI-driven disasters that haven’t yet occurred, rather than already-realized, more personal harms like AI deepfakes.
Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event that causes more than 50 casualties or over $1 billion in damages, and to which a frontier model meaningfully contributes. How fault is determined in practice would be up to the courts to interpret. It’s hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us protect against both near- and long-term consequences.
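To make the numbers concrete, here is a minimal sketch in Python of the quantitative trigger that definition implies. The names and structure are my own illustration, not anything in the bill; the actual legal test also hinges on foreseeability, materiality, and a frontier model’s contribution, judgments that fall to courts rather than code.

```python
# Illustrative sketch of SB 53's numeric trigger, as described above.
# Names are hypothetical; the statute's actual test also requires a
# "foreseeable and material risk" and a frontier model's meaningful
# contribution, which no simple predicate can capture.

CASUALTY_THRESHOLD = 50               # more than 50 casualties
DAMAGE_THRESHOLD_USD = 1_000_000_000  # over $1 billion in damages

def clears_numeric_bar(casualties: int, damages_usd: float) -> bool:
    """True if an event exceeds either numeric bar in the bill's definition."""
    return casualties > CASUALTY_THRESHOLD or damages_usd > DAMAGE_THRESHOLD_USD
```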
On its own, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks or AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.
SB 53 is the third state-level bill to specifically target AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.
SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with financial penalties of up to $1 million per violation.
In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.
Both cover large models that are trained at 10^26 FLOPS, a measure of immense computing power used in a variety of AI legislation as a threshold for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention.
While SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. “This light-touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”
Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.
Proponents of the bill were optimistic about its chances of being signed by the governor should it pass the legislature. On the same day that Gov. Newsom vetoed SB 1047, he commissioned a working group focused solely on frontier models. The group’s resulting report provided the foundation for SB 53. “I’d guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” Dean Ball, a former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter, told Transformer.
But several industry organizations rallied in opposition, arguing that additional compliance regulation would be expensive, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI has lobbied against the bill, and the technology trade group Chamber of Progress argues that it would require companies to file unnecessary paperwork and needlessly stifle innovation.
“These compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email. “The bill, if passed, would feed California regulators truckloads of company information that they could use to design a compliance industrial complex.”
In contrast, Anthropic enthusiastically endorsed the bill on Monday. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a solid path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)
The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But because the overwhelming majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.
“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with one another, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But “I love that the bill has a provision that would allow companies to defer to a future alternative federal standard.”
“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my view, the jury is out on that, but the possibility is far more likely than some suggest. It’s been less than 3 years since ChatGPT was released. That’s hardly a lifetime in public policy.”
But in a time of federal gridlock, frontier AI developments won’t wait for Washington.
The catastrophic risk divide
The bill’s focus on, and framing of, catastrophic risks is not without controversy.
The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of existential risks, which threaten humanity’s very survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.
But if existential risks are clear (the end of the world, or at least the world as we know it), what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival. They’re often chiefly concerned with risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with current risks, like climate change, mosquito-borne disease, or algorithmic bias. These camps can blend into each other: neartermists would also like to avoid being hit by an asteroid that could wipe out a city, and longtermists don’t dismiss risks like climate change. The best way to think about them is as two ends of a spectrum rather than a strict binary.
You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics concerns the moral implications of how the technology is deployed in the present, including issues like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, interpersonal conflicts have led these two factions to work against each other, much of which comes down to emphasis. (AI ethics people argue that catastrophic risk concerns overhype AI’s capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t have ways to mitigate larger-scale problems down the line.)
But behind the question of near- versus long-term risks lies another one: What, exactly, constitutes a catastrophic risk?
SB 53 originally set the standard for catastrophic risk at 100 rather than 50 casualties, similar to New York’s RAISE Act, before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside the bill’s scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from engaging in discussions about suicidal ideation or sexually explicit material.)
SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in creating or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.
“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped inform the basis of the bill. “We do look at, like, AI-enabled or AI potentially [caused] or correlated suicide. I think that’s, like, a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”
Transparency helps prevent such catastrophes because it can raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is responsible for a particular outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.
“These risks are coming and we should be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when these things aren’t happening at a large scale, it makes sense to be kind of focused on transparency.”
However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus should be on mitigating it.
“Maybe four years ago, if we had passed some kind of transparency legislation like SB 53 but focused on those harms, we would have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to sort of correct that mistake on these issues and get some kind of forward-facing information about what’s happening before things get crazy, basically.”
SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us completely by surprise. We don’t know what we don’t know.
It’s also certainly possible that models trained below 10^26 FLOPS, which aren’t covered by SB 53, have the potential to cause catastrophic harm under the bill’s definition. The EU AI Act sets its threshold for “systemic risk” at the smaller 10^25 FLOPS, and there’s disagreement about the utility of computational power as a regulatory standard at all, especially as models become more efficient.
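For intuition on where those thresholds bite, total training compute for dense transformer models is often approximated with the standard heuristic of roughly 6 FLOPs per parameter per training token. The sketch below applies that rule of thumb; the model sizes are hypothetical examples of mine, not figures from the bill or from any company.

```python
# Rough training-compute estimate using the common 6 * N * D heuristic
# (about 6 FLOPs per parameter per training token for a dense transformer).
# All model figures below are hypothetical illustrations.

SB53_THRESHOLD = 1e26        # SB 53's frontier-model compute line
EU_AI_ACT_THRESHOLD = 1e25   # EU AI Act "systemic risk" line

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

for params, tokens in [(7e9, 2e12), (1e12, 2e13)]:
    flops = training_flops(params, tokens)
    print(f"{params:.0e} params x {tokens:.0e} tokens -> {flops:.1e} FLOPs; "
          f"over SB 53 line: {flops >= SB53_THRESHOLD}, "
          f"over EU line: {flops >= EU_AI_ACT_THRESHOLD}")
```

As the arithmetic suggests, efficiency gains cut both ways: a model trained well below either line can still end up with capabilities the thresholds were meant to capture.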
As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from AI’s real near-term benefits and problems, like its potential to accelerate the pace of scientific research or to create nonconsensual deepfake imagery, respectively.
That said, deepfakes could certainly cause catastrophic harm. Imagine, for instance, a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that isn’t necessarily a bad thing,” he told me.
It could be that the ideological debate around what qualifies as a catastrophic risk, and whether that is worthy of our legislative attention, is just noise. The bill is meant to regulate AI before the proverbial horse is out of the barn. The average person isn’t going to worry about the likelihood of AI sparking nuclear war or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. Yet in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” closer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.
If Gov. Newsom signs SB 53 into law, it could inspire other state attempts at AI regulation through a similar framework, and ultimately encourage federal AI safety legislation to move forward.
How we think about risk matters, because it determines where we focus our prevention efforts. I’m a firm believer in the value of defining your terms, in law and in debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
Update, September 13, 2025, 11:55 am ET: This story was originally published on September 12 and has been updated to reflect the outcome of the California State Assembly vote.