AI Safety Politics after the SB-1047 Veto

After California Governor Gavin Newsom’s veto of the controversial SB-1047, political dynamics everywhere might be irreversibly changed to the detriment of safety-focused AI policy. 


Introduction

The political process around SB-1047, the Californian AI bill strongly supported by AI safety advocates, has come to an end. But the politics of regulating frontier AI everywhere stand to be affected: For the last few weeks, all eyes were on California, and the debate around SB-1047, Gavin Newsom's ultimate decision, and his explanations for it have been duly noted. I discuss four potential international repercussions: The veto might provide cover for laissez-faire policy everywhere, raise questions around how frontier AI regulation would be enforced, leave the debate irreversibly polarised, and cast doubt on the use of compute thresholds in AI governance.

The Veto Gives Cover for Laissez-faire Attitudes

  • The veto reinforces economic worries and broader scepticism that undermine safety-focused policy.

The veto provides strong political ammunition for opponents of safety-focused policy worldwide. This is a fairly obvious point, so I will only briefly mention two narratives that can easily be spun from it:

First, the veto motivates an economic narrative: If even California, in its commanding position, is economically wary of safety policy, others can’t afford it, either. The possibility of ‘catching up’ and its interplay with risk-focused policy has long been an important topic in non-US discussions of AI governance, and it is likely to feature even more strongly given e.g. the economic woes of Western Europe and the growing economic relevance of AI. In that political dynamic, anything that can be read as confirmation that safety policy is opposed to productive AI development is harmful - and the veto can be read as just that. 

Second, the veto motivates a broader anti-safety story: In many ways, California is the place most aware of the potential pace and impact of AI progress, and policymakers everywhere know it is. And a common international political stereotype, typically lacking in nuance, casts its government as fairly left-wing, anti-industry, and pro-regulation. So if, at the nexus of safety awareness, a pro-regulation government is still not moved to pass a safety bill, that ought to make policymakers with less expertise and interest in the issue sceptical of such bills' merits. Paradoxically, previous attempts by the safety coalition to downplay the impact of SB-1047 exacerbate this issue: If even a purportedly minimal bill could not get passed in California, policymakers elsewhere might believe the case for safety policy could not possibly be very strong.

With a Hesitant California, Safety Policy Enforcement Seems Uncertain

  • The veto might make international policymakers doubt whether their regulation can be meaningfully enforced in California. 

SB-1047 received so much global political attention because of California's unique status as the home state of most relevant AI developers. This also makes it the place where, ultimately, much of the enforcement of more restrictive safety policy proposals would have to happen: If illegal training runs were to be interrupted, provably dangerous models to be shut down, next-generation developments probed and prodded, and reckless AI developers held accountable, it would often need to happen in California. Now that the Californian executive has demonstrated its unwillingness to adopt safety-focused policy even at lower enforcement levels, I imagine that many policymakers, especially AI risk sceptics, will rightly wonder: What good is making all these laws if the Californians won't cooperate to enforce them?

Of course, to some extent, that question was always going to be asked - especially where the authority of jurisdictions to regulate frontier AI was called into question anyway, most prominently the EU. But both the signal of signing SB-1047 and a potentially cooperative Frontier Model Board might have had a reassuring effect. Maybe Sacramento gets no say in the matter of enforcing international agreements or bilateral cooperation after all, and maybe a federal AI safety bill is in the cards, but until then, and given these uncertainties, questions around willingness to enforce and cooperate on enforcement could prove useful ammunition for opponents of safety-focused policy.

To illustrate, picture a safety advocate in conversation with a policymaker hesitant about the reach of their mandate and the likelihood of enforcement. Compare the implications of the current outcome to either a successful SB-1047, where a strong Californian commitment and a cooperation-ready Frontier Model Board would have reassured international policymakers, or even a world without SB-1047 to begin with, where at least strategic ambiguity would have remained. The safety advocate now seems to be in a much worse position, and this question will come back to haunt decisive national regulation and international agreements.

The Polarisation Genie Is Out of the Bottle

  • Battle lines around AI safety policy are now drawn clearly. That changes debates everywhere. 

The discussion around SB-1047 has seen the entrenchment of political fronts around frontier AI regulation, with the safety coalition and some incidental allies on one side and a broad alliance of industry, open-source advocates, and safety-sceptic academics on the other. This would be true whether the bill was vetoed or signed - but the veto leaves the safety coalition with all the harms of a polarised debate and still no law.

First, previously, safety-focused policy could be pitched as common-sense policy: prudent if risks manifested, harmless if not. Indeed, the pitch would continue, all sides of the debate accepted that some safety policy would be prudent, even leaders of the large AI corporations. As long as safety advocates could believably make this pitch, it lowered the threshold for initiating safety legislation: Historically pro-industry, anti-regulation parties could be convinced that this policy was different, pro-regulation forces could be persuaded that there was little risk of a drawn-out adversarial process, and policymakers on all sides could be convinced there was little political downside. This climate might have enabled some of the more unlikely sources of political support for AI safety, e.g. among the UK Tories, German Conservatives, or, earlier, GOP senators. Post-veto, this becomes much harder: A clear anti-safety coalition has formed publicly, and its existence all but guarantees there will be opposition in the parliaments, in the public debate, and within one's own political backyard. Maybe this common-sense pitch was a political illusion to begin with - but it was useful to safety policy, and it is now seriously damaged. To a policymaker, pushing safety policy now clearly spells a bona fide political fight.

Second, any future fight will be harder. While the safety coalition was pretty well-established before this entire debate, the opposition was not. In that sense, this was the easiest that AI safety policy was ever going to get: To come out swinging while any opposition was still rallying, and to pass the law before any would-be sceptics knew what hit them. Now, whenever one of the prominent supporters shows any movement on the issue or puts forward a bill or suggestion, somewhat organised opposition will be ready to mobilise in the early stages, with many pro-safety leaders already cast as closely scrutinised bogeymen. And the hardened fronts from this debate will likely transfer to debates in other jurisdictions as well: Many organisations and interest groups are the same, the debates are conducted on international platforms to begin with, and the lineup of Californian / US experts central to the SB-1047 debate is often invoked overseas as well.

Compute Thresholds Might Be Politically Damaged Goods 

  • It is noteworthy and consequential that the veto justification specifically mentioned compute thresholds.

Newsom advanced many reasons for vetoing the bill, and much has been said about how seriously they should be taken. They might well be best understood as politically opportune messaging rather than his real motivations. But dismissing them entirely still seems like an overcorrection, especially when it comes to reasons that do not look like obvious political home runs. At the very least, something must have made Newsom feel that mentioning compute thresholds would be politically prudent - and that something might even be that they played a role in his decision. That means two things:

First, it goes to show that the compute thresholds might have genuinely been an unpopular element of the bill, following salient complaints that the threshold lines are often vague or arbitrary proxies for risk. Even the fiercest advocates of compute-based governance often concede some of these points, but argue there is no better method. Frustratingly, that might be a good argument in movement-internal discussions about the comparative merits of assessment methods, but it does not help much when the question is whether to pass a law at all: If a sceptic argues that a law should not be passed because it cannot accurately discriminate among models, they do not need to provide a constructive alternative; they would be content with no law at all.

Second, even if the compute thresholds played no role in motivating the veto, they are now a politically damaged tool: They are on record as a purportedly veto-motivating element. Likely enough, wherever an AI safety bill comes up next, it will still include compute thresholds; and it will be easy pickings for opportunistic opponents of that bill to identify that similarity to SB-1047 and point to Newsom's specific criticism. This will hang around the neck of safety policy. It is easy to write in a piece like this, but if compute thresholds really are the best possible proxy for risk categories, they will need a much better defence.

Outlook 

Policymakers everywhere won't miss the implications of this veto. Being cognisant of its likely effects might help the safety coalition preempt some of the more dire consequences for its policy case, and so I believe it is valuable to have a discussion around them. That does not make this piece an indictment of the political strategy around SB-1047 itself - the politics did not work out in the end, but that is not the same as saying they were mistaken from the beginning. I look forward to discussing!


I try to post updates on my writing, but not on much else, on X / Twitter.

