Corporate AI Labs’ Odd Role In Their Own Governance
Plenty of attention rests on artificial intelligence developers’ non-technical contributions to ensuring the safe development of advanced AI: Their corporate structure, their internal guidelines (‘RSPs’), and their work on policy. We argue that strong profitability incentives increasingly render these efforts ineffective. As a result, less hope should be placed in AI corporations’ internal governance, and more scrutiny should be afforded to their policy contributions.
This post was co-written with Dominik Hermle.
Introduction
Advocates for safety-focused AI policy often portray today’s leading AI corporations as caught between two worlds: Product-focused, profit-oriented commercial enterprise on the one hand, and public-minded providers of measured advice on transformative AI and its regulation on the other. AI corporations frequently present themselves in the latter way, invoking the risks, harms, and transformative potential of their technology in hushed tones - while at the same time heralding the profits and economic transformations ushered in by their incoming top-shelf products. When these notions clash and profit maximization prevails, surprise and indignation frequently follow: The failed ouster of OpenAI CEO Sam Altman revealed that profit-driven Microsoft was a much more powerful voice than OpenAI’s non-profit board, and the deprioritization of its superalignment initiative, reportedly in favor of commercial products, reinforced that impression. Anthropic’s decision to arguably push the capability frontier with its latest class of models revealed that its reported private commitments to the contrary did not constrain it; and DeepMind’s full integration into the Google corporate structure has curtailed hopes for its responsible independence.
Those concerned about safe AI might deal with that tension in two ways: Put pressure on and engage with the AI corporations to make sure that their better angels have a greater chance at prevailing; or take a more cynical view and treat large AI developers as just more private-sector profit maximizers - not ‘labs’, but corporations. This piece argues for the latter. We examine the nature and force of profit incentives and argue they are likely to lead to a misallocation of political and public attention to company structure; a misallocation of policy attention to AI corporations’ internal policies; and a misallocation of political attention and safety-motivated talent to lobbying work for AI corporations.
Only Profit-Maximizers Stay At The Frontier
Investors and compute providers have extensive leverage over labs and need to justify enormous spending
As a result, leading AI corporations are forced to maximize profits
This leads them to advocate against external regulatory constraints or to shape them in their favor
Economic realities of frontier AI development make profit orientation a foregone conclusion. Even if an AI corporation starts out not primarily motivated by profit, it might still have no choice but to chase maximal profitability. As the track record of Anthropic, the seemingly most safety-minded lab, clearly demonstrates, even a maximally safetyist lab has to remain at the frontier of AI development: To understand the technical realities at the actual model frontier, to attract the relevant talent, and to motivate the necessary compute investments. The investments required to scale up computing power and remain at this frontier are enormous and are only provided by profit-driven major technology corporations or, to a much lesser extent, large-scale investors. And a profit-driven tech corporation seems exceedingly unlikely to hinge astronomical capex on an AI corporation that does not give off the unmistakable impression of pursuing maximal profits. If a publicly traded company like Microsoft had serious reason to believe that a model it spent billions of dollars on could simply not be released as a result of altruistically motivated safety considerations that go beyond liability risks, its compute expenditure would at best be strategically imprudent and at worst violate its fiduciary duty. This is especially true when compared to counterfactually funding model development at another, less safety-minded, corporation or an in-house lab. And these external pressures might well be mounting in light of rising doubts around the short-term profitability of frontier AI development.
So even a safety-motivated lab has no choice but to act as if it were an uncompromising profit-maximizing entity - lest its compute providers take their business elsewhere and the lab can no longer compete. The effects are clearly visible: When OpenAI board members briefly tried to change course, Microsoft nearly took away most of OpenAI’s compute and talent in a matter of hours; in a compute crunch, OpenAI’s superalignment team reportedly got the short end of the stick; and OpenAI might well be gearing up for an IPO that does away with what’s left of its safety-focused structure. The recent, presumably antitrust-motivated, surrender of Apple’s and Microsoft’s respective board seats at OpenAI does cast some doubt on the institutional side of this mechanism, but it is important to note that Microsoft did not have a board seat before it managed to strongarm OpenAI into rehiring Altman - if anything, that episode showed how little the board matters. It would be exceedingly surprising if Microsoft’s main competitors did not have, or at least wrestle for, a comparably iron grip on their respective AI corporations. If they succeed, these labs become full-on profit maximizers; and if they don’t, profit-maximizing labs supported by investors and compute providers might take their place.
This determines the role of AI corporations in the policy and governance debate. With AI attracting rapid technical innovation, financial investment, and media attention, it seems destined to be a key technology of the 21st century. This strategic importance will inevitably lead to public attention, political pressure, and regulatory intervention in favor of making AI safe, controlled, and beneficial.
Profit maximization is not necessarily at odds with that goal - often, safe and reliable AI is the best possible product to release. Much safety-relevant technical progress, such as advances in making AI more predictable, more responsive to instructions, or more amenable to feedback, has also made for meaningful product improvements. But this incentive has its limitations: In race situations where being first to reach meaningful capability thresholds promises great economic benefit, larger and larger risks of unsafe or unreliable releases might become economically tenable.
But more importantly, business interest is often at grave odds with external constraints. It is near-universally accepted that geostrategically and infrastructurally critical industries ought to be externally constrained through regulation and oversight - even where market pressures favor safe products. Such oversight is usually meant to ensure strategic sovereignty, reduce critical vulnerabilities, and prevent large-scale failures. Profit maximizers will not willingly accept such constraints, even where they publicly endorse them in principle - because they believe they might know best, because compliance is expensive, and because unconstrained behavior can sometimes be very profitable. Behind closed doors, the largest AI corporations have been some of the fiercest opponents of early legislative action that might constrain them, whether the EU’s AI Act, California’s SB-1047, or early initiatives in D.C. and elsewhere. Where constraints seem unavoidable, business interest suggests shaping them to be minimally restrictive or potentially even conducive: Companies might advocate for legal mechanisms that best fit their own technical abilities or that prevent smaller competitors from catching up.
This understanding of AI corporations’ profitability motives and their resulting role in policy dynamics should lead us to reassess their activity in three areas: Corporate structure, Responsible Scaling Policies, and corporate lobbying.
Constraints from Corporate Structure Are Dangerously Ineffective
Ostensibly binding corporate structures are easily evaded or abandoned
Political and public will cannot be enforced or ensured via corporate structure
Public pressure can lead to ineffective and economically harmful non-profit signaling
AI corporations, in alleging their binding commitment to responsible development, often point to internal governance structures. But because of the primacy of profitability even in safety-minded labs discussed above, arcane corporate structures might do very little to constrain that profit motive. This is true for three reasons. Firstly, profit maximization can easily be argued to be a necessary condition for a valuable contribution to safety, as it secures the talent and compute required to begin with. So even under an ostensibly prohibitive pro-safety structure, a single lab might justify pushing for profits and capabilities - look at Anthropic and its recent decision to release arguably the most advanced suite of LLMs to date. Secondly, corporate structures are often not very robust to change driven by executive leadership and major investors - again, it is easy to point at OpenAI as a recent example of a quick de facto change in governance setup that could not be halted even by one of its founders. Thirdly, in many cases, existing companies might be easily exchangeable shells for what really matters under the hood: Compute that could be reassigned by the compute providers, and research teams that can be poached and will follow the compute. That kind of move might be legally contentious, but given the massive asymmetries in legal resources and potential contractual provisions around model progress in compute agreements, it seems like a highly believable threat. Present corporate structures that suggest a stronger focus on safety or the common good might hence be best understood more cynically: They are only there because they have not interfered with profitability just yet, and can readily be circumvented or dismissed once they do. Any other understanding is a set-up for miscalibrated expectations and disappointment, as in the case of the internal changes at OpenAI. Drawing the right lessons from that episode is important: Alleging that Sam Altman turned out to be a uniquely deceptive or Machiavellian figure, or that OpenAI underwent some surprising hostile takeover, misses the point and sets up future misconceptions. The lesson should be that there was a failure to understand the real distribution of power in corporate governance.
Expecting corporate governance to enforce the public will to make AI safe is not only mistaken; it could also soften pressure on political institutions to craft meaningful policies that address the societal challenges posed by AI. Insofar as the ineffectiveness of safety-minded corporate structures is not immediately obvious, the public and policy-makers could be inclined to believe that AI corporations are already sufficiently aligned with the public interest. This misunderstanding should be avoided: Public attention should pivot towards policymakers and the adequacy of their regulatory frameworks rather than dwelling on corporate board reshufflings. This shift might even benefit AI corporations - treating them like any other private company would relieve them of the burden of costly non-profit signaling, such as complex corporate structures designed to give an impression of safety focus.
Hope In RSPs Is Treacherous
RSPs on their own can and will easily be discarded once they become inconvenient
Public or political pressure is unlikely to enforce RSPs against business interests
RSP codification is likely to yield worse results than independent legislative initiative
Therefore, much less attention should be afforded to RSPs.
Secondly, a rosy view of AI corporations’ incentives leads to overrating the relevance of corporate governance guidelines set by the labs themselves. Responsible scaling policies (RSPs) are documents that outline safety precautions taken around advanced AI - e.g. which models should undergo which evaluations, necessary conditions for model deployment, or development red lines - see those from OpenAI, Google DeepMind, and Anthropic. These RSPs are met with interest and attention - sometimes praise, sometimes disappointment - from safety advocates, and often feature in policy proposals and political discussion.
Unfortunately, there is very little reason to believe that RSPs deserve this attention. Firstly, of course, RSPs by themselves lack an external enforcement mechanism. No one can compel an AI corporation to comply with its RSP, nor to keep it once it becomes inconvenient. RSPs are simply a public write-up of internal corporate governance, valid exactly as long as company leadership decides they are. An optimistic view might be that RSPs are a good way to hold AI corporations accountable - that public and political attention would somehow be able to sanction labs if they diverged from their RSPs. Not only is this a fairly convoluted mechanism of efficacy, it also seems empirically shaky: Meta is a leading AI corporation with industry-topping amounts of compute and talent, and it does not publish RSPs. This seems neither to have attracted impactful public or political scrutiny nor to have hurt Meta’s AI business.
This lack of enforceability is sometimes thought to be answerable through RSP codification: RSPs might be codified, i.e. implemented in the form of binding law. This describes, in effect, a legislative process: For RSPs to be binding or externally enforceable in any meaningful sense, someone would have to be empowered to carry out an external, neutral evaluation of compliance and to enforce the measures the RSPs often stipulate - for instance, whether a model ought to be shut down, a planned deployment cancelled, or training stopped. At present, it is difficult to conceive how this evaluation and enforcement should happen if not through executive action empowered by legislative mandate. So RSP codification is in effect simply a safe AI law - with one notable difference: We do not start from a blank piece of paper, but with an outline of what AI corporations might like the regulation to entail. The advantages of that approach might come from AI corporations’ relevant expertise - which we discuss later - or from their increased buy-in. But it seems unclear why buy-in is required: There is substantial political and public appetite for sensible AI regulation, and by and large, whether private companies want to be regulated is usually not a factor in our democratic decision to regulate them. The downside of choosing an RSP-based legislative process should be obvious - it limits, or at least frames, the option space to the concepts and mechanisms provided by the AI corporations themselves. And this might be a harmful limitation: As we have argued above, these companies are incentivized to mainly provide mechanisms they might be able to evade, that fit their idiosyncratic technical advantages, that strengthen their market position, and so on. RSP codification hence seems like a worse path to safe AI legislation than standard regulatory and legislative processes.
Additionally, earnest public discussion and advocacy for the codification of RSPs may give policymakers the superficial impression that corporate governance is already adequately addressing safety concerns. All of these are reasons why even strictly profit-maximizing AI corporations might publish RSPs - they quell regulatory pressure and shift and frame the policy debate in the corporations’ favor. Hence, affording outsized attention to RSPs and conceiving of RSP codification as a promising legislative approach is unlikely to lead to particularly safe regulation. It might, however, instill a false sense of confidence, reduce political will to regulate, or shift the regulatory process in favor of industry. We should care much less about RSPs.
For-Profit Policy Work Is Called Corporate Lobbying
For-profit work on policy and governance is usually called corporate lobbying. In most other industries, corporate lobbying is understood as a force that public-interest advocacy stands opposed to and corrects
Corporate lobbying output should be understood as constrained by business interests
Policy attention and the allocation of safety-minded talent should treat corporate lobbying with more skepticism.
Thirdly, AI corporations’ lobbying efforts currently have a strangely mixed standing in safety policy debates. Leading AI corporations employ large teams dedicated to policy (similar, by all accounts, to government affairs teams at other corporations) and governance (with less direct equivalent). On one hand, extensive lobbying efforts by AI corporations are often viewed critically, especially as they attempt to weaken regulatory constraints. On the other hand, these teams sometimes appear as allies of safety policy advocacy. Labs’ lobbying and governance teams frequently recruit from safety advocates and maintain close relationships with the safety coalition. In fact, working for an AI corporation is often considered an advisable, desirable career step for non-profit safety advocates. This integration is highly unusual. In other industries, transfers from non-profit work to corporate lobbying happen, but they are usually not considered to be in service of the non-profits’ goals, and might at best be called green- (or safety-, or clean-,...) washing and at worst a betrayal of the cause.
The existence of governance teams concerned with developing ostensibly impartial ideas on how to govern their own technology is also not very common in other industries. This phenomenon may be best understood in light of historical context. Because AI emerged only recently from a long prehistory of largely theoretical speculation about its capabilities, research labs became generalist hubs on the technology - no one beyond the labs and academia had the technical knowledge of how to regulate it safely, or the political interest to even think about it much. But at least since the release of ChatGPT and the ensuing surge of AI interest and investment, political institutions and civil society organizations have started to catch up. Dedicated bodies, such as the EU AI Office and the US & UK AI Safety Institutes, have been established and equipped with leading talent, and there is a thriving new ecosystem of AI think tanks and advocacy groups.
It would be a mistake to burden the discussion around frontier AI with the tribalistic rhetoric often present in regulatory debates of other industries. But the prevalence of that rhetoric in virtually all of these other sectors shows how the respective roles and relationships of corporate lobbyists, non-profit and academic advisors, and policy-makers have historically been defined. Forfeiting this adversarial dynamic points to a perhaps naive understanding of what profit-maximizing AI corporations will allow their policy and governance teams to do. It is simply implausible that labs would, in due time and with increasing commercialization, pay a substantial department to create policy-related outputs that do not directly further their policy goals. Again, the cynical perspective might be most informative: Profit-maximizing companies pay policy and governance teams to shape policy and governance in a way that maximizes their profits, while avoiding researching and publishing any policy proposal that could hinder their AI products and thereby their future financial success. This can be enforced at the executive level by directly vetoing any potent safety policy or - more indirectly - by cutting financial, organizational, and compute resources for safety research. On a personal level, employees might be inclined to practice anticipatory obedience and avoid going head-to-head with their employers, thereby protecting their salary, shares, reputation, and future impact in steering AI progress from the inside. Besides, working for a big AI corporation building a transformative technology like no other is of course exciting - even a cautious individual could be at least somewhat captivated by the thought of shaping rapid, utopian technological progress and therefore choose to remain on board. No one in these teams needs to be ill-intentioned or self-serving for this mechanism to take hold. In fact, there might be a lot of value in creating policy that ensures progress and profit in an industry that promises as much economic and societal value as AI. We just claim that this value does not lie in first-order progress on making regulation safer.
This leads to a dynamic wherein leading governance talent with high standing in non-profit and government AI policy institutions makes suggestions that serve business interests, yet faces much less scrutiny than corporate lobbying would in other industries. This shapes the policy debate toward the interests of the few AI corporations that currently maintain ostensibly safety-focused governance and policy teams.
Ultimately, this might also result in a misallocation of governance talent. Right now, working for the governance or policy team of a major lab remains a dream job for many ambitious, safety-minded individuals - an appeal further fueled by the social and cultural proximity between AI corporation employees and safety researchers and activists. This human capital could likely be used more effectively in less constrained positions, such as at governmental institutions or in research and advocacy.
This does not apply to safety-minded individuals conducting technical safety work at AI corporations. Much research progress on the technical level is immensely beneficial to safety, and where it is, the incentives of labs and safety advocates align - labs also want to build safe products. The misalignment only exists where policy, i.e. an external force that compels the labs, is concerned. So technical work remains valuable under any cynical understanding of lab incentives - and is presumably also sufficient to secure some of the benefits of having safety-minded employees at AI corporations, such as whistleblowing options and input on overall corporate culture.
Conclusion
The days of AI developers as twilight institutions between start-up and research lab are at best numbered and at worst over. The economics of frontier AI development will render them profit-maximizing corporate entities. We believe this means that we should treat them as such: No matter their corporate structure, we should expect them to choose profits over long-term safety; no matter their governance guidelines, ensuring safety should be the realm of policy; and no matter the intentions of their governance teams, we should understand their policy work as corporate lobbying. If we do not, we might face some rough awakenings.
I try to post updates on my writing, but not on much else, on X / Twitter.