
AI companies don't want regulation — they want their regulation

The biggest AI companies spend millions lobbying for regulations that protect incumbents and crush competitors. This isn't safety — it's strategy.

The lobbying numbers

The largest AI companies have collectively spent hundreds of millions of dollars on lobbying in Washington over the past three years. OpenAI, Google, Microsoft, Meta, and Amazon all maintain dedicated government affairs teams and have dramatically increased their lobbying budgets since 2023. These expenditures are not charitable contributions to good governance. They are strategic investments designed to shape the regulatory environment in ways that benefit incumbents and raise barriers to entry for competitors and open-source alternatives.

What regulatory capture looks like

Regulatory capture occurs when the companies being regulated effectively control the regulatory process. In AI, this takes a specific form: large companies advocate publicly for regulation while privately ensuring that the rules they support would be prohibitively expensive for smaller competitors to comply with. Compliance costs that represent a rounding error for a trillion-dollar company can be existential for a startup. The result is a regulatory framework that looks like consumer protection but functions as market protection for the companies that wrote it.

The critics who called it out

David Sacks, among others, has publicly identified this pattern in AI regulation advocacy. The argument is straightforward: when the biggest companies in an industry are the loudest voices calling for regulation, something other than altruism is at work. The regulations these companies propose invariably include requirements for compute thresholds, safety evaluations, and compliance infrastructure that only the largest players can afford. The effect is to freeze the competitive landscape in place, protecting current market leaders from the disruption that smaller, more innovative competitors might cause.

How regulations favor incumbents

Proposed AI regulations frequently include requirements that function as competitive moats:

- Mandatory safety evaluations that cost millions of dollars to run.
- Compute reporting thresholds that apply only to models above a certain size.
- Compliance frameworks that require dedicated legal and policy teams.
- Insurance requirements calibrated to risk profiles defined by incumbents.

Each of these requirements is individually defensible as a safety measure. Collectively, they create an environment where only companies with billions in capital can afford to develop and deploy AI systems.

Safety regulation versus competitive moats

Genuine safety regulation would focus on outcomes rather than inputs: what harm did the system cause, not how much compute was used to build it. It would apply equally to all AI systems regardless of the size of the company that built them. It would be developed by independent regulators with no financial relationship to the industry, using frameworks designed by researchers rather than lobbyists. The regulations currently being proposed fail every one of these tests. They regulate inputs that correlate with company size, are designed by industry insiders, and create compliance burdens that disproportionately affect smaller organizations.

Privacy as a right, not a feature

SecureGPT supports genuine regulation that protects users rather than incumbents. We believe privacy should be a legal right, not a product feature that companies can offer or revoke at their discretion. Regulation should require that AI companies cannot train on user data without explicit opt-in consent, cannot retain conversations beyond the time needed to process them, and cannot change privacy terms without meaningful user approval. These are the regulations that would actually protect people, which is precisely why the largest AI companies are not lobbying for them.