[ INDUSTRY ]
9 min read
From nonprofit mission to $500B profit machine
What happens when a company founded to benefit humanity restructures into a for-profit entity? Safety mechanisms get eliminated and billions flow to insiders.

The original promise
OpenAI was founded in 2015 as a nonprofit research laboratory with an explicit mission: to ensure that artificial general intelligence benefits all of humanity. The founding charter emphasized safety, broad distribution of benefits, and a commitment to avoid concentrating AI power in the hands of a few. Co-founders Elon Musk and Sam Altman positioned the organization as a counterweight to the profit-driven AI development underway at Google and other tech giants. The nonprofit structure was not incidental to the mission. It was the mechanism that was supposed to guarantee it.
The restructuring timeline
In 2019, OpenAI created a capped-profit subsidiary, initially limiting investor returns to 100 times their investment. By 2023, the cap had been raised and the governance structure modified to give the for-profit arm more autonomy. In 2024, OpenAI began converting fully to a for-profit corporation, abandoning the nonprofit structure that was supposed to keep the mission intact. Each step was presented as necessary to raise the capital required to compete, but the cumulative effect was the dismantling of every structural safeguard the founders had put in place.
The voices that opposed it
Geoffrey Hinton, the Turing Award-winning researcher often called the godfather of deep learning, publicly opposed the restructuring. Harvard Law professor Lawrence Lessig co-authored a letter arguing that the conversion violated nonprofit law and the organization's fiduciary obligations to the public. Former board members and early employees raised concerns that the restructuring would eliminate the last remaining checks on the company's pursuit of profit. These were not fringe critics. They were some of the most respected voices in AI research and governance.
Safety mechanisms eliminated
The nonprofit board that fired Sam Altman in November 2023 was exercising exactly the kind of safety oversight the original structure was designed to enable. Within days, that board was replaced with one more amenable to the CEO and to investors. The superalignment team, tasked with ensuring future AI systems remain safe and aligned with human values, lost both of its co-leads in May 2024: Ilya Sutskever resigned, and Jan Leike departed citing concerns that safety was being deprioritized in favor of product launches. The team itself was dissolved shortly afterward. The organizational mechanisms meant to keep safety at the center of decision-making were systematically removed or weakened.
The $500 billion Stargate project
In January 2025, OpenAI announced the Stargate project, a $500 billion infrastructure initiative to build AI data centers across the United States. The scale of the investment dwarfs anything a nonprofit could have pursued and makes clear that the restructuring was always about access to capital at a scale that demands returns. A project of this magnitude does not serve humanity broadly. It serves shareholders, partners, and a company that has abandoned every structural commitment it once made to the public interest.
Privacy as architecture, not marketing
SecureGPT exists because privacy is not a phase, not a marketing angle, and not a policy that gets updated when the business model changes. It is the architecture. Messages are encrypted before they leave your device. The server processes and discards. There is no database of your conversations and no incentive to build one. When privacy depends on a company's goodwill, it lasts exactly as long as that goodwill is profitable. When privacy is enforced by encryption and a stateless design, it lasts as long as the math does.
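To make that concrete, here is a minimal client-side sketch of the flow described above, written against the browser's standard Web Crypto API. The endpoint URL, payload shape, and key-wrapping scheme are illustrative assumptions rather than SecureGPT's published interface: each message is encrypted on the device with a fresh AES-GCM key, and that key travels wrapped under the server's public key so the server can decrypt in memory, respond, and discard.

```typescript
// Illustrative sketch only; the endpoint, payload fields, and wrapping
// scheme are assumptions, not SecureGPT's actual API.

function toBase64(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...bytes));
}

async function sendEncrypted(
  message: string,
  serverPublicKey: CryptoKey, // assumption: RSA-OAEP key imported elsewhere with "wrapKey" usage
): Promise<string> {
  // Fresh symmetric key per message: nothing long-lived to steal or subpoena.
  const aesKey = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true, // extractable, so it can be wrapped for transport
    ["encrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit AES-GCM nonce

  // Encrypt the plaintext before it leaves the device.
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    aesKey,
    new TextEncoder().encode(message),
  );

  // Wrap the AES key under the server's public key; only the server's
  // in-memory process can unwrap it, decrypt, respond, and discard.
  const wrappedKey = await crypto.subtle.wrapKey(
    "raw",
    aesKey,
    serverPublicKey,
    { name: "RSA-OAEP" },
  );

  const response = await fetch("https://api.example.invalid/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      iv: toBase64(iv),
      key: toBase64(new Uint8Array(wrappedKey)),
      msg: toBase64(new Uint8Array(ciphertext)),
    }),
  });
  return response.text(); // reply handling omitted for brevity
}
```

The ephemeral per-message key is the design choice doing the work: there is no long-lived secret sitting on the server and no single key that unlocks more than one conversation, so a stateless server has nothing durable to retain even if it wanted to.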