The New Deal for OpenAI
What OpenAI’s policy paper reveals about power, dependency, and self-dealing
The Promise
OpenAI was founded in 2015 as a nonprofit with one stated purpose: make sure artificial intelligence benefits all of humanity.
The charter was explicit. If another organization came close to building safe AI before OpenAI did, the company would stop competing and help them instead. A board of directors, independent of management, held final authority over the company’s direction. The CEO served at the board’s pleasure.
The whole structure was designed so that no single person could steer the technology toward their own interests.
That structure is gone now. OpenAI is a company valued at $852 billion, preparing for what could be the largest IPO in history.
On April 6, 2026, two documents landed on the same day.
The first, a New Yorker investigation into Sam Altman. The article was 18 months in the making, built on secret memos from OpenAI’s former chief scientist and 200 pages of private notes from the man who left to start Anthropic. The first item on the chief scientist’s list of concerns about Altman was a single word: “Lying.”
The second, OpenAI’s “Industrial Policy for the Intelligence Age,” a 13-page paper proposing robot taxes, a public wealth fund, four-day workweeks, and government-managed containment plans for AI systems that might one day replicate themselves.
One document asks America to trust OpenAI with the future of its economy. The other asks whether OpenAI’s CEO can be trusted at all.
So let’s examine this paper, the company behind it, and what happens when the company being regulated is the one holding the pen.
Already Inside
In August 2025, the General Services Administration, the federal agency that handles purchasing for the entire government, announced a partnership with OpenAI to offer ChatGPT Enterprise to every participating federal agency for just a dollar per year.
OpenAI is losing $14 billion a year. It’s offering discounts it can’t afford. A dollar-per-agency deal gets every agency using your product before anyone has a public conversation about whether they should be. Let the training programs lock in. Let the institutional muscle memory form. By the time anyone asks whether the government should be paying for this, the AI is already embedded in the agency’s workflows. The answer is already “we can’t switch now.”
In February 2026, OpenAI signed a classified contract with the Pentagon worth up to $200 million. The timing tells its own story.
The deal was announced on the same evening that the Department of Defense banned Anthropic, OpenAI’s closest competitor, from classified work. OpenAI went from having no classified Pentagon contract to replacing its closest competitor inside the military’s most sensitive networks in a single day.
Congress can’t fully oversee classified work. Journalists can’t FOIA it. And once a company’s models are processing classified intelligence, replacing them means rebuilding secure infrastructure from scratch. The switching costs are financial, technical, and architectural.
Then in March 2026, OpenAI signed a distribution deal with Amazon Web Services. This one got less attention, but it might matter more than the others. Under the agreement, AWS distributes OpenAI’s models through GovCloud and classified environments to its existing government customers. Agencies can start using them without signing a separate contract or even talking to OpenAI directly. The models just show up inside a platform they’re already paying for.
These three deals are the visible parts of the machine. The full list is longer. OpenAI models are also deployed across Los Alamos, Lawrence Livermore, and Sandia National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury Department. The company partnered with Anduril Industries on counter-drone weapons systems in late 2024.
Nobody voted on any of this. OpenAI embedded itself in government through procurement, partnerships, and a loss leader priced at one dollar. By the time they published a policy paper proposing deeper government integration, the integration was already a fact on the ground.
The Money Pit
OpenAI needs the government more than the government needs it.
The company has $25 billion in annualized revenue. Nine hundred million weekly users. And $14 billion in projected losses for 2026 alone. It’s getting bigger and losing more money at the same time.
The gross margin sits around 33%, which means for every dollar OpenAI brings in, it keeps roughly a third. The rest goes to compute. Cumulative negative free cash flow through 2029 is projected at somewhere between $115 and $143 billion. And the company has committed to $600 billion in compute spending through 2030.
In November 2025, OpenAI’s CFO Sarah Friar stood up at a Wall Street Journal conference and said out loud what the spreadsheets already showed. The company was looking for “an ecosystem of banks, private equity,” and she floated the idea that the U.S. government could “backstop the guarantee” for financing AI infrastructure.
The reaction was immediate. David Sacks, the White House AI czar, said publicly that there would be no federal bailout for AI. Altman scrambled to distance himself from the comments. Friar said she’d “muddied the point.” CNN described the company as being in “panic mode.”
The backstop request disappeared from the public conversation. The need for it didn’t.
Meanwhile, the investors were getting nervous.
In September 2025, Nvidia CEO Jensen Huang had stood on stage with Altman announcing a $100 billion investment commitment. It was supposed to signal confidence. By January 2026, Huang was in Taipei calling it “never a commitment.” The Wall Street Journal reported he’d been privately criticizing OpenAI’s “lack of discipline.” He was building an exit ramp from the most expensive AI bet on the planet.
Then came February 27, 2026.
That morning, Nvidia recommitted to a $110 billion funding round. That afternoon, the Pentagon banned Anthropic from classified work. That evening, Altman announced the Pentagon contract.
Three events. Same day. There had been a week of reporting on Anthropic’s compliance deadline. Everyone in the industry expected them to refuse. You can do the math on whether Huang knew what was coming before he signed back on.
The backstop Friar asked for in November arrived four months later. A classified Pentagon contract guarantees demand from a customer that doesn’t comparison shop, doesn’t audit unit economics, and doesn’t go bankrupt. That kind of customer changes the risk calculation for every private investor sitting across the table.
OpenAI’s policy paper proposes future government integration as a progressive ideal. The company’s balance sheet already requires it as a survival strategy.
Rules for Thee
In 2023, OpenAI’s federal lobbying spend was $260,000, handled by three registered lobbyists. By 2025, the spend had grown to $2.99 million and the roster to eighteen. The firms on the payroll include Akin Gump, one of Washington’s largest lobbying operations, and Miller Strategies, run by a major Trump fundraiser. Mercury LLC registered former congresswoman Cheri Bustos to lobby on OpenAI’s behalf. Former Senator Norm Coleman lobbies for them on R&D issues.
That’s more than a tenfold increase in two years. During the same two years, Altman was publicly calling for regulation.
The lobbying had specific targets. California’s SB 1047 was a concrete AI safety bill with enforceable requirements for companies building frontier models. OpenAI helped kill it. California’s SB 53 was its successor, with many of the same provisions. OpenAI fought that one too. At the federal level, OpenAI has pushed for a 10-year preemption that would block all state AI regulation, clearing the field of the only jurisdictions that have actually tried to write enforceable rules.
The strategy is straightforward. Kill the specific bills that would create binding obligations. Advocate for broad federal frameworks that never quite arrive. Propose sweeping principles with no enforcement mechanism. Run out the clock while the embedding continues.
Then there’s the super PAC. Leading the Future launched with $125 million in backing from OpenAI president Greg Brockman, Andreessen Horowitz, and Palantir co-founder Joe Lonsdale, among others. It entered 2026 with $70 million in cash on hand and immediately went after Alex Bores, a New York assemblymember who sponsored the RAISE Act, an AI safety bill.
The message to every state and federal lawmaker considering an AI safety bill is simple: propose something with teeth and a $125 million operation will show up in your district.
The PAC is run by Josh Vlasto, a former Schumer adviser who’s also connected to Fairshake, the crypto super PAC. Fairshake spent over $130 million in the 2024 election cycle defeating candidates who supported cryptocurrency regulation. The architect of that effort was Chris Lehane, a former Clinton crisis operative known in political circles as the “Master of Disaster.” Lehane now serves as OpenAI’s chief of global affairs.
Crypto proved you could buy a regulatory environment by targeting individual legislators who stood in the way. The same people are now running OpenAI’s political operation.
The message found its way to Nathan Calvin too. Calvin runs Encode AI, an advocacy organization that supported SB 53. At the height of the fight over the bill, OpenAI had a sheriff’s deputy serve him a subpoena at his home, during dinner, demanding his private messages with California legislators.
You can read the policy paper’s language about “partnership” and “collaboration” and “keeping people first.” Then you can look at what the company does when someone actually tries to write a rule it would have to follow.
With all that said, let’s look at what the paper actually says.
The Paper on Its Merits
On April 6, 2026, OpenAI published “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” Thirteen pages. Twenty proposals. Altman sat down with Axios to pitch the paper. Chris Lehane, the same operative running the lobbying and super PAC strategy described above, told interviewers that policy discussions about AI need to be “as transformative as the technology itself.”
The paper itself compares the moment to the Progressive Era and the New Deal. These proposals deserve a fair hearing.
The paper opens with a genuine problem. As AI automates more labor, the tax base shifts away from payroll and wages, the revenue that currently funds Social Security and most of the safety net. The paper’s answer: tax capital gains more heavily at the top, increase corporate income taxes, and explore new taxes on automated labor. These ideas have been tested or discussed seriously in other countries. They poll well across party lines.
Restructuring taxes captures the money. It doesn’t capture the time. So the paper goes further: incentivize employers and unions to pilot 32-hour workweeks at full pay. Let the productivity gains from AI buy people time, not just buy shareholders profit.
Even that only helps people who still have jobs.
The paper’s most ambitious proposal is a nationally managed public wealth fund, seeded in part by AI companies, investing in “diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI.” The returns would go to every citizen. A janitor in Tulsa and a partner at Andreessen Horowitz would hold the same stake in AI’s upside.
A wealth fund assumes the technology stays manageable. The paper acknowledges scenarios where it might not, where dangerous AI systems “cannot be easily recalled” because model weights have been released, developers can’t limit access, or the systems are autonomous and capable of replicating themselves. The proposed answer is joint government-industry containment playbooks, built in advance rather than improvised during a crisis. If you take the risk seriously, that sounds prudent.
Containment handles the catastrophic risks. The question of who gets to use the technology in the first place is a different problem.
The paper frames AI as foundational infrastructure, like literacy or electricity, and calls for government-funded programs to put AI tools into public schools, rural libraries, and towns the market won’t reach on its own. For those communities, this would mean something real.
Access programs take time to build. Displacement might move faster. So the paper proposes auto-triggering safety nets tied to economic data. When AI displacement crosses certain thresholds, temporary expansions of unemployment benefits, wage insurance, and cash assistance kick in automatically and scale back down when conditions stabilize.
These are, on their merits, serious proposals about real problems. Some borrow from existing policy research. Others have been floated by think tanks and academics for years. Former Senate AI adviser Soribel Feliz noted that much of this material had already been discussed during the 2023-2024 Senate forums on AI.
The Suggestion Box
Twenty proposals. Every one worth examining on its own terms. But line them up and a pattern emerges.
Some of these proposals are genuinely popular. A four-day workweek. A public wealth fund. Robot taxes. Who’s going to argue against those? That’s the point. The popular ideas draw the eye. The structural ones do the work.
The paper asks the government to restructure the tax base. Build and manage a public wealth fund. Pilot workweek programs. Develop containment playbooks for self-replicating AI systems. Fund AI access for schools and libraries. Create real-time displacement tracking systems. Build auto-triggering safety nets. Accelerate the build-out of the electrical grid. Establish auditing regimes. Modernize transparency frameworks. Codify rules for government use of AI.
For government, the paper lays out specific mechanisms: tax structures, funding models, triggering thresholds, infrastructure plans. For AI companies, it offers a single paragraph of generalities. Adopt good governance. Commit to philanthropy. Harden your systems. The paper doesn’t say how, doesn’t say when, and doesn’t say what happens if they don’t.
One list is policy. The other is a suggestion box.
Every concrete mechanism in the paper, every proposal with a timeline, a trigger, a dollar amount, or an enforcement structure, is an assignment for the government or the public balance sheet. Everything directed at OpenAI and its competitors is voluntary, aspirational, and described in language that creates no obligation to do any of it.
The company proposing to reshape the American economy doesn’t propose changing anything binding about how it operates.
The Last Framework
The paper asks America to trust that this framework will hold. Here’s what happened with the last one.
OpenAI’s superalignment team was promised 20% of the company’s compute to work on keeping advanced AI under human control. The actual allocation was 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission. The policy paper proposes the government build containment systems for self-replicating AI.
The nonprofit board was empowered to check the CEO. When it tried, the board members who voted to fire Altman were removed. The new members chosen to oversee an independent investigation into his conduct were selected after close conversations with Altman himself. The investigation never produced a written report. The policy paper proposes a national oversight framework.
The original charter included a clause requiring OpenAI to stop competing and assist any organization that came close to building safe AI before it did. When the Microsoft deal closed, Microsoft had veto power over that clause. The policy paper proposes restructuring the American economy around the technology it sells.
Read the proposals alongside the lobbying record and you’ll find a company that kills real safety bills while publishing imaginary ones. Read them alongside the financial disclosures and you’ll find a company that needs the government more than the government needs it.
The pattern is consistent. OpenAI is asking the country to trust external safeguards that look a lot like the internal ones it already broke. A board with authority. A safety framework with real stakes. A structure meant to keep public interests above the company’s own.
Each one held until it mattered.
The paper asks the government to build a system of oversight around OpenAI that OpenAI wouldn’t preserve around itself.
The New Deal had enforcement mechanisms. This one has a PDF.