The Chatbot Checkpoint
How the White House Is Turning “AI Safety” Into a Permission Slip
The Record
Is AI too woke? The White House thinks so.
In July 2025, Trump signed an executive order called “Preventing Woke AI in the Federal Government.” It required that any AI model purchased by a federal agency be “truth-seeking” and “ideologically neutral,” and it directed vendors to hand over system prompts and model documentation to prove it.
The order arrived alongside two others. One streamlined federal permitting for data centers. The other promoted the export of American AI technology.
That same month, the White House released a twenty-five-page AI Action Plan, subtitled “Winning the Race.” The organizing idea was simple: the United States would dominate artificial intelligence by getting out of the way.
“AI is far too important to smother in bureaucracy at this early stage,” the plan said, “whether at the state or Federal level.”
Five months later, in December 2025, the administration pushed further. A new executive order directed the Attorney General to create a task force inside the Department of Justice with a specific job: sue states that passed AI regulations the White House considered too restrictive.
The same order told the Department of Commerce to withhold federal broadband funding from states with “onerous” AI laws. It directed the Federal Trade Commission to evaluate whether state laws requiring changes to the “truthful outputs” of AI models might violate federal consumer protection law.
The federal government was preparing to punish states that stepped in on AI regulation.
In March 2026, the administration went further. A set of legislative recommendations asked Congress to override state AI laws with a single federal standard, meaning states that had passed their own rules would lose the authority to enforce them.
Eight months. Three rounds of executive action. Each one reinforced the same public position: state AI regulation was unnecessary, industry needed room to move, and the federal government’s role was to remove obstacles while setting the terms itself.
And the government would be the biggest customer, as long as the product met its standards.
The Flip
On May 4, 2026, the New York Times reported that the White House was considering an executive order to create a government working group that would vet AI models before public release. Bloomberg confirmed that the White House had already briefed executives from Anthropic, Google, and OpenAI about the plans during meetings the prior week.
The next day, Bloomberg reported that the vetting was already underway. Five major AI labs had been giving the government early access to unreleased models, and the Commerce Department had already completed more than forty evaluations. The executive order under consideration would formalize a vetting regime that already existed.
Ten months from “stop regulating AI” to “let us approve it first.” The labs didn’t wait for the order.
This is the tool millions of people open every morning to draft emails, summarize research, write code, answer questions, help their kids with homework. It’s the first place a lot of people go when they don’t know what to think about something. Pre-release government vetting means political appointees would decide what it’s allowed to say about contested topics, how it handles sensitive questions, and what counts as “neutral” before a single user gets to touch it.
So why do they want this control now?
The Trigger
On April 7, 2026, Anthropic, one of the largest AI companies in the world, announced Claude Mythos Preview, a cybersecurity tool that could find hidden security flaws in software and chain them into working attacks faster and cheaper than any human team. The company didn’t release it publicly. It gave access to about fifty organizations and published a 244-page technical report explaining why the model was too powerful for broad distribution.
I wrote in April about how Anthropic packaged the rollout. Whether or not the capabilities live up to the report, the response to them was real.
Two months before Mythos, the Pentagon had designated Anthropic a supply chain risk. David Sacks, the administration’s tech advisor, called the company “the boy who cried wolf.” Senior White House aide Katie Miller called its safety warnings “a giant public relations scheme.”
But when Mythos reached selected partners, the tone changed overnight.
The NSA began requesting access despite the Pentagon’s own ban. Treasury wanted in. CISA, the nation’s cybersecurity agency, started coordinating with Anthropic directly. Dario Amodei ended up in the West Wing, meeting with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent.
The Trump administration’s policy framework had two settings: deregulate, or punish anyone who tries to regulate. A model that finds and chains security flaws in critical infrastructure at machine speed didn’t fit either one.
The Statute
Someone had already done the thinking.
Dean W. Ball was the primary author of the AI Action Plan. He’d served as the senior policy advisor for AI at the Office of Science and Technology Policy from April to August 2025. That October, after leaving, he published a draft bill on his Substack, a fully written piece of legislation called the Artificial Intelligence Transparency and Innovation Act. It started with a finding, Section 2(4), that laid out the whole theory in one sentence: “Prescriptive regulation of technology development is not appropriate or conducive to innovation when the technology is still nascent.”
The bill was simple. Big AI companies would have to publish their safety plans. If those plans turned out to be misleading, that would be an FTC violation. Existing law, existing agency, no new bureaucracy. No government official would decide whether a model ships.
I liked this bill. I’d spent the better part of a year making the same argument it codified.
If that bill had been law when Mythos arrived, Anthropic would have been required to publish its safety plan before the model reached even a limited audience. The government could have evaluated the company’s claims against its own disclosures. If the disclosures were misleading, the FTC could have opened an investigation. All of that without a single new agency, a single vetting body, or a single government official deciding whether a model was allowed to ship.
Ball had already written the answer for this moment. It provided a way to ask hard questions without handing the executive branch review authority over what technology reaches the public. It was sitting right there.
Then Mythos arrived.
On May 1, 2026, Ball told the Washington Post that “a fundamental shift is underway in AI policy.”
Three days later, he co-authored a New York Times op-ed with Ben Buchanan, who’d served as the White House special advisor for AI during the Biden administration. It ran the same day the vetting plan leaked: a bipartisan byline on the very morning the story broke.
The piece called for Congress to “mandate audits of A.I. developers’ safety claims and processes, requiring that they be conducted by independent expert bodies overseen by the government.”
Government-mandated audits of developer safety processes, conducted by expert bodies the government oversees. That moves far closer to prescriptive regulation of technology development than the October bill allowed. The statute he’d written seven months earlier says that’s inappropriate when the technology is still nascent. Nothing about the technology’s maturity had changed between October and May. What changed was the political atmosphere.
The op-ed disclosed, in a parenthetical near the middle, that Buchanan is an outside adviser to Anthropic. The same company whose model triggered the administration’s pivot. The same week executives from that company were being briefed on the vetting plan at the White House.
The day after the op-ed, Ball published a post on his Substack describing his fear about AI policy: that the national security apparatus would “go nuts” once it understood what frontier AI could do. “Seek to control the hell out of it,” he wrote. “Keep it away from the public and all to themselves and whomever they deem worthy.” He called this “the ultimate dystopia to avoid.” His draft statute was designed to prevent exactly that. The op-ed he’d co-signed the day before moves toward it.
Ball was also quoted in The Hill around the same time, saying Trump officials were “coming to the realization” that AI capabilities hadn’t plateaued. The Action Plan was built on an assumption that the pace of advancement would slow enough for light-touch governance to keep up. Mythos suggested otherwise.
The Wiring
The easy read here is simple hypocrisy. The administration said it wouldn’t regulate AI, and now it wants to regulate AI. Flip-flop. End of analysis.
But that lets them off the hook. What Trump actually built, executive order by executive order, is more coherent than a reversal.
On his first day in office, Trump rescinded Biden’s AI executive order, which had required AI developers to share safety test results with the government. He framed the rescission as liberating the industry from bureaucratic overreach. Sixteen months later, he’s considering something Biden never attempted: a government review of AI models before the public is allowed to use them.
Take the orders one at a time.
“Preventing Woke AI in the Federal Government.” The name made it sound like a culture war gesture, and the coverage treated it that way. Underneath the branding, the order established that the federal government gets to define what truthful AI output looks like. “Truth-seeking.” “Ideological neutrality.” Those are content standards. The order required vendors to hand over system prompts and model documentation proving their products met those standards. That’s a compliance regime. It applied only to government procurement, so it didn’t look like regulation. It was.
Most major AI companies want government contracts. The government is the largest single customer for enterprise technology in the United States. Content standards built for procurement don’t stay in procurement. They shape how the model behaves for everyone.
Trump’s December order made that explicit. It directed the FTC to determine whether state laws requiring “alterations to the truthful outputs of AI models” might violate federal consumer protection law. The July order defined AI truth for government contracts. The December order started defining it for everyone.
By March, every competing source of authority had been targeted. Trump’s DOJ task force could sue states. Commerce could pull their funding. The FTC could reinterpret their laws as consumer fraud. Congress was being asked to make the override permanent. The only entity left with the power to set standards for AI models was the executive branch.
Pre-release vetting completes this structure. The authority is built. The field is cleared. Vetting just moves the checkpoint upstream.
The principle was never “don’t regulate AI.” It was “only we should be allowed to regulate AI.”
The Room
Recall Bloomberg’s reporting: the White House briefed executives from Anthropic, Google, and OpenAI on the vetting plans the week before the story broke. The labs that would be subject to pre-release review were in the room while the policy was being shaped. All five major frontier labs had already agreed to give the government early access to unreleased models. Two of them, OpenAI and Anthropic, had been doing so since 2024 under Biden-era agreements they’d renegotiated to fit the new framework; the other three signed on voluntarily. Developers sometimes hand over versions with safety guardrails stripped so government evaluators can probe risks more directly.
Anthropic released the model that triggered the pivot. Its CEO ended up in the West Wing. And on May 4, the op-ed providing bipartisan cover for the vetting proposal was co-authored by an Anthropic adviser. Manufacturer, risk assessor, government advisor, and vendor, all at once. The pattern continued through the policy response.
The Center for AI Standards and Innovation, the body now conducting these evaluations, was originally created under Biden in 2023 as the AI Safety Institute. The Trump administration renamed it last year. Commerce Secretary Howard Lutnick called the rebrand a move away from regulation “used under the guise of national security.” The center is now conducting national security evaluations of pre-release AI models under its new name.
A pre-release vetting regime has costs, and they don’t fall evenly.
Anthropic, Google, and OpenAI have the legal teams, the compliance infrastructure, and the government relationships to navigate a review process. They’ve already volunteered for it.
A startup building an open-source model in a garage doesn’t have a seat at that table. A research lab at a university doesn’t either. A three-person company building a specialized tool for medical imaging or agricultural data can’t afford to wait in a government queue before shipping its product.
The companies that can absorb the cost of a vetting regime are the companies that signed up before it was required. The companies that can’t absorb it are the ones who’ll learn about the mandate from the Federal Register.
The Emergency
The administration spent ten months telling the country that AI regulation was unnecessary and dangerous. It built a legal apparatus to fight states that attempted it, directed agencies to withhold their broadband funding, and asked Congress to override their laws. It told the public that the government’s job was to get out of the way.
Then a model spooked them, and they reached for the most aggressive regulatory instrument available: government review of what technology the public is allowed to use.
What’s being built now is a single structure: centralized control over AI, held exclusively by the executive branch. It was assembled one executive order at a time, each with its own justification, each clearing another authority from the field. Pre-release vetting is the latest piece.
The technology is still nascent. The question is whether the emergency is driving the response, or justifying it.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.


