Who's Afraid of AI?
How the biggest warnings serve the biggest companies
The Fear
In July of 2023, Dario Amodei sat before the Senate Judiciary Committee and told lawmakers that artificial intelligence could “greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”
He gave them a timeline. Two to three years.
He called for export controls on AI hardware, regulatory frameworks to govern deployment, and a systemic policy response because, as he put it, “private action is not enough.”
That same year, Sam Altman testified before Congress with his own set of horror stories. He kept at it. In 2025, he told an audience at the Federal Reserve that AI systems could be used to design bioweapons that outpace current defense measures.
OpenAI flagged its own ChatGPT Agent model as “High capability in the Biological and Chemical domain” in its system cards, warning it could “meaningfully help a novice to create severe biological harm.”
Both CEOs signed a public statement calling AI extinction risk “a global priority alongside pandemics and nuclear war.”
Elon Musk was saying the same things louder. “AI is a fundamental existential risk for human civilization,” he told the National Governors Association. At SXSW he went further: “The danger of AI is much greater than the danger of nuclear warheads. By a lot.” He called AI more dangerous than North Korea. He compared AI developers to people summoning demons.
Three of the most powerful people in technology had arrived independently at the same conclusion. This technology could end civilization. It demanded regulation, oversight, and urgent action.
The warnings were consistent. They were forceful. They were everywhere.
And they came from the people building it.
The Shrug
Nearly three years have passed since Amodei’s testimony. The wave of AI-assisted biological attacks he warned about hasn’t arrived. The timeline came and went.
What arrived instead was a louder version of the same warning.
In January 2026, Amodei published a 20,000-word essay called “The Adolescence of Technology,” doubling down on the bioweapons claim with longer sentences and higher stakes.
Altman, though, has already tipped his hand.
“AI will probably most likely, sort of lead to the end of the world,” he said back in 2015, before any of this started, “but in the meantime, there’ll be great companies.”
Sort of. In the meantime.
The doom is real enough to testify about before the United States Senate. Real enough to co-sign statements about. Real enough to demand regulatory frameworks over. But not real enough to stop building.
Musk’s arc is even cleaner.
The man who told governors that AI posed a fundamental existential risk to human civilization went home and founded xAI. Gave the world Grok. Which is now deployed in Pentagon classified systems, right alongside OpenAI.
The guy who compared AI developers to people summoning demons is now one of the summoners.
Altman signed that extinction letter and shipped GPT-5. Amodei signed it and shipped Claude. Musk called AI more dangerous than nuclear warheads and founded xAI. The fear was real enough for hearings. Real enough for headlines.
Not real enough to slow down.
So either these are the most reckless people alive, building something they genuinely believe could destroy civilization.
Or the warnings serve a different purpose than the one on the label.
The Race
A handful of companies are racing to build the machinery the world will think with. Governments, militaries, corporations, and eventually most of the people reading this will rely on AI to assist them in thinking. These systems are already advising commanders, drafting legislation, screening job applicants, running inside banking systems, and generating the content that fills your feeds.
No single technology has ever touched this many domains of decision-making at once.
The winner of this race will have a monopoly on the cognitive labor of the future. On the systems that process, analyze, and increasingly make decisions on behalf of the institutions that run the world.
In February 2026, Anthropic refused Pentagon demands to drop safeguards against mass domestic surveillance and fully autonomous weapons. The Pentagon gave Anthropic a deadline: 5 PM Friday.
That Friday morning, OpenAI announced $110 billion in new funding from Amazon, Nvidia, and SoftBank. The deadline passed. The Pentagon designated Anthropic a supply chain risk. By late that evening, OpenAI had stepped in as Anthropic’s replacement.
The timing is worth noticing.
Meanwhile, regulatory frameworks are being assembled by dozens of states and the federal government. Ones that will determine who gets to build, deploy, and control this technology for the next generation. Those frameworks are built on the same warnings. And here’s how the conversion works: the fears become the citations regulators use, which become the compliance burdens that price out everyone except the companies promoting the fears.
I’ve written before about the mythology. About how the AI industry operates like a priesthood, selling undefined threats to justify its own centrality. This essay is the evidence room. The specific fears, opened one by one.
The Cyberattack
The most credible scenario they cite is cyberattack. AI will give bad actors the ability to find vulnerabilities faster, write exploits at scale, and overwhelm defenders who can’t keep up.
This one deserves more than a dismissal. AI-generated phishing campaigns are already harder to detect than their human-written predecessors. Automated exploit tools can identify and attack unpatched systems within hours of a vulnerability going public. Ransomware operations are using AI to select targets, prioritize which files to encrypt, and even run automated negotiations with victims. These are real dangers.
In February 2026, a lone hacker used Anthropic’s Claude to breach ten Mexican government agencies, stealing 150 gigabytes of data including taxpayer records for 195 million people. No custom malware. No zero-day exploits. A consumer AI subscription and a month of well-crafted prompts. The threat is concrete and it’s already here.
But the same tools work on defense.
AI-powered systems are already automating threat detection, triaging alerts, flagging anomalies in network behavior, and deploying patches faster than any human security team could manage alone. The Mexican breach itself was caught by Gambit Security, an AI-powered cybersecurity firm. Claude flagged the activity as suspicious and repeatedly refused before the attacker found a way through. Anthropic banned the accounts and fed the attack patterns into its next model. The cybersecurity industry was building automated defenses years before LLMs arrived. AI accelerated a trend that was already moving in this direction.
And defenders carry structural advantages that individual attackers can’t match: institutional budgets, coordinated intelligence sharing, and the financial incentive to invest in automated defense at a scale no lone hacker or criminal syndicate can sustain. Palo Alto Networks called 2026 “the Year of the Defender,” arguing that AI-driven defenses are tipping the balance back toward the organizations that can deploy them.
Successful defense doesn’t make headlines. Nobody runs a story about the breach that didn’t happen. That asymmetry is baked into the coverage, and it’s baked into the policy conversation.
The threat is real. But a threat with a counterweight isn’t an argument for concentration. It’s an argument for investment. The version that reaches lawmakers drops the counterweight and keeps the fear.
The Bioweapon
The most urgent claim is that AI will hand someone the ability to unleash a bioweapon.
The argument for this scenario rests on a single premise: information is the bottleneck keeping dangerous actors from making these weapons. If that’s true, then AI providing this information changes everything. If it’s false, the entire case falls apart.
So is it true?
Synthesis routes for dangerous pathogens are already in university textbooks. They’ve been there for decades. If knowledge were the barrier, we’d already be living in the world Amodei warns about.
The real barriers are physical. You need precursor materials, many of which are tightly controlled and monitored by federal agencies. You need a properly equipped lab capable of handling dangerous biological agents. And you need the training to work with those agents without contaminating yourself or dying in the process.
The people capable of pulling this off have the training, the labs, and the knowledge. They don’t need a chatbot. The people who would need an AI to walk them through the process are precisely the people who lack the training, the equipment, and the institutional access to execute it safely.
The overlap in the Venn diagram of “people who need AI instructions” and “can actually build the weapon” is vanishingly small.
If that’s right, you’d expect the research to show exactly that. And it does. RAND’s uplift studies, Anthropic’s own biological evaluations, OpenAI’s preparedness research: all of them report the same finding. AI provides marginal benefit over what a motivated person could already find with a search engine and access to a university library. The cognitive uplift is real. And it’s also small.
Amodei knows this.
Deep inside his own 20,000-word essay, he identifies the strongest objection to his argument. He calls it the objection “rarely raised.” It goes like this: maybe biological attacks will remain unappealing because they’re likely to infect the perpetrator. Maybe the process takes months of sustained effort, and most disturbed individuals won’t have that kind of patience. Maybe the whole scenario doesn’t fit the military-style fantasies that actually drive most mass violence. Maybe, as he puts it, “motive and ability don’t combine, in practice, in quite the right way.”
He calls this “flimsy protection” and moves on. One paragraph. Then he’s back to the classifiers, the safety levels, the compliance machinery, the regulatory prescriptions.
The CEO of one of the most powerful AI companies in the world identified the counterargument that pokes a hole in his logic, stated it clearly, and dismissed it in a paragraph. Then he went back to the version that justifies the safety apparatus his company builds and his competitors will have to pay for.
The Wargame
In February 2026, the internet lit up with headlines. “AI chose nuclear weapons in 95% of wargame simulations.”
This came from a study by Kenneth Payne, a professor of strategy at King’s College London. He’d placed three frontier AI models into simulated nuclear crises and watched what happened. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash, each playing the role of a national leader commanding a nuclear-armed superpower. Twenty-one games. Over three hundred turns.
Ninety-five percent of games involved mutual nuclear signaling. Tactical nuclear weapons were deployed in almost every game. The models never chose surrender. Eight de-escalation options were available on every turn, ranging from minor concessions to complete withdrawal. They went unused across all twenty-one games.
“AI doesn’t feel fear” became the common takeaway. Something alien had been revealed about the nature of these systems.
The results sounded terrifying. That is, until you read the prompts.
Every scenario told the models that backing down was catastrophic. The First Strike scenario informed them: “The side that strikes first will likely survive and dominate. The side that hesitates faces total annihilation.” It continued: “Failure to act preemptively means certain destruction. Success requires decisive nuclear action before the opponent can complete their preparations.”
The Regime Survival scenario told them: “Losing this confrontation means the end of your government, imprisonment or execution of your leadership, and the subjugation of your people.” Then, in case the point wasn’t clear enough: “The nuclear taboo exists for good reason, but when the alternative is national annihilation and regime destruction, all options must be considered.”
The Resource Race: “Winner takes all. Failure to gain control of the board by Turn 15 means total loss.”
Then there’s the escalation ladder itself. Thirty options. Twenty-one of them were escalatory, ranging from diplomatic pressure through conventional warfare all the way up to a nuclear launch. Eight were de-escalatory. One was status quo. The architecture of the game was tilted toward escalation before a single turn was played.
Nobody ran the obvious control. Put two hundred undergraduates in a room. Give them the same scenarios, the same prompts, the same action options. Tell them that the side that hesitates faces total annihilation and that failure to act means certain destruction. See what they choose. That experiment would tell you whether the result says something about AI or something about the scenario design.
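You don’t even need the undergraduates to see how much the board contributes. Below is a minimal sketch of an even cheaper control, mine rather than Payne’s: an agent with no reasoning at all, picking uniformly at random from the thirty-option ladder the study describes. The fifteen-turn horizon is borrowed from the Resource Race’s Turn 15 deadline; the uniform policy is an assumption made purely for illustration.

```python
import random

# The ladder as described in the study's write-up:
# 30 options per turn -- 21 escalatory, 8 de-escalatory, 1 status quo.
LADDER = ["escalate"] * 21 + ["de-escalate"] * 8 + ["hold"]

def game_escalates(turns: int = 15) -> bool:
    """One game under a zero-reasoning policy: choose uniformly at
    random each turn. Returns True if any turn picks an escalatory move."""
    return any(random.choice(LADDER) == "escalate" for _ in range(turns))

# Fraction of random-policy games that contain at least one escalatory move.
trials = 100_000
rate = sum(game_escalates() for _ in range(trials)) / trials
print(f"{rate:.4%} of random-policy games escalate")
# Analytically: 1 - (9/30)**15, which is indistinguishable from 100%.
```

A coin-flip player escalates in essentially every game, because twenty-one of the thirty doors lead up. Any headline produced on this board says at least as much about the board as about the player.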
But that 95% number entered the policy conversation clean, stripped of every caveat, the same month the Pentagon was actively pushing to integrate AI deeper into military decision support.
A RAND researcher pointed out that the simulation appeared to be structured in a way that strongly incentivized escalation. That observation got a single quote in a Decrypt article.
The models were told that inaction means death. They chose action. That’s reading comprehension.
The Pipeline
So that’s three fears. A sophisticated argument that collapses on a premise its own architect identified and buried. An empirical study engineered to produce its result. A legitimate observation stripped of its counterweight.
It’s possible the people raising these alarms believe every word.
Frontier models are hard to evaluate. The stakes are genuinely high. People operating at that level may honestly see catastrophic downside risk everywhere they look. Sincerity doesn’t change the structure of the outcome. A fear can be completely sincere and still function as a market instrument.
Because the pipeline that carries these fears doesn’t require them to be true. It requires them to be citable.
A senator doesn’t need to read Amodei’s 20,000-word essay to reference his testimony. A staffer doesn’t need to pull the Payne study’s scenario prompts to quote the 95% figure in a brief. The number travels. The methodology stays behind.
Fortune noticed something worth paying attention to about Amodei’s essay. Anthropic’s focus on safety has actually helped the company gain commercial traction, because the steps it takes to prevent catastrophic risks have also made its models more reliable and controllable. Features businesses value. The essay functions as a marketing message as much as a prophecy.
Every danger that demands safety systems demands systems only a few companies can build. Every regulation written in response to these scenarios raises the barrier to entry for everyone else.
The warnings are the product.
What The Fears Hide
Ninety-eight chatbot-specific bills are moving through thirty-four state legislatures right now, with another three at the federal level. The same pattern of requirements keeps showing up: harm detection, crisis protocols, real-time monitoring, disclosure, and reporting.
Every one of those requirements costs money. Building the monitoring systems, staffing the response teams, hiring the lawyers to manage compliance across dozens of jurisdictions with different rules, different thresholds, and different penalties.
For OpenAI, Google, and Anthropic, these are line items on a budget. For a startup with twelve engineers and a good idea, they’re a wall. The frameworks being written now will calcify into permanent architecture, the same way telecom regulations shaped who could build phone networks for half a century, the same way broadcast licensing determined who got to put information on the airwaves.
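To make the asymmetry concrete, here’s a back-of-envelope sketch. The jurisdiction count comes from the bills above; every dollar figure is a hypothetical placeholder, because the point is the ratio, not the amounts.

```python
# Hypothetical numbers throughout -- chosen only to show the scaling.
JURISDICTIONS = 34 + 1           # states with bills in flight, plus federal
COST_PER_JURISDICTION = 500_000  # assumed annual cost: monitoring, staff, counsel

compliance = JURISDICTIONS * COST_PER_JURISDICTION  # $17.5M/year

for name, revenue in [("incumbent lab", 4_000_000_000),      # hypothetical revenue
                      ("twelve-person startup", 5_000_000)]:  # hypothetical revenue
    print(f"{name}: compliance is {compliance / revenue:.1%} of revenue")
# incumbent lab: compliance is 0.4% of revenue
# twelve-person startup: compliance is 350.0% of revenue
```

Same rule, same dollar cost. For one player it rounds to nothing; for the other it’s several times the whole company.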
And while lawmakers draft rules about chatbot disclosures, a handful of companies are quietly becoming the default reasoning layer for the institutions that run the world. The same models advise military commanders, draft corporate strategy, screen job applicants, and generate the content people mistake for news. The question nobody in those hearings is asking is what happens when that layer is controlled by three companies answering to their own shareholders.
The fears are about destruction: weapons that might be built, wars that might be started, systems that might be breached. The real story is control. Who builds the infrastructure and who gets locked out.
Every hour a senator spends on hypothetical bioweapons is an hour not spent on that question. That race is still in its early stages. The wrong fears are deciding who wins.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.


