Why Americans Don't Trust AI
A story about broken promises, downward mobility, and the institutions selling the next transformation
What Did They Expect
Stanford’s 2026 AI Index report was released recently. With it came a wave of articles about the disconnect between the people building AI and the people whose lives it will affect.
The report’s numbers are brutal.
Among AI experts, 73% feel positive about AI’s impact on work. 69% feel good about its effect on the economy. 84% think AI will improve healthcare.
Only 10% of Americans say they’re more excited than concerned about AI in daily life. 23% think it’ll have a positive impact on jobs. 21% think it’ll be good for the economy.
These are two different universes.
For the past five years, the leaders of every major AI company have been telling anyone who’d listen that artificial intelligence is going to be the most disruptive technology in a generation.
Sam Altman warned Congress about large-scale job displacement. Anthropic built its public identity around safety and extreme capability. The fundraising decks, the congressional hearings, the carefully timed blog posts all carried the same message. This technology will change everything and the stakes are sky-high.
So people believed them.
AI insiders on social media were shocked at the depth of public hostility.
The recent attack on Sam Altman’s home made it impossible to look away. Comment sections filled with something closer to approval than sympathy. The diagnosis came fast. It’s a communication problem. Experts have been focused on AGI and existential risk while ordinary people worry about paychecks and utility bills. Better messaging. Reframe the conversation around practical benefits and meet people where they are.
That’s a comforting diagnosis because it’s solvable with tools the industry already owns. Communications departments, partnership announcements, safety summits, glossy reports about AI in healthcare.
The simpler explanation was sitting right there in the replies. As one commenter put it: when the leaders of OpenAI and Anthropic keep saying “if we do nothing, this is going to suck for a lot of people,” what exactly did they expect the public sentiment to be?
The Map
The Stanford report frames the gap in opinion as experts versus the public. That framing has a built-in answer: if you educate people, the gap will close.
Sort the same data by country and that framing collapses.
ADP, one of the largest payroll companies in America, surveyed 38,000 workers across six continents about AI’s effect on their own jobs. Egypt came in first for optimism. India close behind. The Middle East led the regional rankings. Europe came in last. North America sat near the bottom.
Stanford and Ipsos ran their own surveys asking different questions of different populations. The same lopsided geographic shape showed up every time.
The experts-versus-public framing can’t explain any of this. Indian workers aren’t reading different research papers. European scientists aren’t seeing different OpenAI announcements. An information gap doesn’t produce a geographic divide this large.
Something else is causing this.
Rishi Sunak named part of it at the India AI Impact Summit in February. In India, he said, there’s enormous optimism and trust toward AI. In Western countries, anxiety is still the dominant feeling. His prescription was to deploy AI in public services, show people tangible improvements, and let trust follow.
Trust follows material improvement. That part is correct. He left the second half unsaid: distrust follows material decline, and material improvement stopped arriving in the West a long time ago.
In countries where technology has recently delivered broad, visible gains, people expect more of the same. India opened 550 million bank accounts in a decade, most of them in rural areas, and built a payments system that lets anyone send money to anyone else’s phone instantly and for free. Services that used to be out of reach for hundreds of millions of people now run on rails they can actually use. So AI looks like more of the same.
Americans have a different dataset.
What They Remember
In postwar America, a factory worker could buy a house on one income. Support a family. Put money away. Take the kids to the beach in August.
Real median wages climbed for nearly three decades. Homeownership expanded. Washing machines killed hours of labor, air conditioning opened up entire regions of the country, and the interstate highway system connected them.
New technology kept showing up and life kept getting easier.
You can see it in the science fiction of the time. The Jetsons premiered in 1962, a family living in a world where technology had made everything more comfortable. That was a plausible future. People had every reason to trust the pattern would continue.
Pete Nicolaou graduated in 1973 from Campbell High School, in a small town just east of Youngstown, Ohio. Three generations of Campbell families had worked the steel mills that stretched along the Mahoning River valley. “Go to the steel mills,” Nicolaou said. “That was my vision, go follow my father.” He started at Youngstown Sheet & Tube that summer. The pay was good. The union was strong.
Another worker who moved to town with $23 in his pocket told his landlord he couldn’t pay until his first check came. The landlord asked where he worked. He said Sheet & Tube. That was good enough.
Four years later, the mill closed. The owners, a New Orleans shipping company, had been draining the operation for years, starving the equipment and using the cash flow to pay down their own debt. Five thousand workers found out their jobs were gone by the end of the week. Within five years, 50,000 jobs disappeared from the valley. Every night, the sky used to glow orange from the blast furnaces. Then the furnaces went cold.
The collapse wasn’t local. Flint lost GM. Detroit lost half its population and went bankrupt. Cleveland, Buffalo, and Pittsburgh all lost roughly 45% of their residents between 1970 and 2006. Manufacturing employment in America peaked at 19.6 million in 1979. Today it’s 12.7 million. Between 1973 and 2013, productivity grew 74%. Hourly compensation grew 9%. The difference went to owners and shareholders.
Donna Slaven, the wife of a laid-off steelworker, told reporters in 1977: “If this can happen to us, there is not a secure union job in the country.”
She was right.
Same Promise, Different Machine
Every major technology wave since then has arrived with the same pitch.
The internet was going to democratize information and level the playing field. It produced a handful of trillion-dollar monopolies. Social media was going to connect people and give everyone a voice. It became an advertising engine that monetized attention and destabilized public discourse. The gig economy was going to liberate workers. It eliminated benefits, shifted risk onto individuals, and called it flexibility.
A new promise each time. The people who owned and deployed the technology captured most of the gains. The costs landed somewhere else.
More than 90,000 tech workers have been laid off so far this year. The companies doing the firing are spending seven hundred billion dollars on AI infrastructure at the same time. Jack Dorsey cut Block from ten thousand employees to six thousand. His explanation: a smaller team, using the tools they’re building, can do more.
That’s the view from the top. On the floor, it looks different.
A friend of mine works in automation. He works with auto manufacturers, building robotic systems and figuring out how to automate positions on the floor. He tells a story about a worker at one of these plants whose job used to be painting cars. She’d walk up to the carrier, scan a barcode with her tablet, press OK, and watch the machine paint while she waited for the next carrier to roll by. If she scanned a carrier out of order, the machine corrected her. The machine could handle the whole job on its own. She was still there because the union contract grandfathered her in. As her retirement approached, nobody asked her to train a replacement. When she left, the position left with her.
There’s no single day of reckoning in that story, no announcement, no protest on the plaza. The job just quietly stops existing the day someone files their retirement paperwork, one position at a time, too slow to make the news and too steady to stop.
That’s the version AI is bringing to white-collar work. The junior developer still sits at their desk, but Copilot writes the first draft of the code. The copywriter is still employed, but ChatGPT handles the first pass. They’re the auto worker with the tablet, still on the payroll, still pressing buttons. Everyone in the room knows what comes next.
Americans heard “AI will transform the economy” and understood it instantly, the way you recognize a tune you’ve heard before.
European anxiety runs cooler. A French worker has universal healthcare that isn’t tied to their employer. A German worker has strong labor protections and an unemployment insurance system that actually functions as insurance. If AI eliminates your job in those countries, the disruption is real and containable. The floor holds. In America, there is no floor.
You could write all of this off as consumer cynicism toward the companies. Fair enough, as far as it goes. But the distrust runs past the companies and into the institutions that are supposed to regulate them.
Only 31% of Americans trust their government to regulate AI responsibly. That’s the lowest figure of any country in the Stanford survey. Singapore came in at 81%.
When an AI company says “we support thoughtful regulation,” the American public hears two institutions it doesn’t believe in vouching for each other.
The history is sitting right there. The financial industry lobbied so effectively that the people who crashed the economy in 2008 wrote parts of the recovery plan. In 2003, the pharmaceutical industry wrote the provision that banned Medicare from negotiating drug prices, a rule that kept prices high for twenty years. AI companies show up in Washington now saying they want a seat at the table to help craft responsible policy. To the public, it’s the same playbook with a new cast.
They’re Using It Anyway
Gen Z uses AI more frequently than any other demographic. They’re also the angriest about where it’s headed. Stanford, Gallup, and the industry’s own adoption metrics all point the same direction. High use and deep skepticism, side by side.
The industry reads adoption as validation. People are using it, so they must see the value.
That reading misses what use actually looks like. A college student uses ChatGPT to get through assignments because professors now expect AI-assisted work. A recent graduate uses it to tailor cover letters because hiring managers screen with AI and the only way through is to match the pattern. A junior developer uses Copilot because their manager expects the velocity it produces.
Adoption without trust is compliance. People used gig economy apps too. They understood they were getting a raw deal. They kept using them because there was no better option.
Gen Z is using AI to navigate a labor market AI is already restructuring above them. The anger is what happens when you recognize you’re using the thing that’s narrowing your options.
The benefits are real in places. AI has made creative tools cheaper and research faster, and coding assistants let people who’ve never programmed build working software. Those gains are individual. The anxiety is structural. You can write a better cover letter with ChatGPT. You’re still sending it into a labor market where the company that would’ve hired a developer just realized it doesn’t need to. The tool that helps you compete is the same tool that’s shrinking the number of positions you’re competing for.
The only thing that would change American sentiment is AI making people’s lives measurably better at scale. Raising wages. Lowering the cost of healthcare. Creating jobs that come with stability and dignity. Making the average person’s Tuesday easier in ways they can feel.
Most Americans have no evidence that’s the plan. The companies building AI are cutting workforces, concentrating wealth, and lobbying to shape the rules that will govern them. The trajectory is legible. It looks the way every other transformation has looked for forty years, except faster.
Stanford frames the optimism gap as a problem to be solved. The industry frames it as a PR challenge to be managed. The possibility neither room seems willing to consider is that the public is reading the situation correctly. The anxiety is proportional. The pattern recognition is working.
You can’t communicate your way out of a material problem.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.