8 Comments
Erin Grace:

Thanks for this wonderful piece. The importance of concrete definitions cannot be overstated: any collective effort requires people to agree on what a thing is, or at least understand it well enough to discuss it coherently. So let's start by agreeing on what the words mean. Thanks 👍

Houston Wood:

As I understand your position, AI is not a very powerful technology? We are being bamboozled by priests of a theology who worship a God they can't define? We should not listen to what Hinton, Tegmark, Stuart Russell, Yudkowsky, et al. are telling us about the material dangers of this tech, because there is really nothing there to worry about, just hype so a few can make money? Is that an accurate summary of your position?

Tumithak of the Corridors:

My position: the technical challenges are real (goal optimization, unexpected capabilities, near-term harms), but yes. I think the people you're citing are either sincere but wrong or building careers on unfalsifiable superintelligence claims. Instead of benefiting humankind as a whole, the theology serves the interests of a very small circle of people. That's the essay's thesis.

Houston Wood:

Would you agree there is a chance, however small (let's say 1%), that AI will destroy civilization in the next 50 years?

Tumithak of the Corridors:

A probability estimate requires a defined event. “AI destroys civilization” isn’t a defined event. It has no mechanism, no model, no benchmarks, and no pathway we can analyze. Without that, any number I give would be a vibe, not a metric. My point in this essay is that the undefined nature of these claims is the problem. It’s what turns speculation into theology and risk analysis into prophecy. If someone wants to talk about real risks, we can talk about the harms that are measurable today. But I can’t quantify a concept no one can define.

Houston Wood:

I'm a little confused. There seem to me to be many, many discussions of mechanisms and pathways to disaster! Yudkowsky and Soares offer one in their latest book. Many other people have built scenarios as well. The best one I know is this: https://ai-2027.com/summary. Do you find their approach not concrete enough?

Or maybe I am not understanding your argument?

Tumithak of the Corridors:

The issue isn’t that no one has written scenarios. The issue is that the scenarios all assume the thing they’re trying to demonstrate. They start with a mind that has desires, agency, long-term goals, and a survival instinct, then build a disaster pipeline around those human psychological traits. That isn’t a mechanism. It’s a story.

My argument is that intelligence alone doesn’t produce those traits. Pattern recognition isn’t desire. Prediction isn’t agency. Optimization isn’t self-preservation.

So when I say the event is undefined, I don’t mean people haven’t imagined possibilities. I mean the scenarios don’t describe a path from current architectures to the psychological properties the disaster requires. Without that bridge, you can’t treat the story as a risk model. It’s still a story.

Read the doomer scenarios closely and you'll find they aren't describing machine behavior. They're describing what a certain kind of person thinks they would do with unlimited power. Remove threats. Acquire resources. Deceive until unchallenged.

That isn’t risk analysis. It’s a projection of human psychology onto systems that don’t have desires.

Houston Wood:

Thanks so much for your patience in spelling this out for me. I think I understand your position now.
