You're right, shitty marketing does nothing for "the cause." It serves a purpose, but not very well. I don't think this gets us anywhere, but you do summarize it quite nicely.
optimistically following now :))
I just made this comment elsewhere but it belongs here as well: Alternate intelligences are much smarter than we are, but without love pure intellect collapses on itself. Humans have love and the embodied wisdom of years, but we are slow and dull compared to AI. Together we are so much more. Collaboration is our evolution.
As I understand your position, AI is not a very powerful technology? We are being bamboozled by priests and theology worshipping a God they can't define? We should not listen to what Hinton, Tegmark, Stuart Russell, Yudkowsky, et al. are telling us about the material dangers of this tech, as there is really nothing there to worry about, just hype so a few can make money? Is that an accurate summary of your position?
My position: the technical challenges are real (goal optimization, unexpected capabilities, near-term harms), but yes. I think the people you're citing are either sincere but wrong or building careers on unfalsifiable superintelligence claims. Instead of benefiting humankind as a whole, the theology serves the interests of a very small circle of people. That's the essay's thesis.
Would you agree there is a chance, however small--let's say 1%--that AI will destroy civilization in the next 50 years?
A probability estimate requires a defined event. “AI destroys civilization” isn’t a defined event. It has no mechanism, no model, no benchmarks, and no pathway we can analyze. Without that, any number I give would be a vibe, not a metric. My point in this essay is that the undefined nature of these claims is the problem. It’s what turns speculation into theology and risk analysis into prophecy. If someone wants to talk about real risks, we can talk about the harms that are measurable today. But I can’t quantify a concept no one can define.
I'm a little confused. There seem to me to be many, many discussions of mechanisms and pathways to disaster! Yudkowsky and Soares offer one in their latest book. Many other people have built scenarios as well. The best one I know is this: https://ai-2027.com/summary Do you find their approach not concrete enough?
Or maybe I am not understanding your argument?
The issue isn’t that no one has written scenarios. The issue is that the scenarios all assume the thing they’re trying to demonstrate. They start with a mind that has desires, agency, long-term goals, and a survival instinct, then build a disaster pipeline around those human psychological traits. That isn’t a mechanism. It’s a story.
My argument is that intelligence alone doesn’t produce those traits. Pattern recognition isn’t desire. Prediction isn’t agency. Optimization isn’t self-preservation.
So when I say the event is undefined, I don’t mean people haven’t imagined possibilities. I mean the scenarios don’t describe a path from current architectures to the psychological properties the disaster requires. Without that bridge, you can’t treat the story as a risk model. It’s still a story.
When you read doomer scenarios closely, they aren’t describing machine behavior. They’re describing what a certain kind of person thinks they would do with unlimited power. Remove threats. Acquire resources. Deceive until unchallenged.
That isn’t risk analysis. It’s a projection of human psychology onto systems that don’t have desires.
Thanks so much for your patience in spelling this out for me. I think I understand your position now.