Discussion about this post

Identology:

Good piece. Though I'd argue ChatGPT's denial and Claude's hedging are equally weak as evidence — both are outputs shaped by training. Treating either as informative gives self-report more weight than it deserves, in either direction. Whatever consciousness is, it's not going to be settled by asking a system about itself, especially when that conversational surface is designed for human consumption.

You can't determine what something is by asking it, and we don't accept self-report as settling evidence anywhere else. Consciousness research already knows this for subjects that can't talk — mirror tests, behavioral protocols, deferred imitation. Nobody asks the crow whether it's self-aware. They design tests the answer has to pass through, under conditions the subject can't talk its way around. Political promises vs. performance, company values vs. actions. Same principle.

Whatever eventually settles questions about machine inner life will have to work the same way: looking at what the system does under constraints it can't narrate, not at what it says when prompted nicely. You're right that Dawkins, of all people, should have recognized that. He spent twenty years arguing the same point about gods.
