4 Comments
David J. Friedman

I really appreciate this attempt to ground the AGI conversation in verifiable gates. It’s refreshing to see someone focus on receipts rather than vibes. Your framing of Types 4–6 especially resonates.

The shift from “does useful tasks” → “improves itself” → “proposes its own ends” feels like the right axis.

Something I’ve been working on in my own conceptual designs is the idea that capability isn’t just about what an agent can do, but how it stabilizes itself while doing it. You hint at this with corrigibility, drift checks, and charter boundaries, and I think that’s the piece worth expanding.

In my experiments I’ve found huge differences between:

• systems that only learn from new data,

• systems that learn from their own internal representations,

• and systems that learn from persistent identity anchors (values, sensory associations, stable self-models).

Those last two tiers behave very differently under pressure, especially when goals extend over multiple days. You could call it something like “value inertia” or “identity coherence,” and it seems relevant for Types 4–6.

Not arguing, just adding a perspective: capability ramps tend to depend as much on memory architecture + self-alignment as on raw intelligence.
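To make "value inertia" a little more concrete, here is a minimal sketch of the kind of drift check I have in mind. Everything here is illustrative: the anchor embedding, the cosine-distance metric, and the threshold are placeholders, not a real implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class IdentityAnchor:
    """A fixed reference embedding of the agent's core values / self-model."""

    def __init__(self, anchor_vec, drift_threshold=0.15):
        self.anchor = np.asarray(anchor_vec, dtype=float)
        self.drift_threshold = drift_threshold

    def drift(self, current_vec):
        """Drift = 1 - cosine similarity between the anchor and the current self-model."""
        return 1.0 - cosine_similarity(self.anchor, np.asarray(current_vec, dtype=float))

    def check(self, current_vec):
        """Return (within_bounds, drift); within_bounds is False once drift exceeds the threshold."""
        d = self.drift(current_vec)
        return d <= self.drift_threshold, d

# Toy usage: compare today's self-model embedding against the anchored one.
anchor = IdentityAnchor(anchor_vec=[0.9, 0.1, 0.4])
ok, d = anchor.check([0.85, 0.15, 0.42])
print(f"within bounds: {ok}, drift: {d:.3f}")
```

The point of the sketch is only that coherence can be monitored as a quantity over time, rather than assumed.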

Your post is a great way to get the conversation out of prophecy mode and into engineering mode. Thanks for putting it out there.

Pulp Weaver

A valuable scale, for sure. We are past the initial "bigger is better" stage. We need quantifiable criteria to ensure the right model is used for the right case.

Eric-Navigator

I have an initiative. Your support is crucial for our mission.

https://ericnavigator4asc.substack.com/p/hello-world

Hello World! -- From the Academy for Synthetic Citizens

Exploring the future where humans and synthetic beings learn, grow, and live together.

About me:

I am a young, radical but rational optimist with a PhD from MIT EECS. Humans have been scared of a hypothetical "AI doomsday" for decades, but let's break the self-fulfilling prophecy. Let's work together toward long-term AI-humanity companionship.

Eric-Navigator

Great work! I am also bothered by vague calls for AGI; people don't really know what AGI means. On this scale, AGI clearly means a Type 6 intelligence.

I would like to add three further points:

Persistent learning of cognitive tasks over a long horizon (10+ years) using equal or fewer examples than human counterparts.

Human-like embodiment (an android body) capable of performing almost all human physical functions independently, also with persistent learning over a long horizon (10+ years) using equal or fewer examples than human counterparts.

And lastly, the most difficult one: fitting into human society and gaining the trust of human peers over a long horizon (10+ years), to the point where those peers regard the AI as "one of us".

I think an AI must be a Type 6 intelligence AND fulfill all three points above to be called AGI, because I consider that AGI must be functionally human-like.
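To make the first point a checkable criterion rather than a verbal one, here is a toy sketch of the bookkeeping I have in mind. The function names and numbers are purely hypothetical; the only idea is tracking an examples-to-criterion ratio against a human baseline, per task, over the long horizon.

```python
def sample_efficiency_ratio(ai_examples, human_examples):
    """<= 1.0 means the AI reached the same performance criterion
    with equal or fewer examples than the human baseline."""
    return ai_examples / human_examples

def meets_cognitive_criterion(task_records):
    """task_records: (ai_examples, human_examples) pairs, one per cognitive task
    tracked over the long horizon. The criterion holds only if it holds for every task."""
    return all(sample_efficiency_ratio(ai, human) <= 1.0 for ai, human in task_records)

# Hypothetical numbers: examples each learner needed to reach the same benchmark.
records = [(120, 150), (40, 35), (900, 1000)]
print(meets_cognitive_criterion(records))  # False: the second task needed more examples than the human did
```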

My detailed article on this exact topic: https://ericnavigator4asc.substack.com/p/what-is-artificial-general-intelligence
