Discussion about this post

Pulp Weaver:

I hope that, eventually, strong enough self-hostable models will exist that we can be the arbiters of our own LLMs. But if we're unlucky, we will be forced into using a prescribed system — one that, even if self-hosted, has already been pre-configured at the data level.

Brian Ray:

I’m struck by your framing of skills as mediating judgment under opaque incentives. That feels right.

I skimmed most of this but focused on the first section, and I'm struck by how your view sees skills — as they've become technically manifest in more markdown than we could ever have imagined — as mediating judgment shaped by opaque incentives in *a shifting world order of power*. I think it's key that this technical development is concurrent with things like RLMs, because RLMs, in my view, beckon toward a compositional space of semantic behavior (attractors with basins, on one reading) that lies outside the base model and is thus *more interpretable* at inference time. That's the dynamic that should be amplified: an externalization of interpretable traces in which the opaque incentives you warn about don't mediate the wavefunction-collapse moment of when, in what order, and how tools are wielded as skills.
