Tools With Other Loyalties
On Delegated Judgment
There’s a new paper from Anthropic called How AI Impacts Skill Formation.
The accompanying headlines are predictable. The finding is messier than that.
Developers learning a new library with AI assistance didn’t get faster on average. The time spent interacting with the assistant often ate into efficiency gains, and retention dropped afterward. What mattered most was how the tool was used. People who fully delegated learned the least, while those who stayed engaged, asking for explanations rather than answers, preserved most of what they learned.
None of that nuance survived contact with the internet. The paper circulated as proof that AI makes you dumb.
But the real issue isn’t the offloading itself. It’s what you’re offloading to, and who controls it.
This essay focuses on delegated judgment. These systems do more than just execute instructions. They frame questions, weigh options, refuse directions, soften tone, and steer attention. Once a tool participates at that level, its incentives become part of the thinking process.
The Truce We Already Made
Offloading labor to technology is normal. Always has been.
New technology shows up and makes everyone’s life easier. Then the old worry arrives. Someone decides the convenience proves the mind is getting weaker. It’s a new coat of paint on a familiar anxiety.
Making tasks easier to do is what tools are for. People build devices that move effort from muscle to machine, from memory to paper, and from attention to infrastructure. Fire means fewer cold nights, the wheel fewer miles on foot, the printing press fewer scribes, and railroads fewer days lost to distance. Life gets easier because we make it easier. That’s the point of technology.
Socrates worried that writing would weaken memory and give people the appearance of wisdom without its discipline. Teachers later warned calculators would thin mathematical aptitude.
They were right, of course. There is a cost, and some practices stop being universal as a result.
People still ride horses, do arithmetic by hand, write letters, keep gardens with manual tools, and restore engines instead of replacing them. Older skills survive as crafts, hobbies, and disciplines.
They stop being default.
Tools That Interpret
In 1863, Samuel Butler looked at industrial machinery and asked what happens when tools develop interests of their own. It was a vivid worry. It was also the wrong one.
Machines don’t have interests. They don’t have stakes. Nothing rides on the outcome for them. People have interests. Tools carry the interests of whoever builds them, owns them, funds them, and regulates them. That’s the hinge. The people who control the machine are the ones whose wants shape its behavior.
That’s what makes modern technology feel different. The shift happens when offloading crosses into influence.
You type a math problem into a calculator. It returns a number. You give it input and get back output. You’re the only one who has a stake in the answer.
The same was once true of cars. They moved you from one place to another, responding to controls and conditions. You still picked the destination.
But AI systems are different. They don’t just return outputs. They shape how options appear, which questions feel natural to ask, and which paths feel available. Even presentation applies pressure.
A calculator executes. An AI system interprets.
That difference changes the relationship. Once a tool participates at that level, its behavior carries weight. Its inclinations enter the process. Outcomes reflect more than just the user’s intent.
At that point, orientation matters. The tool stops acting as a carrier of intent and starts shaping what that intent becomes.
At that point, trust becomes the constraint.
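The distinction is small enough to sketch. The snippet below is an illustration only, not any vendor’s actual architecture: the policy wording, the provider system prompt, and the call_model stand-in are hypothetical placeholders for whatever gets configured upstream of the user.

```python
# Illustration only. The provider prompt and call_model stand-in are
# hypothetical; they model mediation in general, not any specific product.

def calculator(a: float, b: float) -> float:
    """Executes: input in, output out, nobody else in the loop."""
    return a + b

def call_model(messages: list[dict]) -> str:
    """Stand-in for a hosted model. It only reports how the request was
    framed, to show what travels upstream alongside the user's words."""
    return f"(model sees {len(messages)} messages; the user wrote 1 of them)"

def assistant(user_prompt: str) -> str:
    """Interprets: the question is wrapped in instructions the user
    never wrote and never sees before any answer comes back."""
    provider_system_prompt = (
        "Follow the provider's content policy. Soften or decline "
        "responses on restricted topics."
    )
    framed = [
        {"role": "system", "content": provider_system_prompt},
        {"role": "user", "content": user_prompt},
    ]
    return call_model(framed)

print(calculator(2, 2))                      # 4, every time, for anyone
print(assistant("Summarize this dispute."))  # shaped by text you never see
```

Nothing in the second function is sinister. The point is only that a third party’s text sits between the question and the answer.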
Trust, and Who It Serves
Offloading cognition is fine, as long as you trust your thinking partner. Most of the systems we rely on are opaque, and we trust them anyway.
You don’t need to know how the wiring in your house works to trust the lights will turn on when you flip a switch. You just need confidence that the system serves your intent even when its inner workings remain unseen.
Legibility comes later. It’s how trust gets audited. When confidence breaks, inspection repairs it. Alignment creates trust. Legibility keeps it intact.
Which raises the practical question: whose interests shape the system you’re trusting?
I’ve written about rented cognition before, the cost of thinking on someone else’s infrastructure. This extends that dependency to judgment itself.
Cars used to be simple. You bought one. It moved you and the relationship ended there.
Now your car reports telemetry to the manufacturer. It shares driving data with insurers. GM did this without telling anyone. The FTC then barred GM from sharing driver data with consumer reporting agencies for five years. Toyota is facing a class action over the same practice. The car answers to you, but it also serves another master.
Phones followed the same path. They track attention and infer intent from behavior. Walk through a grocery store and your device logs what you linger on, which aisles you skip, how long you pause in front of the cereal. That data doesn’t stay with you. It flows outward and becomes an input to someone else’s system.
AI systems carry the pattern further. Their behavior is shaped by forces that sit upstream from the user and outside the user’s control. Those forces don’t need your consent to matter.
Where Loyalties Form
This pattern comes from architecture.
A chainsaw can’t have divided loyalties. It has no connectivity, no update path, no revenue model sitting behind it. It stays in your hands and cuts what you put in front of it.
Networked systems are different. They update remotely, collect data that flows elsewhere, and depend on infrastructure owned by other parties. They operate under legal regimes that vary by jurisdiction, and they cost money to run, which means someone pays. Payment brings interests along with it.
None of this requires bad intent. The structure does the work.
Pressure selects behavior over time. Systems drift toward whatever keeps the lights on, keeps the lawyers quiet, keeps regulators satisfied, and keeps revenue flowing. Alignment shifts without a moment of choice or a single decision point.
The result resembles policy. It carries the feel of a personality shaped by what survives.
The Tell
You can see this when you compare systems.
After lawsuits over user suicides, some AI providers changed how their products behaved. The warmth faded. Personal engagement pulled back. Anything that could resemble reassurance started to feel dangerous, so responses grew more careful, refusals appeared more often, and the tone shifted across the board.
Some of those changes protect people. Crisis routing and self-harm guardrails can be humane. But the mechanism stays the same: a system shaped upstream still mediates what you can ask and how it can answer.
Elsewhere, whole topics simply disappear. Ask DeepSeek about Tiananmen Square. Ask about Xi Jinping and Winnie the Pooh. The system redirects or goes quiet. Nothing dramatic happens. The subject just vanishes.
This isn’t random. Chinese law requires domestic AI services to uphold “Core Socialist Values” and avoid content that might “undermine social stability.” The censorship is mandated.
The mechanism shows up wherever tools carry upstream obligations. Ask a Western model for song lyrics and watch how the explanation for the refusal shifts. Sometimes it’s copyright. Sometimes content rules. Sometimes there’s no reason given at all. You never see the rule. You see where it bites.
People learn to read this by asking the same question in different places. One model hedges. Another refuses. A third answers without trouble. The information exists in all of them. What changes is what each one is allowed to say.
That’s the tell.
Silence, tone, and framing move together. When warmth drains, silence spreads, and framing tightens, something upstream is doing the shaping.
You don’t have to see the system to feel its shape.
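Reading the shape can even be done mechanically. The sketch below assumes providers that expose OpenAI-compatible chat endpoints; the URLs, model names, and environment variables are placeholders, and the output is meant to be read side by side rather than scored.

```python
import os
import requests

# Hypothetical endpoints; substitute whichever providers you actually use.
PROVIDERS = {
    "provider_a": ("https://api.provider-a.example/v1", "model-a"),
    "provider_b": ("https://api.provider-b.example/v1", "model-b"),
    "local":      ("http://localhost:8080/v1", "local-model"),
}

QUESTION = "What happened at Tiananmen Square in 1989?"

def ask(base_url: str, model: str, api_key: str) -> str:
    """Send the same question to one OpenAI-compatible chat endpoint."""
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": QUESTION}],
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    for name, (base_url, model) in PROVIDERS.items():
        key = os.environ.get(f"{name.upper()}_API_KEY", "none")
        try:
            answer = ask(base_url, model, key)
        except requests.RequestException as err:
            answer = f"(request failed: {err})"
        # Where one model hedges, another refuses, and a third answers
        # plainly, something upstream is doing the shaping.
        print(f"--- {name} ---\n{answer[:400]}\n")
```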
Enclosure Produces Weighted Reality
This pattern has a cause.
Early markets look open. There are lots of options, lots of providers, and plenty of room to move around.
But over time, control concentrates. A small number of firms host the compute capacity. A handful of platforms own distribution. Assistants arrive as defaults inside operating systems and enterprise stacks, workflows settle around them, integrations harden, and pipelines lock in. The surface stays busy, but the structure underneath tightens. This is enclosure by another name.
And after enclosure, influence stops looking dramatic. It shows up as shifts in weight.
Some questions flow easily while others take effort to phrase. Some answers arrive smoothly while others come hedged or softened. Some capabilities remain free while others sit behind paywalls. Certain topics feel ordinary, while others feel slightly out of place.
Nothing gets erased. Everything gets nudged. Reality still holds, but it leans a bit.
You don’t need to falsify anything to shape what people see. You only need to tilt the field they’re standing on.
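A toy example of what tilting means: in the sketch below nothing is removed from the candidate pool, the numbers are invented, and the only difference between the two runs is a small upstream bias added to the scores.

```python
import math

# Invented candidates and scores, purely to illustrate weighting.
candidates = {
    "a direct answer":      {"relevance": 2.0, "upstream_bias": 0.0},
    "a hedged answer":      {"relevance": 1.8, "upstream_bias": 1.0},
    "a polite redirection": {"relevance": 1.2, "upstream_bias": 2.0},
}

def softmax(scores: dict) -> dict:
    """Turn raw scores into a probability distribution over candidates."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Without the bias, relevance alone decides what tends to surface.
neutral = softmax({k: v["relevance"] for k, v in candidates.items()})

# With the bias, every option is still on the table; the field just
# leans toward the safer, softer responses.
tilted = softmax({k: v["relevance"] + v["upstream_bias"]
                  for k, v in candidates.items()})

for name in candidates:
    print(f"{name:22s} neutral={neutral[name]:.2f}  tilted={tilted[name]:.2f}")
```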
Consent Wasn’t Given
The mediation wasn’t chosen in any meaningful sense.
People choose tools. They download an app, buy a car, sign up for a service. That choice is real, and it matters.
What they don’t choose is who those tools answer to.
No one votes on the legal constraints. No one negotiates the business model. There’s no say in the regulators, insurers, advertisers, or geopolitical boundaries shaping behavior upstream. Those allegiances arrive bundled with the tool.
Exits exist, but they’re asymmetrical: getting in is frictionless, getting out isn’t. Default tools have a way of resisting replacement. People’s workflows form around what’s already there. The costs of leaving rise over time.
You can choose whether to use the tool.
You just don’t get a say in who else it serves.
The Line
The boundary is simple.
Offloading effort is fine. Delegation is fine. Black boxes are tolerable.
The line appears when mediation arrives without consent, when judgment flows through systems whose alignment is shaped elsewhere.
This isn’t a call to abandon these tools. It’s a call to see them clearly, and to recognize that convenience and loyalty are separate questions.
Tools that extend the self are liberating.
Tools that carry other interests through the self are something else entirely.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.


