Output as Authority
How AI turned guesses into governance
The World as Sensor
Your phone knows where you slept last night. It knows when you woke up, when you left home, where you went, and how long you stayed.
That’s the default setting now.
This is the world into which modern AI emerged. It inherited a surveillance system that older regimes could only fantasize about. The tracking was already there, baked into apps and services, the background plumbing of daily life. AI just makes the data easier to search, easier to cross-reference, easier to act on.
Most of the time, this feels like convenience. Targeted ads, suggested routes, playlists that know your mood. Ambient tracking doesn’t always look like surveillance.
But the same systems that recommend restaurants can flag a visit to a clinic, a lawyer’s office, or a protest. The record exists even when no one is pointing a camera.
And once the system points at a specific face, the abstraction can turn into an arrest.
The Computer Says So
In January 2020, Robert Williams was arrested in front of his children because a computer told the police he was a criminal.
He wasn’t.
He got handcuffed, booked, and shoved into a cage because an algorithm returned a match, and an institution decided the match was good enough. When Williams said the face in the image wasn’t his, the response was the modern form of a shrug: the computer says it’s you.
A year earlier, Nijeer Parks spent ten days in jail over a blurry fake ID photo. He’d never been to the town where the crime happened. The charge took months to dismiss.
This is what AI harm looks like in real life: paperwork, procedure, and a chain of responsibility so diffused that nobody feels like the author of what happened.
A low-quality image goes in. The model returns a ranked match. Vendors call it “investigative.” Departments say a human made the final call. Responsibility spreads across the chain until nobody can be blamed.
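To make that concrete, here is a minimal Python sketch of what a ranked-match step can look like. Everything in it is invented for illustration: the embeddings are random numbers, the gallery size is made up, and real vendor pipelines are proprietary and differ. The one thing the sketch shows faithfully is the shape of the output: a probe image always produces a rank-one “match,” however weak the underlying signal.

```python
# Minimal sketch of a ranked face-match step (hypothetical data throughout).
# Real vendor systems are proprietary; this only illustrates the shape of the
# output: a probe image always yields a ranked list, no matter how weak the evidence.
import numpy as np

rng = np.random.default_rng(0)

# Pretend gallery: 10,000 enrolled faces, each reduced to a 128-dim embedding.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Pretend probe: an embedding extracted from a low-quality surveillance still.
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

# Cosine similarity against every enrolled face, then sort best-first.
scores = gallery @ probe
top = np.argsort(scores)[::-1][:5]

for rank, idx in enumerate(top, start=1):
    print(f"rank {rank}: gallery id {idx}, similarity {scores[idx]:.3f}")

# Someone is always rank 1. Whether that similarity means anything is a
# policy decision made downstream, not a fact the math establishes.
```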
The machine doesn’t need desires to do damage. Incentives that reward speed and reach are enough. Vendors selling certainty are enough. Institutions that would rather outsource judgment than own it are enough.
And the person on the receiving end learns how quickly a statistical system turns into a moral one.
But police aren’t the only ones learning to trust the output. The rest of us are learning to feed it.
The Confessional
Chatbots create a new kind of personal record: one you write yourself. The interface feels private, so users talk like they’re alone.
They aren’t.
Cameras take what they can. But a chatbot gets what you type. And what you type comes with context.
People tell these systems about medical fears, sex lives, money problems, family conflict, workplace disputes, addiction, custody fights, and legal risk. They do it in plain language. In one sitting. The result is clean text that’s searchable and linkable to an account.
That’s the difference.
Companies have incentives to retain data, study it, and use it to improve the product. The legal system has its own incentives. Lawyers subpoena records. Police seek warrants. Once the record exists, it becomes something other people try to obtain.
What feels like confession becomes evidence. Evidence attracts attention. And “I told the bot” can turn into Exhibit A. A diary can be seized from your home, but a diary that exists on someone else’s servers is easier to seize.
This is already happening. According to OpenAI’s own reporting, between July and December of 2024, it processed 71 government data requests involving 132 user accounts. That’s real, and it’s current.
And once you can harvest trust and context, you can also fake it.
Fraud Becomes Industrial
Scams used to be artisanal. A guy. A script. A phone.
Now? AI makes scammers more believable and lets them work faster.
Voice impersonation is the cleanest example. You get a call with the right cadence and the right urgency. A “family emergency.” A “boss” who needs a wire sent right now. The details can be thin because the voice is right; your panic does the rest.
Then there’s the business con. Fake invoices. Vendor changes. “New bank account, same supplier.” Deepfake meetings that add a face to the lie. It doesn’t need to fool everyone. It just needs to fool one person on a busy day.
Attackers only need a tiny success rate. They can iterate fast. They can jump channels when one gets blocked. And humans stay the weak link, especially when the message feels urgent and personal.
Here’s what changed. Personalization across millions. Rapid variation. A lower cost per target. Trust signals that used to be expensive to fake are now cheap and accessible. The scammer’s main expense becomes volume.
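Back-of-the-envelope arithmetic makes the volume point obvious. Every number below is a made-up assumption, not data from any real campaign; the only claim is the shape of the math.

```python
# Hypothetical scam economics (all figures are invented assumptions).
# The point: when per-target cost collapses, even a tiny success rate pays.

def expected_profit(targets, cost_per_target, success_rate, avg_take):
    """Expected profit for a campaign of `targets` personalized attempts."""
    cost = targets * cost_per_target
    revenue = targets * success_rate * avg_take
    return revenue - cost

# Artisanal version: one operator, phone calls, maybe 50 marks a week.
print(expected_profit(targets=50, cost_per_target=20.0,
                      success_rate=0.02, avg_take=2_000))   # ~1,000

# Industrial version: generated voices and text, near-zero marginal cost.
print(expected_profit(targets=1_000_000, cost_per_target=0.05,
                      success_rate=0.001, avg_take=2_000))  # ~1,950,000
```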
And once plausibility is cheap, it gets used for more than just theft.
Harassment and Synthetic Coercion
Synthetic media turns your reputation into an attack surface. Your face becomes a tool. Your name becomes a lever.
The most common version is non-consensual imagery. The point is humiliation. Then comes sextortion. Pay. Comply. Stay quiet.
The content can be fake and it still works, because the threat is social fallout.
This isn’t limited to sex. It can be a deepfaked confession, a fake recording of a slur, a fabricated “leak” sent to an employer. Anything that creates shame on contact.
A victim has to fight everywhere at once. Friends. Family. Employers. School administrators. Platforms. Search results. Group chats.
The attacker posts once. The copies multiply. Screenshots. Reuploads. Private shares that never surface until they do. Even when the original gets taken down, the damage keeps moving.
That delay is part of the harm. So is the uncertainty.
You don’t know who saw it. You don’t know who saved it. You don’t know who will get it next week, or next year, at the worst possible moment.
Humiliation creates leverage. Leverage drives compliance. The content spreads fast, and the victim gets handed an impossible task: clean the internet.
This is what reach looks like when it’s aimed at a person.
And it isn’t only criminals who use this logic. Employers do too.
Bossware
Workplace monitoring gets pitched as a productivity tool. In practice it’s control.
The tools log keystrokes, take screenshots, flag idle time. Some activate webcams or run silently in the background. Workers become a stream of signals.
There are legitimate uses for monitoring in narrow cases, like security and fraud prevention. What’s spreading is different. It’s measurement as a form of domination.
The evidence that this kind of surveillance reliably improves performance is thin. The result is stress and resentment. People learn to game the metric instead of doing their job well, and once you measure the wrong thing you get the wrong behavior everywhere.
Then comes algorithmic management. Warehouses are the cleanest example. Quotas get set by system logic, warnings get generated automatically, and terminations can follow with minimal human review. Supervisors outsource their judgment to a dashboard that makes decisions for them.
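A toy sketch shows how little “system logic” it takes. The quota, the warning count, and the worker below are all invented for illustration; the point is that each step is a trivial rule, and no person has to decide anything.

```python
# Hypothetical sketch of the "dashboard becomes policy" loop (names and numbers invented).
from dataclasses import dataclass

QUOTA_PER_HOUR = 300          # items handled per hour, set by "system logic"
WARNINGS_BEFORE_TERMINATION = 3

@dataclass
class Worker:
    name: str
    warnings: int = 0
    terminated: bool = False

def review_shift(worker: Worker, items_handled: int, hours: float) -> str:
    """Auto-generated discipline with no human in the loop."""
    rate = items_handled / hours
    if rate >= QUOTA_PER_HOUR:
        return f"{worker.name}: OK ({rate:.0f}/hr)"
    worker.warnings += 1
    if worker.warnings >= WARNINGS_BEFORE_TERMINATION:
        worker.terminated = True
        return f"{worker.name}: TERMINATED after {worker.warnings} automatic warnings"
    return f"{worker.name}: warning {worker.warnings} ({rate:.0f}/hr vs quota {QUOTA_PER_HOUR})"

w = Worker("picker_114")
for items in (310, 280, 275, 290):   # four shifts; quota missed on three of them
    print(review_shift(w, items_handled=items, hours=1.0))
```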
In June 2024, California’s Labor Commissioner cited Amazon for nearly $6 million under the state’s warehouse quotas law, tied to failures to properly disclose quotas to workers at two Southern California facilities. That’s what it looks like when the dashboard becomes policy.
Metrics become management. Management becomes punishment. Punishment becomes injury and churn. And when something goes wrong, nobody owns it. The system did it.
This is already deployed. It’s profitable. It’s spreading. And it doesn’t stop at monitoring. It changes what work is worth, and who gets to do it.
Displacement and Wage Pressure
AI replacing workers dominates headlines. The reality for most workers is a slow erosion.
Bargaining power. The ability to start at the bottom and climb. That’s what’s at risk.
A junior analyst used to get hired to build spreadsheets. Now the spreadsheet builds itself, and that analyst never gets hired.
Senior people keep their jobs while entry-level roles get absorbed by automation. That’s the real danger: the missing rung at the bottom of the ladder.
Wage pressure shows up before layoffs. The threat of replacement keeps workers compliant even when nobody gets fired.
Managers don’t have to say it out loud. “Do more with fewer people” becomes the default. Pay flattens anywhere outputs can be standardized, scored, or reviewed by a system that looks like it could replace you next quarter.
Some work is more exposed. Call centers. Routine back-office tasks. Basic content production. Entry-level coding work. These are the first stress points.
Other work is less exposed. Jobs that require physical presence, where someone carries real liability. Jobs built on long relationships where “pretty good” isn’t good enough.
Outcomes will vary. The trend is already visible. And when companies sell “automation,” they also sell a story about where the labor went.
Ghost Work
AI tools arrive gift-wrapped from Palo Alto. The demos are glossy. The labor that made them possible stays off stage.
Start with data labeling. Repetitive microtasks. Precarious contracts. Pay that can vanish with a policy change or a bad score. Work done under surveillance, with productivity targets and penalties that feel automatic. The system stays “smart.” The worker stays replaceable.
Then there’s content moderation. The work that keeps the platforms “clean.”
LLMs are trained on vast amounts of human text. Some of it is useful. Some of it is poison. But before the system can be deployed, someone has to sort it, classify it, and decide what gets through.
Workers in the Global South filter the abuse, the gore, the child exploitation, the threats. People spend full days staring at material they can’t unsee, with weak support, low wages, and a predictable psychological toll.
The product stays “safe.” The cost is paid by someone far from corporate HQ.
Companies push this work to countries with weak labor protections. Vendor chains pile up so responsibility never touches the labs. The labor stays invisible, and that invisibility is part of the business model. If the public saw the pipeline, “automation” would stop sounding magical.
The machine doesn’t have to suffer for suffering to be part of the pipeline.
Electricity and Local Sacrifice
Compute is infrastructure now. Infrastructure has neighbors.
The numbers tell the story. By 2030, global data center electricity use is projected to double, tightening grids, sparking fights over new transmission, and turning rates into a political battleground. The question underneath it all is simple: who pays, and who gets told to wait?
And the power sources for AI data centers won’t be tidy.
Renewables like solar will expand because they’re cheaper in a lot of places. Gas fills gaps because it’s fast to build and easy to dispatch. Coal still lingers in regions that can’t quit it cleanly. Nuclear shows up later, even when everyone agrees it should’ve come sooner.
Memphis shows what this looks like on the ground. xAI’s Colossus facility became a flashpoint after the NAACP and the Southern Environmental Law Center alleged the site was running large numbers of methane-burning gas turbines without the right permits. The county later approved 15 turbines. Critics say the permit still doesn’t match the real footprint.
This is what “AI boom” means at street level. Siting fights, rate hikes, turbines and exhaust. People breathing the downside while somebody else books the upside.
Deployment speed outruns grid buildout. The costs hit locally, even when profits don’t.
Critical Minerals and Extraction Labor
AI runs on hardware, and hardware runs on minerals. Cobalt. Copper. Nickel. Lithium. The supply chains that deliver them are long, opaque, and brutal at the source. Every new server rack has a footprint that starts long before the data center.
That chain begins at the least protected point in the system: the extraction site. That’s where the costs get paid first.
Mining is dangerous work. In the DRC, artisanal cobalt miners dig without protective gear and sometimes without real structural support. Children work alongside adults. Injuries go unreported. Deaths get swallowed by the system.
And it doesn’t stop at the mine.
Corruption follows the minerals. Criminal networks move product across borders. Local officials get paid to look away. Communities near extraction and processing sites live with poisoned water and soil. The health costs never reach the balance sheet.
Spikes in demand make it worse. When the market wants more compute, the squeeze flows downhill to the workers with the least leverage. Deadlines tighten. Safety slips. The miners absorb it.
The supply chain is built to keep this out of sight. Layers of contractors. Refiners in one country, smelters in another, components assembled somewhere else. By the time the chip reaches a data center, the mine has been turned into an abstraction.
That’s the point. We call it the cloud because it sounds ephemeral, distant.
But the cloud has a mine under it.
The Pattern the Evidence Forces You to Admit
Across all of this, the pattern is hard to miss.
It’s always sold as safety, efficiency, or progress. That wrapper makes the system feel reasonable.
Peel it back and you keep finding extraction: data pulled from daily life, labor pushed offshore, resources ripped out of the ground. It’s messy by design.
Accountability stays out of reach. Vendor chains blur responsibility. “Proprietary” becomes a shield. Institutions defer to outputs because it lowers friction and lets people move faster.
The harms don’t hit evenly. They hit the people with the least leverage. And none of this required a machine that wants anything.
Now notice what the public conversation keeps emphasizing instead.
Why Everyone Keeps Talking About Doom
Notice what none of this required: a machine with a will.
We got here through incentives and deployment. Institutions that love speed. Vendors that sell certainty. People who’d rather outsource judgment than own it.
That’s the point.
The doom narrative pulls attention away from the parts we can actually govern. It keeps the conversation stuck on questions nobody can answer, while the fixable harms keep compounding.
It also launders power. If the stakes are cosmic, then any new control system looks responsible. Any new monitoring looks prudent. Any new permission gate looks like safety.
And it creates a priestly class. Their status depends on permanent crisis. The red line stays just ahead of the next release.
Here’s what that does in practice. When you regulate around prophecy, you miss the systems already hurting people. You build compliance hurdles that only big firms can clear, while the ledger keeps growing.
The Capture Move
Let’s be clear. The harms are real. They deserve real governance.
But that’s not what the doom frame produces.
When you regulate around prophecy, you get rules built for metaphysics: licensing regimes, safety boards, audit rituals, “alignment” certifications. The costs show up as legal teams and compliance staff. Paperwork. Reviews. Reporting pipelines. “Independent” assessments that only the biggest firms can afford, and only the biggest firms can survive.
The hurdles are real. The price is the point. Big firms pay it and keep shipping. Small teams can’t. The rules select for incumbents.
That’s the capture move.
A large company hires a compliance team and keeps shipping. A small lab stalls out. An open ecosystem gets treated like a threat because it can’t afford the rituals. The moat gets built in the name of safety, and the people who already own the market get to set the toll.
And there’s a second layer that matters even more now. “Safety enforcement” tends to mean logging. Identity checks. Retention. Moderation records. The chatbot becomes a compliance device. The confessional gets a log.
Retention turns into discovery. Discovery turns into leverage. And leverage rarely stays in the hands of the public.
A safety regime that requires pervasive logging becomes a power regime.
Governance: Deployments, Records, and Infrastructure
The harm keeps showing up in the same places: deployments, records, and incentives. Prophecy won’t help here. Rules will, especially rules that force accountability back onto the institutions doing the deploying.
Start with institutions that use these systems on people.
If a public agency uses biometrics, require due process. No vendor output treated like probable cause. No secret scoring that a defendant can’t challenge. If a workplace uses monitoring, workers deserve limits, notice, and access to what’s collected. They also deserve a way to contest automated discipline.
Then treat fraud and coercion like the crime wave it is. Make impersonation and synthetic harassment easy to report and expensive to run. Faster takedowns. Clear liability. Coordination across banks, carriers, and platforms so scams can’t just hop channels and keep going.
Compute has become a public-utility problem. Require real reporting for large data centers and on-site generation: who supplies the power, what gets emitted, and what nearby residents absorb. If a company wants to park turbines next to a neighborhood to feed GPUs, it should have to show its work.
Tie procurement and subsidies to supply-chain transparency and basic labor standards. If you want public money, prove you aren’t buying abuse. For companies running data centers, make supply chains auditable. If violations surface, operations pause until the chain is clean. A fine is a cost of doing business. A pause forces change.
One design rule matters across all of it. The compliance path has to work for small teams, or you’ve built a moat. Industrial-scale operations get industrial-scale scrutiny. Everyone else gets clear thresholds and simple rules until they’re big enough to cause industrial-scale harm.
Close
The doom talk reads like a quasi-cult selling cosmic stakes. It keeps everyone staring at the sky while the damage stays on the ground. A priesthood of permanent crisis. A market for salvation. And a convenient trade: argue about the end of the world, and you never have to reckon with the world you’re already breaking.
Meanwhile the real harms get booked as externalities. The arrests. The scams. The coerced silence. The broken career ladders. The ghost labor. The exhaust in somebody else’s air. Nobody calls it evil. They call it a cost.
But the bill’s already here, and it has names. Some are public. Most never are.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.


