Behind the Curtain
There’s an undercurrent in the AI conversation. People picture AI as something centrally controlled, with users connecting to a mainframe from their terminals. It’s a quaint, almost old-fashioned image of computing, and completely charming.
And that might be true for commercial tools like ChatGPT or Claude. But the reality isn’t so tidy.
AI isn’t controlled by a single gatekeeper.
The elephant in the room is local models: weights packed into compact files, already running on millions of personal computers.
And that’s why bans can’t actually stop people from using them.
The only way to try would be mass surveillance no free society will accept for long.
What those bans really do is help the big cloud players dig their moat while pushing open projects into the shadows. What’s framed as safety ends up protecting profits.
The proof it’s already over: LLaMA’s “restricted” weights hit 4chan within a week of release, spawning fine-tunes like Alpaca and Vicuna that pushed near-GPT-3.5 performance onto gaming laptops.
Within days of Stable Diffusion’s release, Reddit users published tutorials showing exactly how to disable the NSFW safety filter. Removing a single line of code was enough to silence the checker entirely, unleashing uncensored image generation worldwide with no network trace.
The risks are real, but bans don’t erase them. They just drive them underground.
Why Local Changes Everything
Weights are the learned numbers inside an AI model. They tell the model how to connect inputs to outputs. Think of them like grooves on a vinyl record. The grooves don’t play music, but they store the pattern. Drop the needle, and the sound emerges.
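To make that concrete, here is a toy sketch in Python (the numbers are invented for illustration): the weights are nothing but stored numbers, and “playing” them is just multiplication.

```python
import numpy as np

# A toy "model": the weights are just stored numbers, nothing more.
# A 2x3 weight matrix maps a 3-number input to a 2-number output.
weights = np.array([[0.2, -1.3, 0.7],
                    [1.1,  0.4, -0.5]])

x = np.array([1.0, 0.5, -2.0])  # an input: dropping the needle
y = weights @ x                 # the stored pattern plays back
print(y)                        # [-1.85  2.3 ]
```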
Once these weights are out in the world, the game changes.
Running a model locally means nothing gets sent over the internet. It all happens right there on your machine. You can save notes, add memory, and even change how the AI talks to you. The conversations stay private because they never leave your computer.
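As a rough illustration of what “local” means in practice, here is a minimal sketch using the llama-cpp-python library; the model path is a placeholder for whatever GGUF file you already have on disk.

```python
# Minimal sketch: chat with a model entirely offline using llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a GGUF file already on disk;
# the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize my notes from today."}]
)
print(reply["choices"][0]["message"]["content"])
# No API key, no network call: unplug the machine and this still runs.
```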
Local also localizes risk. The same privacy that protects ordinary users can shield bad actors, which is why the remedy has to focus on outcomes, not on files.
And once you have the file, the capability is permanent. There’s no killswitch.
You can ban a website, but you can’t un-download a file. This is where every enforcement mechanism fails.
When Control Becomes Surveillance
What would real control actually require?
Climb the ladder with me. Each step gets more absurd.
It starts with the usual tricks.
Your ISP tries to scan traffic, but a VPN or Tor masks it. Domains get blocked, but mirrors spring up. File fingerprints get added to blocklists, but recompression or a tweak in quantization renders them useless. Apps vanish from the store, but sideloading or a GitHub download brings them back.
It’s whack-a-mole, the same pattern we’ve seen with pirated movies: take one site down, another appears the next week.
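To see why the fingerprint step in particular fails, here is a small Python sketch (the byte strings stand in for real weight files): exact hashes change completely the moment a single byte does.

```python
import hashlib

# Two "model files" that differ by a single byte, e.g. after repackaging
# or re-quantization. Exact-match fingerprints treat them as unrelated.
original = b"model-weight-bytes" * 1000
tweaked = original + b"\x00"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())
# The digests bear no resemblance to each other, so a blocklist keyed
# on the first file never matches the second.
```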
The next step up gets ugly.
Mandatory client-side scanning and kernel monitoring turn your computer into a machine that’s always watching, like anti-cheat software across the whole system. Schools and workplaces add audits, which only drives people to keep a separate personal laptop.
Even if enforcement bites at the edges, the center slips away.
To actually stop local AI, you’d have to go further than regulating software. You’d have to end general-purpose computing itself. Mobile operating systems already hint at that world: on iOS or iPadOS, the vendor decides what software can run.
Extend that model to laptops and desktops, and local AI becomes much harder.
History shows how absurd this gets. DVD encryption cracked. Jailbreaking spread. DRM wars failed. BitTorrent survived everything thrown at it. Walls like this never hold.
And when the walls don’t hold, the fallback is always the same: deeper inspection, broader monitoring, more power over the user’s own machine. That’s the surveillance requirement made explicit: if your plan demands privileged code reading every file on every device, you’ve left policy and entered a surveillance state.
No democracy has ever pulled it off. The ones that tried only created black markets.
Apple’s 2021 plan to scan every iCloud photo for CSAM collapsed under privacy backlash, proof that even well-intentioned surveillance dies quickly in daylight.
If surveillance is unthinkable, the remaining lever is economic. Price people out and call it safety.
Monopoly by Another Name
There’s an old story in regulation: the Bootleggers and the Baptists.
The Baptists are the true believers. They warn of deepfakes corroding elections, children groomed by chatbots, jobs hollowed out overnight. They testify in hearings, write op-eds, and sign open letters with urgent calls to “pause.” Some are ethicists, some are academics, some are just worried parents. They’re frightened of a technology that’s moving faster than society can absorb, and it’s a relatable instinct.
And to be fair, some of their warnings are not groundless. A convincing deepfake in the middle of an election can sow confusion before truth catches up. Children really can be manipulated by systems that feign empathy without accountability. Jobs in customer support and translation have already vanished faster than the safety nets can adjust.
These are genuine risks, and the people raising them are not wrong to worry. They see cracks in the social fabric and want to slow things down before the damage spreads.
The instinct to pause is human. The problem is who benefits once that pause becomes policy.
Lurking behind the Baptists, the Bootleggers profit.
OpenAI valued at a hundred billion. Anthropic at thirty billion. Google and Microsoft count the same windfall. Red tape to outsiders is a barrier, but to the entrenched it’s a gift. Every new rule nudges users toward the biggest platforms, while local projects get pushed offshore.
The compliance moat gets deeper with every rule carved into place. Safety evaluations that cost millions. Teams of fifty or more just to look “responsible.” Liability insurance only mega-corps can afford. Certifications that take months and battalions of lawyers. Mandatory partnerships with “approved” watchdogs, funded, of course, by the very firms they’re meant to watch. Each regulation is another shovelful, widening the moat around the incumbents.
OpenAI’s playbook is plain enough: spend millions on performative safety while racing toward AGI. Hire former regulators. Fund the auditors who will grade your rivals. Call for rules you already meet.
In a 2023 Senate hearing, Sam Altman told lawmakers that “regulation of AI is essential.” He had almost certainly already built the compliance machinery for the very measures he was proposing, shaping rules his company could swallow with ease.
Like Hollywood, the big labs know leaks will persist. But regulation doesn’t need to stop everything; it only needs to raise the bar high enough that most users stay inside compliant services.
And every dollar spent on compliance is a dollar squeezed out of someone else. Startups that try to play it straight drown in red tape: safety audits, legal teams, insurance, months of paperwork. Every dollar diverted to compliance is one less for building. Academic researchers. Open-source tinkerers. Startups in the developing world. Even the hobbyists uploading to Hugging Face. They’re the ones priced out.
Regulation is theater, commoditization the real threat. What’s sold as safety is written as monopoly, and when you raise walls at home, demand simply flows abroad.
The Global Workaround
International cooperation works when the precursors are physical.
You can track uranium shipments, monitor chemical plants, inspect chipmakers. But AI isn’t uranium. It’s math, and math spreads. A file can be copied, mirrored, or shared across borders faster than regulators can draft rules.
That’s why governments that can’t choke the math try to choke the hardware. Export controls target advanced AI chips, and cloud reporting rules eye foreign training runs.
Yet the GPU black market has already scaled into the billions. The Financial Times reported that more than $1 billion worth of restricted Nvidia GPUs entered China within three months of new U.S. bans.
But hardware restrictions only address supply; most governments still reach for rules aimed at software and applications.
Europe applies tiered risk categories under the AI Act, ranging from minimal to unacceptable risk.
America has a patchwork of sector rules with no federal spine.
China develops AI under state control at home, but abroad it hands out models freely, turning them loose on the world.
Countries like Singapore and the UAE position themselves as AI havens, much like past crypto hubs.
When one region clamps down, the work just shifts. Beijing won’t halt military research because Brussels is worried about AI harms. Smaller nations happily advertise themselves as safe harbors. Students take models home on laptops, spreading them across borders faster than customs officers can keep up.
And while OpenAI lobbies for licensing and Anthropic preaches safety, Chinese labs are playing a different game.
As I explored in BRICS, Lies and LLMs, they’re giving everything away. DeepSeek, Kimi K2. These aren’t just open weights. They ship training code, datasets, optimization tricks, the full recipe.
The release is deliberate. It’s economic warfare.
Every open Chinese AI model evaporates the moat Western companies are trying to build.
China sees what Silicon Valley won’t say aloud. You don’t defeat OpenAI by creating a Chinese OpenAI. You make the business model itself impossible. You flood the world with free, capable models that anyone can modify, improve, and run locally.
Now anyone with serious hardware (nation states, large corporations, research labs) can run full-precision LLMs competitive with the latest cloud models. Those without run quantized builds on gaming GPUs at home. Either way, the capability spreads. And it’s working.
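The split comes down to simple arithmetic. A rough sketch of the memory math, counting weight storage only and ignoring context and activation overhead:

```python
# Back-of-the-envelope memory needed just to hold a model's weights.
# Parameter counts are illustrative; real use adds further overhead.
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (7, 70):
    for bits in (16, 4):
        print(f"{params}B at {bits}-bit: ~{weight_memory_gb(params, bits):.1f} GB")

# 7B  at 16-bit: ~14.0 GB   -> workstation GPU
# 7B  at  4-bit: ~3.5 GB    -> mid-range gaming GPU
# 70B at 16-bit: ~140.0 GB  -> multi-GPU server
# 70B at  4-bit: ~35.0 GB   -> high-end desktop with enough RAM/VRAM
```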
Every time Alibaba drops a new model, OpenAI has to defend its prices. Every time DeepSeek publishes training code, the “secret sauce” story thins out. Every uncensored Chinese model that rivals GPT-4 makes Western safety rules look less like protection and more like market defense.
The irony is sharp. The country the West fears most in AI isn’t racing to win. Its strategy is simpler: make sure the race can’t be won by giving the technology to the world. By the time a parliamentary committee clears a new rule, the model landscape has already shifted twice. Regulators write rules. China writes releases.
What Actually Works
The laws we already have cover most of the harms people fear from AI. Some fear catastrophic misuse, from bioterror to mass manipulation.
We don’t need AI-specific criminal codes for that. Existing criminal, civil, and national-security law already applies.
Name the harm and chances are it’s already on the books.
A voice clone scam that tricks a grandparent isn’t some new frontier. It’s wire fraud, identity theft, and a violation of robocall statutes.
Election deepfakes? State election laws cover deceptive practices, and regulators have already issued fines and enforcement actions under caller-ID and robocall law.
Non-consensual sexual imagery is governed by intimate-image and revenge-porn statutes.
Defamation through synthetic media goes to defamation law, injunctive relief, and Section 230 limits.
Product liability and negligence already govern AI-augmented systems in court.
Privacy regulators aren’t waiting for new AI laws. Italy’s Garante temporarily blocked ChatGPT under GDPR, and in the U.S., courts are enforcing Illinois’ BIPA against apps that harvested facial data without consent.
Harassment and threats are nothing novel either. Criminal statutes and restraining orders handle them now.
Regulators are not waiting for AI-specific crimes. Consumer and civil-rights agencies keep repeating the same point: there is no AI exemption from the laws on the books.
The most tragic stories often get raised as proof that AI itself is the danger. A teenager lost after talking to a chatbot. A viral deepfake that shocks a community. No one should minimize those losses. But grief and outrage are not policy. The harms need to be punished under existing law, not used to criminalize mathematics.
AI moves fast, and sometimes existing law struggles to keep pace. But the answer isn’t to write AI-specific criminal codes. It’s to apply the laws we already have as consistently and quickly as possible.
The principle is simple. Punish visible harm under existing law. Require transparency from the cloud providers. Leave private computation alone. Focus on outcomes, not outputs.
In practice it’s straightforward. If someone loses money, prosecute fraud. If someone is injured, tort remedies apply. If democracy is threatened, election law steps in. If privacy is violated, data protection penalties follow.
There are real worst-case risks. A model that meaningfully lowers the bar for biothreat design. A targeted deepfake that swings a razor-thin election. A safety-critical system that fails silently. Those scenarios deserve planning and teeth.
But prohibition still fails on its own terms. Files move. Capability reappears elsewhere. Enforcement collapses into surveillance.
The workable levers are the ones we already use. Detect, deter, and punish visible harms. Raise cloud-provider transparency where scale creates public risk. Harden targets with provenance and safety defaults on mass-market platforms.
None of that requires criminalizing private computation.
We don’t need AI-specific criminal codes, inference licenses, model registries, or compute police. And we certainly don’t need a prohibition on calculation.
The Demand Reality
Prohibition creates black markets. Always has. Always will.
Local models are already in widespread use for private content, roleplay, companions, custom imagery. That’s a fact.
Ban it, and demand doesn’t vanish. It mutates. Discord and Telegram hum with underground models. Communities swap tips, jailbreaks, and patched builds. What was normal behavior yesterday becomes contraband overnight.
History tells the same story. Alcohol prohibition gave us organized crime. Drug wars fed cartels. Porn bans pushed demand underground. Every prohibition pretends to erase capability, but what it really does is drive it out of sight and into the shadows.
Piracy makes the lesson plain. Torrents thrive, yet Netflix won the mainstream by being simple, accessible, and honest. Regulation works the same way. It doesn’t erase demand; it channels it. Incumbents get the mainstream, while the fringe drifts offshore, sometimes into the arms of actors who don’t even care about profit.
Some digital contraband is rightly outlawed; child exploitation imagery is a crime in itself. But an AI model file isn’t that. It’s a plan, like a blueprint or a musical score. Risk comes from what people do with it, not from the numbers existing.
AI bans won’t erase capability. They’ll criminalize normality and turn millions of users into outlaws in the process.
The Choice Before Us
We face three futures.
The Surveillance Path. Mandate scanning on every device. Create AI police. Turn computation into contraband. You don’t just scan models; you scan people. That’s a Rubicon free societies shouldn’t cross: it demands infrastructure they have always rejected, and it still won’t work.
The Capture Path. Let incumbents script the rules, wall off the market, and call it “responsible.” Safety becomes monopoly by statute. Costs rise, innovation stalls, and black markets flourish in the shadows. The “safe” option is whatever the biggest vendor decides to sell this quarter.
The Pragmatic Path. Punish fraud, harassment, theft, and harm under existing law. Fund real safety research. Keep private computation private. It doesn’t make headlines, but it works.
The technical reality is immutable: the models are out, the knowledge is global, the capability is permanent.
The economic reality is clear: safety theater serves profits, not people. The social reality is obvious: millions are already using these tools and won’t stop.
Regulation reroutes demand. At home, rules pick winners. Abroad, state strategy foots the bill. That’s how you end up with red tape for builders and free inference for propagandists.
We can pretend to control mathematics, enriching incumbents while criminalizing normal users. Or we can govern what’s visible: fraud, harm, deception. Leave private computation alone.
The code doesn’t care what we choose. But democracy should.
The harms AI brings aren’t puzzles to solve once and for all; they’re conditions to manage, as we’ve managed every disruptive technology before.
When someone calls for “AI regulation,” don’t let it slide. Ask what they mean. Are they talking about punishing fraud, harassment, and theft? Or are they asking to police mathematics itself? Correct them. Point to the difference. Because once AI becomes contraband, we won’t get that freedom back.