The Brush And The Wall
On AI And Copyright
The Celebration
On March 2nd of this year, the Supreme Court declined to hear an AI copyright case.
Social media lit up. Creative communities exhaled like something had been won. Everyone treated this as closure. The AI copyright question, settled at last. Machines confirmed as non-authors. Case closed.
Except the case that closed was never the right one. It was asking a question nobody needed answered, and in the process it made the questions that actually matter harder to ask.
Declining to hear a case isn’t a ruling. The Supreme Court said nothing about AI, authorship, or copyright. It just walked away.
The case it declined to hear: Stephen Thaler v. Perlmutter.
Thaler is a computer scientist from Missouri who wanted copyright protection for an image his AI system DABUS generated. The Copyright Office said no. A district court said no. An appeals court said no. The Supreme Court didn’t even take the call.
Thaler wanted to know if a machine could be an author, but the real question was never about the machines.
The Wrong Question
The fight is over whether human direction and selection through AI counts as authorship, or whether the law will hand the advantage to incumbents by calling the output “ownerless,” meaning hard or impossible for the human creator to copyright.
Here’s what makes the Thaler case so frustrating.
He isn’t some guy who typed a prompt into Midjourney. He built DABUS from the ground up over thirty years. Custom architecture, custom hardware, decades of research. By any reasonable standard, he has a stronger claim to authorship over his system’s output than most people who use AI tools ever will.
He created the instrument. He shaped its capabilities. But he refused to take credit.
He listed DABUS as the author. Insisted the AI created the work autonomously. He believes his machine is conscious, that it experiences something like an inner life, and that it deserves recognition for its own work.
A man who spent thirty years building a machine, decided it was alive, and loved it too much to put his own name on what it made. So the court ended up with The Bicentennial Man argument. Robot personhood. Taken to court.
What would it even mean for a machine to own a copyright? Ownership is a bundle of rights you exercise. You license, sell, enforce, bequeath. A machine can’t do any of that. No legal standing, no interests, no capacity to enter contracts. If DABUS “owned” the copyright, Thaler would still be the one making every decision.
It’s circular.
Same structure as the monkey selfie case, where PETA tried to manage the rights “on the monkey’s behalf.” Which really just meant PETA wanted the rights.
Copyright exists to incentivize creation. It’s there to give people exclusive rights so they’ll bother making things.
A machine doesn’t need incentives. It doesn’t choose what to create. It doesn’t withhold labor when the deal is bad. The entire economic rationale collapses when you remove the human from the equation.
Every AI output begins with a person who had intent, typed instructions, iterated, selected, refined. The AI sat there doing nothing until someone showed up with a vision. There’s always human intention and selection. The question is whether the law will treat that as authorship.
Thaler had the most intuitive framing available. “I used my tool to make this.” He threw it away. The man who had the strongest possible claim to authorship voluntarily surrendered it, and in doing so he invited every court that heard the case to have the wrong conversation entirely.
Bad Plaintiff, Good Wall
Precedent doesn’t care about nuance. What matters is that the answer now exists at the Copyright Office level, at the district court level, and at the appellate level. The Supreme Court’s refusal to hear the case adds psychological weight even though it carries no legal force of its own. People will treat this chain of rejections as settled law.
The wall is lower-court language that everyone will quote as if it came from the top.
“Human authorship is a bedrock requirement of copyright.” That’s the line. It didn’t come from the Supreme Court, but it’ll be cited as though it did. Anyone who wants to argue for any expansion of AI authorship in the future has to climb over that sentence.
There’s a saying in law: hard cases make bad law. Meaning edge cases tempt judges into bending principles, and the result is messy precedent that causes problems down the line. This is the rare inverse. An easy case that produced broad, clean language. A bad plaintiff who handed the courts a clean shot at a clean principle.
They took it.
Every human creator who uses AI now works against the rhetorical gravity of a case that treated software like a person. The foundational case for AI copyright in America is about robot rights.
That’s a terrible foundation. And it’s going to shape every conversation that follows.
The Wall Is Everywhere
The US case wasn’t an isolated filing. It was part of a coordinated global legal campaign.
Ryan Abbott’s Artificial Inventor Project saw the opportunity in Thaler’s conviction. They used him and DABUS as a test vehicle to challenge IP law in nearly twenty jurisdictions simultaneously. The US Copyright Office. The US Patent Office. The European Patent Office. The UK Intellectual Property Office. Australia. New Zealand. Switzerland. All filed around the same time. All using the same argument. All naming the machine as creator.
They lost everywhere.
South Africa granted a patent, but only because it’s a registration-only system that doesn’t do substantive review. Every country that actually examined the question built its own version of the same wall. Machines aren’t authors. Machines aren’t inventors. Human involvement is required.
This wasn’t one bad precedent in one country. It’s a web of rulings spanning nearly every major IP jurisdiction on earth, all built on the same framing, all answering a question that was never the right one to ask.
The cruelest part: Thaler could have won. Multiple courts noted that if he’d listed himself as the creator and described DABUS as his tool, the applications likely would have gone through. He refused. His sincerity made the case bulletproof for every legal system that heard it. There was no ambiguity to wrestle with. Just a man saying “the machine did it” and every jurisdiction on earth saying “then you get nothing.”
One campaign. One plaintiff. One framing. And now human creators who use AI tools face the same uphill fight in every major market on the planet.
What the Copyright Office Actually Says
Away from the Thaler circus, the Copyright Office has been quietly building a framework for AI-assisted works. It’s more restrictive than most people realize.
In January 2025, Register of Copyrights Shira Perlmutter said the framework turns on “the centrality of human creativity to copyright” and that creativity expressed through AI systems “continues to enjoy protection.”
That sounds expansive. It isn’t.
The same report specifies that AI output only gets copyright where a human has determined “sufficient expressive elements.” A human-authored work has to be perceptible in the AI output, or a human has to make creative modifications after the fact. Prompting alone doesn’t count.
The key test case is Zarya of the Dawn. Kristina Kashtanova used hundreds of prompts and iterations in Midjourney to create a graphic novel. The Copyright Office granted protection for her text and for her selection and arrangement of text and images together. It denied copyright on the individual AI-generated images.
Too much distance, they said, between what Kashtanova directed Midjourney to create and what it actually produced. The unpredictability of the output meant she lacked “sufficient control.”
Then the Office compared her role to that of a client who hires an artist and gives general directions. Think about what that analogy requires. It needs Midjourney to be an independent creative agent, one that interprets a brief and makes its own expressive choices. The same Office that says machines can’t be authors needs the machine to have artistic judgment in order to deny the human’s claim.
They want it both ways. The machine is a mindless tool when the question is authorship. It becomes a creative professional when the question is whether the human did enough.
Kashtanova’s case wasn’t an outlier. Jason Allen used over 600 detailed prompts to create Théâtre D’opéra Spatial, specifying genre, tone, color, and style. The piece won first place in the digital arts category of the Colorado State Fair’s fine arts competition. The Copyright Office denied protection there too. Volume and specificity of creative direction didn’t matter. Six hundred prompts specifying every aesthetic dimension of the work still counted as mere ideation.
Allen is now challenging that decision in Colorado federal court.
The Copyright Office treats post-generation selection as human expression and pre-generation direction as abstraction. Choosing which images to use after the fact gets you protection.
But the selection after the fact is bounded by what you were able to coax out through that iterative process. You directed it. You evaluated the results. You adjusted. You redirected. The final selection is downstream of the direction. The two can’t be separated so cleanly.
Iterative prompting is closer to what a film director does than to abstract ideation. The director says “camera low, tracking left, light through the window.” The cinematographer executes it. The director can’t predict every detail of how the light will fall. There’s unpredictability in execution. The director still gets authorship, because the director is the creative mind coordinating the final expression.
The Copyright Office drew its line based on familiarity. Cameras and Photoshop sit on one side. AI sits on the other. They even reached for a camera analogy in the Zarya decision, comparing what photographers control to what Midjourney users control. They acknowledged the parallel, then drew an arbitrary boundary through it. The logic is the same on both sides. The comfort level is what changes.
And there’s a reason for the discomfort.
Six Words and a Click
The fear is real and it’s worth taking seriously.
If the threshold for AI copyright is just “a human typed a prompt,” the Copyright Office gets buried in registrations for millions of trivially generated images, texts, and compositions. Every one of those registrations dilutes the system. Traditional artists who spent months on a painting end up sharing a legal framework with someone who typed six words and clicked generate.
Imagine you’ve spent four months on a piece. You learned the anatomy. You studied the light. You redrew the hands six times. And the person next to you in the registration queue got there in forty seconds. Same form. Same legal protection. Same status as a creator. That feeling isn’t irrational. It’s the entire history of craft telling you something is wrong.
That devaluation is a legitimate concern. Dismissing it makes the pro-access argument look careless.
The answer is to build a standard that can tell the difference. Require evidence of iterative process. Documentation of creative choices. Meaningful human editing and selection.
Copyright already handles spectrums of creativity through thin and thick protection. A thin copyright covers works with minimal creativity. A thick one covers works with substantial creative investment. The tools exist. The Copyright Office just isn’t applying them to AI-assisted work with any consistency.
The Honesty Penalty
There’s a deeper problem with the current approach. The framework punishes transparency.
Most copyright registrations don’t get inspected. They sail through. Kashtanova was honest about using Midjourney, and that honesty made hers one of the few that got a closer look. It became the lever the system used to carve her work apart. If she’d kept quiet, the registration likely would have gone through without a second glance.
The system as built creates a strong incentive to hide the toolchain. The Copyright Office is making policy based on the cases where people disclose, while an unknown volume of AI-assisted work slides through registration without scrutiny. The dataset they’re building their framework on is self-selected for honesty.
That’s a trap for people who play by the rules and a terrible foundation for policy.
And it isn’t only the law. There’s a social stigma layered on top. In a lot of creative communities, “AI-assisted” reads as “less real,” even when the human work is obvious. Direction, iteration, editing, composition. All of it gets discounted the moment someone learns a machine was involved. Disclose and you risk losing legal protection and taking reputational damage. Stay quiet and you avoid both.
Disclosure becomes the worst of both worlds. Fewer rights. More hostility.
The system made this incentive. Creators are responding to it rationally. A regime that punishes disclosure doesn’t produce honesty. It produces silence, and then it builds policy on the tiny slice of people who still tell the truth.
The Remix Precedent
This pattern has played out before. When hip hop producers started building music out of samples, courts had to decide whether assembling fragments of existing work counted as creation. They decided it didn’t. Grand Upright Music v. Warner Bros in 1991 made sample clearance mandatory. Bridgeport v. Dimension Films in 2005 said even unrecognizable samples needed licensing.
The result was a tollbooth. The major labels owned the masters. They set the prices. Independent artists who couldn’t afford clearance got priced out. An entire mode of expression got choked off because the system decided the people assembling the fragments were something less than authors.
Paul’s Boutique, an album built from dozens of samples that critics call the Sgt. Pepper of hip hop, was cleared for $250,000 in 1989. Today the cost would run into the millions. The legal framework made that kind of art economically impossible.
Selection is expression. This was already understood. The legal system is forgetting it.
The big IP holders sit on enormous catalogs of fully copyrighted human-created work. If AI output stays hard to copyright, their libraries get more valuable by comparison. Every competitor flooding the market with AI content is producing public domain material. The incumbents’ moats get deeper without them doing a thing. And they have the legal infrastructure to play the Copyright Office’s game. Case-by-case determination of “sufficient human involvement” rewards organizations that can afford lawyers and documentation trails. An individual creator at their kitchen table can’t do that as easily.
All of this amounts to the same pattern. A new tool democratizes creation. The legal framework tightens around it. The people with resources navigate the system. The people without resources get locked out.
It’s happened before. The question is whether it has to happen again.
The Brush
There’s a window right now. These tools are available and the legal framework hasn’t fully hardened. Someone who’s carried a story in their head for twenty years and never had the skill to draw it can finally put it on the page. Someone who lost the use of their hands can create visual art again by describing what they see.
The law is being built in real time.
Right now it’s being built on the foundation of a case about machine personhood and a regulatory framework that treats prompting as an abstraction instead of a creative act.
The case that matters is already in court. When Jason Allen’s case is decided, the court will have to say whether creative direction through AI is fundamentally different from every other form of creative direction humans have ever used. Cameras. Synthesizers. Samplers. Film crews.
The answer should be obvious. Every AI output is human-directed. The machines aren’t people. They’re brushes.
The question is whether the law recognizes that before the window closes. Paul’s Boutique can’t be made today. The legal framework saw to that. The same thing is happening here, in real time, and it started because one man spent thirty years building a machine, decided it was alive, and loved it too much to take credit for what it made.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.