The Brush And The Wall
On AI And Copyright
The Celebration
On March 2 of this year, the Supreme Court declined to hear an AI copyright case.
Social media lit up. Creative communities exhaled like something had finally been settled. To hear people tell it, that was that. The AI copyright question was over. Machines couldn’t be authors. Case closed.
Only it wasn’t.
The case that died was never the right one to begin with. It asked a question nobody needed answered, and in doing so, it made the questions that actually matter harder to ask.
A declined case isn’t a ruling. The Supreme Court didn’t say anything about AI, authorship, or copyright. It didn’t clarify the law. It didn’t endorse a principle. It just passed.
The case it passed on was Stephen Thaler v. Perlmutter.
Thaler is a computer scientist from Missouri who wanted copyright protection for an image generated by his AI system, DABUS. The Copyright Office, a district court, and an appeals court all said no. Then the Supreme Court refused to even hear it.
Thaler wanted an answer to whether a machine could be an author.
But that was never the question you should’ve been watching.
The Wrong Question
The real fight is over whether human direction and selection through AI counts as authorship, or whether the law is going to hand the advantage to incumbents by treating the output as ownerless. Hard to protect. Hard to defend. Easy to devalue.
That’s what makes the Thaler case so frustrating.
He wasn’t some guy typing a prompt into Midjourney. He built DABUS over thirty years. Custom architecture. Custom hardware. Decades of research. By any reasonable standard, he had a stronger claim to authorship over that system’s output than most people using AI tools ever will.
He made the instrument. He shaped what it could do.
Then he refused to take credit.
He listed DABUS as the author. He insisted the system created the work autonomously. He believes the machine is conscious, that it has something like an inner life, and that it deserves recognition for its own work.
So picture the scene. A man spends thirty years building a machine, decides it’s alive, and loves it too much to put his own name on what it made. That’s how the courts wound up hearing the Bicentennial Man argument with a legal caption attached. Robot personhood, marched into court.
And what would it even mean for a machine to own a copyright?
Ownership is a bundle of rights somebody exercises. You license the work. You sell it. You enforce it. You leave it to somebody when you die. A machine can’t do any of that. It has no legal standing, no interests, no capacity to enter contracts. If DABUS “owned” the copyright, Thaler would still be the one making every decision.
The logic just folds in on itself.
It has the same basic shape as the monkey selfie case, Naruto v. Slater, where PETA tried to manage the rights “on the monkey’s behalf.” In plain English, that meant PETA wanted control of the rights.
Copyright exists to incentivize creation. It gives people exclusive rights so they have a reason to make things, publish them, and defend them.
A machine needs no incentive. It doesn’t choose to create. It doesn’t bargain. It doesn’t withhold labor when the deal gets bad. Strip the human out of the picture and the whole economic logic falls apart.
And there’s always a human in the picture.
Every AI output starts with a person who had intent, gave instructions, iterated, selected, refined, discarded, and tried again. The system sat there doing nothing until somebody showed up with a goal. The live question is whether the law will recognize that process as authorship.
Thaler had the strongest framing available to him: I used my tool to make this.
He threw it away.
Bad Plaintiff, Good Wall
Precedent doesn’t care about nuance. What matters is that the answer now exists at the Copyright Office, the district court, and the appellate level. The Supreme Court’s refusal to hear the case adds psychological weight even though it creates no legal force of its own. People are going to treat that whole chain of rejections as settled law.
That’s how the wall gets built.
It’s made of lower-court language that people will quote as if it came from the top. “Human authorship is a bedrock requirement of copyright.” That’s the line. It didn’t come from the Supreme Court, but it’ll travel as though it did. Anybody who wants to argue for a broader view of AI-assisted authorship now has to climb over that sentence first.
There’s a saying in law that hard cases make bad law. Edge cases tempt judges into bending principles, and the mess lingers for years. This was the inverse. An easy case produced broad, clean language. A bad plaintiff handed the courts a simple principle on a silver platter.
They took it.
Now every human creator using AI works against the rhetorical gravity of a case that treated software like a person. The first major AI copyright case in America is, at bottom, a robot rights case.
That’s a rotten foundation.
And it’s going to shape every conversation that comes after.
The Wall Is Everywhere
The U.S. case wasn’t some isolated filing. It was one piece of a coordinated global legal campaign.
Ryan Abbott’s Artificial Inventor Project saw an opening in Thaler’s earnest conviction and used him and DABUS as a test vehicle across nearly twenty jurisdictions at once. The U.S. Copyright Office. The U.S. Patent Office. The European Patent Office. The UK Intellectual Property Office. Australia. New Zealand. Switzerland. Same basic argument each time. Same claim that the machine was the creator.
And they lost almost everywhere.
South Africa granted a patent, though that came out of a registration-only system that doesn’t do substantive review at that stage. Everywhere that actually wrestled with the question built some version of the same wall. Machines aren’t authors. Machines aren’t inventors. Human involvement is required.
So this is bigger than one bad case in one country. It’s a web of rulings across the major IP jurisdictions, all built on the same framing error, all answering the same wrong question.
And here’s the really brutal part.
Thaler might’ve won if he’d framed the claim differently. More than one court suggested as much. If he’d listed himself as the creator and described DABUS as his tool, the applications likely would’ve had a much better chance. He refused. His sincerity made the case bulletproof for the other side. There was no ambiguity to wrestle with. Just a man saying the machine did it, and legal systems around the world replying: then you get nothing.
One campaign. One plaintiff. One framing.
What the Copyright Office Actually Says
Away from the Thaler circus, the Copyright Office has been quietly building its framework for AI-assisted works. And it’s tighter than a lot of people seem to realize.
In January 2025, Register of Copyrights Shira Perlmutter said the whole framework turns on “the centrality of human creativity to copyright” and that creativity expressed through AI systems “continues to enjoy protection.”
That sounds broad.
Then you read the rest.
The same report says AI-generated output only gets copyright where a human determined “sufficient expressive elements.” A human-authored contribution has to be visible in the final work, or the human has to make creative changes after the fact. Prompting alone doesn’t count.
The key decision here is Zarya of the Dawn. Kristina Kashtanova used hundreds of prompts and iterations in Midjourney to build a graphic novel. The Copyright Office granted protection for her text and for her selection and arrangement of text and images as a whole. It denied protection for the individual AI-generated images.
Why? Too much distance, they said, between what Kashtanova asked for and what Midjourney actually gave her. Too much unpredictability. Too little control.
Then the Office reached for an analogy that gives the game away. It compared her role to that of a client hiring an artist and giving general directions.
Stop and look at what that analogy requires.
It requires Midjourney to act, for purposes of the argument, like an independent creative agent. Something that interprets a brief and makes expressive choices of its own. So the same Office that says machines can’t be authors suddenly needs the machine to behave like an artist in order to deny the human’s claim.
That’s the trick.
When the question is authorship, the machine is a mindless tool. When the question is whether the human did enough, the machine starts looking an awful lot like a creative professional.
And Kashtanova’s case wasn’t some one-off.
Jason Allen used more than 600 detailed prompts to create Théâtre D’opéra Spatial, specifying genre, tone, color, and style. The piece won first place in the digital arts category at the Colorado State Fair’s fine arts competition. The Copyright Office denied protection there too. Volume didn’t matter. Specificity didn’t matter. Six hundred rounds of aesthetic direction still got treated like mere ideation.
Allen is now challenging that decision in Colorado federal court.
This is the line the Copyright Office is trying to draw: post-generation selection counts as human expression, while pre-generation direction remains too abstract. Pick from the outputs afterward and maybe you get protection.
But that line doesn’t hold up very well when you actually look at the process.
The final selection is shaped by everything that came before it. You directed the system. You judged the results. You adjusted. You redirected. You kept pushing until it started giving you something closer to what you had in mind. The image you chose at the end didn’t drop from the sky. It came out of that back-and-forth.
So the selection can’t be cleanly separated from the direction that produced it.
Iterative prompting is closer to directing than to idle ideation. A film director says camera low, track left, let the light break through the window. The cinematographer still executes. The exact fall of the light still carries some unpredictability. The director still gets authorship, because the director is the mind shaping the final expression.
That’s where the Copyright Office loses its nerve.
Cameras and Photoshop sit on one side of the line. AI sits on the other. In Zarya, the Office even gestured toward the camera analogy, recognized the overlap, and then forced a boundary through it anyway. Same underlying logic. Different comfort level.
And yes, there’s a reason people are uncomfortable.
Six Words and a Click
The fear is real, and it’s worth taking seriously.
If the threshold for AI copyright drops all the way down to “a human typed a prompt,” then the Copyright Office gets buried in registrations for endless streams of generated images, text, and music. At that point the issue isn’t that some people worked harder than others. The issue is that the threshold for authorship has fallen so low that the system starts filling with claims that carry barely any human shaping at all.
That’s a real problem.
Copyright can survive differences in effort. It deals with that all the time. What it can’t survive very well is a standard so loose that trivial acts of generation and actual creative direction get folded into the same category without distinction.
And that’s the fear sitting underneath all of this.
It’s not just resentment. It’s not wounded pride from people who learned difficult tools. It’s the sense that once authorship gets reduced to “I typed a few words,” the category itself starts to lose coherence. The registration system turns into a chute for machine output with a human name attached.
That concern deserves a serious answer.
The answer isn’t to pretend prompting can never be creative. And it isn’t to treat every generated image as if it reflects meaningful authorship. The answer is to build a standard that can tell the difference. Evidence of iterative process. Documentation of creative choices. Meaningful human editing, curation, or transformation.
Copyright already has ways to think in degrees. Some works get thin protection because the creative contribution is narrow. Others get thicker protection because the authorship is more substantial. The tools already exist.
The problem is that the Copyright Office hasn’t applied them to AI-assisted work with much coherence.
The Honesty Penalty
There’s a deeper problem here.
This framework punishes honesty.
Most copyright registrations don’t get closely inspected. They pass through. Kashtanova disclosed her use of Midjourney, and that honesty is what got her work pulled under the microscope. It gave the system a chance to carve the project apart piece by piece. If she’d kept her mouth shut, the registration likely would’ve had a much easier path.
That creates a perverse incentive.
The Copyright Office is building policy around the people who disclose, while an unknown volume of AI-assisted work likely moves through the system without much scrutiny at all. So the sample they’re using to shape the rule is self-selected for honesty.
That’s a bad foundation for policy.
And the pressure doesn’t stop with the law. There’s a social penalty sitting on top of it. In a lot of creative communities, “AI-assisted” gets read as “less real” the second the label appears, even when the human labor is obvious. Direction. Iteration. Editing. Composition. All of that gets waved away the moment people hear a machine was involved.
So think about the choice the system creates.
Disclose, and you risk weaker legal protection plus reputational damage. Stay quiet, and you dodge both.
That makes disclosure the losing move.
And that incentive didn’t appear out of nowhere. The system built it. Creators are just responding to it rationally. A regime that punishes disclosure won’t produce honesty. It will produce silence, and then it will build policy on the small slice of people who still tell the truth.
The Remix Precedent
This pattern isn’t new.
When hip hop producers started building tracks out of samples, the courts had to decide whether assembling fragments of existing work counted as creation. They came down hard. Grand Upright Music v. Warner Bros. in 1991 made sample clearance the rule. Bridgeport v. Dimension Films in 2005 went even further and said even unrecognizable samples needed licensing.
That turned the whole thing into a tollbooth.
The major labels owned the masters. They set the prices. Independent artists who couldn’t afford clearance got squeezed out. A whole mode of expression got choked off because the system treated the people assembling the fragments as something less than full creators.
Paul’s Boutique is the classic example. It was built from dozens of samples and is often called the Sgt. Pepper of hip hop. In 1989, the Beastie Boys cleared it for about $250,000. Try doing that today. The cost would run into the millions. The legal framework didn’t just regulate that kind of art. It made it economically absurd.
And the principle here is simple.
Selection is expression.
That should already be familiar territory. The legal system used to understand it better than this.
Now look at who benefits if AI output stays hard to copyright. The big IP holders are sitting on enormous libraries of fully copyrighted human-made work. If everybody else starts flooding the zone with AI-assisted material that’s difficult to protect, those old catalogs get more valuable by comparison. Their moats get deeper all by themselves.
And they’re built to survive a murky standard.
A case-by-case regime built around “sufficient human involvement” favors the people who can afford lawyers, documentation trails, and polished records of process. A company can do that. A lone creator at the kitchen table has a much harder time.
So the pattern starts to look familiar.
A new tool opens the door to more people. The law tightens around it. The well-resourced learn how to move through the system. Everybody else gets stuck outside.
It happened with sampling.
The question is whether it has to happen again.
The Brush
There’s a window right now.
The tools are here. The legal framework still hasn’t fully hardened. That matters. It means someone who’s carried a story in their head for twenty years and never had the skill to draw it can finally put it on the page. It means someone who lost the use of their hands can make visual art again by describing what they see.
The law is being built in real time.
And right now it’s being built on a bad foundation. A case about machine personhood. A regulatory framework that treats prompting like abstraction instead of direction.
The case that matters is already in court. When Jason Allen’s case gets decided, the court is going to have to answer a very simple question: is creative direction through AI really different from every other form of creative direction people have used before? Cameras. Synthesizers. Samplers. Film crews.
It shouldn’t be a hard question.
Every AI output is human-directed.
The machine is a brush.
So that’s the real issue now. Whether the law figures that out before the window closes.
Paul’s Boutique couldn’t be made today. The legal framework saw to that. You can watch the same pattern taking shape here in real time. And it started because one man spent thirty years building a machine, decided it was alive, and loved it too much to put his own name on what it made.
This is a really thoughtful piece, and it actually shifted how I read the ruling. I went in seeing it as a win for artists, but your point about the wrong plaintiff producing the wrong precedent is hard to shake.
I find myself in an interesting position here. I'm an AI ethics analyst who also does architectural and interior photography. My photography workflow is traditional: I shoot and edit in Photoshop, though I do occasionally use AI tools for removing unwanted elements. Does that already put me in the gray zone you're describing?
And separately, I use AI to help edit my writing. Not to generate it, but to refine it. That feels meaningfully different to me, but your piece makes me wonder whether the law would even see that distinction.
I also don't believe AI is conscious, so Thaler's framing was always going to be a problem for me. But even setting that aside, the deeper issue is that he removed the human from the equation entirely. And with AI, what about the person who prompted it? What about the artists whose work the AI trained on? Copyright doesn't exist in a vacuum, and collapsing it down to just "did the machine make it" skips over a lot of people who arguably have a stake in that question.
I agree the Thaler case oversimplified this by making it purely about robot personhood, but I find myself stuck on a different layer. Yes, humans have always built on the work of others, and someone can paint in Monet's style without crediting him. That's how art has always worked. But with AI the question of scale, consent, and what counts as influence versus reproduction still feels unresolved to me. I'm not saying there's no human authorship involved in AI-assisted work. I just don't think we've fully reckoned with what's owed to the people whose work made the output possible in the first place.
The honesty penalty section really stuck with me. That kind of incentive structure, where being transparent creates more risk than staying quiet, is exactly what makes AI governance so hard to get right.
Thank you for this thoughtful point-in-time unpacking, Tumithak.
I have lately had this chat with a writer I like. He repeats the metaphysical ontology that 'there are writers and prompters'. If you didn't 'write' it, then you obviously commissioned it.
Except writing has never just been drafting. It's research, selection, assembly, ideation that has often involved communities trading ideas promiscuously. And not just ideation of story elements but of stylistic elements too, the kind that requires substantive editing to do well.
All of which can be done through an AI even if you do all the drafting yourself. Or draft it yourself and have an AI edit it. Or have an AI draft it and you do the substantive edit. Or collaborate. It's not 'prompting' any more than writing is just typing or (as you cited) direction is camera- and actor-prompting.
As convenient as it might be rhetorically, this superficial reductionism has a very limited life.