This is a really thoughtful piece, and it actually shifted how I read the ruling. I went in seeing it as a win for artists, but your point about the wrong plaintiff producing the wrong precedent is hard to shake.
I find myself in an interesting position here. I'm an AI ethics analyst who also does architectural and interior photography. My photography workflow is traditional: I shoot and edit in Photoshop, though I do occasionally use AI tools for removing unwanted elements. Does that already put me in the gray zone you're describing?
And separately, I use AI to help edit my writing. Not to generate it, but to refine it. That feels meaningfully different to me, but your piece makes me wonder whether the law would even see that distinction.
I also don't believe AI is conscious, so Thaler's framing was always going to be a problem for me. But even setting that aside, the deeper issue is that he removed the human from the equation entirely. And with AI, what about the person who prompted it? What about the artists whose work the AI trained on? Copyright doesn't exist in a vacuum, and collapsing it down to just "did the machine make it" skips over a lot of people who arguably have a stake in that question.
I agree the Thaler case oversimplified this by making it purely about robot personhood, but I find myself stuck on a different layer. Yes, humans have always built on the work of others, and someone can paint in Monet's style without crediting him. That's how art has always worked. But with AI the question of scale, consent, and what counts as influence versus reproduction still feels unresolved to me. I'm not saying there's no human authorship involved in AI-assisted work. I just don't think we've fully reckoned with what's owed to the people whose work made the output possible in the first place.
The honesty penalty section really stuck with me. That kind of incentive structure, where being transparent creates more risk than staying quiet, is exactly what makes AI governance so hard to get right.
You're already in the gray zone, and that's kind of the point. Using AI to remove an element in Photoshop and using AI to generate an image are different in degree, but the Copyright Office framework doesn't have a clean way to distinguish them. It's all "AI-assisted" once you disclose.
The training data question is real and I deliberately kept it out of this piece because it's a different argument. This essay is about who owns the output. The training data question is about who owns the inputs. Both matter. The Copyright Office actually released a separate report on training data in May 2025 that gets into fair use and licensing. That's a whole other essay.
And yeah, Thaler's framing was always going to be a dead end. We agree there. The tragedy is the collateral damage.
Thank you for this thoughtful point-in-time unpacking, Tumithak.
I recently had this conversation with a writer I like. He keeps repeating the tidy ontology that 'there are writers and prompters': if you didn't 'write' it, then you obviously commissioned it.
Except writing has never just been drafting. It's research, selection, assembly -- ideation that has often involved a community trading ideas promiscuously -- and not just ideation of story elements but of stylistic elements too, the kind that require substantive editing to do well.
All of which can be done through an AI even if you do all the drafting yourself. Or draft it yourself and have an AI edit it. Or have an AI draft it and you do the substantive edit. Or collaborate. It's not 'prompting' any more than writing is just typing or (as you cited) direction is camera- and actor-prompting.
As convenient as it might be rhetorically, this superficial reductionism has a very limited life.