SEIKOURI Inc.

The Copyright Mirage

Markus Brinsa | March 26, 2026 | 9 min read


What the law actually says about fully AI-generated images and why the answer still depends on where the human begins

The sentence that keeps getting simplified

The easiest sentence in this debate is also the most misleading one. People hear that a fully AI-generated image cannot be copyrighted and immediately turn it into something broader, cleaner, and more dramatic. They start saying that AI images cannot be owned. Or that nobody can ever have copyright if a generative system was involved. Or that the entire category is basically lawless, floating around the internet like a ghost with good lighting and terrible provenance.

That is not quite what the law says.

What the law is doing, especially in the United States, is something more precise and therefore much more important. It is drawing a line between a work that still contains legally recognizable human authorship and a work where the machine determined the expression. That distinction sounds technical until you realize it now sits underneath commercial design work, advertising visuals, publishing, brand assets, licensing terms, platform claims, agency contracts, and a growing number of executive assumptions that were formed by listening to people who wanted the answer to be simpler than it is.

The real question is not whether a machine touched the workflow. The real question is whether the law can still find a human author in the final image.

The United States is no longer especially vague

For a while, the American position was easy to dismiss as guidance, policy language, or agency preference. People could pretend the hard question had not really been answered because the issue had not yet been tested high enough in court. That excuse has largely expired.

The major U.S. case is the Stephen Thaler dispute over an image he said was generated autonomously by his AI system. Thaler did not argue that he had shaped the image through substantial creative intervention. He argued something much cleaner and much riskier: the image was generated by AI, and copyright should still exist. That made the case unusually pure. It removed the gray area and forced the court to confront the underlying issue directly.

The D.C. Circuit’s answer in 2025 was blunt. Copyright under the U.S. Copyright Act requires a human author. The court did not treat this as a soft preference or a policy instinct. It treated it as the best reading of the statute itself. The statutory system, in the court’s view, assumes that authors are human beings. In March 2026, the Supreme Court declined to hear the case, which means that this holding now sits in place untouched.

That matters because it gives the American position a harder edge than many casual summaries reflect. If an image is truly autonomous machine output, with no human authorship in the legally relevant sense, the current U.S. answer is not “maybe later” or “still unsettled.” It is functionally no.

The phrase everybody talks around

The most revealing phrase in this area is “no human authorship,” because it sounds obvious until people try to apply it to actual workflows.

No human authorship does not mean no human was present. It does not mean nobody typed a prompt, paid a subscription fee, or clicked regenerate for forty minutes while muttering at the screen. It means the law does not recognize the human as the creator of the expressive content that copyright is supposed to protect. That is a much narrower and much harsher test.

In practical terms, if a user writes a prompt and the model chooses the pose, the face, the details, the composition, the lighting, the textures, the atmosphere, the folds of fabric, the background objects, the visual emphasis, and the thousand little expressive decisions that make the image what it is, the legal system may conclude that the machine, not the human, determined the protected expression. At that point, the user may have initiated the process without becoming the legal author of the result.

This is why so much public discussion feels slippery. People use the word “created” in an ordinary language sense. The law uses authorship in a much more demanding one.

Prompting is not the magic wand people hoped for

One of the most important developments in the U.S. is not just the rejection of fully autonomous AI authorship. It is the Copyright Office’s increasingly clear position that prompting alone usually does not solve the problem.

That is a hard truth for people who built entire businesses around the idea that elaborate prompt writing is equivalent to artistic authorship. It may feel creative. It may require taste. It may involve iteration, judgment, persistence, frustration, and a strange level of emotional commitment to a machine that keeps giving everyone six fingers. But none of that automatically converts the output into copyrightable human expression.

The Copyright Office’s 2025 report is important precisely because it narrows the fantasy. It says prompts by themselves generally do not provide enough control over the resulting expression for the user to be treated as the author of the output. That does not make prompting meaningless. It makes prompting legally insufficient on its own in many cases.

This is a serious distinction. The market has often treated prompt craft as if it were the legal equivalent of composing the image. The law is leaning toward a different view. Giving instructions is not the same thing as authoring the resulting expression if the system decides how those instructions become visual reality.

The gray zone is where business people get into trouble

The clean Thaler scenario is not actually the hardest problem. The harder problem is everything people do after the first output appears.

They generate multiple images. They reject most of them. They select one. They crop it. They inpaint areas. They combine outputs. They repaint details in Photoshop. They alter structure, color, and emphasis. They composite human-made and machine-made elements. They keep refining until the final piece no longer looks like a raw model output.

This is where the law gets more interesting and much more commercially relevant.

The U.S. Copyright Office has not taken the position that all works involving AI are unprotectable. Quite the opposite. It has repeatedly said that human-authored contributions may still qualify. Selection, coordination, arrangement, and sufficiently original modification can all matter. In other words, the machine-generated parts may be excluded while the human-authored aspects remain protected.

That is exactly why the simplistic slogan fails. The law is not really asking whether AI was involved. It is asking whether the final work contains identifiable human authorship.

The Zarya of the Dawn decision made that visible. The Copyright Office did not protect the Midjourney-generated images as such, but it did recognize protection in the text and in the selection and arrangement of the comic as a whole. The Théâtre D’opéra Spatial decision pushed the line further. Extensive prompting by itself did not persuade the Office that the generated image was human-authored, even though the Office accepted that separate human edits could potentially qualify.

That is where the next phase of conflict will live. Not in the clean claim that AI alone should be an author, but in the messier argument over how much human control, editing, and transformation is enough.

Europe agrees on the direction and disagrees on the mechanics

Europe is often discussed as if it offered a single answer, which is convenient and wrong. Inside the EU, the basic originality test is strongly human-centered. Copyright protects a work that is the author’s own intellectual creation. In the case law, that idea is tied to free and creative choices and to the expression of the author’s personality. That framework does not fit comfortably with a fully autonomous image generator producing the final expression on its own.

So the broad direction is similar to the United States. A purely AI-generated image with no meaningful human authorship is on weak ground. But the European path is different. The EU does not yet have one clean Union-wide rule that says, in one sentence, that fully AI-generated images cannot receive copyright. Instead, the conclusion emerges from the originality doctrine and from the way recent policy analysis has applied that doctrine to generative systems. The result is not legal chaos, but it is less neat than the current U.S. position.

That makes Europe more subtle and, in some ways, more dangerous for sloppy legal summaries. People often compress the issue into a false binary. Either Europe is supposedly more flexible, or it is supposedly identical to the U.S. Neither is quite right. The likely outcome is similar, but the doctrinal route is different and the edge cases remain highly fact-specific.

The UK is the awkward exception everyone forgets

Then there is the UK, which has a habit of making any continental summary slightly more annoying.

British law still contains a specific regime for certain computer-generated works. Under that framework, if a work is generated by computer in circumstances where there is no human author, the author is taken to be the person who made the arrangements necessary for the creation of the work. The term of protection is also different: fifty years from the end of the year in which the work was made, rather than the usual life-plus-seventy model.

That makes the UK the awkward outlier in this conversation. If someone says, “Europe does not allow copyright in fully AI-generated images,” they are oversimplifying. If they mean the EU trend under current originality doctrine, they are roughly pointing in the right direction. If they mean the entire European continent without qualification, they are flattening away an important statutory exception.

At the same time, the UK is not sitting comfortably with this forever. The government is actively reviewing copyright and AI, and the continued fit of the older computer-generated works provision in the age of modern generative systems is part of a broader policy debate. So even the outlier comes with an asterisk.

What this means in the real world

The practical consequence is that copyright in AI-generated imagery is becoming a documentation problem as much as a doctrine problem.

If you want to claim rights in a final visual work, you need more than a hand-wavy statement that AI was involved somewhere in the process. You need to know where the human contribution sits, how substantial it is, whether it shaped the expressive result, and whether that contribution can be described in a way a court or copyright office could actually understand.

That changes the risk profile for agencies, publishers, designers, startups, and enterprise teams building visual assets at scale. It also changes how comfortable anyone should be when a vendor, creator, or platform representative casually says, “yes, of course you own it.”

Maybe. Maybe not. Maybe you own rights in the human-authored arrangement, edit, or composite, but not in the generated image underneath it. Maybe your contract says more than copyright law does. Maybe your exclusivity is commercial rather than statutory. Maybe what you really have is a license structure, not authorship in the old sense. Those are very different things, and they should not be sold to clients as if they were interchangeable.

The deeper shift here is that generative AI is forcing copyright law to reveal what it was protecting all along. Not labor in the abstract. Not mere instruction. Not software usage. Human authorship.

And that is why this question matters beyond image tools. The fight is not really over whether machines are impressive. They obviously are. The fight is over whether legal systems still require a human mind at the center of protected expression, and what happens when the workflow becomes so machine-heavy that the human starts looking less like an author and more like an unusually persistent customer.

The illusion that broke first

For years, the easier story was that AI would scramble creative ownership because the technology was moving too fast for doctrine to keep up.

What is happening now is more interesting. The law is not collapsing. It is tightening. It is forcing a distinction that many people preferred not to make.

A fully AI-generated image is not becoming easier to protect because the tools are better. If anything, the opposite is happening. The better the systems get at generating finished expression on their own, the more pressure they put on the human-authorship requirement that copyright still depends on.

That is the real inversion. The most visually impressive outputs may be the ones with the weakest claim to human authorship. The more complete the machine’s expressive role becomes, the harder it may be for a person to stand beside the result and say, legally, this is mine in the copyright sense.

That does not kill creativity with AI. It just kills a comforting myth. The myth was that using the tool and authoring the work were basically the same thing. The law is increasingly saying they are not.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.