
Rules for Thee, Copilot for Me

Courts are punishing AI mistakes in briefs while quietly normalizing AI inside chambers

Markus Brinsa · Apr 22, 2026 · 7 min read


The Bench Is Not Above the Bot

The legal system has entered its most predictable AI phase: the one where everybody agrees the technology is dangerous right up until the moment they decide their own use case is special.

Lawyers have already learned the public lesson. If you file fake citations generated by AI, the court may fine you, humiliate you in writing, refer you to disciplinary authorities, or all three. The message has been loud, expensive, and increasingly theatrical. The age of judicial patience is over.

And yet the institution delivering that lecture is using the same class of tools itself. That is the part that matters.

This is not a cheap hypocrisy story. It is something more revealing and more consequential. It is a governance story. It is about what happens when an institution punishes uncontrolled AI use at the edges while quietly improvising its own AI practices at the center.

The result is not consistency. It is a two-speed accountability regime.

The sanctions are real and they are escalating

The courts are no longer treating AI filing failures as cute little tech mishaps. Reuters reported a $12,000 sanction in a Kansas patent case in February and a $30,000 sanction from the Sixth Circuit in March tied to fabricated citations and misrepresentations. Other courts have layered in fee awards, bar referrals, and reputational damage that may outlast the fines themselves. By April 10, Ethics Reporter, drawing on public cases and sanctions tracking, put first-quarter 2026 sanctions tied to AI-fabricated citations at $145,000 or more.

None of that is irrational. If a lawyer signs a filing, that lawyer owns the filing. The profession cannot outsource its duty of accuracy to a language model and then act wounded when the court notices that half the authorities are fictional. But the legal system has moved past the simple morality play in which reckless lawyers meet deserved consequences. That story is now too small.

Because while courts have been turning lawyers into cautionary tales, the bench has been evolving into an AI workplace.

The courtroom is no longer outside the rollout

Reuters reported on March 30 that 60% of surveyed federal judges are using at least one AI tool in judicial work. Only 22.4% said they use those tools weekly or daily, which means we are not looking at total operational dependence yet. But that is not a minor pilot either. It is broad institutional adoption. The Sedona Conference and New York City Bar summary adds an even more revealing detail: legal research is the dominant judicial use case, followed by document review. One in three judges permits AI use in chambers, and about one in four reports having no official AI policy.

Read that again slowly. The system is sanctioning AI-tainted legal research mistakes from advocates while a meaningful share of judges are already using AI for legal research themselves.

It is warning lawyers to verify outputs while many chambers are still operating in a policy environment that ranges from partial permission to fuzzy internal norms to no formal policy at all.

That is not mature governance. That is institutional drift wearing a very serious facial expression.

Then the judges’ own errors arrived

The most awkward part of this story is that the judiciary did not stay above the problem long enough to preserve the illusion.

Reuters reported in October 2025 that two federal judges acknowledged staff use of AI in preparing court orders that contained errors. In one chambers, an intern used ChatGPT without authorization or disclosure; in the other, staff used Perplexity as a drafting assistant. Both judges said the flawed orders did not go through their normal review process, and both adopted additional safeguards after the fact. Reuters later reported that the episode intensified calls for permanent judicial AI guidance, while the federal court system was still relying on interim guidance that had not been publicly shared.

That matters for one reason above all others. The standard argument in defense of institutional AI use is that professionals remain in the loop. But that was also the argument in half the lawyer cases. Human review is the magical phrase everybody invokes right before you discover there was less of it than advertised.

The problem is not that judges are uniquely reckless. The problem is that they are human in exactly the way everybody else is human. They are busy. They delegate. They trust familiar systems. They normalize convenience. They believe they will catch the bad output before it matters. So do the lawyers.

The real double standard is not moral but structural

This is where the conversation gets more interesting than a simple complaint about fairness. The legal system is building separate categories of permissible AI risk.

When lawyers use AI and the output fails, the system frames the event as professional misconduct, negligence, or a breach of duty. When judges or chambers use AI, the instinct is to frame it as experimentation, modernization, or a governance challenge that still needs to be worked out.

The same technology produces different institutional narratives depending on who controls the disciplinary machinery. That is the real paradox.

The courts are not wrong to demand that lawyers verify everything. They are wrong if they imagine the judiciary can operate on a softer version of the same rule while preserving public confidence. The legitimacy problem begins the moment the institution behaves as if AI risk becomes more respectable when it moves behind the bench.

This is what enterprise AI failure looks like in a legal costume

There is a broader business lesson here, and it has nothing to do with whether a judge prefers Westlaw AI, Lexis+ AI, ChatGPT, or something more locked down.

Every enterprise AI rollout begins with a fantasy of bounded assistance. The tool will save time. It will accelerate repetitive work. It will not touch core judgment. Everyone will use it carefully. Policies will catch up later.

Then reality arrives.

The repetitive tasks turn out to be adjacent to important judgments. Drafting bleeds into reasoning. Research shapes framing. Suggested language quietly influences the human who is supposedly still fully in control. Oversight becomes procedural theater. Internal exceptions multiply because experienced professionals assume they know when the machine is safe.

The legal system is not somehow exempt from this pattern because it owns gavels. It is running straight into the same operational truth that has destabilized AI deployments across corporate environments: once people trust the workflow, they stop treating the output as radioactive.

The public danger is legitimacy, not just hallucination

The easiest version of this debate is still the wrong one. It is not “judges are hypocrites” versus “lawyers deserve sanctions.” Both framings are too shallow.

The deeper problem is that courts are trying to govern AI use before they have fully governed their own institutional relationship with AI. That creates a credibility hazard. If the bench wants strict verification culture, transparent disclosure, documented review, and real accountability, it has to embody those norms in chambers as well as impose them on filers. Otherwise every sanction starts to look slightly performative.

And once that happens, the system invites exactly the suspicion it cannot afford: that AI is unacceptable when it threatens judicial order but acceptable when it helps judicial throughput.

That is a terrible message for a legal system that survives on public confidence.

The next phase will be about control, not prohibition

None of this leads to the conclusion that courts should ban AI. They will not. The economics, the workload pressure, and the vendor integration path all point the other way. Judges are already using legal-specific AI tools more than general consumer models. Pilot programs are already underway. Public discussions are already shifting from whether AI belongs in chambers to where it belongs and under what conditions. IAPP’s report on comments from Judges James Boasberg and Allison Burroughs made that unmistakable. The question inside the judiciary is no longer whether AI can be useful. It is where utility ends and unacceptable substitution begins.

That means the winning issue is not adoption. It is control architecture.

Who is allowed to use what tool, for which task, under what review standard, with what recordkeeping, under whose policy, and with what consequence when the output is wrong.

Lawyers already know the answer to the last part. Judges are about to discover that the public may want the same answer from them.

The machine does not care who wears the robe

AI does not become institutionally safe because the person using it has chambers, clerks, or lifetime tenure. It does not become more truthful because it is integrated into a premium legal research platform. It does not become less distortive because the user has the authority to sanction everybody else in the room.

The machine remains a machine.

If courts want to punish false confidence in AI, they should. But they should do it with the humility to recognize the problem is not confined to the bar. It is already inside the building. And once the technology has crossed that threshold, the real test is no longer whether lawyers can be scared into behaving better. It is whether the institution can govern itself by the same standard.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.