The Privilege Trap

Too many companies still treat public AI like internal infrastructure

Markus Brinsa · Mar 31, 2026 · 6 min read

The mistake is older than the technology

Most AI governance failures do not begin with the model. They begin with a category error. A public AI tool is treated like a coworker. Or like software already inside the company perimeter. Or like a neutral utility that simply helps move words around. That fiction has been convenient for a while because it lets organizations enjoy the speed of generative AI without confronting the legal meaning of disclosure.

A recent court development puts pressure on that fiction. In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York held that communications with a publicly available generative AI platform were not protected by attorney-client privilege or the work product doctrine. Reuters correctly framed the broader significance. This was not just a narrow dispute about one defendant and one chatbot. It was a warning that courts may apply old confidentiality logic to new AI behavior without much sympathy for the people who assumed the technology changed the rules.

That matters because many companies are still running on a dangerous operating assumption. They believe the risk of AI begins when a model says something wrong. In practice, some of the biggest risk begins much earlier, at the moment sensitive information is fed into the wrong system.

The real issue is disclosure

The legal reasoning here is not futuristic. It is almost boring in its simplicity, which is exactly why business leaders should take it seriously.

The court’s position was not that AI is uniquely dangerous in some science-fiction sense. The position was that disclosure to a public AI platform can look a lot like disclosure to a third party. In the court’s memo, the documents at issue were not treated as confidential communications with counsel. The memo also points to the platform’s privacy terms and to the absence of a reasonable expectation of confidentiality. Reuters then extends the implication in the direction many executives still underestimate: if confidentiality is central not only to privilege but also to trade secret protection, careless use of public AI tools can create a much larger exposure than most companies have modeled.

That is the part too many organizations still miss. They are debating whether employees got a good answer from the model when they should be asking a more basic question first. Should this information have been there at all?

The market has been trained to think about AI through productivity theater. Faster drafts. Faster summaries. Faster analysis. Faster decisions. But confidentiality law does not care how useful the tool felt in the moment. It cares about what was disclosed, under what conditions, with what expectation of privacy, and whether the company behaved as if the information actually mattered.

A great many firms now have employees quietly pasting board material, legal fact patterns, customer data, investigation notes, pricing logic, draft deal language, and product plans into consumer AI systems because it feels efficient. That is not innovation. That is unmanaged outbound data movement disguised as convenience.

Why this reaches beyond lawyers

It would be a mistake to read this as a niche legal story relevant only to litigators and general counsel. Legal teams may feel the heat first because privilege is a cleaner frame for courts to evaluate. But the operational lesson is much broader. The same bad assumption shows up across strategy, finance, product, HR, compliance, and corporate development.

A founder pastes draft acquisition scenarios into a public chatbot to pressure-test options. A strategy lead uploads competitive plans to get a cleaner synthesis for a board deck. A product manager asks a model to improve a confidential roadmap. A finance executive drops sensitive forecasts into a model to get narrative language for earnings prep. Each of these actions looks harmless when viewed as a task. Each looks very different when viewed as a control failure.

That is why this development matters in SEIKOURI terms. It is not really a story about chat interfaces. It is a story about boundary confusion. Organizations adopted generative AI faster than they developed clean internal distinctions between public tools, managed enterprise systems, protected workflows, regulated use cases, and non-negotiable data classes. They moved the interface before they matured the operating model. Courts are unlikely to rescue them from that immaturity.

The fantasy of the invisible perimeter

Many companies still behave as though a digital interface that feels smooth, smart, and work-friendly must somehow sit inside a safe perimeter. That assumption was always fragile. Now it looks reckless.

The court memo in Heppner is useful because it strips away the marketing fog. A public AI service is not automatically your confidential environment just because it is accessed from a laptop, used during work hours, or wrapped in the language of productivity. If anything, the opposite lesson is emerging. If the platform’s terms, architecture, or practical use patterns do not support confidentiality, organizations should expect traditional doctrines to remain traditional.

That should not be controversial. It should be obvious. Yet many AI rollouts still operate as if “approved for experimentation” means “safe for meaningful information.” Those are not the same thing. Not even close.

This is where AI governance often fails at the executive level. Leaders look for a model policy, a vendor promise, or a training session that lets them say the issue has been handled. What they actually need is a sharper concept of informational boundaries. Which systems are external. Which uses are prohibited. Which data classes never cross into public tools. Which enterprise deployments are genuinely ring-fenced. Which claims by vendors matter contractually and which are just reassuring prose on a website. Without that clarity, the company is not using AI strategically. It is improvising disclosure.

What disciplined companies do differently

The right response is not AI panic. It is operational seriousness. Serious companies stop pretending that all AI usage belongs in one bucket. They distinguish sharply between consumer tools and enterprise deployments. They examine contract terms, retention practices, training rights, disclosure rights, access controls, logging, and internal permissions. They do not let employees guess their way through confidentiality. They make data boundaries legible.

They also stop treating governance as a communications exercise. A policy that says “use AI responsibly” is almost useless. Employees need practical instructions tied to real workflows. Can they paste customer information into a public model? Can they use public AI to summarize contracts? Can they run internal investigation notes through a chatbot? Can they upload product architecture? Can they ask a model to analyze legal exposure? If the organization cannot answer those questions clearly, it is not governed.
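
To make “legible data boundaries” concrete, here is a minimal sketch, in Python, of what an answerable policy could look like: hypothetical data classes mapped to hypothetical tool tiers, with a single yes-or-no check. The class names, tier names, and the specific mappings are illustrative assumptions, not a reference implementation or anyone’s actual policy.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"              # press releases, published docs
    INTERNAL = "internal"          # routine internal material
    CONFIDENTIAL = "confidential"  # customer data, contracts, roadmaps
    PRIVILEGED = "privileged"      # legal fact patterns, investigation notes

class AITier(Enum):
    CONSUMER = "consumer"          # public chatbot, no negotiated terms
    ENTERPRISE = "enterprise"      # negotiated confidentiality, no training rights
    RING_FENCED = "ring_fenced"    # contractually isolated or self-hosted

# Hypothetical policy matrix: which data classes may enter which tool tier.
ALLOWED = {
    AITier.CONSUMER:    {DataClass.PUBLIC},
    AITier.ENTERPRISE:  {DataClass.PUBLIC, DataClass.INTERNAL,
                         DataClass.CONFIDENTIAL},
    AITier.RING_FENCED: {DataClass.PUBLIC, DataClass.INTERNAL,
                         DataClass.CONFIDENTIAL, DataClass.PRIVILEGED},
}

def may_disclose(data: DataClass, tier: AITier) -> bool:
    """Return True only if policy explicitly allows this data class in this tier."""
    return data in ALLOWED[tier]

# The workflow questions above, answered explicitly rather than guessed:
assert not may_disclose(DataClass.CONFIDENTIAL, AITier.CONSUMER)  # customer data into a public model: no
assert not may_disclose(DataClass.PRIVILEGED, AITier.ENTERPRISE)  # investigation notes: ring-fenced only
assert may_disclose(DataClass.PUBLIC, AITier.CONSUMER)            # public material: fine anywhere
```

The point is not the code. The point is that every question in the preceding paragraph resolves to an explicit entry in a matrix someone owns, rather than to an employee’s guess in the moment.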

The better operators will also understand something else that is now becoming unavoidable. Enterprise AI is not just a procurement category. It is a legal and control question. A platform with negotiated confidentiality terms, tighter access controls, and explicit restrictions on data use is not the same risk object as a public consumer chatbot. That does not create automatic privilege, and it does not solve every problem, but it changes the control posture materially. Serious firms will treat that distinction as foundational, not optional.

The SEIKOURI view

The most expensive AI mistakes are often the ones that still feel ordinary when they happen. Nobody announces, “Today we will compromise confidentiality.” They say they are saving time. They say they are drafting faster. They say they are just summarizing. They say it is only internal. Then the system boundary turns out to matter more than the user intended.

That is why this story matters. It is not another abstract warning about responsible AI. It is a concrete reminder that legal exposure can be created by a very modern form of old-fashioned carelessness. A public AI tool may feel like software. A court may still see a third party.

And once that distinction becomes expensive, the market will discover what it should have learned earlier. AI governance is not mainly about what the model can generate. It is about what the organization is willing to expose.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.