The Boardroom Is Not A Private Chat

The fantasy of private thinking

Markus Brinsa | Apr 13, 2026 | 8 min read
A lot of serious people have slipped into the same mental shortcut. They do not exactly believe that a public AI model is their lawyer, their banker, or their board portal. But they behave as if it were a kind of sealed thinking room. The prompt box feels intimate. The exchange feels provisional. The model sounds responsive, discreet, and oddly tailored to the user’s concern. It creates the illusion that what is happening is not disclosure but cognition. You are not sending anything out into the world, the instinct says. You are just thinking with assistance.

That instinct now looks dangerously outdated.

In United States v. Heppner, Judge Jed Rakoff confronted a question that many executives, lawyers, and operators have been quietly avoiding. If a person uses a publicly available generative AI platform while dealing with a legal matter, are those communications protected by attorney-client privilege or the work-product doctrine? His answer was no. Not because AI is unusually sinister. Not because the law has developed a brand-new anti-model doctrine. But because the old rules of confidentiality still apply, and the court was not willing to pretend that a public AI platform somehow sits outside them.

That is what makes the case more important than the headline. The real force of Heppner is not that a judge said something skeptical about AI. The real force is that he treated the system as something much more ordinary and therefore much more dangerous to careless users: a third party.

Rakoff’s move was doctrinal, not theatrical

The facts matter because they strip away comforting abstractions. The government had seized materials that included roughly thirty-one documents memorializing Heppner’s communications with Claude. Heppner argued that those documents reflected defense strategy, were prepared in anticipation of a possible indictment, and were later shared with counsel. But his counsel also conceded that they had not directed him to run the Claude searches. The court then walked through the privilege analysis and found at least two, and possibly all three, elements missing.

Rakoff’s reasoning was blunt. Claude was not an attorney. The communications were not confidential. And Heppner had not communicated with Claude for the purpose of obtaining legal advice from counsel in the legally relevant sense the court was prepared to recognize. The opinion states that even if some information input into Claude had once been privileged, that privilege was waived by sharing it with Claude and Anthropic, “just as if he had shared it with any other third party.” The court also leaned on Anthropic’s privacy policy in concluding that Heppner lacked a reasonable expectation of confidentiality.

This is the key shift. Rakoff did not need a grand theory of machine personhood. He did not need to decide whether AI is like a human assistant, a calculator, a cloud drive, or something entirely new. He reached for settled waiver logic. Voluntary disclosure to a third party defeats confidentiality. Once the court saw the platform through that lens, the rest followed.

That move is easy to underestimate because it feels narrow. It arose in a criminal case. It involved a specific user, a specific platform, and a specific evidentiary dispute. But the underlying logic is much broader than the setting. Reuters was right to frame the case as an early warning for any doctrine that depends on secrecy or confidentiality as a condition of protection.

The legal fight is narrower than the governance lesson

The Harvard Law Review blog is useful precisely because it shows where the case is contestable. The post argues that Rakoff’s confidentiality analysis may have been too quick, that courts do not automatically treat every third-party software environment as fatal to a reasonable expectation of confidentiality, and that a more fact-intensive inquiry might have produced a more nuanced result. It also points to the possibility that privacy settings, opt-outs, and the tool-like character of AI systems could matter more than the opinion allowed.

That is an important legal point. Heppner is not the end of the doctrinal conversation. Another court could distinguish it. A different fact pattern could narrow it. A more tightly controlled enterprise environment could produce a different outcome. Even Rakoff acknowledged, in substance, that counsel-directed use might look more like an agent relationship than self-directed interaction with a public platform.

But governance people should resist the urge to hide inside that uncertainty. Whether Rakoff’s reasoning becomes universal black-letter law is not the main strategic question. The main strategic question is whether a board, a CEO, a general counsel, or an investment committee should build policy around the hope that a future court will be more forgiving than this one.

That is not a governance model. That is wishful drafting.

The mistake many organizations are making is to hear “privilege case” and file Heppner away as a lawyer problem. It is not. It is a confidentiality classification problem dressed in litigation clothes. Once you recognize that, the blast radius widens immediately.

Trade secrets are the next obvious fault line

Trade-secret law does not ask whether your information was valuable in your own imagination. It asks, among other things, whether you took reasonable measures to keep it secret. Reuters makes this point directly, and it is the most commercially important extension of Heppner. If courts begin to treat uploads of proprietary information to consumer AI tools as disclosure to a third party inconsistent with meaningful secrecy safeguards, trade-secret protection becomes harder to defend.

That risk is not theoretical. It sits inside ordinary behavior. An executive drops a pricing framework into a model and asks for negotiation angles. A product lead pastes roadmap notes into a chatbot for synthesis. A strategy team uploads internal market assumptions for a cleaner board narrative. An M&A team asks a model to compare target-company weaknesses against internal diligence findings. A founder rewrites investor updates using confidential customer churn data. None of this feels dramatic while it is happening. It feels efficient.

That is exactly why it is dangerous.

The old trade-secret world was built around familiar control points: NDAs, access restrictions, data rooms, document markings, role-based permissions, logging, internal policies, vendor diligence. The new problem is that the most senior and trusted people in the company can now route sensitive information through consumer interfaces that feel frictionless, helpful, and informal. The posture shifts from controlled circulation to casual externalization. Once that behavior becomes normalized, the organization’s claim that it used reasonable measures to preserve secrecy starts to look thinner.

The most damaging version of this is not the employee who does something obviously reckless. It is the capable executive who believes he is still operating inside a private thinking environment. That executive is not trying to leak. He is trying to work faster. But from a court’s point of view, from an adversary’s point of view, or from a future discovery fight’s point of view, intent may matter less than the fact of disclosure.

Boardroom exposure is broader than most boards realize

The boardroom angle is even more underappreciated because directors and senior executives often treat AI as a drafting layer rather than an infrastructure decision. They assume the real governance questions concern model vendors, cyber controls, or future regulation. Meanwhile, the immediate issue is much simpler. Where, exactly, is sensitive deliberation going?

Consider the kinds of materials that now routinely pass through AI systems in modern organizations: draft earnings language, scenario analysis, restructuring options, whistleblower-response framing, litigation exposure summaries, internal investigation notes, CEO succession thinking, activist defense narratives, diligence questions, crisis communications, regulatory positioning, acquisition rationale, integration planning, and downside modeling. These are not abstract “data.” They are concentrated strategic intent.

When such material is placed into a public or lightly governed model environment, the problem is not only whether the content might resurface somewhere else. The problem is that the organization may be undermining its own legal, strategic, and evidentiary posture at the moment it is generating it. The issue is not just leakage in the popular sense. It is exposure in the governance sense. Exposure means you have changed the character of the information by how you handled it.

That matters in board settings because boards are supposed to think in gradients of control, not in consumer convenience. A board that allows management to use public AI systems for sensitive drafting without a clear classification architecture is effectively permitting a shadow disclosure regime. It may be efficient in the short run. It is indefensible in the long run.

The infrastructure question underneath the habit

This is where the deeper EdgeFiles frame comes into view. Heppner is not merely about privilege. It is about what happens when cognitive infrastructure is outsourced before governance catches up.

Consumer AI systems encourage users to experience infrastructure as interface. You see the clean prompt field, the responsive prose, the apparent conversational continuity. You do not feel the legal architecture behind the interaction. You do not naturally experience it as vendor dependency, policy exposure, contractual allocation of rights, retention logic, discoverability risk, or chain-of-custody complexity. The interface is intimate. The infrastructure is not.

That mismatch is becoming a central governance problem of the AI era.

Organizations have spent years building doctrines, controls, and workflows around documents, communications, storage environments, and counterparties. Public AI systems blur those categories. They can look like software, feel like colleagues, function like outsourced analysis, and behave like data-processing intermediaries all at once. That ambiguity is commercially useful for adoption. It is terrible for governance. It tempts users to move between categories without noticing the legal consequences of the move.

The lesson from Heppner is that courts may be far less impressed by that ambiguity than product adoption teams are. When the pressure arrives, a judge may simply ask whether you disclosed sensitive material to a third party under conditions inconsistent with confidentiality. If the answer is yes, the metaphysics of “thinking with AI” will not save you.

Governance has to become architectural

The wrong response to this case is a generic warning telling employees to “be careful with AI.” That is the modern equivalent of putting a polite sign on a broken railing.

The right response is architectural. Organizations need a clear internal distinction between public AI, enterprise AI, privately deployed models, and approved domain-specific systems. They need an information-classification regime that maps types of material to types of environments. They need explicit rules for legal matters, investigations, deal work, board materials, regulated data, proprietary technical content, and commercially sensitive strategic analysis. They need logging, approvals, procurement discipline, retention logic, and policy language that matches actual technical deployment rather than PR assumptions about privacy.
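To make the classification idea concrete, here is a minimal, purely illustrative sketch in Python. The sensitivity tiers, environment names, and routing rules are hypothetical placeholders rather than a recommended taxonomy; the point is only that the mapping from type of material to permitted AI environment can be written down as explicit, auditable policy instead of living in each user's judgment at the prompt box.

```python
# Illustrative only: hypothetical sensitivity tiers and AI environments.
# A real deployment would tie these to the organization's existing
# data-classification policy and enforce them in procurement and DLP tooling.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1          # published or publishable material
    INTERNAL = 2        # routine internal documents
    CONFIDENTIAL = 3    # deal work, board materials, proprietary analysis
    PRIVILEGED = 4      # legal matters, investigations

class Environment(Enum):
    PUBLIC_CONSUMER_AI = "public consumer AI"        # third-party consumer terms
    ENTERPRISE_AI = "enterprise AI"                  # contractual controls on inputs
    PRIVATE_DEPLOYMENT = "privately deployed model"  # inside the organization's boundary

# Policy: which environments are approved for which sensitivity tier.
APPROVED = {
    Sensitivity.PUBLIC: {
        Environment.PUBLIC_CONSUMER_AI,
        Environment.ENTERPRISE_AI,
        Environment.PRIVATE_DEPLOYMENT,
    },
    Sensitivity.INTERNAL: {
        Environment.ENTERPRISE_AI,
        Environment.PRIVATE_DEPLOYMENT,
    },
    Sensitivity.CONFIDENTIAL: {
        Environment.PRIVATE_DEPLOYMENT,
    },
    Sensitivity.PRIVILEGED: set(),  # no AI environment without counsel sign-off
}

def is_permitted(material: Sensitivity, target: Environment) -> bool:
    """Return True if policy allows routing this class of material to this environment."""
    return target in APPROVED[material]

if __name__ == "__main__":
    # A confidential board draft aimed at a public chatbot should be refused and logged.
    print(is_permitted(Sensitivity.CONFIDENTIAL, Environment.PUBLIC_CONSUMER_AI))  # False
    print(is_permitted(Sensitivity.INTERNAL, Environment.ENTERPRISE_AI))           # True
```

In practice, a rule set like this would sit behind approvals, logging, and retention controls rather than in a standalone script, but the discipline is the same: the routing decision is made by architecture, not by an individual's sense of discretion in the moment.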

Most of all, they need to stop treating AI use as a productivity preference and start treating it as an operating-environment choice.

That shift is cultural as much as technical. Senior people are often the least governable users because they assume judgment exempts them from system design. But Heppner points in the opposite direction. Judgment without infrastructure is exactly what creates exposure. The more senior the user and the more strategic the content, the less tolerable casual tool choice becomes.

The deeper message

The comforting sentence many executives still tell themselves is that they are only using AI to help them think. The trouble is that the law may not care what the activity felt like from the inside. It may care what the activity was from the outside.

That is the deeper significance of Heppner. It turns a fuzzy modern habit into an old legal fact pattern. The prompt box is not a mind. It is an environment. And once that environment is operated by someone else, governed by someone else’s terms, and accessible under someone else’s policies, your organization is no longer merely thinking. It is disclosing.

That is a hard sentence for AI adoption programs built on convenience to absorb. But it is the sentence boards, general counsel, founders, and operating leaders need to internalize now, before more of their sensitive reasoning migrates into systems they do not actually control.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.