
The Control Gap

Why AI governance fails when policy cannot touch the machine

Markus Brinsa · Apr 7, 2026 · 7 min read


The dangerous idea that policy is enough

Enterprise AI governance still suffers from an old corporate fantasy. The fantasy is that language governs behavior. Write the policy. Define acceptable use. Tell employees not to paste confidential material into the wrong tools. Require legal review for sensitive cases. Add procurement questions. Run a training session. Put a reassuring sentence in the vendor file. Then assume the institution has done what serious institutions do.

This is the managerial version of crossing yourself before takeoff. It makes people feel better. It changes very little. 

The weakness is not that policy has no role. The weakness is that policy is still being treated as the control layer itself. In many organizations, written intent and system behavior are only loosely related. The rule says one thing. The workflow permits another. The system logs a third thing. The vendor contract leaves a fourth thing ambiguous. Employees operate in the middle and call that “using AI responsibly.”

 That arrangement was always brittle. AI is simply making the brittleness visible.

The cleaner thesis and why it seduces people

Once people realize that policy alone is weak, a sharper thesis rushes in to fill the vacuum. The thesis is this: AI governance is not really a policy problem. It is a technical enforcement problem. If the institution can verify provenance, seal records, authenticate actions, harden logs, and create a tamper-evident chain of execution, governance stops being theater and becomes machinery.

That is the serious version of the “cryptography fixes AI governance” claim.

It is appealing for obvious reasons. It offers an escape from institutional mush. It replaces interpretation with evidence, trust with verification, and policy language with system discipline. It promises a world in which the organization no longer has to rely on vague employee judgment or cheerful vendor assurances. The machine itself becomes the boundary.

 That is a real advance over governance by memo. It is also not enough.

The legal signal is not subtle 

The legal warning here is more straightforward than much of the AI industry wants to admit. In United States v. Heppner, Judge Jed Rakoff held that communications with a publicly available generative AI platform were not protected by attorney-client privilege or the work product doctrine. Reuters, in analyzing the decision, argued that the opinion reaches beyond privilege into the logic of trade-secret protection, because trade-secret law also depends on meaningful efforts to preserve confidentiality. Harvard Law Review’s analysis notes that the dispute is not just about novelty or hype. It is about whether old doctrines of confidentiality and third-party disclosure still apply to new interfaces. They do. 

That is the point many executives still refuse to internalize. A public AI interface does not become a private environment because it feels useful, intelligent, or software-like. Productivity does not erase disclosure. Fluency does not create confidentiality. A model that helps with drafting is still a third-party system if the surrounding conditions say it is. 

This matters far beyond law firms. The same logic sits over strategy decks, board materials, internal investigations, M&A scenarios, roadmap discussions, investor messaging, product concepts, and any other material whose value depends in part on controlled exposure.

If a company handles sensitive information through systems with weak or poorly understood boundaries, the damage may begin long before any model produces an obviously bad answer. 

Why the architecture people are right for half the argument 

The technical-enforcement camp is correct about one thing that the policy camp still evades. Governance that cannot shape execution is weak governance.

NIST’s AI Risk Management Framework playbook pushes organizations to define terms and intended uses, connect AI governance to existing organizational risk controls, and align AI governance with broader data-governance practices, especially around sensitive or risky data. That is not the language of soft principles floating above the system. It is the language of governance that has to connect to actual controls. 

The same direction appears in security guidance. OWASP’s Secure AI Model Ops guidance emphasizes access control, artifact integrity, auditability, and the secure handling of model assets and logs. C2PA’s security considerations likewise treat provenance and cryptographic binding as part of the trust problem for digital artifacts. In plain English, the standards world is already telling organizations that trust cannot remain a policy-only construct. It has to be attached to evidence and control. 
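To make "auditability" and "tamper evidence" concrete, here is a minimal sketch of the kind of mechanism this guidance points toward: an append-only log in which each entry is hash-chained to its predecessor and signed, so silent edits break verification. The class name, the HMAC scheme, and the placeholder key are illustrative assumptions, not a reference to any specific OWASP or C2PA implementation; a production system would keep the key in an HSM or KMS and likely use asymmetric signatures.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key for the sketch; real deployments would manage this in an HSM/KMS.
SIGNING_KEY = b"replace-with-a-managed-secret"


class TamperEvidentLog:
    """Append-only log where each entry commits to the previous entry's hash,
    so deletions or edits anywhere in the chain fail verification."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def append(self, actor: str, action: str, payload: dict) -> dict:
        body = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "payload": payload,
            "prev_hash": self._prev_hash,
        }
        canonical = json.dumps(body, sort_keys=True).encode()
        entry_hash = hashlib.sha256(canonical).hexdigest()
        signature = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
        entry = {**body, "hash": entry_hash, "sig": signature}
        self.entries.append(entry)
        self._prev_hash = entry_hash
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("ts", "actor", "action", "payload", "prev_hash")}
            canonical = json.dumps(body, sort_keys=True).encode()
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(canonical).hexdigest() != entry["hash"]:
                return False
            expected_sig = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(entry["sig"], expected_sig):
                return False
            prev = entry["hash"]
        return True


# Usage: record a model invocation, then prove the record has not been altered.
log = TamperEvidentLog()
log.append("analyst@corp", "model_call", {"env": "enterprise_tenant", "data_class": "internal"})
assert log.verify()
```

A sketch like this proves what happened. As the next sections argue, it says nothing about what should have been allowed to happen in the first place.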

That is why the architecture critique bites so hard. A company can have principles, committees, approvals, and employee training while still lacking durable audit trails, clear routing restrictions, meaningful data segmentation, and technical boundaries that map to the seriousness of the information being handled. In that setting, governance is mostly mood music.

So yes, the machine has to be touched. Policy that cannot touch the machine is mostly prose.

Why “cryptography fixes AI governance” still collapses 

But the moment that argument becomes total, it breaks. Cryptographic integrity, signed logs, provenance controls, and tamper evidence can strengthen enforcement. They can make the system more legible, more defensible, and less dependent on blind trust. In some high-sensitivity environments, they may be indispensable. But they do not decide what deserves protection, which workflows are forbidden, what level of exposure is unacceptable, or when a business is willing to trade flexibility for control.

That is where the slogan falls apart.

Governance is not only about proving what happened. It is about deciding what should be allowed to happen at all.

No cryptographic layer can define the institution’s confidentiality boundaries for it. No signing scheme can tell the company whether a board memo belongs in a public model, whether a product prompt contains strategic disclosure, whether a finance workflow requires a contained environment, or whether the convenience of a tool justifies the retention posture attached to it. Those are governance choices before they are engineering choices.  

This is the mistake many technically serious people still make. They assume the governance problem begins after the institution has already decided what matters. In reality, many organizations have not done that work cleanly. They have fuzzy data classes, weak escalation rules, undefined exception paths, and highly selective executive attention. If that organization builds a strong enforcement layer, it may simply become better at enforcing its own confusion.

That is not maturity. That is expensive clarity about the wrong thing.

The real divide is prose versus control

The useful distinction is not governance versus architecture. The useful distinction is prose versus control. One class of organization still governs in prose. It writes rules, states values, drafts guidelines, and hopes the words will somehow discipline workflows that were built elsewhere. This class mistakes documentation for control. It talks about responsible AI as though responsibility were mainly a communication exercise.

Another class is beginning to govern through control logic. It maps data classes to environments. It maps contract terms to configurations. It maps workflow sensitivity to access conditions, escalation thresholds, and evidence requirements. It does not assume that employees will remember every boundary in the heat of work. It makes the system carry some of that burden.
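What governing through control logic can look like in practice is easiest to see as policy-as-code. The sketch below is a minimal, hypothetical routing table; the data classes, environment names, and escalation behavior are assumptions an organization would have to define for itself, which is exactly the institutional work this article argues cannot be skipped.

```python
from dataclasses import dataclass
from enum import Enum


class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    PRIVILEGED = "privileged"  # e.g., legal, M&A, board material


class Environment(Enum):
    PUBLIC_MODEL = "public_model"              # third-party, shared-tenant service
    ENTERPRISE_TENANT = "enterprise_tenant"    # contractually contained deployment
    PRIVATE_DEPLOYMENT = "private_deployment"  # self-hosted, no external retention


# Illustrative policy table: which data classes may be routed to which environments.
ROUTING_POLICY = {
    DataClass.PUBLIC: {Environment.PUBLIC_MODEL, Environment.ENTERPRISE_TENANT, Environment.PRIVATE_DEPLOYMENT},
    DataClass.INTERNAL: {Environment.ENTERPRISE_TENANT, Environment.PRIVATE_DEPLOYMENT},
    DataClass.CONFIDENTIAL: {Environment.PRIVATE_DEPLOYMENT},
    DataClass.PRIVILEGED: set(),  # never leaves counsel-approved systems without an explicit exception
}


@dataclass
class RoutingDecision:
    allowed: bool
    reason: str
    requires_logging: bool = True  # every decision produces evidence, allowed or not


def route_request(data_class: DataClass, target: Environment) -> RoutingDecision:
    """Enforce the policy table instead of relying on employees to remember it."""
    if target in ROUTING_POLICY[data_class]:
        return RoutingDecision(True, f"{data_class.value} permitted in {target.value}")
    return RoutingDecision(False, f"{data_class.value} blocked from {target.value}; escalation required")


# Example: a board memo classified as PRIVILEGED never reaches a public model.
decision = route_request(DataClass.PRIVILEGED, Environment.PUBLIC_MODEL)
print(decision.allowed, decision.reason)
```

The table itself is the hard part. The code only enforces whatever classification and boundary decisions the institution has actually made.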

That second model is much harder to build because it forces the institution to answer questions it would rather postpone. What are the real categories of sensitive information? Which uses are prohibited outright? Which uses require contained environments? What can leave the boundary, and under what conditions? What evidence has to exist later if a regulator, court, board, or counterparty asks what happened? NIST’s guidance explicitly points organizations toward this kind of connected governance rather than free-floating policy language.

That is the practical future of serious AI governance. Not prettier policy. Not technical fetishism. Control logic.

What this changes in the market

The market consequence is larger than many vendors would like. Enterprise AI will not sort cleanly into “best models” and “worse models.” It will sort into governable environments and ungovernable ones. Buyers in higher-sensitivity settings will care less about generic productivity rhetoric and more about whether the provider can support defensible boundaries, meaningful logs, clear retention assumptions, and an auditable relationship between policy and execution. Reuters’ analysis of the Heppner dispute points directly at that pressure by linking confidentiality doctrine to practical handling choices. 

That shifts value toward control-plane design. It raises procurement standards. It changes what enterprise readiness means. It also creates a sharper split inside companies themselves. Some organizations will use AI as a convenience layer and spend years pretending governance can catch up later. Others will treat AI as a control problem from the start and will build slower, stricter, more defensible systems. The second group may look less flashy. It will also be far harder to embarrass, sue, or corner.

The harder conclusion 

The old corporate habit was to assume governance could remain linguistic while the machinery ran elsewhere. AI is making that habit harder to sustain.

A policy with no mechanical relationship to execution is a wish. A sophisticated enforcement architecture built on undefined boundaries is a disciplined mistake.

That is why “cryptography fixes AI governance” deserves to be stated plainly and then withdrawn. It identifies something real, which is the need for enforceable technical control. But it fails as a total theory because it skips the institutional work that determines what the controls are even for.

The real challenge is not writing better AI principles. It is not worshiping stronger technical machinery either. It is building an organization in which definitions, permissions, evidence, boundaries, and system behavior all belong to the same governance reality.

 That is when policy stops being prose.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.