
The easiest AI governance story to understand is the disclosure story. Employees dump sensitive material into public tools. Lawyers discover that confidentiality is not a vibe. Executives learn, a little too late, that “we thought it was private” is not a legal doctrine. That is the world behind the recent attention to United States v. Heppner, in which Judge Jed Rakoff treated communications with a publicly available generative AI platform as disclosure to a third party rather than some new, mystical category of protected interaction. Reuters correctly framed that case as a warning about confidentiality, privilege, and the exposure of trade secrets. The Harvard Law Review correctly noted that the opinion raises broader questions about how courts will understand AI’s role in legal work.
That is the first problem. The second problem begins the moment a company believes it has solved the first one.
Once an organization moves away from public tools and into enterprise AI, the conversation usually becomes self-congratulatory. The model is private. The vendor terms are better. The deployment is contained. The logs exist. The system sits inside approved infrastructure. Everyone relaxes because the old nightmare was leakage, and leakage now appears controlled.
But privacy and containment answer only one question. They answer where the information goes. They do not answer what the system may do.
That distinction sounds technical until it becomes operational. A model that reads internal files is one thing. A model that drafts recommendations is another. A model that can trigger workflows, modify records, send approvals, initiate customer communications, move money, alter code, or escalate actions inside production systems is something else entirely. Enterprises keep talking about access because access feels familiar. Security teams know how to talk about authentication, identity, and perimeter controls. But AI systems are rapidly moving from passive access to active agency, and the old mental model does not survive that shift.
The category error is subtle but dangerous. Authenticated access inside the perimeter is being mistaken for permission to act.
This is where a lot of AI governance still lives in the past. Companies are governing as if the main danger were an employee pasting confidential text into a chatbot window. That remains a risk, but it is no longer the only serious one, and in some organizations it may not even be the biggest one.
The more mature the enterprise deployment becomes, the more the risk profile changes. The system is no longer just summarizing documents. It is connected to calendars, ticketing systems, CRMs, procurement tools, code repositories, customer support queues, internal knowledge bases, and communication platforms. It has tool access. It can call APIs. It can perform chained actions. It can trigger downstream consequences that are operationally real before anyone has had time to debate whether the output was “good enough.”
This is precisely why current control language matters. NIST’s Cybersecurity Framework Profile for AI states that AI systems should be treated separately from other entities in a network and should require their own permissions and authorization policies. It goes further and explicitly says organizations should apply least privilege to AI agents by granting only the permissions necessary to carry out their role. That is not a decorative recommendation. It is a recognition that AI systems are no longer just software features. They are emerging actors inside enterprise workflows and need to be governed accordingly.
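What least privilege looks like in code is not mysterious. A minimal sketch, assuming a hypothetical internal tool registry (scope names like tickets.read are invented for illustration): the agent gets its own identity, an explicit allowlist, and a default of denial.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent gets its own identity and policy, separate from any human user."""
    name: str
    # Explicit allowlist: anything not granted here is denied by default.
    granted_scopes: frozenset[str] = field(default_factory=frozenset)

    def is_allowed(self, scope: str) -> bool:
        return scope in self.granted_scopes

# Least privilege: only the scopes this agent's role actually requires.
support_summarizer = AgentIdentity(
    name="support-summarizer",
    granted_scopes=frozenset({"tickets.read", "kb.read"}),
)

assert support_summarizer.is_allowed("tickets.read")
assert not support_summarizer.is_allowed("tickets.close")     # can read, cannot act
assert not support_summarizer.is_allowed("payments.execute")  # out of role entirely
```

The point of the sketch is its shape: the agent's permissions are enumerable, auditable, and separate from any human account the agent happens to serve.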
The phrase “inside the perimeter” is doing too much emotional work here. It suggests safety, legitimacy, and containment. It implies that if the model runs in the right environment, the governance problem has been substantially solved. But that is legacy perimeter thinking. NIST’s zero trust framework made the broader point years ago: perimeter defenses do not settle the question of whether access decisions are accurate, granular, least-privilege, and enforced per request. The perimeter is not the point. The decision is the point.
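A sketch of what per-request decisions mean in practice, again with invented names and placeholder checks: every tool call passes through a policy decision point at the moment of the call, and network position never appears as an input.

```python
from dataclasses import dataclass, field

# Hypothetical policy table: which actions each agent may request at all.
POLICY = {"support-summarizer": {"tickets.read", "kb.read"}}

SENSITIVE_PREFIXES = ("hr/", "legal/")  # illustrative data classification

@dataclass(frozen=True)
class ToolRequest:
    agent: str                                    # which system is asking
    action: str                                   # e.g. "tickets.read"
    target: str                                   # the specific resource
    context: dict = field(default_factory=dict)   # ambient signals for this call

def authorize(req: ToolRequest) -> bool:
    """Policy decision point, evaluated on every request, never cached per session.

    Note what is absent: there is no check for 'is this inside the perimeter'.
    """
    if req.action not in POLICY.get(req.agent, set()):
        return False   # deny by default
    if req.target.startswith(SENSITIVE_PREFIXES) and not req.context.get("human_reviewed"):
        return False   # sensitive targets require a human in the loop
    return True

# Same agent, same session: each call is judged on its own merits.
print(authorize(ToolRequest("support-summarizer", "tickets.read", "tickets/8841")))  # True
print(authorize(ToolRequest("support-summarizer", "tickets.read", "hr/records/17"))) # False
```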
That logic becomes even more important when the “subject” making the request is not just a human user but a semi-autonomous system acting through tools.
The next stage of AI governance will be defined by a move from access control to authority control. Access control asks whether a system can reach a resource. Authority control asks whether the system should be allowed to produce a consequence.
Those are not the same thing, and enterprises keep blurring them because most digital systems were not designed around machine judgment. Traditional software executes instructions within bounded pathways. AI systems increasingly generate those pathways on the fly. They interpret prompts, weigh ambiguous context, choose tools, sequence tasks, and produce outputs that can look procedural enough to pass through weak oversight. The danger is not only that they hallucinate. The danger is that they may do the wrong thing in a context where the organization has already endowed them with legitimate procedural power.
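The distinction can be made concrete. A sketch, with illustrative names: an access check asks whether the agent can reach a resource, while an authority check asks whether the consequence it wants to produce sits at or below the ceiling the organization has actually delegated.

```python
from enum import Enum

class Consequence(Enum):
    READ = 0       # no state change
    DRAFT = 1      # produces text for a human to act on
    EXECUTE = 2    # changes the world: sends, approves, moves money

# Access control: can the system reach this resource at all?
def has_access(agent_scopes: set[str], resource: str) -> bool:
    return resource in agent_scopes

# Authority control: may the system produce this consequence?
def has_authority(delegated: dict[str, Consequence], resource: str,
                  consequence: Consequence) -> bool:
    ceiling = delegated.get(resource, Consequence.READ)
    return consequence.value <= ceiling.value

# The same agent can reach invoices (access) while being limited to
# drafting a payment, never executing one (authority).
agent_scopes = {"invoices"}
delegated = {"invoices": Consequence.DRAFT}

print(has_access(agent_scopes, "invoices"))                       # True
print(has_authority(delegated, "invoices", Consequence.DRAFT))    # True
print(has_authority(delegated, "invoices", Consequence.EXECUTE))  # False
```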
That is why the next governance debate will not be about model intelligence in the abstract. It will be about delegated authority.
Can the system approve a payment or only draft one for review. Can it send a legal response or merely prepare language. Can it update customer records or only recommend a change. Can it terminate access, modify code, create a public statement, or trigger a compliance workflow. Can it act under a senior employee’s identity, or is every high-risk step broken out into separate approvals, narrower scopes, and reversible checkpoints.
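Each of those questions maps to a mechanism. One possible answer, sketched with hypothetical names: high-risk steps are never performed under a borrowed human identity; the agent files a proposal under its own identity, and a named human approves before anything irreversible happens.

```python
from dataclasses import dataclass
import uuid

@dataclass
class Proposal:
    """A high-risk action the agent may prepare but not perform."""
    id: str
    agent: str                   # the agent acts under its own identity,
    action: str                  # never under a senior employee's credentials
    payload: dict
    approved_by: str | None = None

PENDING: dict[str, Proposal] = {}

def propose(agent: str, action: str, payload: dict) -> str:
    p = Proposal(id=str(uuid.uuid4()), agent=agent, action=action, payload=payload)
    PENDING[p.id] = p
    return p.id                  # the agent gets a ticket back, not a result

def approve_and_execute(proposal_id: str, approver: str) -> None:
    p = PENDING.pop(proposal_id)
    p.approved_by = approver     # a named human owns the consequence
    # stand-in for the real side effect:
    print(f"executing {p.action} for {p.agent}, approved by {approver}")

pid = propose("finance-agent", "payments.send", {"amount": 1200, "vendor": "acme"})
approve_and_execute(pid, approver="j.alvarez")
```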
Those questions sound almost boring compared with frontier rhetoric about superintelligence and existential risk. They are not boring. They are where the institutional damage will happen.
OWASP’s current security guidance for Model Context Protocol (MCP) deployments is already pointing in this direction. It stresses minimum permissions, scoped credentials, narrow OAuth scopes, and the separation of privileged actions. In plain English, that means an AI-connected system should not inherit broad power just because integration is technically possible. The convenience of orchestration is not a governance model.
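Operationally, narrow scopes might look like the following sketch, with an invented token issuer standing in for whatever identity provider an organization actually runs: credentials are minted per task, expire quickly, and privileged scopes can never be bundled with routine ones.

```python
import time
from dataclasses import dataclass

PRIVILEGED = {"payments.execute", "records.delete", "access.revoke"}

@dataclass(frozen=True)
class Token:
    scopes: frozenset[str]
    expires_at: float

def mint_token(requested: set[str], ttl_seconds: int = 300) -> Token:
    """Hypothetical issuer: short-lived, per-task credentials.

    Privileged scopes are issued alone, never bundled with routine ones,
    so a broad 'integration token' cannot exist by construction.
    """
    privileged = requested & PRIVILEGED
    if privileged:
        if requested != privileged or len(privileged) > 1:
            raise PermissionError("a privileged scope must be requested alone")
    return Token(scopes=frozenset(requested), expires_at=time.time() + ttl_seconds)

mint_token({"tickets.read", "kb.read"})        # routine bundle, fine
mint_token({"payments.execute"})               # privileged, issued alone
# mint_token({"payments.execute", "kb.read"})  # would raise PermissionError
```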
The most dangerous AI failures in enterprise environments may not look like failures at first. They will look authorized.
A public chatbot leak is noisy. A mistaken disclosure feels obviously wrong. A privilege waiver has a recognizable shape. But an internal system performing an internal action with valid credentials inside approved infrastructure can look completely normal right up until the moment someone asks the only question that matters: who decided this system was allowed to do that.
That is the trap. Institutions are good at controlling outsiders. They are much worse at recognizing when they have over-empowered insiders, especially when the insider is a machine wrapped in enterprise legitimacy.
This is why the next serious AI incident may not begin with a sensational leak. It may begin with a trusted system denying a claim, sending a message, approving a transaction, escalating a disciplinary process, changing a record, or triggering a chain of automated acts that no single human consciously intended. The output may even be plausible. The workflow may be auditable. The logs may be immaculate. And yet the underlying governance design may still be indefensible because the system crossed from support into authority without the institution ever clearly confronting that shift.
That is a harder failure to narrate because nothing obviously “broke.” The permissions existed. The APIs worked. The identity was authenticated. The infrastructure was approved. Everything functioned exactly as configured. The real problem was the configuration of power.
The AI market still rewards capability theater. Vendors show frictionless orchestration, smoother workflows, fewer clicks, faster action, and broader integration. Buyers hear productivity. What they should hear is delegated power.
Real governance starts where the product demo gets exciting. The moment the system can do more than just retrieve and summarize, the discussion has to move beyond privacy, security questionnaires, and generic model risk language. It has to become a discussion about decision rights.
Which actions are low-risk, reversible, and safely automatable. Which actions require human review every time. Which actions require a named approver. Which actions should be impossible for a model to execute regardless of confidence, context, or role. Which credentials are scoped to reading, which to drafting, which to proposing, and which to execution. Where are the hard boundaries. Where are the breakpoints. Where are the kill switches. Where does responsibility remain unmistakably human.
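Those answers can live in a reviewable artifact rather than in scattered integration code. A sketch with illustrative entries, not a prescribed schema: a decision-rights table that names each action's ceiling, its reversibility, and its approver, plus a kill switch that overrides every individual grant.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRight:
    ceiling: str                 # "auto" | "review" | "named_approver" | "never"
    reversible: bool
    approver: str | None = None

# Illustrative entries: the real table is an organizational decision,
# reviewed like policy, not left as an engineering default.
DECISION_RIGHTS = {
    "kb.summarize":      DecisionRight("auto",           reversible=True),
    "crm.update":        DecisionRight("review",         reversible=True),
    "payments.send":     DecisionRight("named_approver", reversible=False, approver="finance-lead"),
    "statement.publish": DecisionRight("never",          reversible=False),
}

KILL_SWITCH = {"engaged": False}   # one global breakpoint, human-operated

def may_execute(action: str, human_approver: str | None = None) -> bool:
    if KILL_SWITCH["engaged"]:
        return False                          # the hard stop beats any grant
    right = DECISION_RIGHTS.get(action)
    if right is None or right.ceiling == "never":
        return False                          # unknown or forbidden: deny
    if right.ceiling == "auto":
        return True
    if right.ceiling == "review":
        return human_approver is not None
    return human_approver == right.approver   # named approver, by name

print(may_execute("kb.summarize"))                                  # True
print(may_execute("payments.send"))                                 # False
print(may_execute("payments.send", human_approver="finance-lead"))  # True
```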
None of this is anti-AI. It is anti-sloppiness.
The mistake is not deploying AI. The mistake is letting enterprise containment masquerade as governance maturity. A system can be private, compliant, authenticated, logged, monitored, and still be given more authority than the organization can rationally defend.
The disclosure panic was the opening act. The authority crisis is the real story.
Heppner matters because it clarified, in brutally old-fashioned terms, that courts may treat generative AI like any other third party when confidentiality is at stake. But the more consequential strategic lesson is what comes next. Once companies respond by bringing AI inside the perimeter, they do not enter a post-risk environment. They enter a different risk environment. One governed less by secrecy and more by power.
That is the transition many organizations still have not understood. They think the hard part is stopping the leak. It is not. The hard part is deciding whether the machine may act in your name.