
AI Governance – The Compliance Parade Left the Data Center

Markus Brinsa | December 23, 2025 | 6 min read


AI governance has finally become fashionable. Panels are full. White papers multiply. Lawmakers confidently pronounce words like “guardrails,” “oversight,” and “responsible innovation.” Everyone nods. Everyone agrees. Everyone believes something is now “under control.”

It isn’t.

What changed in 2025 is not that AI governance suddenly became effective. What changed is that power quietly moved, while most people were still arguing about principles. Governance today is no longer primarily about ethics, transparency, or fairness. It is about leverage. About who controls compute, who controls deployment, who controls liability, and who gets to decide when “safety” becomes optional. 

If you’re still thinking AI governance is a moral debate, you are already behind.

Governance Is No Longer a Philosophy Problem

For years, AI governance lived in the realm of aspirations. Fairness frameworks. Ethical guidelines. Voluntary commitments. Panels featuring the same five experts warning about the same five risks. The assumption was simple: if we could just align incentives and values, the technology would behave.

That assumption collapsed under scale. By the end of 2025, AI systems are embedded deeply enough in economic infrastructure that governance has shifted from “how should AI behave?” to “who gets to pull the plug, rewrite the rules, or look the other way?” That is a political question, not an ethical one.

In the U.S., governance now happens less through law and more through procurement, national security framing, and infrastructure control. Europe, meanwhile, insists on formal regulation—but struggles with enforcement in a system it does not fully command.

Both approaches claim success. Both hide weaknesses. And both are dangerously misunderstood.

The U.S.: Governance by Acceleration, Not Constraint

In Washington, D.C., the word “governance” still exists. It’s just been redefined beyond recognition. The Biden-era attempt to formalize AI safety through executive action is now historical footnote territory. What replaced it under the Trump administration is not an absence of governance, but a reorientation of priorities. Safety is no longer the headline. Competitiveness is. National leadership is. “Removing barriers” is the phrase that matters.

This shift confuses many observers, who interpret deregulation as a lack of control. In reality, the U.S. model of AI governance in late 2025 is indirect but powerful. Instead of regulating models directly, the government influences behavior through defense contracts, federal procurement rules, export controls, and access to compute.

If you want to deploy large‑scale AI in the U.S. economy, you increasingly need alignment with federal priorities. If you want access to government contracts, you will comply—quietly—with evolving standards that are never marketed as regulation but function exactly like it.

This is governance by gravity, not law.

The problem is that this system is opaque by design. There are no clear red lines for consumers. No meaningful right to explanation. No standardized liability regime when AI systems cause harm. The government retains leverage over companies, but citizens retain very little leverage over either. 

The result is a governance model that protects state interests first, corporate interests second, and the public a distant third, if at all.

Congress Is Busy Solving the Wrong Problems

Yes, Congress passed AI‑related legislation in 2025. That fact alone gets celebrated far too often. Most of these laws focus on the most visible harms, not the most structural ones. Deepfake abuse. Non‑consensual imagery. Election interference optics. These issues are real and deserve action—but they are not where power accumulates.

There is still no comprehensive federal framework governing AI liability in hiring, healthcare, credit decisions, or automated enforcement systems. There is no binding requirement for independent audits of high‑impact models. There is no unified disclosure regime explaining where training data comes from or how models behave under stress.

Governance is happening where it is politically safe, not where it is structurally necessary. The illusion of progress is strong. The substance is thin.

Europe: The Most Serious Attempt—and Its Own Worst Enemy

Europe deserves credit for doing what the U.S. refuses to do: writing rules down. The EU AI Act is a real law. It defines risk categories. It bans certain practices outright. It imposes obligations on high‑risk systems and additional scrutiny on general‑purpose models. On paper, it is the most ambitious AI governance framework in the world.

In practice, it exposes a deeper problem: regulation without operational dominance. Europe does not control the leading AI platforms. It does not control the majority of global cloud infrastructure. It does not set the pace of frontier model deployment. Enforcement, therefore, becomes a negotiation, not a command.

By the end of 2025, compliance discussions between regulators and AI providers increasingly resemble trade talks. Timelines stretch. Interpretations soften. Exceptions multiply. The letter of the law exists, but its teeth depend on cooperation from companies whose real centers of gravity sit elsewhere. The EU AI Act works best where Europe already had leverage: procurement, consumer‑facing products, and traditional enterprise software. It struggles precisely where power has shifted fastest: foundation models, model weights, and inference infrastructure.

Europe is governing earnestly. It is just governing uphill.

Ethics Teams Are Gone—And That’s Not an Accident

One of the clearest signals that AI governance matured in the wrong direction is what disappeared. Ethics teams. Internal review boards. Public accountability structures inside companies. They did not fail because ethics stopped mattering. They failed because ethics had no enforcement power. When ethical review conflicted with deployment speed, revenue targets, or geopolitical pressure, it lost. Every time.

By late 2025, governance inside AI companies looks different. Safety is still discussed, but almost always in the language of risk management, not moral obligation. The question is no longer “is this right?” but “is this defensible?” and “will this trigger regulatory friction?” This is not necessarily cynical.

It is rational behavior in a system where responsibility remains diffuse, and consequences remain negotiable.

The Quiet Successes No One Talks About

Not everything is broken. One thing that genuinely improved is model evaluation rigor. Frontier labs now test systems more aggressively than they did two years ago. Red‑teaming is more serious. Certain failure modes are better understood. Catastrophic public meltdowns are less frequent.

Another real improvement is supply‑chain governance. Compute access, chip exports, and cloud concentration are now openly treated as governance mechanisms. This is uncomfortable, but effective. Control the infrastructure, and you control behavior.

What did not improve is public accountability. The people most affected by AI decisions—job seekers, patients, students, welfare recipients—still have almost no recourse when systems fail them.

Governance protects institutions far better than individuals.

The Biggest Lie: “We’re Catching Up”

The most dangerous belief circulating at the end of 2025 is that governance is catching up to technology. It isn’t. What happened instead is that governance changed shape. It moved away from visible rule‑setting and into invisible constraint. Away from ethics and into strategy. Away from public debate and into closed rooms where access is the real currency.

If you are waiting for a single law, agency, or framework that will “solve” AI governance, you are waiting for something that no longer fits the problem. Governance today is fragmented by design.

That fragmentation benefits those who already have power.

So What Actually Needs Fixing

AI governance does not need more principles. It needs friction. It needs clear liability when systems cause harm. It needs mandatory independent audits for high‑impact use cases. It needs transparency, not as a marketing feature but as a legal obligation. It needs enforcement mechanisms that do not depend on corporate goodwill. Most of all, it needs honesty.

Honesty about who benefits from current arrangements. Honesty about who bears the risks. Honesty about the fact that many governance conversations exist primarily to signal control, not exercise it. Until that changes, AI governance will remain what it is at the end of 2025: a system that looks mature, sounds confident, and still leaves the most important questions unanswered.

If that makes you uncomfortable, good. That’s the point.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. He created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 25 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™