SEIKOURI Inc.

I Am Not Against AI. I Am Against Unserious AI.

Markus Brinsa | May 5, 2026 | 7 min read


Are you against AI? – The question says more about the market than about the answer

After publishing more than 150 AI-related articles in roughly 18 months, I have been asked a question often enough that it now feels less like curiosity and more like a symptom of the moment.

“Are you against AI?” – The answer is simple. No.

But the fact that the question keeps coming up is revealing. It shows how badly the AI conversation has been distorted by marketing pressure, executive anxiety, vendor incentives, and the performative urgency that now surrounds almost every technology discussion.

In a healthier market, critical analysis would not be confused with opposition. Asking whether an AI system is reliable enough for a business-critical workflow would not be treated as resistance. Asking whether a deployment has governance, oversight, escalation, and evidence behind it would not sound like fear. Asking whether an autonomous agent should actually be autonomous would not be interpreted as a lack of imagination.

But that is where we are.

The current AI market often treats scrutiny as hostility and enthusiasm as competence. That is not a good sign. It means the conversation has moved faster than the discipline required to support it.

I am not against AI. I am against unserious AI.

The problem is not the technology. It is the behavior around it

AI is one of the most important technological shifts of our time. It is already changing knowledge work, software development, content production, customer operations, research, analysis, marketing, compliance, and corporate decision-making. It will continue to alter how organizations operate, compete, allocate capital, manage risk, and structure human-machine collaboration.

That is precisely why it should not be treated as a branding exercise.

Too much of the current market operates as if putting an AI label on a product automatically makes it strategic. A tool becomes “AI-powered,” a workflow becomes “agentic,” a department becomes “transformed,” and a company declares itself future-ready before anyone has done the harder work of defining use cases, controls, accountability, data exposure, evaluation standards, and failure response.

This is not transformation. It is vocabulary inflation.

AI adoption is not serious because a company uses the word “agent.” It is serious when the company understands what the system is allowed to do, what it is not allowed to do, how outputs are evaluated, where humans intervene, what evidence is retained, how exceptions are escalated, and who is accountable when the system causes harm.

The same applies to executive leadership.

An executive who wants AI because it sounds modern is not leading. An executive who wants AI to reduce cost without understanding operational, legal, reputational, and workforce consequences is not transforming the business. An executive who believes AI governance is a bureaucratic drag is not accelerating innovation. They are removing the controls that make durable innovation possible.

The issue is not AI. The issue is the gap between technological ambition and organizational maturity.

Governance is not the enemy of adoption

One of the most damaging misconceptions in the AI debate is that governance slows progress.

It can, if it is badly designed. So can finance, legal, procurement, cybersecurity, compliance, HR, and every other function that exists because organizations eventually learned that unchecked enthusiasm is not an operating model.

Good governance does not exist to stop AI. It exists to make AI usable at scale.

It defines the boundaries within which experimentation can happen safely. It clarifies decision rights. It distinguishes low-risk productivity use from high-risk operational deployment. It sets escalation paths. It creates evidence trails. It makes sure that a promising pilot does not quietly become an uncontrolled production system. It forces organizations to ask whether they are automating work they understand or simply accelerating processes they have never properly examined.

That is not anti-innovation.

That is how serious companies move from experimentation to adoption without gambling their reputation on a demo.

The organizations that will benefit most from AI are not necessarily the ones making the loudest announcements today. They will be the ones that know how to integrate AI into workflows, incentives, controls, and accountability structures. They will know when to automate, when to augment, when to keep humans in the loop, and when not to deploy at all.

In other words, the winners will not be the companies that “use AI.” They will be the companies that use AI well.

The agent narrative needs adult supervision

The current fascination with AI agents is a useful example. There are legitimate and powerful use cases for agentic systems. Software agents can coordinate tasks, retrieve information, trigger workflows, monitor systems, draft outputs, and assist with complex operational processes. Properly designed, they can reduce friction and expand organizational capacity.

But the market has jumped from useful automation to existential salesmanship with impressive speed.

We are now told that companies must use agents or fall behind. We are told that agentic workforces are inevitable. We are told that entire business functions will be delegated to autonomous systems, usually by people who skip past the less glamorous questions.

What data does the agent access?

What actions can it execute?

Who approved those permissions?

How does it verify instructions?

How does it handle conflicting goals?

How does it respond to malicious input?

What happens when it fabricates completion?

What happens when it performs the wrong task perfectly?

What happens when nobody notices?

These questions are not obstacles to agentic AI. They are prerequisites for using it responsibly.

A company that cannot answer them is not ready for autonomous systems. It may be ready for assisted workflows. It may be ready for narrow automation. It may be ready for internal experimentation. But it is not ready to hand over operational authority simply because the market has discovered a new word.

Again, that is not anti-AI. It is pro-competence.

Serious AI requires a serious operating model

The deeper issue is that many organizations are trying to adopt AI as a tool when they should be treating it as an operating model question.

AI changes more than output. It changes how work is initiated, distributed, evaluated, approved, audited, and defended. It changes the relationship between speed and evidence. It changes the cost of producing plausible material. It changes the risk of invisible errors. It changes what employees can do with sensitive information. It changes how vendors are assessed. It changes how boards should think about exposure. It changes how leaders should define productivity.

That requires more than enthusiasm. It requires institutional clarity.

Where does AI create advantage?

Where does it create unacceptable risk?

Where is human judgment essential?

Where is automation appropriate?

Where does the organization need evidence rather than confidence?

Where does the system require monitoring after deployment?

Where does a vendor claim need to be tested instead of admired?

These are the questions serious AI leaders should be asking. Not because they are afraid of AI. Because they understand its significance.

The strongest pro-AI position is not blind enthusiasm

There is a strange irony in the current moment. Many of the loudest self-proclaimed AI optimists are doing the technology a disservice.

They oversell immature systems. They minimize risks. They mock governance. They present adoption as a moral test. They reduce complex organizational decisions to slogans. They frame hesitation as cowardice and caution as backwardness.

That may generate attention. It does not build trust. And trust is now one of the central constraints on AI adoption.

Customers need to trust that systems will not mislead them. Employees need to trust that tools will not expose them or silently reshape their work in ways leadership has not explained. Regulators need to trust that companies can document decisions. Boards need to trust that management understands both upside and downside. Markets need to trust that AI spending is producing more than narrative momentum.

Blind enthusiasm does not create that trust. Discipline does.

The serious pro-AI position is not “use it everywhere.” It is not “move faster.” It is not “agents will replace the enterprise.” It is not “prompt harder.”

The serious pro-AI position is that AI can create substantial value when it is matched to real problems, governed intelligently, evaluated rigorously, integrated operationally, and deployed with a clear understanding of consequences.

That is where the opportunity is. That is also where much of the market still needs to grow up.

No, I am not against AI

So no, I am not against AI.

I am against AI as decoration.

I am against AI as boardroom theater.

I am against AI as a stock-market incantation.

I am against AI as an excuse to avoid strategy.

I am against AI vendors who sell inevitability instead of capability.

I am against executives who confuse adoption with understanding.

I am against prompt mythology, agent hysteria, governance denial, risk minimization, and the lazy assumption that every AI deployment is progress by default.

AI is too important for that. The technology deserves a better conversation. Businesses deserve better decisions. Employees deserve better operating models. Customers deserve better protections. Investors deserve better evidence. Leaders deserve a clearer understanding of what they are actually adopting.

Being pro-AI does not mean applauding everything with an AI label. It means taking the technology seriously enough to separate value from noise.

That has been the point of my work from the beginning. Not opposition. Seriousness.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.