SEIKOURI Inc.

When Empty Language Starts Looking Like Strategy

Markus Brinsa | March 23, 2026 | 6 min read


Vague management language becomes more dangerous when AI can generate it endlessly

Clarity is not a soft skill

At SEIKOURI, we do not treat clear language as a branding preference or a matter of writing style. We treat it as a sign of operational maturity. When leaders cannot explain what they mean in plain English, the problem is usually not that the audience is unsophisticated. The problem is that the thinking is unfinished.

A recent Cornell study gives that intuition more substance. Research published in "Personality and Individual Differences" introduced a Corporate Bullshit Receptivity Scale and found that greater receptivity to vague, impressive-sounding corporate language was associated with lower analytic thinking and worse work-related decision-making. The study also found that people more receptive to that language were more likely to see supervisors as charismatic or visionary and were more likely to spread the same rhetoric further.

That matters because most companies still treat jargon as harmless theater. They roll their eyes at it, joke about it, and then let it shape meetings, strategy decks, internal memos, and executive communication anyway.

But if empty language correlates with weaker judgment and helps inflate perceptions of leadership quality, then jargon is not just annoying office wallpaper. It is part of how bad management cultures reproduce themselves.

The real problem is not language

The easy version of this conversation is to mock words like synergy, paradigm, alignment, optimization, transformation, or thought leadership. That version is lazy. Specialized language is not automatically bad, and technical work often requires technical vocabulary.

The actual problem begins when language becomes abstract enough to conceal whether any real claim is being made at all.

That is the difference between expertise and performance. Real expertise can usually survive translation. A capable operator can explain a complex idea in language that keeps its meaning intact. Performance language does the opposite. It uses abstraction to create the impression of intelligence without accepting the burden of being specific.

This is where organizational risk enters the picture. Once a company starts rewarding how strategic something sounds more than what it actually says, it begins selecting for a very particular kind of internal success.

People learn that status comes from sounding elevated, not from being clear. They learn that ambiguity is safer than precision, because precise claims can be tested, challenged, or disproven.

That kind of culture rarely announces itself. It usually presents as professionalism. The deck looks polished. The memo sounds executive. The strategy session feels sophisticated. Meanwhile, decisions drift, accountability weakens, and everyone becomes slightly more fluent in a language that allows people to speak for a long time without exposing what they know, what they do not know, or what exactly they intend to do.

How bad management culture scales

One of the most important findings in the Cornell work is not simply that some workers are more impressed by empty language. It is that those same workers may also be more likely to rate leaders as visionary and charismatic. That turns jargon into a selection mechanism. Leaders who speak in inflated abstraction are not just tolerated. In some environments, they are actively rewarded.

This is how a language habit becomes a management system. Once enough people inside an organization confuse elevated tone with elevated thinking, weak leadership starts to look persuasive. The company begins promoting confidence over clarity, symbolic motion over operational direction, and verbal gloss over decision quality.

There is a reason this becomes expensive. Cornell’s summary of the research points to reputational and financial harm when corporate language drifts too far into semantic fog, citing widely ridiculed examples such as Pepsi’s leaked jargon-heavy marketing presentation and Microsoft’s 2014 memo that buried layoffs under executive abstraction. The point is not that these companies used a few embarrassing phrases. The point is that language was used as a buffer between leadership and reality.

For SEIKOURI, this is not a communications issue in the narrow sense. It is a governance issue. If language consistently obscures decisions, priorities, ownership, and consequences, then the organization is not just speaking badly. It is reducing its own ability to detect weak reasoning early.

AI turns a cultural weakness into a scalable one

This is where the issue becomes much more current. Generative AI is extraordinarily good at producing polished, plausible, businesslike text. It can generate strategy language, leadership messaging, internal updates, values statements, board summaries, customer-facing copy, and meeting recaps with almost no friction. That is useful when the underlying thinking is sound. It is dangerous when the organization already has a weakness for abstraction.

The danger is not simply that AI writes badly. The danger is that AI can industrialize a style of writing that many companies already overvalue. If a manager was previously limited by their own ability to produce impressive-sounding, low-substance language, that limit has now largely disappeared.

There is already evidence that AI-mediated workplace writing affects how senders are perceived. A University of Florida summary of a study in the International Journal of Business Communication reported that while AI-assisted messages could appear more professional, heavier AI use in routine manager communications reduced perceptions of sincerity and trustworthiness. In other words, polished output is not the same thing as credible leadership.

At the same time, Gartner reported in March 2026 that managers are experimenting with AI more than employees and that organizations are still struggling to define effective usage expectations. That combination matters.

When managerial use is rising faster than institutional discipline, the risk is not only automation. It is the normalization of synthetic strategic language before organizations have developed strong standards for clarity, judgment, and accountability.

This is why “AI makes it worse” is not a gimmick angle. It is the natural next step. If corporate jargon already functioned as a low-grade cultural pollutant, generative AI has turned it into a scalable output format.

What strong firms do differently

Strong firms are not allergic to sophisticated language. They are allergic to empty language. They understand that clarity is not the opposite of intelligence. Clarity is what intelligence looks like when it is forced to operate in the real world.

That means strategy should survive compression. A leader should be able to explain the plan without disappearing into performance phrases.

A management team should be able to state what is changing, why it is changing, who owns it, what the tradeoffs are, and how success will be judged. If that cannot happen, the problem is usually upstream from the memo.

It also means AI should be treated as an amplifier, not an author of institutional meaning.

The moment companies use AI to mass-produce leadership language without enforcing human standards of precision, they create the conditions for elegant drift.

Documents multiply, alignment theater expands, and the organization becomes more verbose at the exact moment it should be becoming more exact.

At SEIKOURI, we think companies should pay much closer attention to what their internal language rewards. If the culture celebrates abstraction, it will eventually promote people who are good at hiding uncertainty inside polished phrasing. If it rewards clarity, it becomes easier to spot who actually understands the business, who can make decisions under pressure, and who is merely performing competence.

The leadership signal that still matters

In an era of AI-assisted communication, clarity becomes even more valuable because it becomes harder to fake convincingly over time. Anyone can generate a polished paragraph. Fewer people can state a position plainly, defend it under scrutiny, and connect it to operational reality.

That is why we see clarity as a leadership test.

Not because plain English is morally superior, and not because every complex issue has a simple answer, but because clear language forces contact with the thing itself. It reveals whether the strategy is real, whether the decision makes sense, and whether the person speaking has earned the confidence their language is trying to command.

The next wave of corporate underperformance may not come from companies that failed to adopt AI. It may come from companies that used AI to produce ever more sophisticated versions of the same old management fog.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.