SEIKOURI Inc.

When AI Wealth Gets Concentrated

Markus Brinsa · March 26, 2026 · 5 min read


The real AI divide

For the past two years, the public argument about AI has been strangely narrow. People keep asking whether the technology will replace workers, supercharge productivity, or generate a new wave of economic growth. Those are reasonable questions. They are also incomplete. The more revealing question is who gets paid when that growth arrives. Larry Fink's annual letter matters less as market commentary than as an admission from inside mainstream finance that AI may widen the wealth divide if the gains remain concentrated among people who already own financial assets. Reuters captured that warning directly, and BlackRock's own letter makes clear that the firm sees broad participation in capital markets as the central challenge.

That is the part of the AI story too many executives still glide past. They talk about adoption as if access to a tool were the same thing as participation in value creation. It is not. A company can deploy AI aggressively and still end up on the wrong side of the trade. A worker can use AI every day and still own none of the upside. A country can celebrate AI innovation while discovering that the largest gains are captured by a narrow layer of firms, funds, and investors sitting closest to the infrastructure, the models, the chips, and the capital. What looks like a broad technological transition can still operate like a highly selective financial event. Reuters’ reporting reflects exactly that tension: AI is expected to create major economic value, but the winners are likely to cluster unless participation expands.

AI adoption is not the same as AI participation

This is where the business conversation needs to become more honest. Many firms will use AI. Far fewer will own meaningful pieces of the value chain. The market has already been signaling that distinction. Reuters notes that much of the AI-driven market gain since the launch of ChatGPT has been led by companies at the center of the boom. That is not a trivial detail. It suggests that the first phase of AI economics has not behaved like a broad tide lifting all boats. It has behaved more like a selective repricing of companies that own scarce technical advantages, compute access, infrastructure leverage, or investor confidence.

That matters because the social rhetoric around AI often implies the upside will be diffuse. It usually is not. Transformative technologies do create enormous value, but they do not distribute that value automatically. It flows through ownership structures, capital allocation, market design, and bargaining power. BlackRock’s letter makes that point in a polished and investor-friendly way, but the underlying issue is much sharper than the language suggests. If AI becomes embedded across industries while the ownership of the most valuable assets remains narrow, then the economy does not simply become more productive. It becomes more stratified.

The system was unequal before AI arrived

That is what makes this story stronger than a single CEO quote. AI is not entering a fair, balanced, broadly shared economy and then creating inequality from scratch. It is arriving in an environment where wealth concentration is already severe. Federal Reserve distributional wealth data, updated in January 2026, show the extent to which ownership is already skewed at the household level. Recent reporting on those data found that the top 1 percent held 31.7 percent of U.S. wealth in the third quarter of 2025, roughly equal to the bottom 90 percent combined. Brookings has made a parallel argument in a different context: the important question is not only what AI can do, but where it lands and who is positioned to benefit from it.

This is why the usual executive line about reskilling, experimentation, and innovation culture feels incomplete. Those things matter, but they do not answer the ownership question. A worker can be reskilled and still lose bargaining power. A midsize software company can integrate AI and still watch its pricing power erode as AI-native competitors attack its margins. Reuters explicitly notes this pressure on legacy software and services firms. In other words, AI may not simply divide people into adopters and non-adopters. It may divide them into owners and users, allocators and renters, infrastructure holders and downstream dependents.

Why this matters for serious business readers

For executives, this changes the strategic frame. The risk is not only that AI disrupts your workflows. The risk is that AI reorganizes your position in the value stack. Some companies will use AI to cut costs. Some will use it to defend margins. Some will use it to build entirely new categories. But many will discover that the largest economic gains accrue elsewhere, to firms controlling the infrastructure, the financing, the model layer, or the distribution choke points. That is why a serious AI strategy cannot stop at implementation. It has to ask where value will concentrate, who will capture it, and whether the company is building defensible participation in that upside or merely paying for access to someone else’s advantage. Reuters’ description of investor anxiety around legacy business models points directly to this problem.

For policymakers and governance-minded investors, the issue is even broader. If AI becomes one more system where volatility, disruption, and labor-market pressure are socialized while gains are privatized, the backlash will not stay confined to think pieces. It will turn into a legitimacy problem. BlackRock frames broader ownership as a way to reconnect people to economic growth. That is a polite way of saying something more dangerous: people are less likely to trust an economic order that asks them to absorb the disruption while excluding them from the compounding.

The next phase of AI will be judged by who shares in it

That is why this Reuters story is useful. Not because Larry Fink has discovered a shocking new truth, but because the ownership problem is now visible enough that even the largest asset manager in the world feels compelled to address it. The AI debate is maturing. It is no longer just about model performance, dazzling demos, or workforce anxiety. It is becoming a question about capital markets, infrastructure, access, and the distribution of economic power.

The next AI divide may not be between people who use the technology and people who ignore it. It may be between those who own the upside and those who live with the consequences. That is the frame serious readers should keep in mind. Not whether AI creates value. It almost certainly will. The real question is how narrowly that value gets held when the dust settles.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.