SEIKOURI Inc.

After the Human Bargain

AI abundance and the shrinking value of human leverage

Markus Brinsa | Apr 21, 2026 | 7 min read


“Anti-human future” sounds at first like the kind of phrase people use when they want to frighten a podcast audience into attention. It sounds cinematic. It sounds maximalist. It sounds like the verbal equivalent of putting storm clouds behind a keynote slide.

Then you listen more carefully, and the phrase gets worse.

Because in the Sam Harris and Tristan Harris conversation, the core argument is not really about killer robots, machine consciousness, or some Hollywood vision of humanity being hunted through the rubble. The darker point is more banal, and for that reason more plausible.

The risk is that artificial intelligence becomes economically indispensable while human beings become politically negotiable. That is a much nastier idea.

A society can survive frightening technology. What it struggles to survive is a technology stack that quietly changes who matters.

The conversation lands because it does not rest only on technical risk. It moves toward a deeper question: what happens when the institutions that organize wealth and power no longer need people in the same way they used to? Not morally. Not rhetorically. Not in campaign speeches. In their actual operating logic.

That is where this stops being a podcast about AI fear and becomes a story about the future of the social contract.

The old bargain was never noble, but it was legible

Modern political economy was never some beautiful humanist arrangement. It was messy, unequal, frequently exploitative, and often cruel. But it did contain a recognizably human bargain. States needed populations. Companies needed workers. Tax bases mattered. Education mattered. Public health mattered. Stable households mattered. Not because elites were saints, but because human beings remained load-bearing.

You could not build an industrial society without people who could work, consume, raise children, serve in institutions, and generate taxable output. Human beings were inconvenient, expensive, and politically troublesome, but they were still structurally necessary. That necessity gave the public bargaining power, even when that power was uneven, partial, or constantly under attack.

The nightmare embedded in the latest AI debate is that this necessity begins to erode.

That is what makes the “intelligence curse” idea unsettling. If a meaningful share of economic growth starts flowing from AI systems, compute infrastructure, capital concentration, and a handful of firms with access to frontier capability, then the public’s role changes. Not symbolically. Mechanically.

If labor matters less, labor has less leverage. If mass participation matters less, democratic responsiveness becomes easier to bypass. If a country can generate extraordinary value through systems owned and controlled by a narrow layer of actors, the incentive to invest in the broader population weakens.

Not disappears overnight. Weakens first. That is how these things usually happen. The old bargain decays long before anyone formally announces that it has ended.

We have already seen the rehearsal version

One reason the argument hits so hard is that we are not starting from zero. The conversation draws a line from social media to AI, and that line matters.

Social platforms already gave us a preview of what happens when technology scales through incentives that are misaligned with human flourishing. The systems did not need to “hate” people. They only needed to optimize for engagement inside institutions that monetized attention. The result was not one catastrophic event. It was something more corrosive: degraded attention, damaged shared reality, worsened incentives in media, chronic emotional destabilization, and a public sphere that became easier to manipulate and harder to trust.

That was the baby version.

The point is not that social media and frontier AI are identical. They are not.

The point is that we have already lived through a case where a highly profitable technical architecture was allowed to reorganize social life faster than governance, ethics, or institutional restraint could catch up. We know what it looks like when executives describe one thing, incentives reward another, and society absorbs the externalities on delay.

The anti-human future does not begin when a machine declares war on humanity. It begins when powerful systems repeatedly produce anti-human outcomes and everyone keeps calling that progress.

Intelligence without distribution is just hierarchy with better marketing

A lot of AI discourse still behaves as if intelligence is automatically a public good. That assumption is sentimental.

Electricity became broadly transformative because it diffused. Computing became broadly transformative because it diffused. The internet reshaped the economy because access, though uneven, expanded across firms, sectors, and households. Broad gains usually require broad distribution.

AI may not follow that script.

The more the stack depends on concentrated compute, proprietary models, locked ecosystems, exclusive data access, and giant capital expenditures, the easier it becomes for capability to accumulate faster than benefit spreads.

In that world, society gets something that looks like abundance from above and scarcity from below. Extraordinary productivity at the top. Thinner bargaining power everywhere else.

That is why the question is not simply whether AI creates value. Of course it will create value. The question is who captures it, who governs it, and whether the people displaced, deskilled, surveilled, or politically weakened by it retain any credible claim on the upside.

If they do not, then “innovation” becomes a very elegant way of describing a transfer of power.

This is where so much public discussion still feels evasive. We talk endlessly about model capability and not nearly enough about institutional destination. More intelligence for whom. More control for whom. More resilience for whom. More profit for whom. More dignity for whom.

Those are not side questions. They are the whole question.

The political danger is not only unemployment

The laziest version of this debate reduces everything to jobs. That is too small.

Yes, labor displacement matters. Yes, wage pressure matters. Yes, the hollowing out of white-collar work matters. But the larger risk is the evaporation of social leverage. A public that is less economically necessary becomes easier to ignore. A workforce with declining bargaining power becomes easier to manage through software, metrics, and precarity. A citizenry overwhelmed by synthetic media, personalization, and automated persuasion becomes easier to fragment before it can organize.

This is where the phrase anti-human future earns its keep.

A future can be anti-human even while remaining materially impressive. It can produce scientific breakthroughs, astonishing convenience, better drugs, faster discovery, and smoother interfaces, while also degrading agency, weakening democratic pressure, and shrinking the domain in which ordinary people still count as decision-makers rather than variables.

That is the trap. The future does not have to fail to become hostile. It only has to succeed in the wrong shape.

And once that shape hardens, fixing it gets harder. Because by then the winners are richer, the infrastructure is deeper, the dependency is wider, and the public is told that any serious intervention would be reckless, anti-growth, anti-innovation, anti-national-competitiveness, or hopelessly naïve.

Every concentration regime eventually develops a moral vocabulary for why its power is necessary. AI is building one now.

The governance conversation is still far behind the capability conversation

This is the part that should worry serious readers most. Governance is not absent, exactly. It is simply outpaced, fragmented, and often trapped in the wrong frame.

We have principles, safety language, roadmaps, and summits. We also have policy fights that increasingly emphasize national dominance, regulatory preemption, and competitive acceleration. Meanwhile, independent experts continue to warn that evidence on risk lags behind capability, and that waiting for fully mature proof may leave institutions reacting after systems have already scaled into critical functions. That is not a stable setup.

The public conversation keeps oscillating between two unserious poles. On one side are the accelerationists who talk as though any hesitation is civilizational cowardice. On the other side are the people who reduce the whole issue to chatbots being weird or students cheating on essays.

Between those poles sits the actual problem: how to keep increasingly general and increasingly embedded AI systems from reorganizing economic and political life in ways that quietly reduce the standing of human beings.

This is not only a technical problem. It is a governance design problem, an incentive problem, a capital concentration problem, and a democratic durability problem. And it cannot be solved by asking companies to please behave better while the reward structure tells them the opposite.

The real dividing line is not optimism versus pessimism

The most useful distinction is not between people who are hopeful and people who are afraid. It is between people who think the future will distribute itself, and people who understand that it will be designed.

That is the choice hiding underneath all the AI rhetoric.

If AI is to become an intelligence dividend, society has to decide that in advance and encode it in institutions, bargaining structures, duty-of-care expectations, competition policy, public-interest protections, labor design, and infrastructure access. It does not happen because a few founders discover their conscience at the right moment.

If society does nothing, the more likely outcome is not extinction tomorrow. It is stratification first. Human downgrade first. Public disempowerment first. A long, polished process in which more and more decisions migrate into systems that ordinary people neither control nor meaningfully contest.

That is why this conversation matters. Not because “anti-human future” is a catchy phrase. Because it names a possibility that polite AI discourse keeps trying not to describe too clearly: a world in which intelligence scales, profits soar, products improve, and the human position inside the system quietly deteriorates anyway.

That is not science fiction. That is one plausible business model. And if that is the road we are on, then the central question of the AI era is not how smart the machines become.

It is whether human beings remain part of the deal.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.