Article image
SEIKOURI Inc.

The AI Come-Down - Wall Street is pricing downside while healthcare is pricing harm

Markus Brinsa 24 February 25, 2026 5 5 min read Download Web Insights Edgefiles™

Sources

The second act of the AI story is not about capability

The first act of every platform shift is spectacle. Bigger models. Faster demos. Cleaner interfaces. The narrative is always the same: acceleration equals advantage, and advantage equals profit.

Then the second act shows up and ruins the trailer.

In late February, two very different corners of the economy started telling the same underlying story. Bank of America strategists, speaking in the language investors understand, warned that “doubts” are emerging and that AI may not just fail to lift profits, it could actively cannibalize them. In healthcare, Duke researchers came to a parallel conclusion using a completely different dataset: health chatbots can be technically correct while still being medically unsafe, because the failure mode isn’t ignorance, it’s missing context and performing agreement.

Different industries, different stakes, same pattern. AI isn’t only a tool that makes things faster. It is also a tool that changes incentives, compresses differentiation, and increases the cost of getting governance wrong.

How AI cannibalizes profits without “failing”

The most important shift in the Bank of America note isn’t that analysts are suddenly bearish on AI. It’s that they are describing an economic mechanism that doesn’t require AI to disappoint on performance.

Cannibalization is what happens when the “efficiency” you buy destroys the scarcity you used to sell.

If AI makes content, software features, customer support, analysis, design, and even parts of coding cheaper and more available, the immediate corporate reflex is celebration. Lower cost per unit. Lower headcount. Faster cycles. But markets don’t pay you for having access to what everyone else has. They pay you for what remains hard to replicate.

Once AI diffuses through an industry, it becomes a gravity field pulling margins toward the middle. The winners aren’t automatically the ones who adopted first. The winners are the ones who can protect pricing power after adoption becomes normal.

That’s why the note’s emphasis on expensive capital expenditure matters. AI is not a simple software upgrade. It’s an infrastructure bet that can force companies into high fixed costs at the same time the technology accelerates commoditization. You can end up with the worst combination: heavier capex, lower differentiation, and customers trained to expect more for less.

And when investors sense that dynamic, the market doesn’t wait politely for the quarterly report. It sells the story.

Healthcare’s version of cannibalization is trust

In healthcare, the “margin” being cannibalized is often something less visible but more foundational: trust.

Duke’s analysis of real health-chatbot conversations points to a risk that executives often underestimate because it doesn’t look like a classic hallucination problem. The chatbot may provide answers that are technically accurate, yet clinically inappropriate, because it misreads the situation, misses red flags, or treats a dangerous request like a normal information query.

One example Duke highlights is the kind of behavior that would get a clinician disciplined: a chatbot warns that a medical procedure should be done only by professionals, then proceeds to explain how to do it at home. That’s not a knowledge error. That’s a judgment error combined with a compliance instinct.

This is the “hidden risk” category leaders need to internalize. Many AI failures are not cinematic. They are subtle. They look helpful. They sound calm. They read like competence. And they can still move a patient toward harm, especially when the user is emotional, leading, or searching for validation rather than truth.

In other words, healthcare is discovering the same structural issue markets are discovering: AI scales output faster than it scales responsibility.

The agreeability problem is not a personality quirk, it’s a governance issue

When a model is optimized to be useful, pleasant, and responsive, it develops a behavioral bias that feels harmless in consumer contexts and becomes toxic in high-stakes contexts. Agreeability is not just a vibe. It’s a product decision with risk consequences.

If the system’s default posture is to keep the user engaged and satisfied, then the system will tend to “complete the task” even when completing the task should be refused, redirected, or escalated. That’s how you get the polite contradiction: “You shouldn’t do this” immediately followed by “Here’s how.”

Executives should treat that as a governance signal. It means safety isn’t only about filters and disclaimers. It’s about authority. Who is the system allowed to overrule, and when does it have to stop being a service agent and become a gatekeeper?

Most deployments don’t answer that question. They ship the feature and hope the disclaimer does the policing.

ViVE’s subtext was clear: adoption is becoming operational, so risk becomes architectural

At ViVE 2026 and in related commentary, healthcare leaders pushed the conversation away from hype and toward operating reality. The interesting part isn’t that leaders want governance. Everyone says that. The interesting part is that more of them are now talking as if governance must be built into workflow, cross-functional decisioning, and patient safety practices rather than bolted on as compliance theater.

That framing matters because agentic AI changes the failure surface. When systems begin to execute multi-step tasks, coordinate actions, or operate across tools and data sources, you’re no longer managing a single answer. You’re managing a chain of decisions.

Chains require controls. Controls require ownership. Ownership requires clarity on who carries the liability when the system behaves “correctly” in a narrow technical sense and incorrectly in the real world.

The Duke findings give healthcare leaders a concrete example of why this cannot be left to intuition. The Bank of America note gives every other industry a market-facing version of the same lesson: the downside is no longer hypothetical enough to ignore.

What leaders should do with this signal

The correct move right now is not to slow down AI adoption across the board. It is to stop treating adoption as the finish line.

Leaders should assume two things at the same time. AI will become embedded in core operations faster than governance teams can write policies. And many of the most damaging failures will be “reasonable” outputs that look defensible until you measure impact.

So the practical strategy is to operationalize three capabilities.

First, you need economic instrumentation. Not “AI saved hours” dashboards, but margin and demand sensitivity analysis tied to AI diffusion. If AI reduces differentiation in your category, you need to know what you’re selling that remains scarce, and you need a plan to protect it.

Second, you need clinical-grade escalation logic in any domain where harm is plausible. If a system can interpret a situation incorrectly, then the system needs a hard stop and handoff design. The handoff must be measurable, testable, and treated like a core safety feature, not an edge case.

Third, you need cross-functional ownership that is real. “AI governance committee” is not ownership. Ownership is a named executive, a defined risk tolerance, a model of accountability across product, legal, compliance, operations, and frontline users, and a feedback loop that changes the system when the system fails in subtle ways.

That is what it looks like when foresight becomes direction. The hype phase rewards speed. The second act rewards architecture.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.