SEIKOURI Inc.

Inside Madison Avenue’s Brain Lab

The race to predict recall before a campaign ever reaches the public

Markus Brinsa | Apr 28, 2026 | 9 min read


The lab left the lab

A year ago, neural attention still sounded like one of those expensive side rooms in marketing where people in black turtlenecks explain that the future has arrived, provided the client is willing to wear a headset and stare at a refrigerator commercial for twenty minutes. The story was scientific, slightly eerie, and easy to keep at arm’s length. It lived in the world of EEG charts, eye-tracking studies, memory scores, and grand claims about what really sticks in the human mind.

That world has changed.

What used to be presented as neuromarketing theater is being folded into the operating logic of the largest agency groups in the world. The language got less dramatic. Fewer people say “neuromarketing” now because that still sounds like a Discovery Channel special from 2014. The new vocabulary is cleaner and more procurement-friendly.

Attention. Predictive measurement. Creative intelligence. Synthetic audiences. Agentic optimization.

But the underlying ambition is much the same, only bigger now and much closer to the actual machinery of media and production.

Madison Avenue is no longer content with asking whether an ad was seen, liked, or remembered after the fact. It wants to know in advance whether a machine can predict which scenes will hold attention, which creative elements are most likely to survive in memory, and which audience simulations will approve the work before the public ever does. The ad business is not just buying impressions anymore. It is trying to buy foresight.

That is the real sequel to the original neuromarketing story. The interesting part is no longer the science exhibit. The interesting part is that the science has moved downstairs, put on a badge, and started sitting in operational meetings.

The holding companies are wiring it in

Dentsu has been one of the clearest public examples of this shift. Its new Brand Reset work with Lumen and Kantar did not frame attention as a quirky add-on or an insight toy. It framed attention as something that can be linked to brand equity and sales. That is a meaningful escalation. Once attention stops being a research curiosity and starts being positioned as part of the chain that connects creative exposure to business outcomes, it becomes much easier to sell internally, scale across accounts, and treat it as infrastructure rather than an experiment.

Omnicom has been moving in the same direction, only in a more visibly industrial way. It had already expanded its relationship with Lumen by pushing attention data into Omni, the company’s central decisioning platform. Then it relaunched Omni as a broader AI-driven intelligence system, complete with autonomous agents across creativity, media, commerce, and measurement. That matters because it changes what attention data is for. It is no longer just something you look at after a test. It becomes another input in a larger machine that is supposed to guide what gets made, where it runs, and how it is optimized.

WPP is moving across multiple fronts at once. On one side, it has public attention research with Snapchat and Lumen arguing that attention is a stronger predictor of recall and favorability than older engagement proxies such as view-through rate. On the other, WPP Production is openly discussing synthetic personas and synthetic audiences that can be used to test ideas, messages, and creative nuances across markets before they reach actual people. WPP Open goes even further by promising to connect creative work to synthetic audience insights and predictive reputation management inside one AI-powered workspace.

Publicis is taking a less theatrical route, but the direction is unmistakable. Its acquisition of AdgeAI was framed as a move from retrospective creative measurement to a forward-looking capability that anticipates performance. Publicis is not branding that as a neural attention revolution. It does not need to. The strategic point is the same. The campaign is no longer judged only after it runs. The system is being designed to forecast which creative choices are more likely to work before launch.

That is the shift. The big groups are not all using exactly the same tools, and they are not describing themselves with the same vocabulary. But they are all moving toward the same destination: a world in which advertising is increasingly pre-tested, pre-scored, pre-simulated, and pre-optimized by systems that claim to know something meaningful about human attention and response before the audience ever speaks for itself.

Once standards arrive, the weird stuff becomes normal

One of the biggest signs that this is no longer fringe is that attention measurement now has the kind of thing every emerging industry secretly craves: standards language. Once the IAB and MRC published an industry framework for attention measurement, including approaches that explicitly cover visual and audio tracking as well as physiological and neurological observation, the category stopped sounding like vendor improv and started sounding like something that can survive procurement, audits, and cross-industry meetings.

That does not mean the field is settled. It is not. Different vendors still use different signals, different models, and different assumptions. Some are closer to observational measurement. Some are really repackaged proxy metrics with better branding. Some sell neuroscience mystique with a statistics costume. And some may be genuinely useful but still far more fragile than their marketing materials suggest.

But standards do not have to settle every dispute in order to change behavior. They only have to make adoption feel reasonable. Once a framework exists, marketers who were previously nervous about attention claims can say they are not buying snake oil. They are participating in an emerging measurement discipline. Once that happens, the weird science stops looking weird. It becomes a dashboard.

That is when the real power shift begins. The argument is no longer about whether these systems are real enough to discuss. The argument becomes how deeply they should be allowed to influence creative, media, and strategy.

The industry is building synthetic confidence

The most seductive part of this new system may not be attention scoring itself. It may be the synthetic audience layer forming around it.

For a long time, the basic inconvenience of advertising was that the audience had to remain inconvenient. Real humans are slow, contradictory, moody, expensive to recruit, and impossible to fully control. They say one thing, feel another, and do a third. That is deeply annoying for anyone trying to industrialize persuasion.

Synthetic audiences promise to solve that problem. Now, instead of waiting for people, agencies can build AI-generated stand-ins, ask them what they think, test messages at speed, and pressure-test campaigns across market segments without the delays and costs of traditional research. That is a thrilling proposition if your job is to reduce uncertainty. It is also the kind of proposition that should make everyone slightly nervous.

Because once synthetic audiences become part of the workflow, the temptation is obvious. Instead of using them as rough internal stress tests, companies start treating them as legitimacy machines. The work no longer has to win over the public first. It only has to survive its simulation environment. Approval gets cheaper, faster, and more scalable. Confidence starts to arrive earlier in the process, before reality has had a chance to interfere.

This is where the corporate thriller aspect really kicks in.

The danger is not simply that machines may predict attention imperfectly. The danger is that organizations under pressure will mistake fast internal prediction for truth. They will confuse model fluency with audience understanding. They will start trusting synthetic approval loops because those loops are efficient, available, and emotionally satisfying. A campaign that clears the machine begins to feel safer than one that still has to prove itself in the wild.

That is not a small cultural change. It is a reordering of authority. The ad business has always loved data when data agrees with the room. Synthetic audiences and predictive attention systems offer something even more attractive: a way to manufacture the feeling of certainty before the public has had a vote.

Then reality starts throwing bricks

This would already be a strong story if it ended there, with agencies building ever more elaborate systems to predict human response. But the reason the subject now has real dramatic force is that the rollout is colliding with public backlash, legal friction, and growing suspicion around synthetic media.

The same large agency environment now leaning harder into AI-driven measurement and optimization is also under sharper scrutiny. In April, Dentsu, Publicis, and WPP’s GroupM settled an FTC probe over alleged coordination around brand-safety standards and exclusion lists. That case was about antitrust and political-content discrimination, not neuromarketing. But it still matters here because it shows the pressure zone. The more advertising decisions are made through shared standards, opaque systems, and supposedly objective filters, the more regulators will ask who set those rules, whose interests they served, and how neutral those judgments really were.

At the same time, the public has become far less forgiving about synthetic creative that looks cheap, creepy, or dishonest. McDonald’s Netherlands learned that the hard way when its AI-generated holiday ad drew ridicule and backlash, then disappeared. The problem was not just that people noticed AI. It was that they noticed bad AI. The whole promise of machine-accelerated creative is efficiency, but once the work hits the public looking malformed, uncanny, or emotionally dead, the machine starts advertising its own limitations.

And now lawmakers are moving in too. New York’s synthetic-performer law, which takes effect on June 9, 2026, forces advertisers to disclose when ads contain AI-generated performers. That matters because disclosure changes the atmosphere around the work. The industry is being told, in effect, that if synthetic people are going to sell things to real people, the audience has a right to know. The regulatory direction is obvious. It points toward more labeling, more scrutiny, more questions about deception, and less patience for the old trick of treating synthetic media as just another harmless production shortcut.

This is what gives the story its momentum. Agencies are pushing toward systems that promise to predict recall, optimize attention, simulate audiences, and tighten the link between creative choices and business outcomes. At exactly the same moment, the surrounding culture is becoming more hostile to opaque automation, synthetic human imagery, and machine-assisted decision making that hides behind neutral language.

The ad industry is building a persuasion engine just as the public gets better at smelling the fumes.

This is not mind reading and that is almost worse

It is important not to overstate what these systems do. They are not reading your soul. They are not uncovering your childhood secrets because you watched six seconds of branded video. This is not psychic warfare in the comic-book sense, and pretending otherwise only makes the story weaker.

What is happening is in some ways more interesting and more unsettling. These companies are not trying to become telepaths. They are trying to become very good probability merchants. They want a system that can look at patterns from attention data, creative testing, audience signals, synthetic focus groups, and performance histories and then say, with increasing confidence, that this version is more likely to hold attention, more likely to be remembered, more likely to drive the desired outcome.

That may sound modest compared with the old neuromarketing hype, but it is actually much more dangerous in practice because it is so operational. Businesses do not need perfect mind-reading to change how advertising is built. They only need models that are good enough to influence resource allocation, creative approval, media planning, and executive confidence.

Once those systems are embedded, they begin to shape taste from the inside. Creative teams start learning what kinds of choices score well. Strategists begin trusting the patterns the machine rewards. Clients get used to dashboards that promise foresight. Over time, the work risks being pulled toward whatever looks optimizable, defensible, and simulation-friendly. That is how a technology reshapes culture without ever needing to announce itself as culture.

The real fear is not that one company will build a machine that literally knows what is in your head. The real fear is that a lot of companies will build systems that become powerful enough to decide what deserves to reach your head in the first place.

The new question is not whether it works

The old question in neuromarketing was whether any of this was real. That question is becoming obsolete.

The new question is what happens when attention science, predictive creative measurement, synthetic audience testing, and AI-driven media systems all converge inside the same operating environment. What kinds of ads get made in that world? What kinds of risks get normalized? What kinds of mistakes get scaled? What kinds of manipulation start looking like best practice? And what happens to originality when the work is increasingly pre-cleared by systems that reward predictable responses?

Because that is where the superhero-movie version of the story really lives. Not in the fantasy that agencies have discovered a glowing blue machine that can read the brain, but in the far more believable reality that they are building production environments designed to approximate, score, and pre-negotiate human reaction before the audience arrives.

Madison Avenue no longer wants to ask whether you liked the ad. It wants a machine to predict whether your brain will remember it before the campaign even launches. That is not a science documentary anymore. That is the plot.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.