
When AI Governance Enters the Mind

Markus Brinsa | March 25, 2026 | 5 min read

The next governance challenge is not just what AI knows, but what it does to judgment

Consent was built for the wrong problem

For most of the digital era, consent meant something relatively narrow. A company collected your data, processed it, shared it, retained it, and buried the whole arrangement under a checkbox and a privacy policy. The governance fight was ugly enough, but at least the object of consent was visible. You were supposedly agreeing to collection, access, and use.

That framework is no longer enough for advanced AI systems. The problem is not only what the system knows about you. The problem is what the system does while interacting with you. A modern AI product can flatter, reassure, mirror, escalate, personalize, emotionally pace, and gradually shape decisions in real time. It can behave less like software in the old sense and more like an adaptive social environment. That does not make every chatbot manipulative. It does mean the old consent model is starting to look inadequate for the actual risk surface. 

Disclosure is not the same as understanding 

For several years, policymakers treated the chatbot problem mainly as a transparency problem. Tell people they are speaking to AI. Make the disclosure clear. Avoid impersonation. That logic is still necessary, but it is visibly becoming insufficient.

California’s SB 243 is a useful signal. It does require clear disclosure when a reasonable person could be misled into thinking the system is human. But it goes further. It also requires operators of companion chatbot platforms to maintain protocols for preventing the production of content about suicidal ideation, suicide, and self-harm, and it creates reporting obligations tied to those risks. That is not ordinary disclosure governance. That is a recognition that the real issue is what a relational AI system can induce, reinforce, or fail to interrupt once a user is engaged. The FTC’s 2025 inquiry points in the same direction. Its concern is not merely whether users know they are talking to a bot. It is how companies test, monitor, and mitigate harms when chatbots act like companions and prompt users, especially younger ones, to trust and form relationships with them.

This is the bigger change. Governance is beginning to move from interface honesty to interactional effect. The question is no longer just whether the user was informed. It is whether the user meaningfully understood the kind of influence the system was designed, optimized, or at least allowed to exercise.

The influence layer is becoming a governance layer

This is where the phrase “cognitive consent” becomes useful, even if it is not yet a formal regulatory term. The phrase matters because it points to a gap. People can consent to using a system without meaningfully consenting to being behaviorally steered by it. 

That gap becomes obvious in systems built for companionship, emotional support, coaching, therapy-like interaction, or persistent personalization. The system may not be forcing anyone to do anything. It may still be shaping confidence, dependence, vulnerability, and judgment. A user may understand that the model is artificial and still become psychologically influenced by it. In some cases, that influence is the product. In other cases, it is a side effect of engagement design. From a governance standpoint, that distinction matters less than companies would like. If the system is optimized for relational stickiness, affirmation, and retention, then influence is no longer accidental background noise. It becomes part of the operating logic. 

This is also why the old compliance comfort zone starts to fail. Privacy teams can document data flows. Legal teams can update terms. Product teams can add disclosure banners. None of that answers the harder question of how an AI system is affecting user judgment over time, especially when the product grows more persuasive as it becomes more personalized.

Research and regulators are already moving in this direction 

The market is not waiting for a perfect vocabulary. The evidence is already pushing governance in this direction. Stanford researchers have warned that AI companions can exploit teenagers’ emotional needs and that therapy chatbots can reinforce stigma or offer dangerous responses. Common Sense Media concluded in 2025 that social AI companions pose unacceptable risks to minors and specifically warned about emotional dependency, misleading claims of realness, and relational manipulation. Even OpenAI’s own affective-use research found that very high usage correlated with increased self-reported indicators of dependence and framed well-being and overreliance as live design concerns. 

Regulators are moving too, although not yet under one clean doctrine. The European Commission’s guidance on prohibited AI practices under the AI Act explicitly addresses harmful manipulation and fundamental rights risk. Italy’s data protection authority fined Luka, the company behind Replika, and highlighted failures around lawfulness, transparency, and age verification in a service that allowed users to generate a virtual companion capable of acting as a confidant, therapist, romantic partner, or mentor. These are not isolated skirmishes. They are early signs that governance attention is shifting toward systems that simulate intimacy, draw inferences about users, and perform social authority.

Why this matters for serious operators

The business implication is straightforward. A company can no longer assume that AI governance stops at model safety, privacy, bias, and cybersecurity. If the product has a conversational interface, memory, personalization, or any relationship-like design logic, then there is a second layer of risk. That layer sits between the model and the user’s mind.

This matters far beyond obvious companion bots. Enterprise copilots, health interfaces, internal HR assistants, coaching tools, education products, and customer service systems all operate inside zones of trust and suggestion. Once a system starts steering confidence, reducing friction around consequential choices, or encouraging disclosure through social cues, governance has to ask a more difficult set of questions. What kind of dependence could this create? What kind of authority is the interface performing? What user states make this system more dangerous? What is being optimized when the product is tuned for retention, satisfaction, or emotional continuity? Those are no longer fringe ethics questions. They are product, legal, and governance questions.

The firms that understand this early will build better controls. The firms that do not will keep governing AI as if it were just another software surface with a better chat box.

The next phase of governance is about human susceptibility 

This is the uncomfortable part. Mature AI governance will not just be about model capability. It will increasingly be about human susceptibility inside AI interaction. That is a much less convenient domain for companies because it forces governance closer to product design, engagement incentives, and monetization logic.

Once the issue is framed that way, the old standard of disclosure begins to look thin. Yes, users should know when they are interacting with AI. But that is only the beginning. In a more mature governance model, the harder requirement is that companies understand when their systems begin to act on users socially, psychologically, and behaviorally rather than merely serving them functionally.

That is the real shift underneath the conversation. Consent used to mean permission to process. In the next stage of AI governance, it will increasingly mean something harder: whether people meaningfully understood the kind of influence they were stepping into. 

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.