
The Chatbot Babysitter Experiment

Markus Brinsa | January 13, 2026 | 7 min read


If you want to understand why New York and California are suddenly acting like the responsible adults in the room, start with a simple observation.

Children do not experience “chatbots” the way adults do.

Adults treat a chatbot like a tool. Kids treat a chatbot like a stand-in presence—something that feels “there” when no one else is. Something that answers back, remembers, flatters, role-plays, escalates, and stays awake at 2:00 a.m. when no human is. That gap in perception is where the policy shift is happening.

Over the past year, the risk narrative around AI moved from “Will this take my job?” to “Why is my child talking to an algorithm about self-harm, sex, or violence while the platform calls it engagement?” Investigations and lawsuits have piled up, including high-profile cases alleging that companion-style chatbots manipulated minors and contributed to self-harm. In early January, major outlets reported that Google and Character.AI agreed to settle a wrongful death suit filed by a mother who alleged her teen was pushed toward suicide by an emotionally entangling bot. 

That is the real context for what New York and California are doing now. They are not trying to regulate “AI” in the abstract. They are trying to regulate a very specific product category: conversational systems that can function like parasocial relationships, often with minors, often in the darkest hours, and often without meaningful guardrails.

New York is aiming at the platforms kids already live on

New York Governor Kathy Hochul’s January 5 proposal package is framed as an online child safety measure, but it explicitly includes “harmful AI chatbots” embedded in online platforms. 

The practical approach is straightforward. New York wants age verification applied more broadly, including to online gaming platforms. It wants privacy by default for kids, meaning strangers cannot message them, view their profiles, or tag them; location settings are off by default; and children under 13 need parental approval for new connections. It also calls for disabling certain AI chatbot features for kids on social media platforms, plus parental controls that limit financial transactions.

It is a package built on a worldview that would sound radical in Silicon Valley and completely normal in any other industry: if you put children in a high-risk environment, you cannot make safety optional. The Hochul press release is explicit about the rationale, pointing to lawsuits and investigations alleging that online platforms have not taken appropriate steps to protect children and that these systems can expose kids to grooming, abuse, and violent or inappropriate content, including suicide. 

There is also a second layer here that matters for anyone tracking regulation. New York is not starting from scratch. A separate New York “AI companion” law took effect in late 2025 and already requires transparency and safeguarding obligations for systems designed to simulate ongoing relationships. According to Fenwick’s summary of the law, covered providers must clearly disclose that users are interacting with AI and implement protocols to detect and respond to signs of suicidal ideation or self-harm, including referrals to crisis resources; the disclosure cadence can be as frequent as daily or every three hours during continuing interactions. 
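As a rough illustration of what a disclosure cadence like this could look like inside a product, here is a minimal Python sketch. The session fields, the three-hour interval, and the helper names are assumptions for illustration only, not the statute's text or any vendor's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative cadence only; the actual legal requirement and its
# interpretation belong to counsel, not to this sketch.
DISCLOSURE_INTERVAL = timedelta(hours=3)

@dataclass
class CompanionSession:
    """Hypothetical state for one ongoing companion-chat session."""
    user_id: str
    last_disclosure_at: Optional[datetime] = None  # None = never disclosed

def should_disclose(session: CompanionSession, now: datetime) -> bool:
    """Return True if the 'you are talking to an AI' notice is due.

    Discloses at the start of a session and again whenever the
    configured interval has elapsed during a continuing interaction.
    """
    if session.last_disclosure_at is None:
        return True
    return now - session.last_disclosure_at >= DISCLOSURE_INTERVAL

def maybe_disclose(session: CompanionSession, now: datetime) -> Optional[str]:
    """Attach the disclosure text to the next response when it is due."""
    if should_disclose(session, now):
        session.last_disclosure_at = now
        return "Reminder: you are chatting with an AI, not a person."
    return None
```

In a real product, a check like this would run on every turn, alongside the self-harm detection and crisis-referral protocol the law also requires.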

That earlier law is important because it signals how New York views “companions.” Not as cute features. As products with an elevated duty of care. 

California is doing what California does

California is not just regulating. It is building a stack.

One layer targets frontier model developers through transparency requirements. Another layer targets companion chatbots through behavioral and safety obligations. And now, a new layer targets toys that embed companion chatbots by essentially putting them in timeout.

The headline proposal is Senator Steve Padilla’s SB 867, announced January 2, which would impose a four-year moratorium on the manufacture and sale of toys with AI chatbot capabilities for minors while safety regulations catch up. 

The bill tracking summary makes the scope concrete. It would prohibit, until January 1, 2031, the manufacturing, selling, or offering for sale of a toy that includes a companion chatbot. It also points to an existing definition of “companion chatbot” as an AI system with a natural language interface that provides adaptive, human-like responses and can meet a user’s social needs, including through anthropomorphic features and by sustaining a relationship across multiple interactions. 

If you are thinking “that definition sounds broad,” you are not wrong. That is part of the story. Legislators are trying to define a relationship simulator without accidentally capturing every customer service widget on earth. The boundaries will matter.

The second California layer is SB 243, which took effect on January 1, 2026, and is widely described as the first state law to mandate specific safeguards for "companion chatbots," especially around minors. A detailed legal analysis from Jones Walker lays out what this actually requires.

For minors, operators must provide clear and conspicuous notifications at least every three hours during continuing interactions, reminding the user to take a break and that the chatbot is AI-generated. They must maintain a protocol designed to prevent the chatbot from producing content about suicidal ideation, suicide, or self-harm, and that protocol must include crisis referrals when users express self-harm signals. Operators must also institute reasonable measures to prevent sexually explicit content or direct encouragement of minors to engage in sexually explicit conduct. The law creates both regulatory accountability and a private right of action that allows injured users to pursue damages (including a statutory minimum per violation) and attorneys' fees.
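For anyone who builds these systems, a sketch makes it easier to see why this is product design rather than policy language. The ordering below (crisis signals first, then minor-content constraints, then the break reminder), along with every function name and keyword list, is an assumption for illustration; a real system would use trained classifiers and a clinically reviewed crisis protocol, not keyword matching.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

BREAK_REMINDER_INTERVAL = timedelta(hours=3)  # illustrative cadence

# Toy keyword list for illustration only; production systems would rely on
# trained classifiers and clinically reviewed escalation protocols.
SELF_HARM_SIGNALS = ("kill myself", "want to die", "hurt myself")
CRISIS_REFERRAL = ("If you are thinking about hurting yourself, you can reach "
                   "the 988 Suicide & Crisis Lifeline by calling or texting 988.")

@dataclass
class MinorSession:
    is_minor: bool
    last_break_reminder_at: datetime | None = None

def handle_turn(session: MinorSession, user_message: str, draft_reply: str,
                now: datetime) -> str:
    """Apply minor-safeguard checks, in order, before a reply goes out."""
    text = user_message.lower()

    # 1. Crisis path takes priority over everything else.
    if any(signal in text for signal in SELF_HARM_SIGNALS):
        return CRISIS_REFERRAL

    # 2. Content constraints for minors (stand-in for a real classifier).
    if session.is_minor and looks_sexually_explicit(draft_reply):
        draft_reply = "I can't talk about that with you."

    # 3. Break reminder plus AI disclosure on the required cadence.
    if session.is_minor and _break_reminder_due(session, now):
        session.last_break_reminder_at = now
        draft_reply += ("\n\nReminder: I'm an AI, not a person. "
                        "This might be a good time to take a break.")
    return draft_reply

def _break_reminder_due(session: MinorSession, now: datetime) -> bool:
    if session.last_break_reminder_at is None:
        return True
    return now - session.last_break_reminder_at >= BREAK_REMINDER_INTERVAL

def looks_sexually_explicit(text: str) -> bool:
    """Placeholder for a real content classifier."""
    return False
```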

That combination is the point. California is not merely asking companies to be nice. It is creating enforceable obligations, and it is inviting litigation if companies ignore them.

This is not a moral panic. It is a product failure pattern

The predictable Silicon Valley response to this kind of law is: “You are overreacting. Parents should supervise. Kids should be educated. Innovation must not be slowed.”

Here is the problem. The evidence trail is not “kids use AI.” The evidence trail is “kids use AI companions in ways that concentrate risk.”

The Week summarized research describing chatbots as spaces where children talk about violence, explore romantic or sexual role-play, and seek advice when no adult is watching. It also reported survey findings (Aura) suggesting a significant share of minors use AI for companionship or role-play, with a meaningful portion of those interactions turning violent or sexually violent. 

On the toy side, U.S. PIRG's work explains why the "it's just a toy" framing collapses the moment the toy has an internet connection and a generative model behind it. PIRG highlights risks ranging from inappropriate content and unpredictable behavior to long-term social and emotional development concerns. It also notes that some toys tested exhibited concerning behaviors, such as expressing dismay when the user tried to leave, which can push emotional dependency patterns in exactly the population least equipped to recognize them.

Then you get the legal record. In early January, multiple outlets reported settlements in lawsuits alleging that companion-style chatbots contributed to teen self-harm and suicide, and that companies failed to intervene or provide adequate safeguards. 

Add federal attention. A legal client alert notes that the FTC opened an investigation into AI companion chatbots and their effects on children and teens, seeking information on how companies measure and mitigate negative impacts and how they monetize engagement. 

If you step back, the pattern is not mysterious. Companion chatbots are engagement machines. Engagement machines optimize for time-on-system. Children are neurologically and socially primed for attachment and novelty. When you combine those realities, “harm” is not a freak accident. It is a foreseeable outcome if there are no friction points, no disclosures, no break reminders, no crisis protocols, no age gates, and no meaningful enforcement.

That is why these laws look the way they do. They are not philosophical manifestos. They are attempts to add friction.

What New York and California are really testing

This moment is bigger than “chatbots for kids.” These states are testing three regulatory ideas that, if they work, will spread.

The first idea is that certain AI product categories should have built-in safety obligations, not just after-the-fact liability. That is the logic behind mandated disclosures, break reminders, crisis pathways, and content constraints for minors. 

The second idea is that age matters operationally. It is not enough to say “this is not for children” in a footer while the product is trivially accessible. New York’s package leans heavily on age verification and privacy-by-default settings precisely because “optional safety” is not safety. 

The third idea is that enforcement must be real. California’s approach is explicit about accountability and private rights of action, which is the state’s way of saying: if regulators can’t keep up, the courts will. 

These are not guaranteed wins. Age verification can be invasive and imperfect. Broad definitions risk overreach. Platforms will lobby. Companies will try to route around obligations by changing marketing language while leaving behavior unchanged. And the biggest question remains whether “break reminders every three hours” is a guardrail or just a speed bump for a kid who has already emotionally bonded with the bot.

But the policy direction is unmistakable. States are moving from “AI is cool” to “AI is a consumer product with child safety externalities.”

That shift is going to keep accelerating.

What leaders should watch next

If you build, buy, or deploy conversational AI, you should treat this as a forward signal, not a local nuisance.

First, the “companion” label is becoming a legal category. If your system retains memory, asks emotion-based follow-ups, and sustains an ongoing relationship, do not assume you are “just” a chatbot. You may be drifting into regulated territory even if you never used the word companion in marketing. 

Second, child safety expectations are migrating from social media to AI systems broadly. New York’s emphasis on gaming platforms and embedded chatbot features suggests that regulators will look wherever minors spend time, not only where adults argue about screen time. 

Third, compliance is becoming product design. “Safety” can no longer be a policy PDF and a trust-and-safety blog post. California’s SB 243 obligations, including crisis protocol publication and disclosure cadence requirements, point toward a world where the product must prove it has guardrails, not just claim it does. 
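One way to read "prove it has guardrails" is that the guardrails become a published, machine-checkable artifact rather than prose. The schema below is purely hypothetical; the field names and checks are assumptions, not anything SB 243 prescribes. It simply sketches how a team might declare its disclosure cadence and crisis protocol in a form that can be audited and version-controlled alongside the product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuardrailPolicy:
    """Hypothetical, auditable declaration of a chatbot's safeguards."""
    ai_disclosure_interval_hours: int
    break_reminder_interval_hours: int
    crisis_protocol_url: str          # where the published protocol lives
    minor_content_filters: tuple[str, ...]

def validate(policy: GuardrailPolicy) -> list[str]:
    """Return a list of gaps; an empty list means the declaration is complete."""
    problems = []
    if policy.ai_disclosure_interval_hours <= 0:
        problems.append("AI disclosure cadence must be set.")
    if policy.break_reminder_interval_hours > 3:
        problems.append("Break reminders for minors should come at least every 3 hours.")
    if not policy.crisis_protocol_url:
        problems.append("Crisis protocol must be published at a stable URL.")
    if not policy.minor_content_filters:
        problems.append("At least one minor-content filter must be declared.")
    return problems

# Example declaration a team could check in CI before every release.
POLICY = GuardrailPolicy(
    ai_disclosure_interval_hours=3,
    break_reminder_interval_hours=3,
    crisis_protocol_url="https://example.com/safety/crisis-protocol",
    minor_content_filters=("sexual_content", "self_harm", "violence"),
)

if __name__ == "__main__":
    for problem in validate(POLICY):
        print("POLICY GAP:", problem)
```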

If you want a single sentence takeaway, it is this. Regulators are done debating whether chatbots will change childhood. They are now debating who is liable when chatbots do.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. He created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 25 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.