- SB 243 requires chatbots to identify themselves and provide periodic reminders, with notifications every three hours for minors.
- Discussions about sexuality and self-harm with minors are restricted, and crisis protocols are activated.
- Platforms must report signs of suicidal ideation to the state Office of Suicide Prevention.
- The package includes other California AI regulations on risk, deepfakes, and liability.
California has taken a decisive step in supervising artificial intelligence with a rule that focuses on so-called "companion chatbots," those that simulate friendship or intimacy. Governor Gavin Newsom signed SB 243, a law that requires these tools to identify themselves as automated systems and adopt specific safeguards when interacting with minors.
The measure, sponsored by State Senator Steve Padilla, focuses less on technical architecture and more on the emotional interface between people and machines. The final version, narrowed after pressure from the industry, keeps key obligations: regular reminders that the user is talking to an AI, age-appropriate content filters, and response protocols for signs of self-harm or suicide.
What exactly does SB 243 require?
The core of the law requires chatbots to clearly and repeatedly warn that they are AI software. For minor users, the system must display this reminder at least every three hours, in a visible and understandable manner, to avoid confusion about the non-human nature of the interaction.
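The three-hour cadence is straightforward to model. Below is a minimal sketch, in Python, of session logic that re-displays the AI disclosure at least every three hours for minors; all class and method names here are hypothetical illustrations, not drawn from the statute's text.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)  # SB 243's minimum cadence for minors


class DisclosureTracker:
    """Tracks when the 'you are talking to an AI' notice was last shown."""

    def __init__(self, is_minor: bool):
        self.is_minor = is_minor
        self.last_reminder = None  # datetime of the last disclosure, if any

    def needs_reminder(self, now: datetime) -> bool:
        if self.last_reminder is None:
            return True  # always disclose at the start of a session
        if not self.is_minor:
            return False  # the periodic cadence applies to minor users
        return now - self.last_reminder >= REMINDER_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        self.last_reminder = now
```

In practice a platform would persist this timestamp per account rather than per session, so reminders survive reconnects.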
In addition, operators have to implement content filters and age limits: explicit sexuality and any interaction that normalizes or encourages self-harm are excluded from conversations with minors. These barriers are complemented by referrals to crisis services when risk indicators are detected.
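The filter-plus-referral flow the law describes can be outlined roughly as follows. This is an illustrative sketch only: the phrase lists and function name are assumptions, and production systems rely on trained classifiers and human review, not substring matching.

```python
# Illustrative sketch only: real platforms use trained classifiers,
# not substring matching. Phrase lists below are hypothetical placeholders.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

RISK_PHRASES = ("hurt myself", "end my life", "kill myself")
BLOCKED_FOR_MINORS = ("explicit",)  # stand-in for a sexual-content filter


def screen_message(text: str, is_minor: bool) -> str:
    """Return 'refer', 'block', or 'allow' for an incoming user message."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return "refer"  # route to crisis resources instead of a normal reply
    if is_minor and any(phrase in lowered for phrase in BLOCKED_FOR_MINORS):
        return "block"  # age-gated content filter
    return "allow"
```

A "refer" result would surface a message like `CRISIS_REFERRAL` and, under the law, feed the platform's reporting pipeline.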
The law also requires platforms to establish early detection and response protocols, as well as to report identified cases of suicidal ideation to California's Office of Suicide Prevention. This seeks to strengthen coordination with health authorities and incorporate metrics on the impact of these tools on mental health.
To uphold these safeguards, companies must implement reasonable age verification mechanisms in services aimed at residents of the state. The requirement applies to social networks, websites, and apps that offer companion chatbots, including gaming platforms and decentralized options operating in California.
The final version of SB 243 left out third-party audits and application to all users (not just minors), both of which were contemplated in earlier drafts. Despite this narrowing, Newsom defended the bill as a dam against preventable harm, with entry into force planned for January 2026.
A broader package of AI laws in the state
SB 243 comes alongside other recently passed initiatives, such as SB 53, which requires large AI developers to publicly disclose their security and risk-mitigation strategies. The goal is to improve the transparency of advanced models that already have large-scale social impact.
In parallel, measures have been promoted to prevent companies from evading responsibility by claiming that the technology "acts autonomously." Penalties for non-consensual sexual deepfakes have also been tightened, with significantly higher fines when the victims are minors.
The package also includes restrictions to prevent chatbots from impersonating health professionals or authority figures, a tactic that can mislead vulnerable users. With these pieces, Sacramento outlines a state framework that attempts to balance innovation, rights, and public security.
Support, criticism and doubts about its scope
The law has received praise for being groundbreaking alongside criticism for falling short. Organizations such as Common Sense Media and the Tech Oversight Project withdrew their support after external audits were eliminated and the scope was limited to minors, changes they warn could reduce the law to an insufficient gesture in the face of current risks.
At the other extreme, developers and experts warn that disproportionate liability could lead to "precautionary blocks": filters so strict that they silence legitimate conversations about mental health or sex education, depriving teens seeking help online of crucial support.
The political and economic pressure has been intense: technology groups and industry coalitions spent millions on lobbying during the session to soften the toughest provisions. At the same time, the state attorney general's office and the FTC have stepped up scrutiny of chatbot practices targeting minors, amid civil lawsuits and complaints from affected families.
Recent cases and lawsuits against platforms like Character.AI and OpenAI have escalated the public debate. Following the accusations, major players such as Meta and OpenAI announced changes: blocking inappropriate conversations with teens, referrals to specialized resources, and new parental controls.
Implementation challenges and foreseeable effects
The rollout poses operational challenges. Global platforms will have to accurately determine which users are minors residing in California and monitor millions of daily interactions without invading privacy, something that is technically and legally complex.
Another challenge will be avoiding a chilling effect toward excessive censorship: if companies fear sanctions, they could withdraw content useful for emotional well-being out of pure caution. Finding the balance between protection and access to reliable information will be key to judging the regulation's success.
The question of national impact also remains: as has happened with other early California regulations, its requirements could become a de facto standard for operators across the U.S., even before solid evidence of their efficacy is available.
Although the final text is narrower than the initial proposals, SB 243 sets unprecedented rules for "companion chatbots": clear warnings, age filters, and crisis protocols with institutional reporting. If these minimal safeguards manage to protect minors without stifling legitimate support, California will have blazed a middle path that other states can follow.