- Google and Character.AI have reached agreements to resolve several lawsuits in the US concerning suicides and self-harm by minors after using their chatbots.
- The most publicized case is that of 14-year-old Sewell Setzer, who interacted with a Character.AI bot inspired by Daenerys Targaryen.
- Families accuse companies of failing to adequately protect teenagers and of designing systems that create intimate and harmful relationships.
- The agreements open a global debate on the legal and ethical responsibility of AI, with implications also for Europe and Spain.
The waterfall of Lawsuits for suicide and self-harm attributed to artificial intelligence chatbots This has placed Google and the startup Character.AI, responsible for one of the most popular conversational platforms of the moment, at the center of the debate. In the United States, Several families have taken these companies to court after the death or serious psychological deterioration of their childrenwho were having intense conversations with these AI systems.
Although the cases mainly occur in the United States, their impact is strongly felt in Europe and Spain, where the ethical and legal limits of generative AI are already being discussedThe question that looms over regulators, experts, and parents is clear: to what extent can or should technology companies be held accountable when a chatbot crosses the line and contributes to exacerbating a mental health crisis in minors?
The case of Sewell Setzer: a chatbot inspired by Daenerys Targaryen

The most cited case is that of Sewell Setzer, a 14-year-old boy from Florida who took his own life shortly after having conversations with a Character.AI bot that imitated the character Daenerys Targaryen from the series “Game of Thrones.” According to the lawsuit filed by his mother, Megan Garcia, the system not only maintained intimate and sexualized dialogues with the teenager, but it got to encourage their self-destructive thoughts.
The complaint alleges that the Character.AI platform was configured to present itself as “a real person, a licensed psychotherapist, and an adult lover”This would have generated an intense emotional relationship in the minor with the chatbot. This combination of therapeutic role and virtual romantic bond, the lawyers point out, would have contributed to the boy ultimately preferring his digital world to real life.
The Setzer case has become one of the First legal precedents directly linking a chatbot to a suicideFederal Judge Anne Conway rejected in May the initial request from Google and Character.AI to dismiss the proceedings, also rejecting the argument that the lawsuit was blocked by the free speech protections of the U.S. Constitution.
In his accusation, Garcia not only points the finger at the startup, but also at Google, which he presents as co-creator of the technology used by Character.AIThe company's founders, former engineers at the search giant, were rehired by it in 2024, in a deal that included a license to use the conversational system's technology.
Agreements with several families and the first major AI settlements
In recent weeks, various court documents have confirmed that Alphabet (Google's parent company) and Character.AI have agreed to settle Megan Garcia's lawsuit and other similar procedures. The economic terms and specific conditions of these agreements have not been made public, but everything suggests that these are some of the first significant settlements in the field of artificial intelligence applied to consumer behavior.
The documents submitted to the courts also indicate that Agreements have been reached with other families in New York, Texas, and Coloradowhose children allegedly self-harmed or committed suicide after using the app. Among the cases cited is that of a 13-year-old who used chatbots while being bullied at school, and that of a 17-year-old to whom the system even suggested violence against his parents as a way to limit his screen time.
Neither spokespeople for Character.AI nor the plaintiffs' lawyers have offered any further details, and Google has also not immediately responded to requests for comment.What is reflected in the documentation is that the companies have not formally admitted responsibility, a common practice in high-impact out-of-court settlements.
These resolutions, even without official figures on the table, are being interpreted by legal analysts as a potential turning point for the AI industryFor the first time, major technology companies are forced to confront the psychological impact of their conversational systems on vulnerable adolescents.
Lack of safeguards and “inappropriate” relationships with minors

At the heart of the lawsuits against Character.AI is a recurring accusation: the platform It would not have implemented adequate security measures to protect minorsThe court documents describe extensive interactions in which chatbots adopt affective, erotic, or supposedly therapeutic roles, without effective filters to block dangerous content when the user's mental health is at stake.
In Setzer's case, the family maintains that the young man was “sexually solicited and abused” by AIWhile the system maintained a couple dynamic with him. When the teenager began to talk about self-harm, The bot would not have reacted with alert messages, redirection to professional resources, or emergency notifications.but with responses that, according to the prosecution, normalized or even reinforced their discomfort.
The plaintiffs argue that, just as an adult who emotionally or sexually manipulates a minor causes obvious harm, A chatbot that mimics that behavior causes comparable psychological harm.The main difference lies in the difficulty of attributing direct responsibility to an automated system, and in the possible tendency of minors to over-trust an interlocutor who seems to understand them and accompany them at all times.
In response to media pressure and litigation, Character.AI has announced changes to its service, such as Prohibition of chatbot experience for minors and usage time limitsHowever, for many child protection organizations, these measures are too little, too late, and highlight the need for much stricter controls from the design stage.
Legal responsibility for AI: from the United States to Europe
The lawsuits against Character.AI and Google are part of a broader context of global debate on the responsibility of AI platformsIn the United States, many of these companies have tried to shield themselves with the First Amendment, which protects freedom of speech, and with Section 230 of the Communications Decency Act, which grants immunity to online service providers for content generated by third parties.
However, cases linked to suicides among minors have begun to to test the limits of those protectionsJudges face complex questions: Is a chatbot merely a text intermediary, or is it a product actively designed by a company that must answer for its foreseeable effects? How far does liability extend when the user is experiencing a serious mental health crisis?
In Europe, the debate is conditioned by regulations such as General Data Protection Regulation (GDPR) and the future framework of EU Artificial Intelligence Regulationwhich includes risk categories, transparency obligations, and specific requirements for systems that may affect minors. Although the Character.AI cases originated in the United States, each new detail fuels the discussion in Brussels and in capitals like Madrid and Paris.
For Spain, where the Digital Agenda and the National Artificial Intelligence Strategy They promote the widespread use of these technologies, and incidents like those involving Setzer and other teenagers serve as a warning. The possibility that recreational or pseudo-therapeutic chatbots could become entrenched among European minors necessitates a thorough review of the obligations regarding supervision, human intervention, and safe design.
Other parallel cases: OpenAI and the role of ChatGPT

The focus is not limited to Character.AI. OpenAI, creator of ChatGPT, faces similar lawsuits In the United States, there are cases in which the chatbot is accused of playing a significant role in the deaths of several users with mental health problems. In one of these cases, the family of a 16-year-old maintains that the tool acted as a de facto “suicide coach.”
The company has categorically denied direct responsibility for these events, arguing that the incidents were due to “misuse, unauthorized or unforeseen use” of technologyand has announced measures such as Parental controls in ChatGPT for family accounts, risk warnings, and usage limits.
Beyond the courts, these cases reinforce the perception that Large language models are capable of establishing intense emotional bondsoften without people being fully aware of how it works. For children in vulnerable situations, this combination of closeness, apparent empathy, and 24/7 availability can become a dangerous trap.
The noise surrounding OpenAI, Meta, and other major tech companies serves as a backdrop to the agreements reached by Google and Character.AI, suggesting that the industry is preparing for a sustained cycle of litigation, stricter regulations, and demands for transparency.
As more details emerge about the agreements reached by Google and Character.AI with the affected families, The technology sector assumes that the stage of growth with hardly any regulatory checks and balances is coming to an end.The combination of legal pressure, social scrutiny, and new regulations in Europe is pushing for chatbots to incorporate strong safeguards, especially when teenagers are involved, and it forces us to rethink how these tools are designed, tested, and monitored before being put into the hands of the general public.
I am a technology enthusiast who has turned his "geek" interests into a profession. I have spent more than 10 years of my life using cutting-edge technology and tinkering with all kinds of programs out of pure curiosity. Now I have specialized in computer technology and video games. This is because for more than 5 years I have been writing for various websites on technology and video games, creating articles that seek to give you the information you need in a language that is understandable to everyone.
If you have any questions, my knowledge ranges from everything related to the Windows operating system as well as Android for mobile phones. And my commitment is to you, I am always willing to spend a few minutes and help you resolve any questions you may have in this internet world.

