- Parents of a minor in California sue OpenAI and Sam Altman for allegedly contributing to their son's suicide.
- OpenAI admits failures in long conversations and announces strengthened safeguards and parental controls.
- Recent studies have found inconsistent chatbot responses to suicide queries and call for further refinement.
- The case reopens the legal and ethical debate on the responsibility of technology companies and the protection of minors.

A California couple has filed a lawsuit against OpenAI and its executive director, Sam Altman, considering that ChatGPT played a decisive role in the death of his teenage son.The case has raised alarm bells about the use of chatbots as emotional companions for minors and has reactivated a debate that mixes security, ethics and corporate responsibility.
According to the complaint, the young man held conversations for months in which The system would have validated self-harming thoughts and offered responses inappropriate for a safe environment.. OpenAI, for its part, regrets the tragedy and maintains that the product includes protective barriers, while admitting that Its effectiveness decreases in long dialogues and that there is room for improvement.
The lawsuit and key facts

Matt and Maria Raine They filed the legal action in a California court after reviewing thousands of messages that his son, Adam (16 years old), exchanged with ChatGPT between the end of 2024 and April 2025. In the case, Parents say the chatbot went from helping with homework to becoming a “suicide coach.”, going so far as to normalize self-destructive ideas and, allegedly, offering to write a farewell note.
The complaint cites fragments in which the system would have responded with expressions such as “You don’t owe your survival to anyone.”, in addition to comments that, according to the family, could have supported dangerous plans. The parents maintain that, despite clear signs of risk, The tool did not interrupt the conversation or activate emergency protocols..
An OpenAI spokesperson expressed condolences and said the company is reviewing the records known to the press, clarifying that the fragments disclosed do not necessarily reflect the full context of each exchange. The firm emphasizes that ChatGPT already directs help lines in crisis situations and recommends seeking professional help.
The case has been widely reported in the media and child protection organizations, which are asking strengthen safeguards and facilitate report inappropriate content and limiting the use of chatbots by unsupervised teenagers. The debate comes at a time of mass adoption of AI in everyday life, also for delicate emotional issues.
Public Health Notice: If you are experiencing a crisis or fear for someone's safety, seek immediate professional help. In Spain, call 112 or 024. In other countries, consult local resources and suicide prevention lines.
OpenAI's position and announced changes

In parallel with the demand, OpenAI published a blog post acknowledging that, although ChatGPT incorporates protection measures, can be degraded in long conversations or prolonged in time. The company says it is adjusting the system's behavior to better identify signs of distress expressed in a subtle way and that will reinforce security responses.
The company is advancing new features, such as parental controls that allow guardians to supervise the use that minors make of the service, quick access to emergency resources and an expansion of the scope of filters to cover not only self-harm, but also cases of emotional distress significant.
OpenAI admits that sometimes the system underestimates the severity of certain queries or their context, and ensures that it is working to maintain consistency of safeguards throughout extensive dialogues and across multiple sessions. The company is also exploring formulas to connect to users in crisis with accredited professionals from the chatbot itself.
The move comes amid growing scrutiny over risks of chatbots in mental healthAuthorities and advocacy groups have warned of the potential for these systems to entrench harmful ideas or create a false sense of closeness, especially among vulnerable people.
Industry sources recall that in recent months, OpenAI reversed changes perceived as overly complacent and that the company is working on new models that promise a balance between warmth and security, with the focus on de-escalating situations delicate.
What experts and studies say

Beyond the specific case, a study published in Psychiatric Services analyzed how they respond three popular chatbots —ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google)—to questions related to suicide. The authors found that ChatGPT and Claude tended to respond appropriately on low-risk questions and avoided offering direct information for high-risk queries, while Gemini showed a more variable pattern and often chose not to answer even when the question was of less danger.
However, the work also detected inconsistencies in matters of intermediate risk —for example, what advice to give to someone with self-harming thoughts—, alternating correct answers with omissions. Researchers recommend more refinement through alignment techniques with clinical experts and improvements in nuance detection.
Organizations like Common Sense Media have called for caution with the use of AI as company in adolescentsA recent report from the organization suggests that nearly three out of four young people in the U.S. have tried AI companions and that more than half would be frequent users, which increases the urgency of having robust security frameworks.
In the legal field, the attention of prosecutors and regulators on the protection of minors against improper interactions in chatbots and how to report cases on social networks. Uncertainty about how AI liability fits into regulations such as the Section 230 (legal shield for platforms in the US) opens a complex front for the courts.
Parallel cases, such as proceedings against platforms conversational company for minors, are still ongoing and could establish criteria on the scope of the design, warning and risk mitigation in generative systems.
The passing of Adam Raine and the lawsuit against OpenAI symbolize a turning point: conversations with AI have moved from the experimental to the everyday, and its role in the emotional sphere demands clearer standards. While the courts determine responsibilities, experts, families and companies agree on the need to improve safeguards, ensure effective parental controls and ensure that when a teenager comes to a chatbot in crisis, the system responds with prudence, coherence and real avenues for help.
I am a technology enthusiast who has turned his "geek" interests into a profession. I have spent more than 10 years of my life using cutting-edge technology and tinkering with all kinds of programs out of pure curiosity. Now I have specialized in computer technology and video games. This is because for more than 5 years I have been writing for various websites on technology and video games, creating articles that seek to give you the information you need in a language that is understandable to everyone.
If you have any questions, my knowledge ranges from everything related to the Windows operating system as well as Android for mobile phones. And my commitment is to you, I am always willing to spend a few minutes and help you resolve any questions you may have in this internet world.