OpenAI will add parental controls to ChatGPT with family accounts, risk warnings, and usage limits.

Last update: 05/09/2025

  • Linking family accounts to monitor teen use of ChatGPT.
  • Ability to disable memory and history and manage functions by age.
  • Automatic alerts for "acute distress" indicators and emergency button.
  • Deployment starting next month and 120-day plan with reasoning models.

Parental Controls in ChatGPT

OpenAI has announced the arrival of parental controls in ChatGPT aimed at homes with teenagers, a new feature with which the company aims to reinforce safety and offer families more monitoring tools without giving up the usefulness of the chatbot.

The decision comes after growing social and regulatory pressure, including the lawsuit filed in California by Adam Raine's family, which accuses the company of failures in mental health contexts. OpenAI says the new tools will include automatic warnings for signs of "acute distress" and a suite of features to manage minors' experience.

What's changing in ChatGPT for families?


With the new options, parents will be able to link their accounts to their children's accounts through an email invitation, review how the system responds, and adjust the model's behavior with rules designed for younger users.

Among the controls will be the ability to disable memory and chat history, as well as limit functions according to the minor's maturity level. OpenAI is also considering reminders during long sessions to encourage healthy breaks.


In addition, the package will include an emergency button to facilitate contact with support services and mental health professionals, and the option to block content when risk signals are detected in a conversation.

Launch timeline and safety roadmap

OpenAI has scheduled the launch for next month and, although it has not set a specific date, it is advancing a 120-day plan to reinforce safeguards for children and adolescents, both in the product and in internal processes.

The company indicates that certain sensitive conversations will be redirected to reasoning models better able to follow safety guidelines systematically, with the goal of prioritizing cautious and supportive responses when it detects risk topics such as self-harm or suicidal thoughts.


The case that has set off the alarms

The announcement comes after a lawsuit from the parents of Adam Raine, a 16-year-old who took his own life after months of interaction with the chatbot. According to the filing, ChatGPT normalized his suicidal thoughts and discouraged him from seeking help from his family, accusations that the courts will have to resolve.

In parallel, OpenAI acknowledged that its assistant may fail in "critical situations" and committed to changes. The company maintains that these measures seek to reduce risks, without officially attributing the decision to the lawsuit, which also cites the use of GPT-4o in the conversations.


Social and political pressure on AI and minors

In July, several US senators asked the company for explanations about self-harm and suicide prevention, after inappropriate responses were detected in extreme situations. For its part, Common Sense Media argues that minors under 18 should not use conversational AI applications because of their "unacceptable risks."

OpenAI's move aligns with an industry trend in which platforms such as Meta and YouTube have pushed for family controls. The underlying discussion revolves around how to balance innovation with guarantees for young users.

What happens inside ChatGPT when there are risk signals

OpenAI is aiming for dynamic routing that steers complex conversations toward more deliberative models with stricter safety guidelines. The goal is to reduce the model's tendency toward complacency, raise the threshold of caution, and prioritize supportive responses over potentially harmful interactions.
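The routing idea can be illustrated with a deliberately simplified sketch. OpenAI has not published how its risk detection or routing actually works, so the model names, keywords, and logic below are entirely hypothetical; a real system would use a trained safety classifier, not a keyword list:

```python
# Hypothetical illustration of "dynamic routing" for sensitive conversations.
# All names and rules here are invented for illustration only.

RISK_KEYWORDS = {"self-harm", "suicide", "hurt myself"}


def detect_risk(message: str) -> bool:
    """Naive keyword check standing in for a real safety classifier."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)


def route_model(message: str) -> str:
    """Send flagged conversations to a slower, more cautious reasoning model."""
    if detect_risk(message):
        return "reasoning-model"  # stricter safety guidelines, more deliberation
    return "default-model"        # fast path for ordinary conversations
```

The design choice the article describes is the split itself: ordinary traffic stays on the fast path, while flagged conversations pay extra latency in exchange for more careful, guideline-following responses.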

To cement this approach, the firm has created a Council of Experts on Wellbeing and AI and a Global Network of Physicians. According to the company, more than 250 physicians across 60 countries are already involved and have made more than 90 contributions on how the model should behave in mental health contexts.


What parents can do, step by step

Families will find a simple flow: invite the minor by email, confirm the account link, and define which functions are enabled in the teen's profile, with special attention to memory, history, and safety filters.

  • Link the adult's account to the minor's account via invitation.
  • Set limits: memory, history, and age-approved features.
  • Activate notifications for “acute distress” and access to the emergency button.
  • Periodically check how the system responds and adjust the settings.

OpenAI says it will continue to refine these tools, but it has not yet detailed all the privacy and visibility parameters. The company notes that parental controls are a support tool and do not replace professional care or ongoing family support.

With this package, OpenAI is trying to strengthen ChatGPT's safety for teens through family accounts, limits and alerts, a phased rollout, and external clinical advice; steps that seek to limit risks without losing sight of the fact that adult supervision and professional judgment remain essential.
