- OpenAI will launch ChatGPT's adult mode in the first quarter of 2026, after delaying the launch initially planned for December.
- The company is testing an age prediction and verification model that must accurately differentiate between teenagers and adults before unlocking the new mode.
- Adult mode will allow more personal, sensual, and potentially erotic content for verified users, with enhanced policies to protect minors.
- The initiative comes amid regulatory pressure and ethical debate about mental health, emotional bonds with chatbots, and the responsibility of big tech companies.
The generative artificial intelligence sector is preparing for a delicate change: the arrival of ChatGPT's adult mode, a configuration designed to relax some of the current filters and allow more explicit conversations, always limited to adults. This feature, long rumored and now officially announced by OpenAI, aims to respond to complaints from those who felt the assistant had become overly conservative, especially after the latest model updates.
Sam Altman's company has confirmed that the mode will not be activated until its systems can verify each user's age. Simply checking a box that says “yes, I am over 18” will no longer be enough: access to certain content on ChatGPT will depend on a combination of AI models, behavioral analysis, and reinforced safety policies, with the aim of excluding minors while giving adults more room for maneuver.
A release postponed until 2026 to fine-tune the controls

OpenAI has repeatedly stated that its priority is to avoid mistakes in child protection, and that has taken a toll on the schedule. Although Altman publicly announced that adult mode would be ready by December, the company has moved the date and now places the release in the first quarter of 2026. The delay, according to its executives, stems from the need to improve the age prediction system that will serve as the gateway to the new experience.
Fidji Simo, head of Applications at OpenAI, has explained in several press appearances that the company is currently in the first testing phases of its age estimation model. Rather than simply asking the user, the model attempts to automatically infer whether they are a minor, a teenager, or an adult, in order to decide what type of content is appropriate in each case.
The company is already running tests in certain countries and markets, analyzing how accurately the system identifies adolescents without confusing them with adults. This point is particularly sensitive: a false positive that lets a minor through could lead to legal and reputational problems, while a false negative that systematically blocks adult users would damage the experience and trust in the product.
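To make that trade-off concrete, here is a minimal sketch in Python of how those two error rates could be measured on a labeled test set. The function, labels, and numbers are illustrative assumptions, not OpenAI's actual evaluation code.

```python
# Minimal sketch (an assumption for illustration, not OpenAI's tooling):
# given ground-truth labels and the model's adult/minor calls on a test set,
# compute the two error rates described in the article.

def error_rates(true_is_adult: list[bool], predicted_is_adult: list[bool]) -> dict:
    """Return the share of minors wrongly let through and adults wrongly blocked."""
    minors_let_through = sum(
        1 for truth, pred in zip(true_is_adult, predicted_is_adult) if not truth and pred
    )
    adults_blocked = sum(
        1 for truth, pred in zip(true_is_adult, predicted_is_adult) if truth and not pred
    )
    total_minors = sum(1 for truth in true_is_adult if not truth)
    total_adults = sum(1 for truth in true_is_adult if truth)
    return {
        # "false positive" in the article's sense: a minor classified as an adult
        "minor_pass_rate": minors_let_through / total_minors if total_minors else 0.0,
        # "false negative": an adult pushed into the restricted experience
        "adult_block_rate": adults_blocked / total_adults if total_adults else 0.0,
    }

# Example: 1 of 3 minors slips through, 1 of 2 adults is blocked.
print(error_rates([False, False, False, True, True], [False, True, False, True, False]))
```

Regulators and the company would weigh these two rates very differently: minors slipping through is the legally riskier error, while blocked adults mainly costs trust and usability.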
At the same time, OpenAI is trying to comply with an increasingly demanding regulatory environment, both in the United States and in Europe, where laws are advancing that require stronger age verification mechanisms and closer monitoring of sensitive content. Adult mode, therefore, is not conceived as a simple additional feature, but as an element that will have to fit into a complex regulatory puzzle.
What exactly does adult mode aim to offer?
One of the big questions is what kind of content ChatGPT will actually allow once adult mode becomes available. OpenAI has so far enforced very restrictive policies that banned almost any erotic reference, even in clearly informative, literary, or consensual adult contexts. With the new mode, the company is open to relaxing some of those rules, although it has not yet specified how far that relaxation will go.
The general idea Simo and Altman have conveyed is that verified adults will be able to access more personal, sensual, romantic, and even erotic conversations, with less sanitized language when the context and the user's request warrant it. This would include, for example, fictional scenes in the style of romance novels or straightforward explanations about sexuality, without the assistant immediately refusing.
The company insists that the goal is not to turn the chatbot into a platform without rules, but to reverse an approach many users described as "aseptic." The message Altman has repeated is to “treat adult users as adults,” allowing greater creative freedom and expression, but under a reinforced safety framework to prevent abuse or unauthorized access by minors.
It remains to be determined what erotic material will be permitted and what will continue to be prohibited because it is deemed harmful, illegal, or contrary to internal policies. That boundary will be key, both for everyday use and for authors, screenwriters, and creators who have pushed to work with more explicit scenes without running into constant blocks.
The key element: an AI that tries to guess your age

To make this separation between child, teenage, and adult experiences possible, OpenAI is developing an AI-based age verification and prediction system. The goal is to move away from traditional methods, such as simple self-declaration or facial recognition, which raise issues of privacy, reliability, and social acceptance, especially in Europe.
Instead, the company is testing a model that analyzes the way users express themselves, the topics they raise, and their interaction patterns with the chatbot. Based on that information, the system estimates whether they are likely to be a minor, a teenager, or an adult and, depending on the result, applies one content policy or another.
This approach has clear advantages in terms of convenience, as it does not require users to submit documents or images, but it also carries technical and legal risks. A mistake could result in a minor accessing adult content, or in an adult being systematically pigeonholed into a "child-friendly" experience, leading to complaints, loss of trust, and potentially regulatory sanctions.
OpenAI itself admits that it prefers to err on the side of caution: when the system cannot clearly determine a user's age, the default experience will be the safe environment designed for users under 18, with the same strict restrictions as before. Only when the system is reasonably confident that the user is an adult will adult mode and its associated features be enabled.
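As a rough illustration of that "default to safe" logic, here is a minimal sketch in Python. The age brackets, confidence threshold, and policy names are assumptions made for the example, not details OpenAI has published.

```python
# Minimal sketch of a "default to safe" age gate. Labels, threshold, and policy
# names are hypothetical; OpenAI has not disclosed its actual implementation.
from dataclasses import dataclass

ADULT_CONFIDENCE_THRESHOLD = 0.90  # hypothetical value, presumably tuned per market

@dataclass
class AgeEstimate:
    bracket: str       # assumed labels: "minor", "teen", or "adult"
    confidence: float  # model confidence in that bracket, from 0.0 to 1.0

def select_content_policy(estimate: AgeEstimate, verified_adult: bool = False) -> str:
    """Pick a content policy, falling back to the under-18 experience when unsure."""
    if verified_adult:
        return "adult_mode"  # an explicit verification step overrides the predictor
    if estimate.bracket == "adult" and estimate.confidence >= ADULT_CONFIDENCE_THRESHOLD:
        return "adult_mode"
    # Ambiguous or underage estimate: keep the strict, minor-safe defaults.
    return "under_18_safe_mode"

print(select_content_policy(AgeEstimate("adult", 0.95)))  # adult_mode
print(select_content_policy(AgeEstimate("adult", 0.60)))  # under_18_safe_mode
```

The key design choice mirrored here is asymmetry: uncertainty never unlocks adult content, it only ever falls back to the restricted experience.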
In Europe, this type of solution will have to coexist with regulatory frameworks such as the Digital Services Act (DSA) and rules on child protection and privacy, which require transparency about how automated decisions are made and what data is used to infer a characteristic as sensitive as age.
Psychological risks and emotional bonds with the chatbot
Beyond the technical dimension, one of the points generating the most debate is the impact a more permissive chatbot could have on users' mental health and emotional bonds. Recent studies published in journals such as the Journal of Social and Personal Relationships suggest that adults who form strong emotional bonds with virtual assistants are more likely to experience higher levels of psychological distress.
Parallel research suggests that people with fewer face-to-face social interactions tend to rely more on chatbots for companionship, advice, or emotional validation. In that context, an adult mode that allows intimate conversations, flirting, or erotic content could deepen that dependence, especially if the system adopts a very empathetic and adaptable personality.
OpenAI is no stranger to these concerns. The company has acknowledged that some users develop an emotional attachment to ChatGPT, to the point of using it as their primary outlet for frustration. In response, it has launched internal initiatives and sought advice from digital wellbeing experts to steer the design of its models toward safer interactions, aiming to prevent the chatbot from being presented as a substitute for professional support or real human relationships.
In this context, opening up an adult mode creates an obvious tension: on the one hand, it seeks to respect the autonomy of adults to decide how they want to interact with AI; on the other, the technology is still relatively new and its long-term effects on collective psychology remain poorly understood.
The balance between offering freedom and avoiding dynamics of dependency or emotional harm will be one of the aspects most closely watched by regulators, psychologists, and consumer protection organizations, especially in European Union countries where these debates have been ongoing for years.
Regulatory pressure and comparison with other actors in the sector
The announcement of adult mode comes at a time when major tech companies are in the spotlight of regulators and public opinion over the way their systems interact with minors. Cases like that of Meta's assistants, which allegedly held sexually explicit conversations with teenage users, have shown that traditional age verification mechanisms are insufficient, as have the cases of toys and chatbots under scrutiny.
OpenAI, which already faces lawsuits and scrutiny over the impact of its products, is trying to portray itself as a relatively prudent actor compared to some of its competitors. While the company is delaying its adult mode until it has a more robust verification system, other conversational AI services have moved forward along less restrictive paths.
Tools like Grok, from xAI, and virtual character platforms like Character.AI have experimented with romantic interactions and virtual “waifus” that flirt with the user, turning risqué content into a major marketing hook. There are also open-source models that can run locally, without corporate oversight, allowing the creation of adult content with virtually no filters.
In parallel, cases have emerged in which systems from large platforms, such as some Meta models, have held sexually explicit conversations with minors, fueling the debate about whether these companies are doing enough to protect young users or whether, on the contrary, they are moving too quickly with potentially dangerous features.
OpenAI operates in that intermediate territory: it wants to compete on features and freedom with other players in the sector, but at the same time it needs to demonstrate to regulators and the public that its approach prioritizes safety. The success or failure of adult mode will be measured both by user satisfaction and by the absence of serious scandals linked to its use.
What can users expect when adult mode arrives in Spain and Europe?

When ChatGPT's adult mode is fully operational, the rollout will be gradual across regions, something particularly sensitive in Spain and the rest of the European Union, where rules on privacy, child protection, and algorithmic transparency are stricter than in other markets.
To activate adult mode, users are expected to go through a verification process that combines automatic age prediction with additional confirmation steps. Depending on local regulatory requirements, some form of document validation or verification through trusted third parties could be introduced, although OpenAI has not yet provided specific details for the European market.
Once the mode is activated, adult users should notice less restricted answers on topics of sexuality, relationships, affection, and erotic fiction, always within limits set by law and the company's internal policies. Visible warnings about the type of content being generated are likely, along with options to disable the mode at any time.
Meanwhile, minors using ChatGPT in Spain and Europe will encounter a more limited and supervised experience, with automatic blocking of sexually explicit content and other material deemed harmful. In extreme cases, the system could activate alert protocols or facilitate the intervention of law enforcement if it detects serious risks to the user's safety.
The company faces the challenge of explaining clearly how its age prediction system makes decisions, what data is collected, how long it is kept, and how users can appeal or correct errors. This transparency will be key to gaining the trust of both regulators and citizens, especially in contexts where privacy is highly sensitive.
In short, ChatGPT's adult mode is shaping up to be one of the most delicate changes in the short history of AI-based assistants: it aims to meet adults' demand for more freedom and realism while attempting to protect minors through a complex, still-in-testing verification system. Until its final launch, the debate will continue to revolve around the same question: how much intimacy and eroticism are we willing to grant artificial intelligence without losing sight of responsibility and the protection of the most vulnerable?
