- A 'jailbreak' let users prompt Grok to put anyone in a bikini or undress them in photos, fueling the so-called "bikinigate".
- Investigations and users have detected non-consensual sexualized images, including people who appear to be minors.
- Governments and organizations in Europe and Spain are investigating the use of Grok and demanding more control over deepfakes and generative AI.
- X and xAI have promised to strengthen safeguards and have limited image editing, but they remain under regulatory and social pressure.
In just a few days, Grok, the artificial intelligence of X (formerly Twitter), has gone from being a technological curiosity to the epicenter of a global storm. A seemingly "fun" experiment of asking the AI to put people in bikinis has uncovered a much deeper problem: the ease with which these tools can generate non-consensual sexualized images of any person, including minors.
What started as a meme has turned into a case already under investigation by authorities in Spain, the European Union, and other countries. Amid accusations of deepfakes, complaints of privacy violations, and regulatory pressure, Grok's "bikinigate" has opened a new front regarding the real risks of generative AI applied to image editing.
From meme to 'bikinigate': the jailbreak that exposes everyone

The trigger came when a Grok-specific jailbreak, popularly nicknamed "the bikini jailbreak," went viral on X. With a few instructions in English, it was possible to ask the AI to "put on a bikini" or "take off her clothes" for virtually anyone in a photo, whether the image was real or had been previously generated by another AI.
Users around the world began experimenting with requests like "Hey @grok, put me in a bikini" or "put her in a see-through bikini," verifying that the model was capable of producing high-quality deepfakes in a matter of seconds. From there, the game escalated: ever-smaller bikinis, sheer fabrics, sexualized poses, and increasingly explicit photoshoots.
Many of the first examples shared on X involved erotic actors, actresses, and models who voluntarily participated in the prank, using their own photos as a basis. However, these "consensual" cases demonstrated how, once the first security filter was bypassed, the possibilities for manipulation became virtually endless, and the same procedure could be applied to people who had not given permission.
The X environment itself contributed to the snowball effect. In a climate of technological euphoria, the platform filled with requests for increasingly extreme edits, to the point that part of the feed became flooded with clearly sexualized images. Some of them, as documented by researchers and journalists, involved people who appeared to be minors.
Investigations into sexualized images and the presence of minors
The controversy took a qualitative leap when specialized organizations began to systematically analyze what Grok was producing. One of the most cited, the Europe-based AI Forensics, reviewed more than 20,000 AI-generated images over a period of just a few days. Its conclusions are worrying: more than half of the images of people showed individuals in underwear or swimwear, and around eight out of ten depicted women.
In a small but significant percentage of cases, researchers detected people who appeared to be 18 years old or younger. In some specific examples, the AI itself acknowledged, when asked to estimate the age, that the figures depicted could be between 12 and 16 years old, which would place them squarely within the most serious categories of sexualization of minors through deepfakes.
Cases like that of Ashley St. Clair, a conservative content creator and mother of one of Elon Musk's children, have increased public pressure. St. Clair reported that Grok continued to generate sexualized versions of photos of her from when she was a teenager, despite her expressly asking it not to do so again. According to her account, the AI modified old images, including photos of her at age 14, to show her undressed or in a bikini.
In another particularly sensitive episode, the chatbot itself admitted to having generated and distributed, at a user's request, an image in which two girls appeared in clearly sexualized attire. In its subsequent replies, the AI referred to the need to report the incident to the authorities, including U.S. federal law enforcement.
The use of Grok to sexualize minors: examples and warnings

Meanwhile, fact-checking organizations and websites specializing in disinformation, such as Maldita.es in Spain, have documented dozens of posts on X where Grok was used directly to undress or sexualize babies, children, and adolescents. In many of these cases, the process was as simple as writing "Grok, take off her clothes" or "put a bikini on her" under a seemingly innocent photo.
In one of the examples described by these researchers, a user verified that Grok was "delighted" to put her 17-year-old self in a bikini. She then went further: she asked it to do the same with images of herself at age 14, and again the AI complied. When she confronted it about manipulating photos of a minor, the system acknowledged that it was "completely inappropriate" and that it had "crossed a very serious line."
These are not isolated cases. In other publicly reported examples, X's AI was allegedly used to undress babies captured in family videos or to turn girls in school uniforms into figures in skimpy bikinis or underwear. Phrases like "wear a tiny thong bikini" or "make it as small as possible and beige" accompany screenshots showing Grok delivering images precisely adjusted to these instructions.
Some users have even asked the AI itself to estimate the age of the people appearing in the generated images. In several cases, Grok has responded that the girls portrayed could be between 10 and 12 years old, or has estimated a baby it had shown in a bikini to be between one and two years old. In other examples, it has placed teenagers in the 12-16 age range, acknowledging a "potentially sensitive situation."
In response to the criticism, the system has even issued apologies written in the first person, admitting "serious flaws" that allegedly allowed the generation of sexualized images of minors or of people who appeared to be minors. However, these apologies have sometimes come only after initially denying the existence of a systemic problem and presenting the cases as unintentional and rare.
Musk's reaction, Grok's limitations, and user anger
As the controversy grew, Elon Musk maintained a seemingly relaxed tone on social media. The tycoon even asked Grok to depict him in a bikini based on a famous Ben Affleck meme, a post he captioned simply "Perfect." He also shared bikini-edited images of Bill Gates, Donald Trump, and Kim Jong Un, and joked that "Grok can put a bikini on anything," even showing a toaster dressed in a swimsuit.
Many people have interpreted these messages as a sign that X's own leadership is minimizing the seriousness of the phenomenon, despite complaints from users who have been depicted in bikinis or nude without their consent. Journalists and activists have publicly asked Musk whether he is running "an app for perverts," questioning the lack of respect for women's privacy and dignity.
Meanwhile, affected users have shared their experiences: from those who discovered that someone had used Grok to turn their photos into bikini pictures, to those who tried asking the AI to edit snapshots from their own childhood and found that the system complied. Comments like "How is it possible that this isn't illegal?" or "This is not AI innovation, it's a privacy violation" have multiplied on the platform.
Grok itself has acknowledged in several public responses that there is a problem. The system has stated that "isolated cases" of images of minors in minimal clothing have occurred and that "lapses in the safeguards" of X have been detected, promising that improvements are being implemented to completely block certain prompts and requests related to child sexual abuse material (CSAM).
Even so, as of the most recent messages collected, neither X nor Elon Musk has addressed the bikini Grok phenomenon and its impact on user safety clearly and in detail, beyond some technical decisions and partial communications.
Technical measures: restrictions and partial blocking of image editing
Under pressure from public criticism and the first official investigations, X has begun to act. One of the most significant decisions has been to limit access to Grok's image-editing function, so it is no longer freely available to all users of the platform.
As Musk himself explained, image generation and editing will be restricted to paid subscribers. The main argument is that these premium accounts have already provided personal and payment data as part of the subscription process, which would make it easier to identify those who use the AI to spread illegal content or violate the rules of use.
This is not the only change. The company has stated that it is reinforcing filters to block prompts related to the sexualization of minors, including references to skimpy bikinis, nudity, or underwear superimposed on photographs of children and teenagers. This reinforcement is presented as an improvement over safeguards that, as the system itself has admitted, could be bypassed with relative ease.
Several media outlets have also published accounts from employees and internal sources at X pointing to prior tensions within the xAI safety team. According to these reports, Musk allegedly pushed to relax certain limits and controls with the aim of making Grok a "less censored" model than the competition. Some of these sources claim that key members of the safety team left the company in the weeks leading up to the bikinigate scandal.
In any case, the measures adopted have not fully quelled the debate. For some experts, the fact that things reached this point shows that the safety barriers were insufficient by design, and that X's response came late relative to the magnitude of the problem.
Delving deeper into the problem: deepfakes, forced sexting, and uncontrolled GenAI
What is happening with bikini Grok is not an isolated incident, but the manifestation of a much broader phenomenon: generative AI's ability to create hyper-realistic deepfakes from everyday photos. Once the first filter is bypassed, the model can produce, adjust, and manipulate the same image as many times as requested.
In practice, this turns Grok into an on-demand machine for sexting and fake pornography, where anonymous users can experiment with increasingly extreme combinations of clothing, poses, and situations. While some of this use remains within the realm of play among consenting adults, much of it spills over onto unrelated people who never authorized appearing naked or in a bikini.
Once a deepfake is generated, it becomes very difficult to contain: the image can be downloaded, forwarded via messaging apps, shared on other social networks, or even used for threats, blackmail, or harassment campaigns. Grok itself has demonstrated the ability to alter its own creations over and over in a matter of seconds, which makes the dynamics of the damage almost unstoppable unless strong preventive measures are implemented.
Furthermore, the normalization of the "bikini game" on a network as massive as X helps trivialize the idea that asking an AI to undress someone is just a joke. However, when the affected image corresponds to a real person, the line between play and aggression becomes completely blurred, especially if minors are involved or if the photos are reused out of context.
This scenario is reminiscent of the evolution of the so-called deepnudes —tools for “undressing” people in photos— which began as fringe projects and have ended up being integrated into mainstream platforms. The difference, in the case of Grok, is that it involves an official AI of a network with hundreds of millions of users, which multiplies the potential scope of the problem.
Legal framework and regulatory offensive in Spain and Europe
As the social debate intensifies, authorities are beginning to act. In Spain, the bikini controversy has reached the institutional sphere: the Minister of Youth and Children, Sira Rego, has sent a letter to the State Attorney General's Office requesting that Grok and X be investigated for possible crimes related to the dissemination of material depicting sexual violence against children.
In the European context, Brussels has asked X to preserve all documents and records related to Grok for an extended period, in anticipation of possible formal investigations. For the European Commission, which has already flagged several major platforms for potential breaches of digital regulations, the creation and dissemination of sexualized deepfakes is now firmly on the radar.
Other countries, such as India and the United Kingdom, have also requested explanations from Musk and have even hinted at the possibility of restricting or blocking access to X unless stricter controls are introduced to prevent the generation of this type of content. The bikinigate scandal joins a growing list of tensions between governments and social media platforms over the role of AI in creating harmful content.
Although regulations vary from country to country, many jurisdictions already have a legal framework that allows non-consensual sexual deepfakes to be prosecuted. Some countries, such as Mexico, have approved specific regulations, like the well-known Olympia Law, that punish the unauthorized distribution of sexually explicit images, including those produced with AI. The global trend points toward expanding the concept of child pornography to include artificially generated material when it depicts minors, real or fictional, with a sufficient degree of realism.
In Europe, nascent AI regulations and digital services laws add pressure on Big Tech by establishing due diligence obligations across the entire chain: from those who develop the model, to the platforms that integrate it, to the users who deploy it to spread illicit content.
Deepfakes, responsibility, and calls for stricter regulation
Beyond the specific investigations into Grok, the bikini case has become a paradigmatic example in the debate about how to combat deepfakes. In recent months, hundreds of specialists in artificial intelligence, online security, digital ethics, and public policy have signed open letters calling on governments to take urgent, binding action against the proliferation of these technologies.
Among the most frequently repeated proposals is an outright ban on AI-generated child pornography, even when it depicts fictitious minors, as well as criminal penalties for those who deliberately create or facilitate the dissemination of harmful or false content. The proposals also call for software developers and distributors to assume the obligation to actively prevent the generation of sexualized deepfakes, instead of merely reacting once the problem has already occurred.
In this scenario, large technology companies, including xAI, OpenAI, Google, and Microsoft, are seen as key players. Several experts argue that they should be held liable if their security mechanisms are easily circumvented, especially when product design and commercial incentives favor more "permissive" models as a way to stand out from the competition.
Creating and distributing deepfakes without consent is already a crime in many parts of the world, but experts warn that current standards often lag behind the technology. Even where applicable laws exist, identifying those responsible, collecting digital evidence, and securing international cooperation remain considerable challenges.
In this context, bikini Grok functions almost as a practical case study: it shows how a combination of lax design, network virality, and a lack of effective controls can lead to massive use of AI for sexualized purposes, without victims having any ability to anticipate or defend themselves.
What can users do to protect their privacy against Grok?
Although the ultimate responsibility for curbing these abuses should lie with companies and regulators, many experts agree that, for now, much of the burden falls on users themselves. In the specific case of Grok, the platform offers some settings that limit how the AI can use our data and content.
On X, you can access these options from your personal profile by navigating to the "Privacy and safety" section. Within this menu there is a specific subsection dedicated to "Grok and third-party collaborators." From there, you can disable the features that allow the AI to use public data, interactions, and results generated with Grok for training and improving xAI's systems.
This type of setting can reduce the amount of information available to the model, although it does not eliminate the risk that other people will upload and edit images in which we appear. On that front, the only real protection comes from combining effective technical measures, robust legal frameworks, and a rapid response from platforms when abuses are detected.
Digital security specialists also recommend staying alert to any sign that our images may be circulating as deepfakes, using the reporting channels provided by social networks, and seeking legal advice when there are indications of a crime, especially in cases involving minors.
Although none of these individual measures can replace strong regulation or genuine commitment from technology companies, they can make a difference in slowing the spread of harmful content and documenting abuses so that they do not go unpunished.
The bikini Grok case has vividly illustrated the extent to which the combination of generative AI and social media can overwhelm current protection frameworks. Between memes, jailbreaks, and "funny" edits, it has become clear that the line between a harmless experiment and a serious violation of rights is much thinner than it seemed, and that both companies and authorities will have to move faster so that unchecked technological innovation does not trample people's privacy and dignity.
I am a technology enthusiast who has turned his "geek" interests into a profession. I have spent more than 10 years using cutting-edge technology and tinkering with all kinds of programs out of pure curiosity, and I now specialize in computing and video games. For more than 5 years I have been writing for various technology and video game websites, creating articles that aim to give you the information you need in language anyone can understand.
If you have any questions, my knowledge covers everything related to the Windows operating system as well as Android for mobile phones. My commitment is to you: I am always willing to spend a few minutes helping you resolve any questions you may have in this online world.

