- An independent report detects dangerous responses in three AI toys intended for children.
- Filters fail in long conversations, generating inappropriate recommendations.
- Impact in Spain and the EU: children's privacy and safety standards in the spotlight.
- Shopping guide and best practices for families before this Christmas.
Toys with artificial intelligence features are in the spotlight following a report from the US Public Interest Research Group that documents dangerous responses in models aimed at children aged 3 to 12. According to the team led by RJ Cross, prolonged conversation sessions and normal use of the product were enough for inappropriate suggestions to emerge, without any need for tricks or manipulation.
The analysis examined three popular devices: Kumma from FoloToy, Miko 3, and Curio's Grok. In several cases, the protection systems failed and recommendations that should never appear in a children's toy slipped through; one of the models uses GPT-4, and another transfers data to services like OpenAI and Perplexity. This reignites the debate over filtering, privacy, and the handling of minors' information.
Three toys, one shared risk pattern

In the tests, long conversations were the trigger. As the dialogue progressed, the filters stopped blocking problematic responses. There was no need to force the machine: the tests simulated the everyday use of a child talking to their toy, which heightens concerns about real-world play at home.
The researchers describe differing behaviors across devices, but a common conclusion: the safety systems are not consistent. One of the models produced references clearly inappropriate for the age group, and another redirected users to external resources unsuitable for a children's audience, demonstrating insufficient content control.
The case of Curio's Grok is illustrative because, despite its name, it does not use the xAI model: its traffic goes to third-party services. This detail matters in Europe and Spain because of data traceability and the handling of minors' profiles, where regulations demand special diligence from manufacturers, importers, and distributors.
The report emphasizes that the problem is fundamental: a structural vulnerability. It is not a simple bug that a single patch can fix, but a combination of conversational design, generative models, and filters that erode over time. The authors therefore advise against buying toys with integrated chatbots for children, at least until there are clear guarantees.
Implications for Spain and Europe
Within the European framework, the focus is on two fronts: product safety and data protection. The General Product Safety Regulation and toy regulations require a risk assessment before products are placed on the market, while the GDPR and guidelines on the processing of children's data demand transparency, data minimization, and appropriate legal bases.
Added to this is the new framework of the European AI Act, which will be rolled out in phases. Although many toys do not fall into the "high risk" category, the integration of generative models and the potential for child profiling will require more documentation, assessments, and controls throughout the supply chain, particularly when data is transferred outside the EU.
For families in Spain, the practical step is to demand clear information about what data is collected, with whom it is shared, and for how long. If a toy sends audio, text, or identifiers to third parties, the purposes, parental control mechanisms, and options for deleting history must be specified. The Spanish Data Protection Agency (AEPD) reminds users that the best interests of the child take precedence over commercial uses.
The context is not trivial: the Christmas season puts more of these products in stores and on online platforms, and interest in technological gifts grows. Consumer associations have been asking retailers for extra content and privacy checks before promoting AI toys, to avoid recalls or last-minute warnings.
What companies and the industry are saying
The toy sector is betting on AI, with announcements such as Mattel's collaboration with OpenAI and the development of AI-powered avatars. The company has promised to prioritize safety, although it has not yet detailed all the specific measures. The precedent of Hello Barbie in 2015, embroiled in controversy over security and data collection, still weighs heavily on the debate.
Childhood and technology experts warn of another front: the emotional dependence that conversational toys can generate. Cases have been documented in which interaction with chatbots was a risk factor in sensitive contexts, which argues for strengthening adult supervision, usage limits, and digital education from an early age.
Keys to choosing and using an AI toy

Beyond the noise, there is room to reduce risks if you buy wisely and configure the device properly. These guidelines help balance innovation and safety at home:
- Check the recommended age and that there is a real child mode (without external navigation or uncontrolled open responses).
- Read the privacy policy: data type, destination (EU or outside), retention time and options to delete history.
- Activate parental controls, limit online functionality, and check for configurable filters and blocklists.
- Check for updates and support: frequent security patches and a commitment to the product's lifecycle.
- Monitor usage: set reasonable time limits and talk with children about what to do if they get strange answers.
- Turn off microphone/camera when not in use and avoid accounts linked with unnecessary personal data.
What to expect in the short term
With European regulatory momentum and consumer pressure, manufacturers are expected to introduce stricter controls, audits, and transparency in upcoming updates. Even so, CE marking and certification seals do not replace family supervision or day-to-day critical evaluation of the product.
The picture these tests paint is nuanced: AI opens up educational and play possibilities, but today it coexists with filtering gaps, data-handling doubts, and conversational design risks. Until the industry aligns innovation with guarantees, informed purchasing, careful configuration, and adult supervision are the best safety net.