- Intimate videos recorded with Ray-Ban Meta smart glasses end up on the screens of human annotators in Nairobi, Kenya, who use them to train Meta's AI.
- Sama, Meta's subcontractor in Kenya, reviews both everyday scenes and extremely sensitive ones, despite the anonymization systems meant to filter them out.
- European regulators are investigating whether recordings of EU users can legally be transferred to third countries without an adequacy decision.
- Meta's policies and fine print place the responsibility for not sharing sensitive data on the user, despite the extensive use of that data to train AI.
The Ray-Ban Meta smart glasses have become one of the most striking devices in the Meta ecosystem: they let you record in first person, talk to an artificial intelligence assistant, and share content on social media almost without taking your phone out of your pocket. But behind that sleek, aspirational image an uncomfortable truth has emerged: part of what is recorded with these glasses, including very intimate moments and sensitive data, ends up being manually reviewed by workers in Kenya.
A series of journalistic investigations, especially by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, have focused on how the system that powers the AI in the Ray-Ban Meta actually works. Based on testimony from employees of subcontracted companies in Nairobi, they revealed that private scenes captured in European homes reach the screens of Kenyan data annotators, who analyze them to train Meta's machine vision models.
A camera always on and an invisible job in Kenya

The Ray-Ban Meta combines the aesthetics of classic sunglasses with integrated cameras and microphones. The devices let users record video, take photos, and activate Meta AI with a voice command or a physical button. In everyday life, this translates into walks, meetings, shopping trips, or commutes recorded in real time without needing to take out a phone: convenient, but it also increases the privacy risks for both the wearer and those around them.
What many users don't know is that some of that material, especially the clips used to interact with the assistant, does not stay on the device. When the AI is asked a question about what it is seeing, for example what brand a car is or what appears in a scene, the videos are sent to Meta's servers for processing. And, according to the Swedish reports, some of those clips end up in front of human teams in Kenya, who review them frame by frame.
In Nairobi operates Sama, a technology company subcontracted by Meta that specializes in visual data annotation. Its employees describe a repetitive workday: marking objects on screen, drawing outlines around people, classifying scenes, validating descriptions, reviewing transcripts, and checking whether the AI has given correct answers. In return, they receive modest salaries and are bound by strict confidentiality agreements and internal rules designed to prevent leaks.
The problem is the type of material they receive. Several annotators have explained that they don't just see living rooms, streets, or landscapes, but also situations that clearly cross into intimacy: people in the bathroom, users changing clothes or having sex, pornography playing on screens, and bank cards or account numbers accidentally exposed while someone uses a computer or phone.
One of these workers summed it up with a phrase repeated across several reports: "We see everything, from living rooms to naked bodies." Many of the faces, they say, are perfectly recognizable because the algorithms that are supposed to blur them don't always work well, especially in poor lighting conditions or unusual camera angles.
How AI and vision systems are trained using video from glasses

The work being done in Nairobi is a clear example of something that rarely appears in product advertising: without human annotators, the AI does not learn. For Meta AI to recognize a traffic light, a STOP sign, a dog, a computer, or a person sitting on a sofa, someone has to have previously tagged thousands of images containing those elements.
Data annotation basically consists of translating what is seen in a video into information a machine can understand. An annotator can spend hours outlining a person's silhouette, marking a lamp, identifying a car, or classifying the type of room. All of that work is then poured into large datasets used to train and refine the computer vision models that Meta integrates into its products.
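To make the annotation step concrete, here is a minimal sketch of what a single bounding-box label might look like. Meta's internal format is not public; the field names below are assumptions, loosely modeled on the widely used COCO convention.

```python
from dataclasses import dataclass, asdict

@dataclass
class BoxAnnotation:
    """One labeled object in one video frame (illustrative schema only)."""
    image_id: int
    category: str        # e.g. "person", "traffic light", "lamp"
    bbox: tuple          # (x, y, width, height) in pixels
    annotator_id: str    # who drew the box, useful for quality review

def to_record(ann: BoxAnnotation) -> dict:
    """Flatten an annotation into a plain dict for a training dataset."""
    return asdict(ann)

# An annotator outlines a traffic light in frame 42:
ann = BoxAnnotation(image_id=42, category="traffic light",
                    bbox=(120, 30, 40, 90), annotator_id="nbo-107")
record = to_record(ann)
```

Thousands of records like this, aggregated across many frames and many annotators, are what a vision model is actually trained on.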
Meta acknowledges in its terms of use that some interactions with its AI may be reviewed, both automatically and manually, by in-house staff or external collaborators. The documents explain that voice recordings and images may be retained to improve services, provided the user consents. However, they also clarify that for the assistant to function it needs to process voice, text, images and, in some cases, video, which in practice limits the user's real decision-making power.
Former employees of annotation centers have told international media that, in theory, particularly sensitive content should never reach human reviewers, because it first passes through algorithmic filters that try to anonymize faces and hide private data. But they acknowledge that the system fails fairly often and that, in many fragments, people, their bodies, and clearly identifiable objects remain visible.
In practice, Sama's workers find themselves reviewing scenes ranging from protests and descriptions of crimes, including conversations bordering on sexual violence, to intimate material captured in homes and bedrooms. Several of them indicate that a large portion of those recordings come from Western homes, which suggests a significant presence of European and North American users among the people appearing in the videos.
What does the user know and what do the terms of use say?

Much of the controversy revolves around what users really understand when they buy Ray-Ban Meta glasses at an optician or electronics store. Swedish reporters visited several shops in Stockholm and Gothenburg to ask salespeople what happens to the data from the glasses and where it is processed. The answers varied: some claimed that "everything stays on the phone", while others admitted outright that they did not know where the recordings end up.
The journalists were given a pair of glasses and, during testing, explicitly declined the option to share "extra data" to improve Meta's products. Even so, analysis of the network traffic revealed continuous connections to Meta servers in Sweden and Denmark whenever they used the AI function, which would confirm that processing is not limited to the device and does not run only locally.
The fine print of Meta AI's policies explains that users can choose not to contribute certain data to improving services, but at the same time states that for the assistant to work it is essential to process the information sent, and that this information "can be shared" with internal systems and trusted partners. It also notes that interactions may be manually reviewed, and it places on the user the responsibility for avoiding sharing "sensitive" content if they do not want it used or stored.
From the perspective of privacy advocacy organizations such as NOYB, this wording raises a transparency problem: the user has the feeling of controlling their data, but in practice it is difficult to understand how far their recordings can circulate through Meta's infrastructure and reach third countries. Data protection lawyers point out that if the material is used to train models, the GDPR would require clear and specific consent, something that is not always reflected comprehensibly in the setup and configuration flows of the glasses.
Experts consulted by various outlets insist that, beyond the legal text, many people are simply not aware that video captured while interacting with the AI may end up on external servers and even be seen by human reviewers. And they stress that once that material enters Meta AI's training pipeline, it is very difficult for the user to keep effective control over its subsequent use.
Meta's reaction and concerns about the everyday use of glasses

When asked about the investigations, Meta has repeatedly referred to its terms of service and privacy policies. In public statements, company spokespeople have emphasized that images remain on the device unless the user decides to share them with Meta AI and that, when that happens, external collaborators may review some of that data in order to improve the experience. The company insists that it uses filters to hide identifiable information and to minimize access to sensitive material.
However, that general response feels vague against the testimony of Sama employees describing highly sensitive scenes passing across their screens. A Meta executive, quoted anonymously, argued that as long as GDPR requirements are met it does not matter where the servers are located or which companies take part in the processing, a stance that does not entirely dispel doubts about effective control of the data once it leaves Europe.
The device also worries privacy experts because of the possibility of recording third parties without them noticing. The glasses include a small LED that should indicate when the camera is active, but security organizations have warned that the indicator can easily go unnoticed, be covered up, or even be disabled through third-party services. That opens the door to opaque uses in public spaces, from university campuses to bars and public transport.
At the same time, researchers and developers have begun to propose "digital self-defense" tools. One example is Nearby Glasses, an application created by Yves Jeanrenaud that alerts you on your phone when it detects active Ray-Ban Meta glasses nearby via their Bluetooth signal. Although the system can be confused by other Meta devices, such as Quest headsets, it works as a kind of radar for anyone who wants to know whether smart glasses are recording in their immediate surroundings.
All of this feeds into a context of growing distrust of large platforms, fueled by earlier incidents such as the scanning of messages on Messenger and Instagram, or the working conditions of content moderators in European centers, including one in Barcelona, where serious psychological consequences have been reported after years of exposure to extremely violent and sexual material.
The case of the Ray-Ban Meta glasses and the annotators in Kenya is prompting users, regulators, and companies to ask how much they are willing to give up in exchange for "smart" features in everyday devices. The glasses promise convenience, translation, real-time assistance, and a new way of capturing the world, but the cost in privacy and the opacity of part of their data chain are generating a profound debate, especially in Europe, where the GDPR sets clear expectations that Meta will have to demonstrate it is meeting.
I am a technology enthusiast who has turned his "geek" interests into a profession. I have spent more than 10 years using cutting-edge technology and tinkering with all kinds of programs out of pure curiosity, and I now specialize in computing and video games: for more than 5 years I have written for various technology and gaming websites, creating articles that aim to give you the information you need in language anyone can understand.
If you have any questions, my knowledge covers everything related to the Windows operating system as well as Android for mobile phones. My commitment is to you, and I am always willing to spend a few minutes helping you resolve any doubts you may have in this internet world.