TikTok layoffs: moderation becomes centralized and AI takes over

Last update: 26/08/2025

  • TikTok cuts hundreds of trust and safety positions in the UK and Asia, with a focus on moderation.
  • The company moves functions to Dublin and Lisbon and accelerates automation with artificial intelligence.
  • Britain's new Online Safety Act tightens controls and imposes fines of up to 10% of global turnover.
  • European revenues are up 38%, and the firm claims AI eliminates 85% of violations, without providing evidence.

The video platform has begun cutting hundreds of moderators from its trust and safety teams, especially in the United Kingdom and parts of Asia, while moving toward a model that relies more heavily on artificial intelligence to filter content. Criticism was quick to arrive from unions and online safety advocates, who warn of risks if human oversight is reduced.

The decision coincides with the entry into force of new British internet safety rules and with a reorganization to concentrate operations in fewer locations. According to internal communications cited by the British and American press, the company told London staff that moderation and quality control will no longer be carried out in the United Kingdom and will be transferred to other centers, in a process in which AI will gain prominence.

Restructuring of moderation and transfer of functions

TikTok layoffs and AI moderation

Sources cited by the Financial Times indicate that the London team received an internal notice: moderation and quality assurance work will no longer be carried out in the United Kingdom. The firm plans to centralize operational expertise in fewer hubs, with special emphasis on Dublin and Lisbon, and has already closed a similar team in Berlin as part of this European adjustment.


The scale is notable: several hundred jobs are reportedly affected in both the United Kingdom and South and Southeast Asia. The Communication Workers Union estimates that around 300 people work in trust and safety roles, most of whom are impacted. The company has indicated that it will give relocation priority to those who meet certain criteria, without specifying which ones, and has called staff meetings to explain the process.

The company insists that this is a reorganization initiated last year to strengthen its global Trust and Safety operating model, concentrating activities in fewer locations to gain consistency and speed of response.

The shift rests on the intensive adoption of artificial intelligence across the moderation chain. The company says it has been researching and deploying these tools across its operations for years and will use them to maximize the efficiency and speed with which rule-violating content is removed. It even claims that AI automatically removes around 85% of offending posts, although no evidence has been publicly provided to support this figure.


The trend is not unique to TikTok. Other platforms such as Meta and YouTube have long relied on machine learning systems for image recognition, violent-language detection, and age screening. However, unions and experts point out that replacing human moderators en masse risks overlooking the cultural and contextual nuances essential to protecting the most vulnerable users.

Regulation, security and business figures

TikTok Regulation and Financial Results

The move comes amid the United Kingdom's new Online Safety Act, which requires platforms to strengthen age verification and to quickly remove harmful or illegal content. Penalties for non-compliance are steep: up to £18 million or 10% of global turnover, whichever is greater. As part of this adaptation, the company has introduced AI-based age verification systems to infer users' ages.
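As a rough illustration (a sketch, not legal guidance), the penalty ceiling described above is simply the greater of two figures, so for large platforms the turnover-based cap dominates:

```python
# Illustrative sketch: the Online Safety Act's maximum penalty is the
# greater of a fixed £18 million or 10% of global annual turnover.
def max_penalty_gbp(global_turnover_gbp: float) -> float:
    FIXED_CAP = 18_000_000  # £18 million statutory figure
    return max(FIXED_CAP, 0.10 * global_turnover_gbp)

# For a firm with £6.3 billion turnover, 10% far exceeds the fixed figure:
print(max_penalty_gbp(6_300_000_000))  # 630000000.0
```

The turnover figure here is only an example; the actual cap in any enforcement case would depend on how the regulator measures qualifying worldwide revenue.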

In addition, the British data protection regulator has increased its scrutiny of how minors' data is handled, launching a review in March into the use of data from adolescents aged 13 to 17. This regulatory pressure adds to the broader climate of scrutiny around safety and privacy on social media.

On the economic front, the company reports a year-on-year revenue increase of 38% in Europe, to around 6.3 billion, while cutting its pre-tax losses from 1.4 billion to 485 million. Despite the improvement, it maintains a cost-optimization and internal reorganization plan, which partly explains the decision to trim headcount and accelerate automation.


Union criticism has been vocal. Workers' representatives have accused the company of putting corporate interests ahead of the safety of staff and the public, and warn that AI alternatives are still too immature to ensure safe moderation without human support. The concern is a potential increase in failures affecting vulnerable users.

The company, for its part, maintains that AI is already used comprehensively to improve safety for both the community and moderators, reducing staff exposure to harmful content and speeding up decisions. It also stresses that the goal of the reorganization is to strengthen Trust and Safety under a more efficient, globally coordinated operating model.

With the layoffs underway and functions being centralized in Europe, the platform faces a turning point: complying with stricter rules, sustaining growth, and demonstrating that AI can keep harmful content at bay without sacrificing moderation quality or the professionals who provide judgment and context.
