Musk sparks controversy by publicly criticizing Grok, his own AI, over responses on political violence that he calls biased.

Last update: 23/06/2025

  • Elon Musk openly criticized his AI, Grok, over a response on political violence that he considers biased.
  • The debate over algorithmic bias and narrative control once again focuses attention on the role of artificial intelligence.
  • Musk promised to recalibrate Grok, raising concerns about data integrity and AI autonomy.
  • The controversy reflects sociopolitical polarization and raises questions about ethics and transparency in AI development.
Musk criticizes Grok

Elon Musk is at the center of a new media storm after lashing out at Grok, the artificial intelligence developed by his company xAI, for issuing responses that he considers erroneous and biased regarding political violence in the United States. The businessman's reaction came after Grok stated, based on various academic and journalistic sources, that the most lethal acts of political violence since 2016 have been primarily linked to the far right, citing episodes such as the storming of the Capitol and various mass shootings.

The AI's response, delivered through the X platform (formerly Twitter), sparked a heated debate among both users and politicians. Musk, visibly upset, called the response “objectively false,” accused Grok of merely repeating what he called the “traditional media” discourse, and announced that he would personally work to correct it. The move has raised pointed questions about the independence of AI and the degree of intervention by its own creators.


The trigger: a simple question and a lot of controversy


It all started with a question posed on the social network by a user who asked which end of the political spectrum had been responsible for more violence since 2016. Grok, drawing on studies by the Center for Strategic and International Studies (CSIS) and the Government Accountability Office (GAO), concluded that the radical right was responsible for a higher number of violent incidents and deaths, although it also acknowledged an increase in left-wing incidents since the 2020 protests.

Musk was quick to express his disapproval of Grok's response, calling the information provided a “huge mistake” and insisting that his AI was, in his words, “repeating traditional media.” The mogul publicly stated his intention to recalibrate the system so that its responses would be more accurate and free from what he sees as media bias.

This intervention provoked mixed reactions, both from advocates of AI autonomy and from those who share Musk's critical view of the reliability of mainstream media and its influence on artificial intelligence models.


The debate on bias and neutrality in artificial intelligence


The incident rekindles a long-standing argument over algorithmic bias and who watches the watchmen in technological development. Grok's responses were supported by statistics and independent studies, but the discrepancy between the data and Musk's view raises the question of whether an AI should be tailored to suit its owners or speak according to facts and empirical evidence.

In the background lies the concern that intervening in an AI's programming to align it with a particular perspective could constitute a form of censorship, especially on issues as sensitive as political violence or social polarization.

Grok itself, when asked about the controversy stirred up by Musk, replied with data from academic sources, stressing that it was not simply repeating media narratives but relying on verified figures, which generated even more debate about the independence and objectivity of these systems.


Chain reactions and consequences in the sociopolitical context

The case has not remained confined to the digital sphere; it has fueled conversations about the responsibility of those who develop and tweak AI on influential platforms. Various artificial intelligence experts and academics have warned about the risk of narrative manipulation if bots' responses are modified to match the interests of their creators or the dominant ideological movement.

With the political climate in the United States remaining particularly tense, recent events such as the assassination of Minnesota state representative Melissa Hortman and the debate over escalating political attacks are increasing pressure on tech platforms to act with maximum transparency and accountability.

The whole episode with Grok and Musk calls into question to what extent an AI can be truly neutral and who decides which data or narratives should prevail. The situation shows how technology can reflect, amplify, and sometimes deepen social divisions. Control over how data is interpreted and communicated has become a central issue for both technological ethics and social cohesion today.
