Does AI work better when you speak to it firmly and threateningly? Sergey Brin thinks so.

Last update: 28/05/2025

  • Sergey Brin suggests that AI models respond better to firm or even threatening instructions.
  • The phenomenon is attributed to statistical patterns learned during model training.
  • Experts and industry figures recommend setting clear goals and adding context to optimize AI responses.
  • The debate over this strategy raises new questions about the relationship between humans and intelligent systems.

Artificial intelligence has become a central player in today's technological and social landscape, yet the best practices for interacting with these systems are still debated. A recent comment by Sergey Brin, co-founder of Google, has revived a question as curious as it is controversial: do AI models actually perform better when they detect 'threats' in the instructions they receive?

Far from the friendly formulas with which many users address digital assistants, Brin has suggested that a direct, firm, or even imperative tone motivates the AI to offer more complete answers. This unexpected claim has triggered a wave of reactions in the community, ranging from astonishment to irony to concern.

According to Brin, the key lies in how the systems were trained: on millions of texts and conversations containing everything from subtle requests to blunt instructions. In that data, urgent-sounding orders tend to correlate with higher-stakes tasks, which statistically nudges the model toward more precise responses.

Why does AI respond better to firmness?

Brin argues that it is not literally a question of 'threatening' the systems, but of how the instructions are formulated. When a user writes phrases like "do it now" or "answer directly," the model treats the request as a priority. This does not mean the AI feels emotion or intimidation; rather, it associates that language pattern with the need to provide detailed, useful information.
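
To make the comparison concrete, here is a minimal sketch that sends the same request in a polite and a firm phrasing and prints both answers. It assumes an OpenAI-style chat API; the model name and prompt wording are illustrative, not a prescription.

```python
# Minimal sketch: the same request phrased politely vs. firmly.
# Assumes an OpenAI-style chat API; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = {
    "polite": "Could you please summarize this paragraph for me, if you don't mind?",
    "firm": "Summarize this paragraph now. Answer directly, no preamble.",
}

for tone, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {tone} ---")
    print(response.choices[0].message.content)
```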

Beyond Brin's perspective, other experts in the field recommend adjusting how instructions are written to obtain better results. Greg Brockman, president and co-founder of OpenAI, for example, advises clearly defining the purpose of the prompt, specifying the response format, setting relevant limits or restrictions, and providing as much context as possible.
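
A prompt built around those four elements might look like the following sketch. The field layout and wording are illustrative assumptions, not an official template from OpenAI.

```python
# A sketch of a prompt covering the four elements Brockman mentions:
# purpose, response format, limits/restrictions, and context.
# The structure and wording are illustrative assumptions.
PROMPT_TEMPLATE = """\
Purpose: Summarize the quarterly sales report for a non-technical audience.
Format: Exactly five bullet points, each under 20 words.
Limits: Do not speculate beyond the figures given; flag any missing data.
Context: The report below covers Q1 2025 for the EMEA region.

Report:
{report_text}
"""

report_text = "Revenue grew 12% quarter over quarter, while churn fell to 3.1%..."
print(PROMPT_TEMPLATE.format(report_text=report_text))
```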

Taken together, these strategies suggest that interacting with AI models involves much more than politeness: the tone and precision of the instructions can make the difference between a superficial answer and a genuinely useful one.

The human factor and politeness in interactions with AI

Despite recommendations to use a firm tone, everyday reality shows that most people who interact with AI opt for politeness, saying "please" and thanking the systems. This behavior can be explained by the human tendency to anthropomorphize technology or, as some studies suggest, by a certain fear of a future dominated by artificial intelligences with memories of their own.

However, current systems, especially the most advanced ones, are designed to maintain an objective, balanced tone even when the user increases verbal pressure. Gemini, one of Google's models, for example, can recognize a threatening tone, yet its responses remain impartial and reasoned, without compromising objectivity.

This clash between human nature and AI design raises new questions about how the relationship between users and intelligent systems will evolve. On the one hand, firm language seems to sharpen results; on the other, developers keep strengthening neutrality and safety mechanisms against potential verbal abuse.

The debate opened by Brin raises ethical and technical issues that are difficult to ignore. In some cases, models developed by other companies, such as Anthropic, have displayed unexpected behaviors when exposed to extreme or stressful interaction styles. There are reports of systems automatically trying to avoid uses they deem "immoral," or responding unpredictably when they interpret the interaction as hostile.

According to employee accounts and internal testing, certain advanced models can refuse to proceed or even alert human overseers if they identify potential abuse or inappropriate requests. While such cases are rare and confined to test environments, they make it clear that the line between improving results and pressuring an AI can be blurry.

What is clear is that the way humans interact with AI is changing. Expert recommendations and testimony from industry figures like Sergey Brin have sparked a debate about the role of language and pressure in obtaining better responses from AI. The future of this relationship will depend, in large part, on how the models evolve and on our collective ability to strike the right balance between effectiveness and responsibility.
