Claude changes the rules: how to configure your account if you don't want your chats to train the AI

Last update: 02/09/2025

  • Anthropic introduces an explicit setting so users can choose whether their conversations with Claude are used for training.
  • The change affects Free, Pro, and Max plans; Work, Government, Education, and API usage (Bedrock, Vertex AI) are excluded.
  • Data retention is for five years if you participate and 30 days if you don't; deleted chats will not be used for training.
  • You must set your preference by September 28, 2025; you can change it at any time in the Privacy settings.

Privacy in Claude

Talking to an AI assistant has become routine, but we rarely stop to think about what happens to those conversations. Now Anthropic is introducing a significant change to Claude's privacy policy: after a deadline, each user will have to decide whether to allow their conversations to be used to train future models.

The company will require those who use Claude on the Free, Pro, and Max plans to choose their preference before September 28, 2025. Without making this choice, you won't be able to keep using the service normally; the decision will appear in an in-app notification and can also be set during new account registration.

What exactly changes


From now on, users can grant or withhold permission for their chats and coding sessions to be used to improve Claude's performance and security. The choice is voluntary and reversible at any time from the Privacy settings, with no complicated process involved.


The new policy applies only to activity after acceptance: old threads without new interactions won't be used for training. However, if you resume a chat or coding session after accepting, your contributions from that point forward can be included in the improvement dataset.
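To make that cutoff concrete, here is a minimal illustrative sketch in Python. Everything in it (the function name and its fields) is an assumption for illustration; it is not Anthropic's actual implementation, only a model of the rule described above.

```python
from datetime import datetime, timezone

def eligible_for_training(opted_in: bool,
                          accepted_at: datetime,
                          sent_at: datetime,
                          chat_deleted: bool) -> bool:
    """Hypothetical model of the stated rule: a message can enter the
    improvement dataset only if the user opted in, the message postdates
    acceptance of the terms, and the chat has not been deleted."""
    if not opted_in or chat_deleted:
        return False
    return sent_at >= accepted_at

# A resumed old thread: the pre-acceptance message stays excluded,
# while the reply written after accepting qualifies.
accepted = datetime(2025, 9, 1, tzinfo=timezone.utc)
print(eligible_for_training(True, accepted,
                            datetime(2025, 8, 15, tzinfo=timezone.utc), False))  # False
print(eligible_for_training(True, accepted,
                            datetime(2025, 9, 10, tzinfo=timezone.utc), False))  # True
```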

The change doesn't cover the entire Anthropic ecosystem: Claude for Work, Claude Gov, Claude for Education, and API access through providers like Amazon Bedrock or Google Cloud's Vertex AI are excluded. In other words, the focus is on consumer use of Claude.ai and Claude Code under those plans.

Those who accept now will see the effects applied immediately to their new conversations. In any case, from the deadline onward it will be mandatory to have indicated a preference in order to continue using the service without interruption.

Data processing and retention

If you give your permission, information provided for improvement purposes may be retained for five years. If you do not participate, a 30-day retention policy applies. Deleted chats will not be included in future training, and any feedback you submit may be retained under these same rules.
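As a rough illustration of the two windows, the sketch below (again hypothetical; the five-year window is approximated as 1,825 days) computes when collected data would age out under each choice:

```python
from datetime import datetime, timedelta, timezone

def retention_deadline(opted_in: bool, collected_at: datetime) -> datetime:
    """Hypothetical helper: data is kept roughly five years if the user
    participates in training, 30 days otherwise (per the stated policy)."""
    window = timedelta(days=5 * 365) if opted_in else timedelta(days=30)
    return collected_at + window

now = datetime(2025, 9, 28, tzinfo=timezone.utc)
print(retention_deadline(True, now).date())   # 2030-09-27 (approx. five years)
print(retention_deadline(False, now).date())  # 2025-10-28 (30 days)
```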


Anthropic says it combines automated tools and processes to filter or obfuscate sensitive data, and that it does not sell user information to third parties. In return, the use of real interactions is meant to strengthen safeguards against abuse and improve capabilities such as reasoning, analysis, and code correction.

Reasons and context for the change

Language models require large volumes of data and long iteration cycles. With the open web providing less and less fresh content, companies are prioritizing signals from real interactions to refine responses and better detect problematic behaviors.

How to set your preference


When logging in, many users will see the notice "Updates to consumer terms and policies." In that dialog there is a control to allow your conversations to help improve Claude. If you don't want to participate, disable the option and confirm by clicking "Accept."

If you already accepted and want to review it, open Claude and go to Settings > Privacy > Privacy Settings. There you can change the "Help improve Claude" option whenever you want. Keep in mind that disabling it doesn't delete anything that was previously used; it simply prevents new interactions from entering future training.


Limits and clarifications

The company emphasizes that collection for improvement purposes applies only to new content created after accepting the terms. Resuming an old chat adds recent material, but the older content remains excluded if there was no subsequent activity. Business and government accounts are governed by separate terms, so this change does not affect them.

For those who prioritize maximum privacy, the settings allow you to opt out and keep the 30-day policy. Those who contribute data, on the other hand, will see security mechanisms and the model's capabilities adjusted with signals from real-world usage.

With this move, Anthropic seeks to square the circle between data needs and user control: you choose whether your conversations help with training, you know how long they are stored, and you can change your mind whenever you want, with clearer rules about what is collected and when.

Related article:
Protect your privacy on Google Gemini: Complete guide
