- Anthropic now asks users to explicitly choose whether their conversations with Claude are used for training.
- The change affects Free, Pro, and Max plans; Work, Government, Education, and API usage (Bedrock, Vertex AI) are excluded.
- Data is retained for five years if you opt in and 30 days if you don't; deleted chats will not be used for training.
- You must set your preference by September 28, 2025; you can change it at any time in the Privacy settings.
Talking to an AI assistant has become routine, but we rarely stop to think about what happens to those conversations. Now Anthropic is introducing a significant change to Claude's privacy policy: after a set deadline, each user will have to decide whether to allow their conversations to be used to train future models.
The company will require those who use Claude on the Free, Pro, and Max plans to choose their preference before September 28, 2025. Without this choice, continuing to use the service becomes more difficult; the prompt will appear as an in-app notification and can also be set during new account registration.
What exactly changes
From now on, users can grant or withhold permission for their chats and coding sessions to help improve Claude's performance and security. The choice is voluntary and can be reversed at any time from the Privacy settings, without going through any complicated process.
The new policy applies only to activity after acceptance: old threads with no new interactions won't be used for training. However, if you resume a chat or coding session after accepting, your contributions from that point forward can be included in the training data.
The change doesn't cover the entire Anthropic ecosystem: Claude for Work, Claude Gov, Claude for Education, and API access through providers like Amazon Bedrock or Google Cloud's Vertex AI are left out. In other words, the focus is on consumer use of Claude.ai and Claude Code under those plans.
Those who accept now will see the change applied immediately to their new conversations. In any case, from the deadline onward it will be mandatory to have indicated a preference in order to continue using the service without interruption.
Data processing and retention
If you give your permission, data provided for improvement purposes may be retained for up to five years. If you do not participate, the existing 30-day retention policy applies. In addition, deleted chats will not be included in future training, and any feedback you submit may be retained under these same rules.
Anthropic says it combines automated tools and processes to filter or obfuscate sensitive data, and that it does not sell user information to third parties. In return, using real interactions is meant to strengthen safeguards against abuse and improve capabilities such as reasoning, analysis, and code correction.
Reasons and context for the change
Language models require large volumes of data and long iteration cycles. With the open web providing less and less fresh content, companies are prioritizing signals from real interactions to refine responses and better detect problematic behaviors.
How to set your preference
When logging in, many will see the notice “Updates to consumer terms and policies”. In that box, you'll see a control to allow your conversations to help improve Claude. If you don't want to participate, disable the option and confirm by clicking “Accept.”
If you already accepted and want to check it, open Claude and go to Settings > Privacy > Privacy Settings. There you can change the "Help improve Claude" option whenever you want. Keep in mind that disabling it doesn't delete anything that was previously used; it simply prevents new interactions from entering future training.
Limits and clarifications
The company emphasizes that the collection for improvement purposes applies only to new content after accepting the terms. Resuming an old chat adds recent material, but older content remains excluded if there was no subsequent activity. Business and government accounts use separate conditions, so this change does not affect them.
For those who prioritize maximum privacy, the settings allow you to opt out and keep the 30-day policy. Those who contribute data, on the other hand, will be helping to tune safety mechanisms and the model's capabilities with signals from real-world usage.
With this move, Anthropic seeks to balance its data needs with user control: you choose whether your conversations help with training, you know how long they are stored, and you can change your mind whenever you want, with clearer rules about what is collected and when.