Anthropic plans to use your conversations to train Claude, but you can opt out.

The updates are meant to improve the model's safety and its capabilities in areas such as coding, data analysis, and reasoning.

Anthropic, the company behind the AI model Claude, has announced a significant change in its data usage policy. Starting from September 2025, regular users will be asked to allow their conversations and coding sessions to be used for training the models. This shift aims to improve Claude's safety and capabilities in areas like coding, analysis, and reasoning.

The rollout of this new policy has raised concerns due to the prominent "Accept" button for the new terms and a smaller, less noticeable toggle for the training permission. By default, the toggle is switched on, meaning many users might inadvertently agree to five years of data retention.

Until now, Anthropic has deleted user prompts and responses after 30 days unless they violated its policies, in which case they could be retained for up to two years. Under the new policy, chats from users who opt in will be stored for up to five years, far longer than either of those previous retention periods.

Enterprise customers are exempt from these changes, similar to OpenAI's treatment of corporate clients. All Claude Free, Pro, and Max users will be affected by this change and given the choice to opt out. If no action is taken by the deadline, user chats will automatically become part of Claude's training data.

The decision to train on regular users' conversations and coding sessions reflects the need for real human interactions to make AI more capable, accurate, and competitive with rivals like OpenAI and Google. The shift is not about model size or benchmark scores, but about whose data goes into training.

Anthropic previously stood apart from competitors by not using consumer chats for training. Building smarter, safer AI requires not just larger servers but also the human conversations generated when people actually use the product. In its blog post, the company frames the change as a way to improve Claude's safety and capabilities.

It's important to note that the use of user data for training is not unique to Anthropic. All companies building large language models are driven by a need for fresh, real-world data. The speed and subtlety of this shift highlight how rapidly user expectations around privacy are evolving.

The deadline to accept the new terms or opt out is September 28, 2025. Users are encouraged to review the new terms of service and make an informed decision about their data. This marks a significant shift in how Anthropic treats user data in the AI market.
