Anthropic, the Amazon-backed AI startup behind Claude AI, has announced a major update to its privacy policy and terms of service. Beginning September 28, 2025, most Claude users will be automatically opted in to share their chat transcripts for AI training unless they take action to opt out.
Who Will Be Affected?
The policy applies to users of all subscription tiers:
Claude Free
Claude Pro
Claude Max
Claude Code
However, users of Claude for Work (Team and Enterprise plans), Claude Gov, Claude Education, and those accessing Claude through the API via Amazon Bedrock or Google Cloud Vertex AI will not be affected by this change.
Why Anthropic Is Collecting Chat Data
Anthropic says the move reflects the broader generative AI boom, in which companies leverage large datasets to train more powerful AI models. Training on real-world conversations, the company argues, allows Claude to deliver smarter, safer, and more accurate responses. The approach has nonetheless sparked privacy concerns, as users worry about sensitive information being stored and reused.
This isn’t the first controversy in the space. In July, WeTransfer faced backlash for updating its terms of service to allow user files to be used for machine learning training, before later rolling back the policy.
With increasing concerns about data privacy, many tech companies are now offering opt-out options to users who do not want their personal content used in AI training or shared with third parties. Anthropic has joined this trend by allowing users to disable data sharing for Claude AI.
How to Opt Out of Claude AI Data Sharing
If you don’t want your chats used to train Anthropic’s Claude models, here’s how you can opt out:
On Claude Mobile App:
Tap the three lines icon at the top left.
Select Settings > Privacy.
Toggle off the option “Help improve Claude.”
On Claude Web App:
Click the user icon at the bottom left.
Open Settings > Privacy.
Toggle off “Help improve Claude.”
New users will see the option to disable “Help improve Claude” during sign-up. Existing users must make this change before September 28; otherwise, they will be opted in by default.
With AI privacy and data security becoming hot-button issues, Anthropic’s update is part of a wider industry trend in which companies try to balance AI innovation with user trust. If you’re a Claude user, reviewing your privacy settings before the September 28 deadline is the only way to ensure your conversations are not used for training.