
Anthropic Updates Claude's Privacy Policy, Letting Users Opt In to Sharing Chat Data for AI Training

Claude users can now choose whether their chat data is used to improve Anthropic's AI models.


AI company Anthropic has announced a revised data collection policy for its popular chatbot, Claude. The changes, effective Sept. 28, apply to individual consumer accounts and give users a choice over whether their conversations are used to train future models.

Under the new policy, users will be presented with an on-screen toggle to decide whether to contribute to Claude's training data. Accepting allows the model to be trained on future chats and coding sessions. Those who decline keep the existing retention policy, under which data is held for only 30 days and is not used for training.

Anthropic is framing this as a voluntary contribution, unlike some industry peers that have historically collected user data by default. The company emphasizes that collected data will never be sold to third parties.

The move aims to make models more effective and secure, and it reflects an industry trend of companies seeking user data to fuel AI improvements. However, Claude for Work, Claude Gov, Claude for Education, and API usage through Amazon Bedrock and Google Cloud's Vertex AI will keep their existing privacy protections.

Enterprise and institutional customers remain insulated from the new data practices. For everyday users, the decision to contribute conversations will now become part of the sign-up or continued-use process, and users can change their data-sharing choice at any later time.

It's important to note that deleted conversations will not be used for training under any circumstance. The adjustment applies only to individual consumer plans, leaving commercial users untouched.

Recently, Anthropic's Threat Intelligence report revealed that Claude's AI-powered coding tool, Claude Code, had been exploited in cyber-extortion campaigns. Anthropic frames the expanded data collection as a step toward improving the security and effectiveness of its models in light of such misuse.

The deadline for setting data preferences is Sept. 28, and users must make a selection by then to continue using Claude.
