Meta, the parent company of Facebook and Instagram, is set to restart its efforts to use public posts from UK users to train its artificial intelligence (AI) models.
This move follows a temporary pause prompted by concerns raised by the UK’s Information Commissioner’s Office (ICO).
Meta has since made adjustments to address these concerns, making it easier for users to object to the use of their data for AI training.
Changes in Approach
Meta’s AI training program focuses on using public posts shared by adults on Facebook and Instagram in the UK. These posts will be used to help the AI models better understand British culture, idioms, and history, reflecting the diverse nature of UK society.
However, the company has been clear that it will not use private messages or any data from users under the age of 18.
In response to feedback from the ICO, Meta has improved the transparency of its process, offering users a clearer and more accessible way to opt out.
Beginning next week, UK users will start receiving notifications in their Facebook and Instagram apps, explaining how their public content could be used to train Meta’s AI.
Users who do not wish to have their data included will be able to access a simplified objection form. Meta has assured users that previous objections will continue to be honored and that new objections will be accepted.
Privacy Concerns and Regulatory Feedback
While the ICO has not granted explicit regulatory approval, it will monitor the situation as Meta resumes its AI training efforts in the UK.
Stephen Almond, the ICO’s executive director for regulatory risk, emphasized the need for transparency and the importance of providing a simple route for users to object to the use of their data.
Almond said:
“We have been clear that any organization using its users’ information to train generative AI models needs to be transparent about how people’s data is being used. Organizations should put effective safeguards in place before they start using personal data for model training, including providing a clear and simple route for users to object to the processing.”
“The ICO has not provided regulatory approval for the processing, and it is for Meta to ensure and demonstrate ongoing compliance.”
Meta’s plan has sparked concerns from privacy advocacy groups, including the Open Rights Group (ORG) and None of Your Business (NOYB). These groups argue that Meta’s actions could turn users into “involuntary test subjects” for its AI development.
They have urged the ICO and European regulators to block the use of personal data for AI training.
While the plan remains paused in the European Union due to stricter privacy laws under the General Data Protection Regulation (GDPR), Meta is moving forward in the UK, which is no longer bound by EU regulations.
Meta’s Position
Meta maintains that its approach to AI development, including the use of public posts for training, complies with UK data protection laws.
The company argues that its reliance on the legal basis of “legitimate interests” allows it to proceed without explicit user consent, as long as adequate safeguards are in place and users have the option to object.
However, privacy advocates remain skeptical, particularly in light of past legal challenges to Meta’s interpretation of legitimate interests.
By resuming its AI training in the UK, Meta aims to enhance its AI models while navigating the complex regulatory landscape surrounding data privacy. The outcome of this move will likely be closely watched by both regulators and privacy advocates, as Meta continues its AI initiatives.