Elon Musk-owned social media platform X, formerly known as Twitter, agreed on Thursday to temporarily cease using personal data collected from European Union (EU) users for training its AI systems. This decision comes after the Irish Data Protection Commission (DPC) sought legal intervention to suspend or restrict X’s processing of user data for AI development purposes.
The DPC, which serves as the lead EU regulator for most major U.S. tech companies because their European headquarters are in Ireland, moved to prevent X from using data gathered before users were given the option to withdraw their consent. According to the regulator, X began processing data from EU users on May 7 to train its AI systems, but the opt-out feature was only introduced on July 16, and not all users had immediate access to it.
During the court proceedings, Judge Leonie Reynolds noted that the platform had indeed begun utilizing user data for AI training before offering an opt-out option. X's legal representatives assured the court that any data collected from EU users between May 7 and August 1 would remain unused until the court reaches a decision on the DPC's suspension order.
X has defended its actions, stating that users have always been able to decide whether their public posts could be used by Grok, the platform's AI chatbot, by adjusting their privacy settings. Nonetheless, the company is preparing to contest the DPC's suspension order, with opposition papers due to be filed by September 4.
In response to the legal action, X's Global Government Affairs account criticized the DPC's order on Wednesday, calling it "unwarranted, overbroad" and saying it "singles out X without any justification."
The situation with X follows similar regulatory challenges faced by other tech giants. Meta Platforms decided in June to postpone the launch of its Meta AI models in Europe after the Irish DPC advised delaying its plans. Likewise, Alphabet's Google agreed earlier this year to delay and modify its Gemini AI chatbot following discussions with the Irish regulator.
This case underscores the increasing scrutiny of how major tech companies handle user data in the development of AI systems, particularly within the stringent regulatory environment of the European Union.