European regulators are looking into X's most recent user data harvesting practices
Elon Musk's X has quietly updated its data collection policies to opt all users into sharing their data to train Grok, xAI's model, raising concerns about compliance with the EU and UK GDPR and exposing the company to regulatory scrutiny of the kind Meta recently faced.
Elon Musk's X is the latest tech company to quietly opt users into data collection for AI model training. Grok, developed by xAI, itself a Musk company, is one of a horde of LLMs demanding ever-larger amounts of data to refine their capabilities. Grok first made headlines after Elon Musk publicly declared that it would be a rebellious model infused with humor, less political correctness, and more willingness to answer questions that most other AI systems reject.
Since then, xAI and X have kept Grok in the news by outfitting the model with vision and real-time reasoning, openly releasing some components of the text-only Grok-1 model to a lukewarm reception, and offering a search assistant powered by the model to X Premium and Premium+ subscribers. Although the idea of giving Grok real-time access to X user data had already been floated in the abstract, X recently took it upon itself to quietly update its data collection policies, opting all X users into sharing interactions, public posts, and other public data with the model for training purposes.
As frequently happens, X acknowledged the move only after several users took notice of the change, with some reporting that the data-sharing setting could not even be disabled from a mobile device. The Irish Data Protection Commission (DPC), the lead European regulator for both Meta and X, says it had already been working with X on the subject, and DPC deputy commissioner Graham Doyle expressed the regulator's shock at X's decision to roll out the data collection policy.
The EU and UK GDPR discourage the unjustified use of pre-ticked boxes and other default opt-in mechanisms, favoring transparent communication that respects users' right to choose: companies should obtain informed consent up front rather than hide data sharing behind often complicated opt-out procedures. The legislation does not outlaw opt-in defaults outright, but it requires that they be adequately justified. Last month, Meta announced it would pause model training on European users' data after the Irish DPC and the UK's Information Commissioner's Office (ICO) raised concerns about the company's data collection policies.
Responding to the DPC and ICO's concerns, Meta maintained that its methods complied with European regulations, defending its opt-in defaults under a "legitimate interests" basis; namely, that the need to collect personal data arose from a legitimate interest in providing the best possible service to its European customers. Still, Meta paused training models on European users' data and, more recently, halted the rollout of its multimodal Llama model in the European Union, citing the "unpredictable nature of the European regulatory environment" as the culprit. X will likely defend its decision along similar lines. If investigated and found noncompliant with the GDPR, X could face fines of up to 4% of global annual turnover.