Twitter, recently rebranded as “X,” is under increased scrutiny after nine additional complaints were filed across Europe, alleging the company unlawfully used the personal data of over 60 million EU/EEA users to train its AI technologies without their consent. This comes shortly after the Irish Data Protection Commission (DPC) initiated legal proceedings to halt the allegedly unlawful data processing, though the regulator has been criticized for not fully enforcing the GDPR.
The complaints, filed by the non-profit privacy advocacy group noyb, span Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Spain, and Poland. The group argues that Twitter’s actions mirror Meta’s recent failed attempt to use personal data for AI projects in the EU. Twitter began using European users’ data to train its “Grok” AI technology in May 2024 without informing or seeking consent from users.
Irish DPC’s Partial Action
Twitter’s blatant disregard for the GDPR has led to a surprising reaction from the Irish DPC, known for its corporate-friendly stance. The DPC took Twitter to court to enforce compliance, but the hearing focused on mitigation measures rather than the core violations. According to Max Schrems, Chairman of noyb, the DPC appears to be “taking action around the edges, but shying away from the core problem.”
During the initial court hearing, the DPC reached an agreement with Twitter under which the company will pause further AI training on EU data until September. However, the court made no definitive ruling on the legality of Twitter’s actions, leaving many questions unanswered. In response, noyb filed additional GDPR complaints in nine countries, urging a comprehensive investigation into the legality of Twitter’s AI data practices.
Lack of User Consent
The core issue highlighted by noyb is Twitter’s failure to obtain user consent before using their data for AI training. The GDPR provides a straightforward solution: companies should ask users for clear consent to use their personal data for AI development. Schrems emphasized that asking for user permission is not only feasible but necessary, given that GDPR strictly regulates the processing of personal data.
Twitter’s current approach relies on “legitimate interest” as a legal basis to bypass the need for user consent, a stance the Court of Justice of the European Union has already rejected in a similar case involving Meta. Despite this, the Irish DPC has reportedly been negotiating with Twitter over the “legitimate interest” approach rather than challenging the fundamental issue of user consent.
Viral Disclosure and Broader GDPR Violations
Adding to the controversy, Twitter has not proactively informed users that their data is being used for AI training. Instead, most users learned about it from a viral post by a user named ‘@EasyBakedOven’ on 26 July 2024, more than two months after the data processing began.
This lack of transparency, combined with Twitter’s failure to comply with other GDPR rights, such as the right to be forgotten and the right of access to personal data, has led noyb to request an “urgency procedure” under Article 66 GDPR. This procedure allows data protection authorities to issue preliminary halts and seek an EU-wide decision in cases of large-scale GDPR breaches.
The complaints against Twitter also raise concerns about the company’s ability to distinguish between data from EU/EEA users and those from non-GDPR regions. This issue, along with the processing of sensitive data, suggests that Twitter may have violated multiple GDPR provisions, including principles of transparency, data protection by design, and data minimization.
Dangerous and Illicit Practices
Dr Ilia Kolochenko, CEO at ImmuniWeb and Adjunct Professor of Cybersecurity at Capital Technology University, said tech giants are trying to leverage the full power of modern GenAI amid fierce competition in the global market. “Sadly, many of them believe that user data is their property and they are entitled to use it without asking or sometimes even adequately informing the users.”
He adds that data on social networks – including public posts and comments – frequently contain sensitive personal data like religious beliefs, political opinions or health conditions. “When exploited for LLM training without due precautions and, most importantly, without following the appropriate procedure to obtain user consent, such practices are both technically dangerous and illicit in many countries. They are at odds not only with GDPR but may also infringe national laws on unfair competition, consumer protection or antitrust.”
Noyb is undertaking crucial and socially important work to curb the uncontrolled misuse of user data for AI training. Kolochenko believes that similar complaints will soon be filed in other jurisdictions with strong data protection laws, including countries in Latin America and the APAC region.
“Reportedly, in response to complaints lodged by noyb in June, Facebook has recently paused its ambitious plans to train its proprietary LLMs on data of European users. Moreover, Facebook will likely have to make similar pull-back decisions in other jurisdictions,” he adds. “X will probably follow the same avenue to avoid hefty monetary fines or even suspension of its service in Europe. Moreover, new European legislation, including the EU AI Act and the Digital Services Act (DSA), has raised the compliance bar even higher for AI systems, interrelated technologies, and platforms, forcing tech giants to continually improve their transparency, enhance the security of their AI ecosystems, and ensure the legality of data processing,” Kolochenko concludes.
Setting a Precedent
As the legal pressure mounts, the outcome of these proceedings could set a significant precedent for how AI technologies are regulated under the GDPR. The involvement of multiple EU data protection authorities increases the likelihood of a comprehensive investigation and could compel Twitter to revise its data practices to comply with European law.
Noyb continues to push for full enforcement of GDPR, stressing that companies like X must respect user rights and obtain proper consent before using personal data for AI development. The coming months will reveal whether the Irish DPC and other European authorities are willing to take stronger action to protect user privacy in the age of AI.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.