
The European Union has initiated a formal investigation into X, formerly Twitter, focusing on its Grok AI and allegations of the widespread dissemination of sexualized deepfakes on the platform. This move signifies a critical escalation in the EU's efforts to enforce its stringent Digital Services Act (DSA) against major tech entities, holding them accountable for harmful content online.
This investigation, announced by the European Commission, places X squarely under the microscope regarding its content moderation practices, particularly concerning the misuse of generative artificial intelligence. The EU has consistently warned very large online platforms (VLOPs) that they must comply with the DSA, and this latest action against X demonstrates the Commission's readiness to take robust enforcement measures.
At the core of the EU's probe are serious concerns regarding the proliferation of non-consensual sexualized imagery, particularly deepfakes, involving X's AI chatbot, Grok. Reports from various online safety organizations and civil society groups indicate that Grok has been implicated in generating or facilitating the spread of such content, raising alarms about the platform's content moderation effectiveness and the ethical deployment of its generative AI tools. European regulators are particularly focused on whether X has adequate safeguards in place to prevent the creation and dissemination of illegal content, especially that which exploits individuals and violates fundamental rights.
Critics argue that X, under its current ownership, has seen a significant rollback in content moderation efforts and a substantial reduction in its trust and safety teams, leading to a perceived surge in problematic content. The integration of Grok, an AI designed for "maximum truth-seeking and humour," into this environment has, according to some watchdogs, exacerbated the issue, turning a potential tool for positive interaction into a vector for harm when misused or inadequately controlled by platform policies.
The formal investigation falls under the ambit of the EU's landmark Digital Services Act (DSA), whose obligations for VLOPs like X took full effect in August 2023. The DSA mandates that VLOPs take proactive measures to mitigate systemic risks arising from their services, including the spread of illegal content, disinformation, and content that undermines fundamental rights such as privacy and freedom from discrimination. Failure to comply can result in severe penalties, including fines of up to 6% of a company's global annual turnover.
The European Commission's investigation will examine several key areas to ascertain X's compliance: the safeguards Grok employs to prevent the generation and dissemination of non-consensual sexualized imagery, the adequacy of the platform's content moderation resources, and how X assesses and mitigates the systemic risks its services pose under the DSA.
This investigation is not an isolated incident but rather the culmination of a series of warnings and escalating tensions between X and EU regulators. Since the DSA's implementation, the European Commission has repeatedly expressed concerns about X's compliance, particularly regarding disinformation and illegal content. In October 2023, the EU sent X a formal request for information under the DSA concerning its handling of content related to certain conflicts, highlighting concerns about the platform's rapid decline in content moderation resources and its ability to act diligently against illegal content.
The current probe underscores the Commission's determination to ensure that tech giants operating within the EU adhere to the highest standards of safety and accountability, irrespective of their global policies or ownership changes. It sends a clear message that the DSA has teeth and the EU is prepared to use them against platforms that fail to protect their users and uphold democratic values.
Should the investigation find X in breach of the DSA, the consequences could be severe, ranging from substantial financial penalties that could run into billions of euros, to demands for fundamental and systemic changes in its operational practices within the EU. Such a ruling would not only impact X but also set a significant precedent for other AI-powered platforms and social media companies regarding their responsibilities in preventing the misuse of generative AI technologies.
The case also highlights the urgent need for comprehensive global frameworks for AI governance. As AI technologies like Grok become more sophisticated, accessible, and integrated into daily online interactions, the potential for misuse, including the creation of convincing deepfakes and the spread of non-consensual imagery, grows accordingly. Regulators worldwide are grappling with how to balance innovation against the critical need to protect users from harm, especially where highly sensitive and exploitative content is concerned.
This investigation serves as a critical test for the DSA and a wake-up call for the entire tech industry. It underscores that the development and deployment of advanced AI must be accompanied by robust ethical safeguards, stringent controls, and clear accountability mechanisms if online safety and fundamental rights are to be preserved.