The European Union on Monday announced a formal investigation into X, Elon Musk’s social media platform, over its failure to combat illegal content and disinformation, a lack of transparency in advertising and “deceptive” design practices.
The investigation may be the largest regulatory action yet against X. According to researchers, the company’s new policies have led to a rise in inflammatory content on the platform, prompting brands to scale back their advertising.
In pursuing X, the European Union is for the first time using the authority it gained through the passage of the Digital Services Act last year. The law gives regulators sweeping new powers to force social media companies to monitor their platforms for hate speech, misinformation and other divisive content.
The European Commission, the executive arm of the 27-nation bloc, had signaled its intention to investigate X’s business practices more closely. In October, regulators opened a preliminary investigation into the spread of “terrorist and violent content and hate speech” on X after the start of the Israel-Gaza conflict.
The case highlights a major difference between the United States and Europe in the regulation of online speech. While online posts are largely unregulated in the United States because of protections for free expression, European governments have imposed stricter restrictions on hate speech, incitement to violence and other harmful material for historical and cultural reasons.
The Digital Services Act was an attempt by the EU to force companies to introduce procedures to more consistently comply with rules governing such online content.
Monday’s announcement marks the start of an investigation with no set deadline; it is expected to include interviews with outside groups and requests for further evidence.
EU officials said X may not be adhering to rules that require online platforms to respond quickly when alerted to illegal and hateful content such as anti-Semitism and incitement to terrorism. The law also requires companies to carry out risk assessments regarding the spread of harmful content on their platforms and to take countermeasures.
Officials also raised concerns about X’s content moderation policies in non-English languages, particularly with elections looming across the continent in 2024.
In addition, the investigation will examine X’s efforts to counter the spread of false information. The company is relying on a feature called “Community Notes” that allows users to add context to posts they believe are misleading — an approach that EU officials say may not be sufficient. Regulators will also examine how posts from X users who pay for authentication, symbolized by a blue checkmark, are given more visibility.