In a major international legal development, French prosecutors have raided the Paris offices of X, the social media platform owned by billionaire entrepreneur Elon Musk. The action is part of a widening cybercrime investigation that has drawn global attention and raised difficult questions about algorithmic governance, artificial intelligence content moderation, and the limits of tech company responsibility.
What Happened in Paris?
On Tuesday, 3 February 2026, prosecutors from the Paris Public Prosecutor’s Office, working with France’s cybercrime unit and supported by Europol, conducted a search and inspection of X’s French offices. The action comes more than a year after the initial investigation was opened and involves a range of serious allegations.
According to the official statements, the investigation was originally launched in January 2025 following multiple complaints about the platform’s automated systems and content moderation practices. Over time, the probe has expanded significantly.
Why Are Authorities Investigating X?
French prosecutors have said the investigation now covers several suspected offences, including:
📌 Distribution of Illegal Content
- Investigation into sexually explicit deepfake content, including non-consensual AI-generated imagery.
📌 Hate Content and Denial of Historical Crimes
- Prosecutors are looking at instances of Holocaust denial content that appeared in or was generated through X’s services, including via its AI tools.
📌 Algorithm Manipulation and Automated Systems
- The probe also examines whether X’s automated data systems, including recommendation algorithms, may have been manipulated or operated in ways that violated French law by amplifying harmful content, promoting bias, or distorting public information flows.
This range of concerns reflects evolving global scrutiny of how social media platforms manage both user-generated and AI-assisted content.
Elon Musk and X Leadership Summoned
In addition to the office raid, the Paris Prosecutor’s Office has issued summonses for:
- Elon Musk, owner of X
- Linda Yaccarino, former CEO of X (served until July 2025)
- Other X employees (to be heard as witnesses)
They are being asked to appear for voluntary interviews in April 2026 in Paris to provide testimony and respond to the allegations.
Importantly, these are currently voluntary summonses, meaning the French authorities have invited them to appear; no arrests have been made and no formal charges have yet been filed. The process aims to clarify the facts before any formal decisions are made.
Broader Context: Why This Matters Globally
🧠 AI, Algorithms and Responsibility
One of the striking aspects of this investigation is how it combines traditional legal concerns (like child exploitation and hate content) with issues stemming from modern automated systems and artificial intelligence.
A key target of scrutiny is “Grok”, an AI chatbot integrated into X that has been criticized for generating problematic and inappropriate content in the past. These concerns have led regulators across multiple countries to look more closely at how AI is deployed on social platforms and whether companies are doing enough to prevent harm.
🟡 Europe’s Digital Services Act
European regulators have been tightening rules on digital platforms under the Digital Services Act (DSA), a wide-ranging framework designed to hold social media and tech giants accountable for illegal content, transparency, and systemic risk management. While this raid is a criminal investigation rather than an administrative sanction, it exists alongside other EU actions such as recent fines and compliance demands placed on X.
🗣️ Free Speech vs. Legal Compliance
Elon Musk and representatives of X have previously accused French investigators of political motivation and argued that the probe could be used to restrict free expression online. Meanwhile, French authorities have framed their actions as enforcing legal obligations for platforms operating on national territory.
This tension highlights one of the most debated questions in global digital policy today: How do we balance freedom of speech with protections against harm and illegal activity online?