A clear diagnosis, a moral urgency, and concrete demands to make online spaces safe for everyone.
I. Digital spaces as public squares
Social networks today function as public squares. They connect billions, influence public discourse and shape behaviour. They must uphold human dignity, not enable its destruction. Platforms that tolerate unchecked abuse turn public spaces into arenas of trauma.
II. Refuse the spectacle of violence
The dissemination of videos showing murders, suicides or torture is not content — it is harm. These images consume empathy, normalize violence, and can trigger vulnerable people. Platforms must categorically ban this material and remove it at scale.
III. X (formerly Twitter): an example of failure
Following its change of ownership, X has shown how platforms can regress: violent clips circulate, hateful communities find refuge, and moderation proves inadequate. This is not free speech in action — it is a failure of governance.
IV. Case study — the death of an influencer
An investigation has been launched into the death of a French streamer known for extreme challenges.
Raphaël Graven, also known as Jean Pormanove, was found dead at a residence in Contes, a village north of Nice, prosecutors said.
The 46-year-old had reportedly been subjected to bouts of violence and sleep deprivation during streams, and died in his sleep during a live broadcast, according to local media. This tragic case underscores the urgent need for proactive filtering and human escalation pathways to protect life.
V. The cycle: hate attracts hate
Hate fuels more hate. A single unmoderated attack multiplies into thousands; a violent clip becomes a magnet for further cruelty. We must break the cycle by removing the platforms' incentives to amplify harm.
VI. State actors and information warfare
Certain governments instrumentalize social networks to sow anti-Western sentiment, stoke division, or target minorities. These are not organic social movements — they are campaigns. Platforms must detect and neutralize such operations.
VII. AI — a necessary filter, not a luxury
Modern AI is capable of analyzing text, images and video at scale to detect hate, threats and graphic violence — in real time. Deploying AI with human oversight must be mandatory on all major platforms to prevent harm before it spreads.
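To make the idea of "AI with human oversight" concrete, here is a minimal sketch of a moderation triage step. It is purely illustrative: real platforms rely on trained multimodal classifiers, not keyword lists, and the term lists, thresholds, and `moderate` function below are hypothetical stand-ins for a model's output.

```python
# Toy sketch of an AI-assisted moderation triage (illustrative only;
# production systems use trained multimodal models, not keyword lists).
from dataclasses import dataclass

# Hypothetical term lists standing in for a classifier's high- and
# medium-confidence harm signals.
BLOCK_TERMS = {"graphic-violence", "torture-clip"}
REVIEW_TERMS = {"threat", "harass"}

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    reason: str

def moderate(text: str) -> Decision:
    """Triage a piece of text: auto-remove clear harm, escalate
    ambiguous cases to a human reviewer, allow the rest."""
    tokens = set(text.lower().split())
    if tokens & BLOCK_TERMS:
        return Decision("remove", "matched high-confidence harm signal")
    if tokens & REVIEW_TERMS:
        return Decision("human_review", "ambiguous signal; escalate")
    return Decision("allow", "no flags raised")
```

The design point is the middle tier: automation handles unambiguous cases at scale, while anything uncertain is routed to a human rather than silently allowed or removed, which is what "human oversight" means in practice.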
VIII. Education — the long-term remedy
Technology alone cannot heal social wounds. Schools, universities and institutions must teach digital responsibility, empathy and respect. Early education about the consequences of sharing harmful content reduces risk and builds resilient communities.