Hate-Free Networks

Manifesto & Petition — For responsible, safe, and humane social networks


Manifesto — A Breath of Light for Human Networks

A clear diagnosis, a moral urgency, and concrete demands to make online spaces safe for everyone.


The Issue

23 August 2025.

“Social” networks have become an integral part of our daily lives, letting us communicate, connect, and share experiences across the globe. Yet these platforms often become breeding grounds for hate speech, violence, and harmful content, posing severe risks to individuals and communities. It is therefore essential that these networks implement intelligent content filters that effectively identify and remove such damaging material.

Responsible moderation by these platforms is not an option; it is a necessity. Current filtering mechanisms are often inadequate, allowing toxic content to slip through the cracks and inflict harm. Advanced artificial intelligence and machine learning technologies offer new opportunities for accurate real-time detection and swift removal of inappropriate content. By employing these tools, social networks can elevate their content moderation and ensure a safer online environment.

Moreover, education plays a crucial role in digital responsibility. Users, especially younger audiences, must be educated on the impact and consequences of sharing hateful or violent material. Platforms should implement educational initiatives that foster empathy, respect, and responsible online behavior.

Taking concrete steps towards responsible content moderation and user education is not only a moral duty but also a step towards sustainable growth for these platforms. Effective measures will not only protect users but also create a more inclusive and supportive digital space.

Join us in urging social networks to take responsibility for online safety. Call for the implementation of intelligent filters to combat harmful content and prioritize digital education for a safer, more respectful internet community.

I. Digital spaces as public squares

Social networks today function as public squares. They connect billions, influence public discourse, and shape behavior. They must uphold human dignity, not enable its destruction. Platforms that tolerate unchecked abuse turn public spaces into arenas of trauma.

II. Refuse the spectacle of violence

The dissemination of videos showing murders, suicides or torture is not content — it is harm. These images consume empathy, normalize violence, and can trigger vulnerable people. Platforms must categorically ban this material and remove it at scale.

III. X (formerly Twitter): an example of failure

Following its change of ownership, X has shown how platforms can regress: violent clips circulate, hateful communities find refuge, and moderation proves inadequate. This is not free speech in action — it is a failure of governance.

IV. Case study — the death of an influencer

An investigation has been launched into the death of a French streamer known for extreme challenges. Raphaël Graven, also known as Jean Pormanove, was found dead at a residence in Contes, a village north of Nice, prosecutors said. The 46-year-old had been subjected to bouts of violence and sleep deprivation during streams, and died in his sleep during a live broadcast, local media reported. This tragic case underscores the immediate need for proactive filtering and human escalation pathways to protect life.

V. The cycle: hate attracts hate

Hate fuels more hate. A single unmoderated attack multiplies into thousands; a violent clip becomes a magnet for further cruelty. We must break the cycle by removing the platforms' incentives to amplify harm.

VI. State actors and information warfare

Certain governments instrumentalize social networks to sow anti-Western sentiment, stoke division, or target minorities. These are not organic social movements — they are campaigns. Platforms must detect and neutralize such operations.

VII. AI — a necessary filter, not a luxury

Modern AI can analyze text, images, and video at scale to detect hate, threats, and graphic violence in real time. Deploying AI with human oversight must be mandatory on all major platforms, so that harm is stopped before it spreads.
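The co-moderation principle described here can be sketched in a few lines: high-confidence harmful content is removed automatically, uncertain cases are routed to a human moderator, and everything else passes through. The sketch below is a minimal illustration with assumed names (`harm_score`, `moderate`, the thresholds); a real system would replace the toy keyword scorer with a trained classifier.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"                # auto-removed; user may appeal
    HUMAN_REVIEW = "human_review"    # queued for a human moderator
    ALLOW = "allow"

@dataclass
class Decision:
    action: Action
    score: float

# Hypothetical stand-in for a real ML classifier: returns a harm
# probability in [0, 1]. A production system would call a trained
# model here, not a keyword list.
FLAGGED_TERMS = {"kill", "attack"}

def harm_score(text: str) -> float:
    words = text.lower().split()
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return min(1.0, hits / 2)

def moderate(text: str,
             remove_threshold: float = 0.9,
             review_threshold: float = 0.5) -> Decision:
    """High-confidence harm is removed immediately; uncertain
    cases go to a human reviewer; the rest is allowed."""
    score = harm_score(text)
    if score >= remove_threshold:
        return Decision(Action.REMOVE, score)
    if score >= review_threshold:
        return Decision(Action.HUMAN_REVIEW, score)
    return Decision(Action.ALLOW, score)
```

The two thresholds encode the policy trade-off: lowering `review_threshold` sends more borderline content to humans, while the automatic-removal band stays reserved for the clearest cases, where transparent appeals remain available.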

VIII. Education — the long-term remedy

Technology alone cannot heal social wounds. Schools, universities and institutions must teach digital responsibility, empathy and respect. Early education about the consequences of sharing harmful content reduces risk and builds resilient communities.

Concrete demands

  • End toxic anonymity: require verifiable accounts for contributors while protecting privacy for vulnerable users.
  • Zero tolerance for violent content: immediate removal of murder/sexual violence videos and disciplinary escalation.
  • Mandatory AI + human co-moderation: real-time detection, human review, transparent appeals.
  • Algorithmic transparency: no promotion of violent or hateful content for engagement.
  • Institutional education: classes on respect, digital citizenship and conflict de-escalation.
  • Victim protection: clear reporting, legal support and rapid takedown procedures.
  • Legal accountability and sanctions: platforms and malicious state actors must be held responsible.

⚠️ URGENCY: Immediate implementation could save countless human lives and protect vulnerable people from harm.

✍️ Petition — Demand Responsible Social Networks

We, citizens and users, demand immediate measures to stop the spread of hate and violent content online.

The petition

We call on governments, regulators and platforms to adopt the measures listed above as a matter of urgency. This is not censorship — it is protection of human life, dignity and mental health.


By signing you support: AI moderation, legal accountability for platforms, transparency, education and the end of toxic anonymity.


📢 Share

Help this manifesto reach institutions, educators and decision-makers.