How our reporting tool makes a difference

When people hear we have a tool for reporting online hate, some get concerned. Those who peddle hate have good cause to be concerned. Those using social media to incite violence or plan extremist attacks on the public should certainly be worried. For everyone else, our tool should be celebrated.

First, let’s clear up a few misconceptions. There are automated ways of blocking or removing posts based on word filters. Some online systems use these filters to eliminate spam, profanity, or other content they don’t want on their site. Ours is not that sort of tool. It is designed for content found on the major social media platforms, currently Facebook, YouTube, and Twitter. It doesn’t collect data in an automated way; instead it relies on people reporting content to it. Finally, it doesn’t remove content; instead it provides transparency on how the social media companies respond when users report something to them.

If something is taken down, it is because a platform like Facebook reviewed it and its staff decided to remove it. Our system comes into play when there is a gap between what the public expects and what the companies are doing. In other words, when there is something which should be removed and isn’t. Our tool creates lists of such cases and allows tracking of whether they are still online. It is then up to the community (including OHPI itself) to keep reporting these items, campaign on them, publish about them, and explain why they should be removed. These activities can lead to social media companies looking at the item again in a new light and taking action, but the final decision is still made by the company.

Our tool also plays a role with more dangerous types of content. At the Online Hate Prevention Institute most of our work is published on our website, but a part of it, around 20%, involves confidential reports. These reports relate to incitement to violence, serious threats to individuals, and content promoting violent extremism, including terrorism. Our confidential reports are shared with law enforcement, security agencies, and the social media platforms.

Researchers at universities, government agencies, think tanks, and human rights organisations also use our data. We are able to provide lists of content which has been reported to us and which has not been removed. The researchers use these items for academic research, campaigns, reports, and to inform government and organisational policy. Our tool brings transparency. Without such an approach, we don’t know how many items are reported. We don’t know what is taken down and what isn’t. Each of us would only know about the items we personally reported. Our system fixes this by collating the items and giving experts access to follow up on those cases which need further attention.

While our tool does not automate the collection of data, it does use automated approaches to judge the quality of the reports made to it. This allows the content that constitutes incitement, and is most likely to result in harm to someone, to be given priority. It allows a researcher to focus on more certain cases, with less chance of mistaken or deliberately false reports. The system is calibrated against expert views, and for now the content is also manually checked before release.
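The prioritisation described above can be sketched as a simple scoring function. This is purely illustrative: the category names, the weights, and the `reporter_accuracy` field are all assumptions for the sake of the sketch, not the actual system’s categories or calibration.

```python
# Hypothetical sketch of report triage. All weights and field names
# are invented for illustration; the real system's calibration against
# expert views is not public.

# Severity weights per report category (assumed values).
CATEGORY_WEIGHT = {
    "incitement_to_violence": 1.0,  # highest priority
    "serious_threat": 0.9,
    "hate_speech": 0.6,
    "other": 0.2,
}

def priority(report: dict) -> float:
    """Combine category severity with the reporter's track record.

    `reporter_accuracy` stands in for the fraction of the reporter's
    past reports that experts agreed with; it damps the score of
    likely mistaken or deliberately false reports.
    """
    weight = CATEGORY_WEIGHT.get(report["category"], CATEGORY_WEIGHT["other"])
    return weight * report.get("reporter_accuracy", 0.5)

def triage(reports: list) -> list:
    """Return reports ordered so the most urgent are reviewed first."""
    return sorted(reports, key=priority, reverse=True)

reports = [
    {"id": 1, "category": "hate_speech", "reporter_accuracy": 0.9},
    {"id": 2, "category": "incitement_to_violence", "reporter_accuracy": 0.8},
    {"id": 3, "category": "other", "reporter_accuracy": 0.4},
]
print([r["id"] for r in triage(reports)])  # prints [2, 1, 3]
```

In a real deployment the top of this queue would still go to a human reviewer, matching the manual check the paragraph above describes.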

The bottom line is that there is a problem here which needs a solution. The problem is that in most cases when people report things to social media companies, their very valid reports are rejected. Our solution helps bring transparency, which is the first step to creating change so more of these reports are properly dealt with by the companies. The ultimate decision on what comes down and what doesn’t is made by the social media platform. The point of our system is to hold them accountable, for example through publications and campaigns, where there are cases that they are getting wrong.

If you want to help make a difference, head over to our site, click to log in or register with Facebook, and next time you see an item of hate, report it to the platform, then report it to us.