A few comments on our Facebook page have required an extended response from us. This briefing shares some of what we have posted, and some messages we feel are important.
Third party content on the OHPI Facebook page
We regularly post links to articles in the media, or statements by other bodies or Government, related to online hate. We do this because we believe people following the Online Hate Prevention Institute are likely to be interested in such content. We believe the content we share is interesting and an important contribution for people to consider, but that doesn’t mean we agree with every point it raises.
Discussion on the OHPI page
Just as we don’t always agree with everything in articles we post, we don’t expect all our supporters to agree with everything in these articles, or in our own articles. That’s ok. Discussion about the articles, or about OHPI’s own content, is welcome. It should, however, be on topic, fair, and reasonable.
Posts advocating hate, or suggesting there is no real issue and it’s all just people whining about hurt feelings, may well be removed and may see their authors banned. People associated with hate pages may also be banned, even if they haven’t done anything wrong on our page. The OHPI page is there to promote the mission of OHPI; it is not there to engage in “debate” with people who fundamentally disagree with tackling the problem of online hate.
The problem of competitive victimhood
Whenever we write about one type of hate, someone invariably posts a reply saying “that isn’t important, what’s important is…” and tries to push attention to another issue. This is a form of competitive victimhood, and it doesn’t help any victims. One group or person’s suffering does not lessen the experience of another group or person; it is not a zero-sum game.
At OHPI we deal with a wide variety of online hate. We get that many of our supporters are particularly concerned about one kind of hate, and which kind they care about differs from supporter to supporter. We welcome that diversity, and we respect that not everyone will be energised into action on every topic we cover. At the same time, posts saying we shouldn’t deal with a particular topic because another topic is more important are not on.
Think about it logically: if we had a supporter base of five groups, each caring about a different type of hate, but each also holding a veto over any topic they think is unimportant, it’s likely someone would always veto any area we proposed to look at. The only way to cover all the types of hate is for everyone to accept that sometimes we will be working on a topic they think is low priority, or which they don’t believe is an issue at all. Having said this, we do hope that through our work people learn about other types of hate and the issues they raise.
Aggressive competitive victimhood becomes trolling
People who insist on attacking our work, or our choice to look at a particular issue, are a problem for us. They bring negativity to the page, anger those affected by the type of hate being dismissed, and can cause a sharp increase in the level of moderation the page needs. As a result we often end up solving the problem by banning such people. This means they then can’t contribute on the topic they really do care about.
If we are writing about a topic you really aren’t interested in, please just ignore it and wait until we post something you are interested in. That way you can contribute positively to the page. Better yet, read about the other topics we cover with an open mind. You might just find our take on them interesting.
The social media industry
OHPI believes social media platforms are a fundamental part of modern life. They bring people closer together, facilitate communication, and enable collaboration. We’re fans of the technology, and we’re as much a tech startup ourselves as we are a human rights organisation.
We also believe that computing and engineering professionals have ethical responsibilities, and that much of the harm that results from social media platforms could be avoided or at least minimised. Reducing harm is a cost to the business: it takes up resources, and may reduce revenues. We recognise this, but we believe it is a cost the social media companies have to bear. They have grown so big so fast in part because of a lack of regulation, loopholes in taxation, and other factors that “differentiate” the new economy from the old.
This differentiation is not because the technology is fundamentally different; it is because society, politicians, and the law have lagged behind the technology. That’s changing, and we believe that’s a good thing. We see online hate as a form of pollution: it’s not intentional, but it is a predictable by-product of the social media industry. Just as traditional industry had to bear the cost of cleaning up environmental pollution, so should social media bear the cost of cleaning up online pollution. We reject efforts by technology companies to rake in the profits while seeking legal immunities and pushing the cleanup costs onto taxpayers.
Yes, users are responsible for what they post. At the same time, social media companies must be responsible for the systems, or lack of systems, that reduce the impact of abuses of the technology. The technology changes the reality by increasing the potential for harm, and it does so while financially benefiting the social media company. As a result, companies have a duty to minimise that harm, including by having robust systems for removing harmful content. That includes hate speech.
How does OHPI fight hate?
People also often ask how exactly OHPI combats online hate. We do this in a number of ways, but the primary one is by exposing systemic flaws in social media platforms and in the systems they use to handle the reports users make about hate. We also publish recommendations to fix the problems we highlight. Some recommendations involve recognising that something is hate (a policy change); others may involve changes to software, processes or training (changes to the system itself).
From May we will also be releasing phase two of Fight Against Hate, our online reporting tool. The tool currently (in phase one) allows the public to report and categorise the hate they see online. When phase two is released, it will give stakeholders (NGOs, researchers, government agencies, etc.) access to the compiled list of items that have been reported under a hate type they are focusing on, where at least one of the reports on that item comes from the stakeholder’s jurisdiction or geographic area.
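To make that access rule concrete, the sketch below shows the filtering logic in Python. The data model and field names here are purely illustrative assumptions on our part, not the actual Fight Against Hate schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model only -- not the real Fight Against Hate schema.

@dataclass
class Report:
    hate_type: str      # e.g. "antisemitism", "anti-Muslim hate"
    jurisdiction: str   # reporter's country or region, e.g. "AU"

@dataclass
class ReportedItem:
    url: str                                      # the reported content
    reports: List[Report] = field(default_factory=list)

def items_for_stakeholder(items: List[ReportedItem],
                          hate_type: str,
                          jurisdiction: str) -> List[ReportedItem]:
    """Items reported under the stakeholder's hate type of interest,
    where at least one report on the item came from their jurisdiction."""
    return [
        item for item in items
        if any(r.hate_type == hate_type for r in item.reports)
        and any(r.jurisdiction == jurisdiction for r in item.reports)
    ]
```

On this rule, an Australian agency focused on racism, for example, would see every item carrying at least one racism report and at least one report from an Australian user.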
Phase two of Fight Against Hate will enable a new discussion about how social media companies respond to the complaints users make. The reports will show how long items have remained up since they were reported (if they are still up), how many reports resulted in content being removed, and how long removal took. By reviewing the items the platforms choose not to remove, researchers (for example) will be able to see whether platforms are systematically missing certain kinds of hate speech. We have demonstrated this problem in our previous reports, but with the software it can be done on a much larger scale, with many more civil society, academic and governmental players getting involved.
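As a rough illustration of how those response metrics could be derived from the report data, here is a minimal Python sketch. The record fields (first_reported_at, removed_at) are hypothetical names we use for illustration, not the tool’s real output format.

```python
from datetime import datetime, timezone
from statistics import median

def response_metrics(items, now=None):
    """Summarise platform responses to a list of reported items.

    `items` is a list of dicts with (hypothetical) fields:
      first_reported_at -- timezone-aware datetime of the first report
      removed_at        -- datetime the platform removed it, or None
    """
    now = now or datetime.now(timezone.utc)
    removed = [i for i in items if i["removed_at"] is not None]
    still_up = [i for i in items if i["removed_at"] is None]

    def hours(delta):
        return delta.total_seconds() / 3600

    return {
        # Share of reported items the platform actually took down.
        "removal_rate": len(removed) / len(items) if items else 0.0,
        # Typical delay between first report and removal.
        "median_hours_to_removal": median(
            hours(i["removed_at"] - i["first_reported_at"]) for i in removed
        ) if removed else None,
        # For items still online: how long each has been up since reported.
        "hours_up_since_report": [
            hours(now - i["first_reported_at"]) for i in still_up
        ],
    }
```

Comparing figures like these across hate types is what would let researchers spot the systematic gaps described above.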