FOR IMMEDIATE RELEASE: The Online Hate Prevention Institute (OHPI) is concerned that despite a significant improvement in public relations, there are fundamental underlying problems at Twitter when it comes to the challenge of dealing with hateful and dangerous content. OHPI will continue our open and frank discussions with Twitter on how this issue can be addressed, but will not be joining the new “Trust and Safety Council”.
OHPI has been advised by Twitter that the focus of the new Council will not be to address the incitement and dangerous content Twitter is currently hosting, or how Twitter can improve in this area, but will instead look at ways Twitter can get more people using the platform more often. The Council’s purpose will be increasing the use of Twitter for anti-hate messages. We believe this is an important move for Twitter, indeed imperative to the survival of the company. Large numbers of users have been leaving Twitter in response to cyber-bullying and abuse, which is often of a racist or misogynistic nature.
Dr Andre Oboler, CEO of the Online Hate Prevention Institute, explained that, “OHPI supports the use of counter speech. We engaged in a major counter speech campaign as recently as last month. We are, however, concerned because counter speech alone will not solve the problem. Counter speech can help to build community and individual resilience, but is harder to spread than bigotry and hate. We are concerned that a focus on counter speech alone may be ineffective, as it may not reach the people poisoned by the spread of online hate. We believe a balanced approach is needed, both to continually improve efforts at removing content that incites hate and violence, and to promote positive content that strengthens community cohesion. The message that social media is not a place to spread hate must go out, but the hate must also be removed if that message is to be believed. Actions always speak louder than words. This message is needed now more than ever given the sharp rise in hate speech in 2015, particularly on Twitter.”
Recent analysis by OHPI into Twitter-based hate speech against one minority group showed that 78% of the hate content was still online 10 months after it had been reported. Twitter has significant room for improvement when it comes to removing dangerous and hateful content, and this work is needed if Twitter is to stem the loss of users. The Trust and Safety Council is a start, but a forum focused on closing the gap between what people are reporting and what is being removed is also urgently needed.
All social media platforms and human rights organisations recognise that there is a balance which must be struck between competing human rights. Freedom of expression, the right to life, and the right to human dignity all need to be considered.
Twitter is ultimately bound by US law, and in the US the right to life takes priority over freedom of expression, as it does in other countries. As a result, true threats, fighting words, and a range of other forms of dangerous speech have never enjoyed free speech protection under the First Amendment. The US Supreme Court stated as far back as 1942 that, “There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which have never been thought to raise constitutional problems” (Chaplinsky v. New Hampshire, 315 U.S. 568, 62 S. Ct. 766, 86 L. Ed. 1031).
“Hate speech” is a far broader concept than the categories of unlawful speech under US law. The US First Amendment means the US Government can pass laws neither mandating that social media companies remove hate speech, nor mandating that they host it. Under US law the choice is left entirely in the hands of each company. This position is based on the primacy given to freedom of expression under US law: it is the freedom of the companies to determine what they wish to host. The position in Europe is quite different, and the protection of human dignity is given greater weight. Many countries have national laws against hate speech, and some of those contain criminal law provisions. An international treaty from the Council of Europe, the Additional Protocol to the Convention on Cybercrime, explicitly calls for criminal provisions against online racism and xenophobia.
Social media companies like Facebook are making significant efforts to increase the identification and removal of online hate content. Twitter is making significant efforts to remove content promoting terrorism, which is unlawful in the US, but is making far less effort when it comes to hate speech. Twitter’s underlying position on content removal was outlined in a 2011 blog post titled “The Tweets Must Flow”, in which Twitter argues that “we strive not to remove Tweets on the basis of their content”. Dr Oboler explains that, “there is still an underlying corporate culture of resistance to content removal as an approach to tackling online hate speech. That corporate culture at Twitter needs to change in order to keep people safe from harm, and to keep Twitter viable as a platform people are willing to use.”
The Online Hate Prevention Institute is a specialist charity based in Australia which seeks to reduce the risk of harm to people as a result of online content. OHPI aims to make online hate as unacceptable as real world hate. OHPI operates the FightAgainstHate.com software tool, which produces empirical evidence on trends in online hate and on the effectiveness of social media companies in responding to this hate. The focus of OHPI’s work includes topics such as antisemitism, anti-Muslim hate, misogyny, homophobia, countering violent extremism, freedom of expression, law reform and more.
Please help share this press release.
The Online Hate Prevention Institute’s reports are released for free to increase their impact. This work is made possible by public donations; you can donate to help support this vital work. You can also support OHPI on our Facebook page and stay updated by joining our mailing list.