On the advice of its Trust and Safety Council, Twitter has expanded its understanding of prohibited hate. For the first time, Twitter will consider content on users' profiles, not just their tweets. This means hate speech found in account names, descriptions and profile images will now be considered.
The change appears to be an extension of Twitter's action against violent extremism more than a reappraisal of its approach to hate speech itself. Twitter has traditionally taken the view that hate speech should be countered with more speech, not with censorship. This approach is obviously beneficial for a platform whose lifeblood is the content users post, but it is an approach which is generally rejected around the globe, with the United States being the exception to the rule. In Australia, around 80% of the public support laws against hate speech.
The gap between violent extremism and hate speech narrowed after the "Unite the Right" rally in Charlottesville in August 2017. The rally, which became violent, ultimately led to the vehicular attack by a 20-year-old far-right activist which killed Heather D. Heyer and injured 19 others. President Trump's refusal to call out right-wing extremism following the attack became a tipping point in American attitudes to hate speech.
In November, President Trump retweeted three tweets by the far-right Britain First group, a splinter group that broke away from the British National Party, itself defined as a neo-Nazi political party by the Oxford English Dictionary. This led to a rebuke from the UK's Prime Minister. Today, Twitter closed the account of Britain First under its new policy, along with many others including Antipodean Resistance here in Australia (the group responsible for a poster campaign of hate around universities in Melbourne), but stated that the rules which would see an account closed do not apply to military or government entities.
The new rules seek to "reduce the amount of abusive behaviour and hateful conduct", a move which still seeks to create some sort of distinction between "speech" and "action" even as it enlarges the scope of what crosses the line into unacceptable activity on Twitter. A spokesperson explained that, "If an account's profile information includes a violent threat or multiple slurs, epithets, racist or sexist tropes, incites fear, or reduces someone to less than human, it will be permanently suspended". When it comes to tweets, hateful imagery will be hidden and users will have to click a button to see it. Such imagery includes "logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin". This sends the message that those promoting hostility and malice against minorities can continue to use Twitter, provided their account isn't clearly set up exclusively for that purpose.
Meanwhile, the far right have coined a new term, claiming they are being "shoahed", a play on the Hebrew word for the Holocaust, the Shoah. This Holocaust trivialisation in the face of the purge of the far right on Twitter should perhaps be unsurprising. The responses we saw from accounts that remain range from outright hate, to white pride ideology, to a supposedly anti-racism account that is clearly satirical. We also saw people making the argument that booting the far right from Twitter would only make them stronger, only to see those same people in other tweets self-identify as far-right activists. This reflects research discussed in the Cyber-Racism book, in which it was primarily those who self-identified as people engaging in cyber-racism who opposed laws against racism.
One interesting question is whether hidden content will still be accessible through the Twitter API (Application Programming Interface), the gateway which allows other software to interact with Twitter. A growing body of research into hate speech by both academics and civil society organisations is creating increased pressure on Twitter. This research is generally based on artificial intelligence approaches designed to detect online hate speech using relatively simple text analysis, accessing Twitter through the API. These approaches work by taking a list of hate speech terms and searching through social media for content where those words or phrases occur. Finding hate symbols like swastikas or racial slurs with such approaches is straightforward, as the context in which they appear seldom matters. Other hate speech, which uses more general language, is harder to detect.
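To make the term-matching approach concrete, here is a minimal sketch in Python. The term list and post texts are hypothetical placeholders, not a real research lexicon; actual tools must also handle slang, deliberate misspellings and context, which this sketch does not attempt.

```python
import re

# Hypothetical placeholder lexicon; real research uses curated term lists.
TERM_LIST = ["hateterm1", "hateterm2"]

def flag_post(text, terms=TERM_LIST):
    """Return the lexicon terms found in a post (empty list if none)."""
    found = []
    for term in terms:
        # Word-boundary match so terms are not matched inside longer words.
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            found.append(term)
    return found

# Example: scan a small batch of (placeholder) posts fetched via the API.
posts = [
    "An ordinary tweet about the weather",
    "An abusive tweet containing hateterm1",
]
flagged = [p for p in posts if flag_post(p)]
```

As the article notes, this style of matching catches explicit slurs and symbols easily, but misses hate expressed in ordinary language, which is where it breaks down on platforms like YouTube.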
Using these AI tools on Twitter is far easier than on other platforms like YouTube or Facebook, because Twitter is a simpler platform in technical terms, all the content on Twitter is public, and the API gives access to everything a researcher would need. The result is that hate speech, particularly involving hate symbols and racial slurs, is now increasingly being detected and written about both academically and in reports by civil society, and based on the data they have, the attention is increasingly Twitter-focused.
Twitter does have a lot of room to improve, and the changes it is now making, including taking into account profile information and not just tweets, are one important improvement. At the same time, the Online Hate Prevention Institute's own data, which is based on reports made by the public rather than automated approaches, suggests far more of the problem is on YouTube than Twitter. Our analysis also shows that YouTube is far less effective at removing such hate. The problem is that the AI approaches being used have greater difficulty identifying hate speech on YouTube.
Our data is supported by survey data shared in the new book Cyber Racism and Community Resilience: Strategies for Combating Online Race Hate, which includes data from large-scale surveys of the Australian population on the topic of online racism. The survey, conducted in 2013, examined where Australians encountered online racism. The top places were Facebook (40%), news sites (18.5%) and YouTube (15.7%), while only 1.9% of the racism occurred on Twitter.
We commend Twitter for taking steps to tackle online hate, but we remain concerned that rather than tackling the online hate which has the most impact on the public, the focus is instead drifting to those forms of hate which are the easiest and cheapest to find. To keep people safe we need to focus on impact, and the government needs to step up and start funding this work. The Online Hate Prevention Institute's appeals to the Attorney-General's Department for support to tackle the rising tide of hate have for years been met with praise but not a cent in funding. We hope this might change following the current cabinet reshuffle. The problem of online hate is not going away.
Our work depends on public donations. To support future work and continued monitoring of online hate, please donate here.