Rising extremism, in Australia and around the world, has sent demand for the Online Hate Prevention Institute’s expertise and action skyrocketing. The same factors that led to the deadly violence on Capitol Hill in Washington DC are also present in Australia.

Extremism and Politics

This briefing is part of our work for a submission to the Inquiry into Extremist Movements and Radicalism in Australia being held by the Parliamentary Joint Committee on Intelligence and Security. The inquiry was called, at Labor’s urging, on the last sitting day of Federal Parliament in 2020. We commend the Federal Opposition for pushing for this vital inquiry, and the Federal Government for supporting it.

Since then, we have seen the attack on the US Congress by a combination of neo-Nazis and QAnon insurgents, incited by President Trump, Don Jnr and Rudy Giuliani at a rally (see our resulting newsletter, article and moderation report). A Special Report from the American Jewish Congress exposed the White Supremacist, neo-Nazi and QAnon sources behind the storming of the Capitol. The attack was designed to intimidate Republican members of Congress, seeking to push them into following President Trump’s demand to reject certification of the Presidential election results out of concern for their own safety and that of their families. Under Australian law this would be an open and shut case of terrorism.

Hate, threats and extremism targeting elected representatives are not unique to the United States; we’re seeing this content from Australians as well. In fact, we observed a number of Australian accounts right in the middle of the online incitement. The Government has been slow to condemn the extremism in Washington DC. We are concerned that a number of conservative commentators, and some politicians, are seeking to feed the outrage and then ride the wave of hate they fuel. This is irresponsible and dangerous, and should be condemned by all Australians. When it comes to extremism there’s no skirting around the edges, or using it in a controlled manner. We must all give a hard no to those who seek to fuel extremism.

White Supremacy & Twitter

Twitter is often the platform of choice for political debate, including with politicians. In another article published today we highlighted the threat of White Supremacy seeking to invade Australian politics via #auspol. Unlike Facebook, which is using automation to remove White Supremacist content, on Twitter automation has been used to spread White Supremacy.

Phrases such as “Its ok to be white” are, after the blunder by the Government in the Senate in 2018, easily understood in Australia as being racist. The phrase originates in the United States, so it isn’t just a local phenomenon which Twitter might have “missed”. Despite this, one account posted it 590 times, 340 of them with an image. It was posted day after day, month after month. All of them were tagged to #auspol.

The danger is not that it’s offensive. The danger is that it’s a dog whistle to extremists. It empowers White Nationalists who believe in the White Genocide conspiracy theory. It makes them feel there is support, and that people are demanding they take action.

Our Halle Report discussed the 2019 attacks in Halle (Germany) and Christchurch (New Zealand), showing how this ideology has, in the recent past, led to deadly violence. It explains how the radicalisation occurred online and was shaped by neo-Nazis who took over /pol/ and changed its culture. This is just one of the threats from online extremism.

Twitter and Parler

Those on the far-right promoting hate on Twitter, particularly White Nationalists, have been expecting a crackdown for some time. Twitter’s slowness seems to have surprised them. Many used the description space on their account to advertise their Parler account and were ready to jump ship as soon as Twitter belatedly moved against them.

Parler grew rapidly in both the United States and Australia in the months since the November 2020 US election. Users’ initial engagement on Parler was enthusiastic, but the lack of diversity (i.e. a lack of people with liberal views to troll) saw Parler users rapidly lose interest. It highlights that their focus is not free discussion with each other, but rather trolling those who disagree with them.

This says a lot about Twitter’s business model and the role online toxicity and argument plays in keeping part of their audience engaged. It’s a sad state of affairs, a negative impact on humanity, and something that needs to change.

The suspension of Parler at the same time as the suspension of far-right Twitter accounts has thrown a real spanner into the works for some of these far-right activists. Many are of course on other platforms, but Parler was the closest to the Twitter experience. Unless Twitter acts to stop them, the platform may soon see many of these extremist users returning to Twitter with new accounts, and picking up right where they left off. Twitter isn’t equipped to stop that.

Twitter must do more in 2021

Back in December 2017 we congratulated Twitter for starting to apply its policies on hate speech and extremism to usernames and handles, not just tweets. At the same time we were “concerned that rather than tackling the online hate which has the most impact on the public, the focus is instead drifting to those forms of hate which are the easiest and cheapest to find”. Unfortunately, that view was in hindsight overly optimistic.

Facebook has made great progress in the automated removal of hate speech, while Twitter has resisted improvements and lagged further and further behind. The Twitter transparency report negatively frames efforts by governments to enforce standards that Twitter should be enforcing itself. Digging into the data shows that 1.45 million items of hate speech were removed by Twitter in the latest six-month reporting period, but the fact that this latest report covers the second half of 2019 (and even this was only released in August 2020) is concerning.

With a market capitalisation of US$42 billion, Twitter is not a small company, nor is it a start-up. It can and should be investing an appropriate amount in the development of automated systems to remove hate, and in sufficient expert staff, including locally in each country, to get on top of the problem of online hate and extremism. The failure to detect even the most obvious and automatically repeated White Supremacist slogans demonstrates just how far behind community expectations Twitter is at the start of 2021.

Australia must do more in 2021

Following the attack on the Capitol, the closure of President Trump’s Twitter account has been widely reported. The Special Report from the American Jewish Congress highlighted the thread of antisemitism running through the neo-Nazi, White Supremacist and QAnon involvement in the attack on the Capitol. It also highlighted how many accounts were suspended following the attack, and shared data from them which was collected in the immediate aftermath of the attack and before Twitter took action.

The misinformation spread on social media significantly contributed to the attack on the US Capitol. The suspension of Parler, a Twitter-like service without moderation, is a positive move. Twitter, however, was also part of the problem. As reported by the American Jewish Congress, rather than routinely removing extremist content as it occurred, Twitter swept in after the violence. It sought to prevent the further spread of extremism, which is welcome, but in doing so it also removed the evidence of its own complicity. We wait to hear how Twitter will learn from this, and from Parler’s suspension, and work to improve routine efforts to remove hate and extremism and the threat they pose to public safety.

The release of the report via Twitter led to a huge response.

A number of the tweets in reply were from Australians seeking to draw the attention of Treasurer Josh Frydenberg MP and Dave Sharma MP to the report. This follows the Treasurer saying he was uncomfortable with Twitter’s ban on Trump. He added that “When it comes to breaching of hate or very violent terrorist-related material on the internet, the government has taken action.”

While some action has been taken, it is far from enough. We warned the Government of the threat of right wing extremism years before Christchurch. We offered to partner with them in tackling the problem and were politely told they had everything under control. Clearly they didn’t. The message now echoes the message then. It was wrong then, and it is wrong now. As we recently told the United Nations, government can’t do this alone and civil society can’t help effectively if governments refuse to provide the funding that is needed.

Part of our efforts this past year have been spent cleaning up White Supremacist terrorist manifestos, visible in Australia, which government has failed to remove. We spent other time documenting extremist threats and problems like the automated White Supremacist trolling of political discussion in Australia, as discussed above. It’s time government came to the party and gave us the resources we need to better meet the incredible demand that’s placed on us as Australia’s Harm Prevention Charity for tackling online hate.

Comments and support

You can support our work tackling extremism through a new dedicated fundraiser.

You can comment on this article in this Facebook post, or by replying to this tweet.