The Far-Right on Social Media

The threats, incitement and expressions of violence appearing in the comments on the pages of far-right groups have been causing concern for police and the community at large. These comments are often more blatant and violent than the posts made by the pages themselves.

The threat

We were asked whether, given the comments, these groups pose a real threat. Our reply is that when it comes to right-wing extremists, the question is when, not if, such an attack will occur. An attack is less likely to come from the leadership of a group like UPF than from an individual, or a group of supporters, who tire of rhetoric and decide they need to push the envelope. Attacks from older far-right groups, those not based on social media, are also a risk, as these groups seek to prove, both to themselves and to potential supporters, that they are still relevant.

The administrators of Facebook groups quickly learn there is a line it would be dangerous for them to cross. Without Facebook, groups like UPF and Reclaim Australia would have very little traction. What deters them from posting more obvious incitement is not the police, but rather the risk that Facebook might step in and close their accounts. Such action would render them not only powerless but largely irrelevant.

This is not to say there is no desire among those running far-right groups to go further, but the potential cost is far too high. Instead, this work is left to smaller pages like “Left Wing Bigots and Extremists Exposed” (https://www.facebook.com/LWBE3/), now in its third iteration after two previous closures by Facebook (one of them after the page targeted OHPI). These pages may be run by the same people or their associates, but the separation helps to protect the larger pages from removal.

Facebook doesn’t provide a deterrent

What the larger far-right pages provide is a “safe space” for bigotry and a ready audience of like-minded people. In this space people can post vile comments and even incitement without fear that those running the page will block them or remove their posts. Facebook may still remove specific comments and can ban repeat offenders from Facebook for a few days, but this penalty is without teeth. It has no lasting impact, unlike the closure of a page, which permanently removes a valued asset and access to an audience built up over time. With no real deterrent from the page administrators or Facebook, deterrence is left to police and the law.

Existing law is not nimble enough

There are criminal law provisions police could use to secure convictions for online incitement, but the vast majority of inciting posts are simply ignored, as there are insufficient resources dedicated to the task. This undermines the value of the law as a deterrent. To put it in perspective, imagine the impact on speeding if there were only one officer dedicated to the problem and they secured only a handful of convictions a year. A new approach is needed, one that enables small but meaningful penalties with a minimum of process, serving as both a warning and a deterrent, while larger penalties remain for repeat offenders and more serious cases.

It seems bizarre that we live in an age when there can be on-the-spot fines for putting your feet on the seats of public transport, riding a bicycle without a helmet or speeding, yet threatening someone online is in most cases likely to be ignored because processing it is too resource-intensive.

Facebook’s approach could be improved

We have recommended to Facebook that page administrators be held accountable for moderating their pages. It would take little effort for Facebook to ensure that initial complaints about comments on a page are shown to the page administrator, with the identity of the reporter hidden, and that the administrator is given an opportunity to review them before Facebook itself steps in. This avoids the need for page administrators to check every comment, while ensuring they can see the comments causing other visitors concern.

A page which repeatedly attracts the sort of bigotry, hate and incitement that violates Facebook’s own terms of service, and in many cases the law as well, and whose administrators do nothing to prevent this abuse, is one whose overall effect is negative and harmful. Such a page should not only be closed; its administrators should be considered negligent and prohibited from administering other pages for a period of time.

The growing problem

The intention of far-right groups, even if not explicitly stated, is clear from the very nature of the groups. Their posts are an open invitation to engage in discussion that almost inevitably leads to expressions of hate, bigotry and incitement. The content is not a form of dog-whistling, carrying a hidden message the public at large is not meant to hear, but an invitation to post hate openly.

The level of incitement, bigotry and racism has been growing over the last three years, and particularly in the last 18 months. Three factors have contributed to this: the growth of the far-right anti-Muslim movement in Australia; the lack of deterrence, as existing laws are seldom enforced; and statements, particularly by some politicians, about Section 18C of the Racial Discrimination Act, which have led some people to incorrectly believe they have a right to express bigotry and even incitement. When expressed online, abuse and incitement is not free speech but potentially a crime under the Commonwealth Criminal Code, carrying a penalty of up to three years in jail.

The above briefing was shared in advance with the media and quoted in subsequent coverage.
