Responding to Christchurch: Tackling Online Hate and Extremism

Following the terrorist attack in Christchurch, New Zealand, which was live streamed on Facebook and left 50 people dead, hate speech and extremism online are under intense scrutiny. As Australia’s only Harm Prevention Charity dedicated to tackling online hate and extremism, we at the Online Hate Prevention Institute have suddenly found ourselves and our work in the spotlight.

Given how much we do in this space, this briefing is designed to pull together some of the essential information to help the public and key stakeholders in government and the technology sector get the information they need.

Current News

Our report on the terrorist attack in New Zealand and the response online is being published raw and updated as we add further data and analysis. A final version with additional chapters on technical, policy and legal considerations will be published in a few months’ time.

ABC 7.30 put together a powerful report on Monday on the aftermath of the attack. A segment of the program examines how we deal with those who spread hate, and includes an interview with our CEO, Dr Oboler. In Australia the full episode is available online until March 24.

Dr Oboler’s interview with ABC News on Tuesday discussed monitoring online hate and extremism and what can and can’t be done. Artificial Intelligence can play a role but it won’t be a complete solution. The interview was picked up by MSN for global coverage.

Dr Nasya Bahfen, a journalist, lecturer and director of OHPI, wrote in the Asia Media Centre in New Zealand about the role sections of the media have played in allowing far-right hate and extremism to spread. She writes, “Even if a journalist did not explicitly attack Muslims or Islam, she or he was a willing actor in a latent process of normalising hate under, for example, the guise of free speech, or the ‘need’ to criticise and question Islam, or to silence practising Muslims (usually Muslim women) and speak on their behalf.” As a practising Muslim, she reflects personally and writes that, “Muslim groups have to put together tool-kits on coping with community trauma, teaching members of their communities how to explain the hatred directed towards them to their children.”

A week prior to the attack Dr Oboler told 10 Daily that the “alt-right” was a “much more recent movement” than traditional neo-Nazis and that “they tend to use digital channels to get more publicity”. He described them as “extreme nationalists who are also Islamophobic” and warned: “They need to be prosecuted and it’s up to police and the government to track these people down and hold them accountable. We have the laws, they just need to use them.” At the same time he was speaking with Business Insider about hate speech and conspiracy theories on streaming services. The attack in Christchurch brought these two problems together to devastating effect.

Absolute freedom of speech online leads to places like 8Chan, which played a significant role in radicalising Brenton Tarrant, in turn leading to the tragic events that unfolded in New Zealand. Ginger Gorman is a journalist, author and cyberhate expert who has both experienced and explored the world of predator trolling. Following the New Zealand terrorist attack, OHPI’s Carly McClen interviewed her about her new book “Troll Hunting: Inside the World of Online Hate and its Human Fallout”.

The Australian Financial Review has looked at regulation of social media companies. The article includes an extensive interview with Dr Oboler, and one key point he raised was that, “If we wait to deal with the problem until we’re dealing with that actual terrorist attack, we’ve left it far too late. We need to do more to address the hate speech earlier on, further down that food chain.”

We discussed the use of the OK sign by white supremacists with 10 Daily after it was flashed in court by Brenton Tarrant. Dr Oboler explained, “It’s a message to empower and embolden the faithful while being just ordinary enough to make those calling it out jump at shadows.” The use of the symbol by the Alt-Right goes back to 2015. Those suggesting it came from 4Chan in 2017 and was simply a form of trolling were taken in by a hoax.

China Daily interviewed Dr Oboler on the attack and its aftermath. Dr Oboler highlighted the nature of the extremism as being a mix of the Alt-Right and Islamophobia.

In an op-ed in the Herald-Sun, Dr Andre Oboler looked at how live streamed video can be rapidly taken down to prevent a repeat of the magnification of terrorism that occurred in Christchurch due to the live streaming of the attacks. A range of approaches are considered, along with policy advice on what will work and what won’t. Dr Oboler explores the options of a “notice system” which government could use to contact companies to get an immediate response, a tool giving government direct access to suspend content on major platforms, and requiring companies to act immediately once they become aware of a problem. The last option becomes deeply problematic once legal and implementation details are considered in light of the nature of a rapidly unfolding situation where timing is critical and the incident may last mere minutes from start to end.

AAP interviewed Dr Oboler about the new crackdown on white supremacy by Facebook. Dr Oboler highlighted the need for partnerships between the tech giants and civil society, including those with local knowledge. MSN has taken the story global.

New laws have been rushed through parliament. They have some significant flaws, as we discussed with ABC Science.

Deep Background

Monitoring and regulating online hate and extremism was discussed in depth in an article for The Conversation by Dr Oboler a year ago. It looks at the way experts, artificial intelligence and the law can work together to address the growing problem.

At the Asia-Pacific launch of Tech Against Terrorism, Dr Oboler presented on “Open Source Intelligence during High Risk situations”. While the meeting was a closed door affair, Dr Oboler’s presentation was filmed with the consent of the organisers and attendees. An earlier talk in 2017 on the role of engineers and technology professionals in tackling online extremism and hate is available through IEEE TV.

The Online Hate Prevention Institute has undertaken extensive research into past mass casualty events in Australia. Research into the January 2017 Bourke Street car attack and the actions of the far-right and the Counter Jihad Movement following that attack resulted in a 130 page report. This was shared with police and politicians but has not yet been released publicly. Reports into December 2017 Flinders Street attack and the November 2018 Bourke Street attack similarly highlight how events are used by the far-right to stoke hate and fear. This creates forums on social media where incitement to violence starts to occur in the comments.

A whole of community approach is needed, and the solution designed by the Online Hate Prevention Institute to facilitate this is discussed in a recent journal paper, “Building SMARTER Communities of Resistance and Solidarity”, which is freely available. A paper on antisemitism looks at how this could work with a coordinated approach. Dr Oboler’s first paper warning of the danger of hate speech in social media was published in 2008. A 2012 paper raised the need for more ethical approaches to social media management and warned of the potential for problems like the manipulation of elections.

OHPI believes the best public policy is evidence based. We’ve been working for years to gather that evidence and refine our methodologies. In particular we have been looking at how effective social media companies are at removing hate and incitement. Our reports on antisemitism and Islamophobia a few years ago highlighted poor response rates by technology companies even 10 months after content had been reported. The companies have improved since then, but due to a lack of resources to do this monitoring at scale on an ongoing basis we can’t say by how much. We do know there are still clear cases of failure.

The Online Hate Prevention Institute

The Online Hate Prevention Institute was founded in January 2012. Since then we have had a huge impact in tackling the problem of online hate, and our work continues to play a significant role internationally.

Despite the urgent need to tackle online hate and extremism, since our peak activity in 2016 we have been cutting costs and reducing our capacity in order to remain viable, rather than growing to meet the challenge.

Public donations through the PayPal Giving Fund are allowing us to slowly increase our capacity. As an Australian Harm Prevention Charity registered with the Australian Charities and Not-for-profits Commission, donations to support our work are tax deductible for individuals and businesses in Australia.

Tackling hate – all of it

In a comment on our page, a visitor recently asked, “So that means you will act on all the hate directed at Donald Trump, Tony Abbott, right to life advocates, Christianity, conservatives, patriots?” The comment was clearly intended rhetorically, but the answer may have surprised them.

The Online Hate Prevention Institute has indeed worked to remove a page calling for Tony Abbott to die. We also worked on a similar page targeting Julia Gillard. The two are discussed in this briefing.

The report on the New Zealand terrorist attack documents hate and incitement against minorities, but also the hate and incitement against Senator Anning and his supporters.

When documenting hate and incitement we remove the names and pictures of those responsible to protect them from public attacks. Our belief that online hate should be prevented holds true even when the hate is being directed against a person because that person engaged in hate. We discussed this when Senator Nova Peris OAM was harassed online and a vigilante mob sprang up in response.

We believe:

  • Groups advocating hate do not have a right to use advanced technology platforms like social media to coordinate and grow
  • Companies have a right to ban such activities on their platforms, and the public should pressure them to do so
  • Governments have a right to demand that illegal content, including incitement to violence, is promptly removed
  • Governments have a right to apply penalties to platforms when efforts to prevent and remove illegal content fall below what could reasonably be expected in the circumstances
  • Companies should resist government demands when those demands are a clear violation of international human rights law

Where content spreading hate, incitement, harassment and the like is against the law, it is for police to bring charges and the courts to pass judgement. That’s what it means to live in a society under the rule of law. That’s what we support, not a hate-based arms race. Our goal as a charity is to reduce the risk of harm to human beings as a result of online hate. That’s what it says on the tin, and that’s what we do.

Technical Solutions

The Online Hate Prevention Institute has been building tools to monitor and tackle online hate and extremism since 2012. Our Fight Against Hate software has been described as innovative by UNESCO, presented at the United Nations, and reports based on its data have been cited by the United Nations.

Fight Against Hate allows the public to report online hate and extremism found on Facebook, YouTube and Twitter and allows them to classify what sort of hate they are seeing. The latest version of the tool allows it to be customised for specific communities and embedded on the websites of relevant organisations. Volunteers or staff from those organisations would have access to both summary graphs and lists of the reported data. Other approved stakeholders could potentially also access the data in real time.
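To make the workflow concrete, here is an illustrative sketch only: the names below are hypothetical and do not come from the actual Fight Against Hate codebase, which is not public. A report submitted through such a tool might be modelled as a record carrying the item’s URL, the platform it was found on, and the reporter’s classification, with a simple aggregation feeding the summary graphs described above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical classification labels, loosely based on the briefing
# categories listed elsewhere in this document.
CATEGORIES = {
    "racism", "antisemitism", "islamophobia", "violent_extremism",
    "holocaust_denial", "misogyny", "homophobia", "cyber_bullying",
}

# The three platforms the tool is described as covering.
PLATFORMS = {"facebook", "youtube", "twitter"}

@dataclass
class HateReport:
    """A single public report of an item of online hate (illustrative only)."""
    url: str
    platform: str
    category: str
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Validate the reporter's input against the known vocabularies.
        if self.platform not in PLATFORMS:
            raise ValueError(f"unsupported platform: {self.platform}")
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

def summarise(reports):
    """Count reports per category: the kind of data behind summary graphs."""
    counts = {}
    for r in reports:
        counts[r.category] = counts.get(r.category, 0) + 1
    return counts
```

An organisation embedding a customised version of the tool would see aggregated counts of this kind for its own community, while approved stakeholders could consume the underlying records in real time.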

The license for the tool is just $3,500 per year after a significant redevelopment effort to lower the operating costs. There are currently a number of free licenses available for Australian organisations to enable them to take part in a research project to evaluate how their organisation can use the tool. Interested organisations can contact us.

A version of the tool configured for reporting Islamophobia can be found at the bottom of the article on the New Zealand terrorist attack. There is also a version configured for reporting antisemitism. Other configurations can be created rapidly in consultation with the organisations requesting them.

OHPI’s Research

The Online Hate Prevention Institute has produced 15 detailed reports as well as over 200 shorter briefings since we were founded in 2012. We include recommendations for key stakeholders (such as technology companies, government and civil society organisations) in our reports while our briefings are designed to draw attention to specific issues or incidents.

Reports and submissions

Briefings

These are divided thematically into the following categories: racism, antisemitism, Islamophobia, racism targeting Indigenous Australians, violent extremism, Holocaust denial, hate targeting military veterans, serious trolling, cyber-bullying, misogyny, homophobia, and griefing.

Contributions to books

Our CEO Dr Oboler has contributed to the following books:

Journal and media articles by our CEO on the topic of online hate speech can be seen on his website.

Older News on Online Hate

The Online Hate Prevention Institute has had a significant amount of media coverage over the years. Here are a few highlights. Please note that the technology companies have all improved over time to different degrees, so older material may well reflect how things were, but not how they are now.

The following is a selection from media coverage of the Online Hate Prevention Institute over the last few years.

  • “Racism laws fail in the world wild web”, SBS News, 26 August 2013. This news story discusses racism against Indigenous Australians. (Video of segment is embedded below.)
  • “Muslims Trolled”, ABC Radio National Drive, 10 December 2013. OHPI’s CEO discusses the first ever major report on Islamophobia on Facebook with host Waleed Aly (play audio).
https://www.youtube.com/watch?v=LhAKfx2gt_o
SBS News, 26 August, 2013
  • Andy Park, Racial abuse caught on smartphone, SBS News, 26 August 2013. Dr Oboler explains how video can be used to call out racism, but it needs to provide clear context so such videos don’t inspire further racism.
  • Peter Kohn, “18C helped beat Facebook hate page”, The Australian Jewish News, 16 May 2014, page 4. Discusses OHPI’s work with Facebook to close the Facebook page of the Australian branch of the far-right Greek political party Golden Dawn.
  • Chris Johnston, “Bendigo mosque a cause celebre for right-wing outsiders“, 27 June 2014. The article used research by the Online Hate Prevention Institute highlighting the way the internet was used by far-right anti-Muslim activists. In a page opposing a local mosque, only 3% of supporters were from the same city, 59% were from an entirely different state and 13% were from overseas.
  • Matt Khoury, “Tackling the online hate of right-wing extremism“, The Point Magazine, 4 December 2014. The article discusses the launch of Fight Against Hate and the need to tackle right wing extremism. Dr Oboler highlighted the need for people to get on board.
  • Richard Jackson, “OHPI: Tracking down the data to combat online hate“, The Big Smoke, 11 December 2014. A report of the launch event for Fight Against Hate.
  • The Project, Channel 10, January 19, 2015. Dr Oboler is interviewed about online hate.
  • David Blumenstein, “Who’s Ben Garrison“, June 2, 2015, The Nib. This cartoon, based on in-depth interviews, explains /pol/ and work by OHPI to tackle part of the harm it was causing. It was 8Chan’s /pol/ where Brenton Tarrant posted the message with the link to his live stream. For those wanting some background on /pol/, this is worth a read.
  • Luke Waters, “Charity takes aim at anti-Muslim sentiment on social media“, 25 September 2015. OHPI’s Spotlight on Anti-Muslim Internet Hate project, gathering data through our Fight Against Hate software is discussed.
  • Christine El Khoury, “Background Briefing”, November 22, 2015, ABC Radio National. Dr Oboler provided background on how social media is being used to promote hate groups, and what terror tactics they use online to intimidate and threaten Muslims. He draws parallels between the way antisemitic speech and anti-Muslim hate speech operate online.
  • “2015: The year that angry won the internet“, BBC News, Dec 30, 2015. Dr Oboler rejects the approach the platforms had at the time, which was that the solution to hate speech was more speech. He said platforms were responsible for addressing the problem and warned that unless things changed, users would leave.
  • Interview with Hack on Triple J, September 15 2016. Tom Tilley interviewed the Online Hate Prevention Institute’s CEO, Dr Andre Oboler, after racism and incitement on Facebook in a local community was linked to the death of a 14-year-old Aboriginal boy.
  • “Extremism taking us to dark places“, The Daily Telegraph, 15 June 2016 (paywall)
  • Lilly Maier, “Twitter Gives Online Hate Speech Its Biggest Platform — Why?“, Forward, 18 October 2016. Dr Oboler discusses a culture problem at Twitter that leaves them with “neither the will nor appropriate systems to handle the problem that is running rampant on their platform.”
  • Josh Butler, “How The Bourke Street Rampage Was Quickly Claimed To Be ‘Islamic Terrorism’“, Huffpost, 24 January 2017. Dr Oboler explains how anti-Muslim conspiracy theories are created and spread.
  • Shira Rubin, “Holocaust Denial Sees New Dawn With Social Media“, Vocativ, 27 January 2017. Dr Oboler explained, “The alt-right and far-right have always been steeped in anti-Semitism, but with the election over the past months, we’ve seen that all inhibitions have faded away, people who were once afraid to come out are no longer. What is now known as the ‘alt-right’ spent years if not decades promoting Holocaust denial on places like 4Chan and Reddit, now they’ve graduated to fake news, a network of blogs and websites that alter search results, and mainstream social media platforms like Facebook.”
  • Patrick Wood, “Tara Moss shines light on trolls and cyber hate in new TV series“, ABC News Breakfast, 15 Mar 2017. The article quotes an OHPI submission which states that, “By the end of 2015 many people felt their hate was acceptable and were comfortable posting it under their real name or their regular social media account.”
  • Andrew Jakubowicz, “Here’s how Australia can act to target racist behaviour online“, The Conversation, 16 October 2017. Prof. Jakubowicz writes, “The Online Hate Prevention Institute (OHPI) has become a reservoir of insights and capacities to identify and pursue perpetrators. As proposed by OHPI, a CyberLine could be created for tipping and reporting race hate speech online, for follow up and possible legal action. Such a hotline would also serve as a discussion portal on what racism looks like and what responses are appropriate.”
  • Jenna Price, “Why you should argue with racists“, The Sydney Morning Herald, 6 November 2018. Dr Oboler explained, “Today we are in a new phase . . . it is not the racists who are under challenge but those who stand up to them. These are very dangerous times. Shifting those attitudes may well take a generation.” Dr Oboler noted that social media platforms were doing better at trying to address the problem, but were being swamped.
  • Liam Mannix, Nino Bucci, “Dutton, Turnbull legitimising anti-immigrant vigilantes, say experts“, The Age, 15 January 2018. In the article Dr Oboler notes how the use of laws against religious vilification by police and the courts had a significant impact on the far-right in Victoria, who were looking to pivot to another target in response, and that politicians’ language about “African immigrants” risked making the African community a target of the far right.
  • Benjamin Goggin, “Amazon and Hulu’s algorithms are recommending conspiracy theory films, and the consequences could be more serious than you might think“, 9 February 2019. Dr Oboler warned about the danger of antisemitic conspiracy theories on streaming services. He notes that, “Unfortunately, platforms tend to be reluctant to remove such content or to accept that it is a breach of their hate speech policies… The bottom line is that such content attracts eyeballs, which translates into revenue.”

Comments?

Comments can be left in this Facebook thread. Please note we have a “no platform policy” and trolls and supporters of hate pages get banned.