Tackling Hate on Twitter – Civil Society and the Bots

Last year we received a suspicious tweet directed at us, followed by one from another account warning us that the first account was a troll. That was how we met Impostor Buster, the second account. Impostor Buster was not a person but an automated piece of software, built by a software developer on one side of the US, Neal Chandra, based on an idea from a journalist living on the other side of the country, Yair Rosenberg. Rosenberg ran the account for some time without issue, but sadly Impostor Buster is now no more.

A concerted campaign by neo-Nazis to report the account tripped up Twitter’s automated safeguards and the account was disabled in the final days of 2017. By its nature the account sent very similarly worded tweets that: (a) mentioned the account of a troll, (b) warned that it was a troll account, and (c) directed that warning to the people the troll had been tweeting at. The trolls all blocked the bot so their own feeds didn’t fill up with its warning messages. Twitter closed the Impostor Buster account with the justification that, “A large number of people have blocked you in response to high volumes of untargeted, unsolicited, or duplicative content or engagements from your account”.
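To make that pattern concrete, here is a minimal, hypothetical sketch of the reply logic just described. This is not Impostor Buster’s actual code; the function name and handles are invented for illustration, and the real bot posted its warnings through Twitter’s API.

```python
# A hypothetical sketch of the reply pattern described above. This is not
# Impostor Buster's actual code; the handles and wording are made up, and
# the real bot posted its warnings via Twitter's API.

def build_warning_reply(troll_handle, targeted_handles):
    """Compose a warning that (a) mentions the troll account, (b) states
    that it is a troll, and (c) is directed at the people the troll has
    been tweeting at."""
    recipients = " ".join("@" + handle for handle in targeted_handles)
    return (f"{recipients} Heads up: @{troll_handle} appears to be an "
            f"impersonation (troll) account.")

# Example with made-up handles: the troll has been replying to two users.
print(build_warning_reply("fake_account", ["user_one", "user_two"]))
```

Because every warning follows the same template and mentions whichever users the troll targeted, the bot’s output is inherently repetitive, which is exactly what Twitter’s anti-spam safeguards flagged once the trolls blocked and reported it en masse.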

Twitter is, as usual, about four years behind Facebook when it comes to its policies and approaches to tackling the misuse of its platform. Facebook too ran into problems when trolls used its automated systems to attack those opposing them. In that case it was organisations tackling antisemitism, such as They Can’t and Palestinian Media Watch, which were temporarily shut down. Facebook, however, recognized that its automated tools and its staff were being manipulated and took steps to prevent this.

To be fair to Twitter, Impostor Buster was closed once before and they reinstated it. Perhaps they will do so again and are just a little slow, given one of their key staff for resolving such issues has just left the company (thank you, Patricia Cartes, for all you did to make Twitter a better place). We hope Twitter will not only reinstate the account but also whitelist it so that it can’t be disabled again.

Impostor Buster specialized in a type of troll we have been documenting in the Facebook context since 2012. Impostor trolls are fake accounts that either impersonate a specific person, usually from a minority group, or use a picture of one person from a minority community and a generic-sounding name from that community to impersonate the community more generally.

Our 2012 report on “Aboriginal Memes and Online Hate” showed how fake profiles were created to impersonate our CEO, Dr Andre Oboler, using his picture and name to create an account that was used to promote hate and to harass him. 

Our report “Attacking the ANZACs” and related follow-up briefings showed how fake people were created using stolen photographs and made-up names. In the ANZAC report, a French politician’s photograph and our CEO’s last name were used to create the fake profile. Photoshopped images claimed the fake person had attended what was described as a pro-Israel rally, and another post said he had just had a meeting with his rabbi, all to try to establish the Jewish credentials of this fake account. We’ve also seen Facebook hate pages falsely claim an association with the Online Hate Prevention Institute, including a RIP trolling page shortly after the murder of British soldier Lee Rigby.

Facebook has generally cleaned up such accounts within two or three days at most, and often within hours of us contacting them. Twitter clearly has a systemic problem: they were aware of the account, a leading civil rights organisation (the ADL) had advised them not to close it, and they had previously reinstated it, yet this still happened again, and more than a week later it has not been resolved, despite coverage in the New York Times.

In his New York Times article, Yair Rosenberg contends that Impostor Buster played a vital role. On this we agree with him. He also argues that grassroots exposure of fake accounts, and not top-down bans by platforms, is the way to tackle online Nazis. On this point we disagree with him. Rosenberg argues that, given the terabytes of data on a platform like Twitter, it is “unfeasible to expect them to effectively regulate their content”. That’s a little like arguing that because in a large city the police cannot be everywhere at once and prevent all crime, we may as well disband the police force. It’s a flawed argument. On the technical side, the terabytes of data on Twitter are not a problem; if they were, Twitter wouldn’t be able to function. The argument is therefore flawed at the technical level as well.

Rosenberg’s argument was also about effectiveness. There are a range of approaches a company like Twitter can use to prevent hate. They range in effectiveness, quality and cost:

  • Effectiveness is the degree to which they prevent hate.
  • Quality is the degree to which they avoid collateral damage, that is, closing legitimate accounts.
  • Cost is how much time, effort and resources are needed to implement a given approach.

Doing nothing at all is 100% ineffective, so the fact that it is free is irrelevant, as is the fact that it avoids any collateral harm. It doesn’t prevent the abuse, which is quite literally costing lives as people are bullied to death via Twitter. There is a cost, and a responsibility, linked to inaction.

With automated approaches, how effective they are, and conversely how much collateral damage they cause, depends on their programming and configuration. It’s a trade-off. If the automated system only bans accounts which have a very high chance of being trolls, collateral damage will be minimized but the system won’t be very effective; the trolls will find ways to stay just below the algorithm’s trigger point. On the other hand, if the algorithm is quick on the trigger, the collateral damage may be high.
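To make that trade-off concrete, here is a minimal, hypothetical sketch (not Twitter’s actual system) in which accounts receive a “troll score” from some classifier and are banned once the score crosses a threshold. All names and numbers are invented for illustration.

```python
# Hypothetical illustration of the threshold trade-off described above.
# The scores and labels are invented; this is not Twitter's actual system.

accounts = [
    # (handle, troll score from some hypothetical classifier, actually a troll?)
    ("troll_a",    0.95, True),
    ("troll_b",    0.72, True),   # stays just below a cautious trigger point
    ("activist_x", 0.68, False),  # legitimate account that "looks" spammy
    ("ordinary_y", 0.10, False),
]

def evaluate(threshold):
    """Return which accounts get banned, which trolls are missed, and
    which legitimate accounts become collateral damage at this threshold."""
    banned = [h for h, score, _ in accounts if score >= threshold]
    missed = [h for h, score, troll in accounts if troll and score < threshold]
    collateral = [h for h, score, troll in accounts if not troll and score >= threshold]
    return banned, missed, collateral

# A cautious threshold minimizes collateral damage but lets trolls slip under;
# an aggressive threshold catches more trolls but bans legitimate accounts.
for threshold in (0.9, 0.6):
    banned, missed, collateral = evaluate(threshold)
    print(f"threshold={threshold}: banned={banned}, missed trolls={missed}, "
          f"collateral damage={collateral}")
```

At the cautious setting only the most blatant troll is banned and the borderline troll slips through; at the aggressive setting both trolls are caught, but a legitimate account is banned with them. That is the trade-off in miniature.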

Having teams of paid staff manually review reported content is another approach. How effective this is depends on the number of staff and how much time pressure they are put under. A similar argument applies as with an AI-based approach: if the guidelines for taking action set a low threshold, even people will make mistakes; if the bar is set much higher, more trolls will slip under it, but less collateral damage will occur.

The key thing is to have a robust system for correcting any damage, regardless of whether it comes from an automated or a human response. That’s what Twitter currently lacks: a robust way of correcting mistakes, coupled with a willingness to make some mistakes in order to be more effective. Unlike a real war, in the war against trolls the damage of closing an account can be readily fixed, while the harm of leaving it open to abuse others can potentially lead to real casualties. Ultimately both automated and human elements are needed in the system.

Surprisingly, Rosenberg argues against the closure of any accounts. He claims such closures are censorship which simply sweeps the problem of bigotry and abuse under the rug. Yet if platforms didn’t ban people who were abusing their systems, they would be enabling everything from terrorism to incitement to genocide.

His argument is far too extreme. It relies on a different form of censorship: the state intervening and locking people up, away from a keyboard, when the most serious abuse occurs. That argument may work when it comes to public speeches which incite violence, but the speed and impact of online communications mean an online way of shutting such speech down is also needed. Either a company like Twitter polices its own platform, or the government must regulate the content and tax the social media company for providing this service.

We can’t have social media spaces become a wild west of bullying and abuse. The idea of an unregulated internet has been dead for some time, and for good reason. If we want to live in a civilized society, we need shared values and shared rules, and the internet can’t be an exception to that. Those rules need to be backed up by force. Platform providers like Twitter have that force and the legal right to use it. Rosenberg’s concern about a private company having that power is justified, but the response is not to prevent them from using the power, but rather to ensure they use it in a manner which is transparent and fair. There should be a degree of accountability to the public, either directly or through government-imposed regulation.

Tools like Impostor Buster, the articles of journalists like Rosenberg and the research of civil society organisations like the Online Hate Prevention Institute all help to provide the transparency and accountability that improve social media platforms and the Internet as a whole. This pushes the platforms to do better and reminds both them and governments not to abuse their power. It pushes platforms to act for the public good. The top-down approach plays a vital role in preventing the spread of hate and the abuse of individuals and communities. The bottom-up approach explores new solutions and helps to ensure the platforms’ responses are effective and any collateral damage is minimal and rapidly repaired.

There is a role for civil society to play and there are problems we should be working together to address, as we wrote recently in an article about Twitter’s tougher stance on hate symbols and abuse. Let’s work together to reduce the hate, abuse, extremism, bullying and toxicity that is pervading social media. Twitter needs to do more to work with civil society, and it can start by reinstating the Impostor Buster.

As a registered Australian charity, our work is supported by public donations; please help us continue our important work in 2018. To see more of our work as it happens, follow us on Twitter @OnlineHate and via the Online Hate Prevention Institute Facebook page. To see our work tackling hate in action, see our latest major report, which looks at the far-right response to the Flinders Street incident that occurred in Melbourne on December 21st 2017.

Media and blogs are welcome to reproduce this article free of charge, with credit to the Online Hate Prevention Institute. If you have questions, please contact us.