Virginia shooting: Lessons for social media companies

The shooting last night of two journalists in Virginia, US, has again sparked debate about the policies of social media companies.

The shooter, Vester Lee Flanagan, filmed the murders and uploaded the footage to Twitter and Facebook. Twitter suspended his account within eight minutes of the video being posted, and Facebook likewise removed his profile and a page he had created. However, the delay was long enough for the video to circulate across other social media platforms.

What is more distressing is that on both platforms the video played automatically, exposing many unwitting viewers to the graphic violence.

According to an article posted on Wired an hour ago: “But copies of the original video are still floating around on the social networks, and elsewhere on the Internet. One copy I found, on YouTube, was eventually taken down, but not after amassing more than 2,000 views. Another copy, on Facebook, was up five hours with over 39,000 views, though the company eventually added an interstitial warning cautioning users that the video may be shocking, offensive and upsetting.”

The incident highlights two important aspects of social media.

Virality: Virality refers to the rapid, exponential spread of content on the Internet. While this is not a new phenomenon, virality is written into the very code of social media: each platform works on the basis of every user pushing content on to hundreds of other users with the mere click of a button (a like, share or retweet). Applied to positive campaigns, this is a great tool. Applied to hate speech, it can negatively influence or affect people without any action on their part. A rough illustration of how reshares compound is sketched below.
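
To make that compounding concrete, here is a minimal sketch (in Python) of how reach can grow when only a small fraction of viewers pass content on to their own followers. The reshare rate, follower count and number of hops are purely hypothetical assumptions chosen for illustration; they are not figures from any platform.

```python
# Minimal sketch of compounding reshares. All numbers are hypothetical
# assumptions for illustration, not measurements from any platform.

def viral_reach(initial_viewers=100, reshare_rate=0.02,
                followers_per_resharer=200, hops=4):
    """Estimate cumulative views when each hop of viewers reshares
    the content to their own followers."""
    total_views = 0
    viewers = initial_viewers
    for _ in range(hops):
        total_views += viewers
        # A small fraction of viewers reshares, exposing their own
        # followers on the next hop.
        viewers = int(viewers * reshare_rate * followers_per_resharer)
    return total_views

print(viral_reach())  # 100 + 400 + 1,600 + 6,400 = 8,500 views over four hops
```

Even with only two in every hundred viewers resharing, reach grows by a factor of four at every hop in this toy model.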

Longevity: The longer a piece of content remains on the Internet, the harder it becomes to remove, because more and more people make copies of it. As this WSJ article discusses, even in the short time the video of the Virginia shooting was online, people captured copies of it and reposted them elsewhere.

This is also why OHPI focuses on hate speech on social media more than on individual websites. Currently, social media companies wait for people to report content as hate speech before removing it. This can take between 24 and 48 hours, during which the item can be shared hundreds of times. It also brings the speed and accuracy of their responses to reports into sharp focus. In our experience, the accuracy of those responses can be woefully inconsistent. A rough estimate of what that review window means in practice is sketched below.
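
As a back-of-the-envelope illustration of the report-then-review model described above, the sketch below simply multiplies a sharing rate by the review delay. The rate of 25 shares per hour is an assumption chosen for illustration only, not a measured figure.

```python
# Illustrative sketch of the report-then-review window described above.
# The sharing rate and delays are hypothetical assumptions, not data.

def shares_before_removal(shares_per_hour=25, review_delay_hours=36):
    """Estimate how often an item is shared while a report waits for review."""
    return shares_per_hour * review_delay_hours

# With a 24 to 48 hour review window, even a modest sharing rate adds up.
for delay in (24, 36, 48):
    print(f"{delay}h review delay -> ~{shares_before_removal(review_delay_hours=delay)} shares")
```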

OHPI believes that social media platforms cannot shirk their responsibility for the negative consequences of their platforms. We think the spread of online hate through social media platforms is akin to the environmental pollution caused by traditional industries. Just as we expect traditional industries to take responsibility for their pollution and clean up the mess, so too should we expect social media companies to do the same.

For now, here is an article that explains how to switch off autoplay on your social media platforms, so that videos don’t automatically start playing in your newsfeed.

Please take a moment to share this briefing.


To read all our publications on social media policies, go here. To support the work of the Online Hate Prevention Institute you can also like us on Facebook, follow us on Twitter, or donate to support our work.