Facebook reporting briefing

Recently we came across two articles that look at content moderation on social media.

The first relates to the lives of social media content moderators, the people who actually filter out user-generated content that doesn’t meet the platforms’ standards. Much of this work has been outsourced to Asian countries. The article puts to rest at least one misconception people may have about content moderation on social media: it is done by human beings, not algorithms.

The article does not focus on the policy framework within which the moderation takes place: why our reports of hate speech keep getting seemingly randomly accepted or rejected. Instead it focuses on the emotional and physical well-being of the employees who spend their time filtering the images and videos of brutality, violence, rape and pornography out of our newsfeeds.

There is an important lesson in this for campaigners against hate, including the users of Fight Against Hate, OHPI’s hate reporting and monitoring software. Constantly dealing with hateful images, text and videos can affect our emotional well-being. While campaigning against hate, we have to watch out for our own health.

The second article focuses on Facebook’s efforts to encourage “niceness” on its platform. The New York Times reports that Facebook has an 80-member team, the Protect and Care team, whose goal is “teaching the site’s 1.3 billion users, especially its tens of millions of teenagers, how to be nice and respectful to one another”.

The team’s focus is reducing teenage cyberbullying. It is a great goal. The problem is that both the article’s author and the director of the team seem to equate use of the reporting tool with the success of Facebook’s attempts to reduce cyberbullying. This suggests that reporting content to Facebook guarantees its removal. That has not been our experience.

Facebook’s community standards are subjective, and it is up to the content moderators to decide whether reported content should remain or go. More often than not, the content remains even when it is in clear violation of the platform’s community standards. The person reporting receives a single-line response informing them that their report was reviewed but the content was not removed.

Facebook’s claims about making the platform friendlier and more respectful can only be taken seriously when it demonstrates how it responds to reports, what makes its moderators reject or accept a report, and whether it has any rules for reprimanding serial offenders.

OHPI has in the past published a list of recommendations for Facebook to consider. While some of them relate specifically to antisemitism, most are general recommendations that would help users feel safer and more empowered in their interactions both on the platform and with the platform.

http://ohpi.org.au/recognizing-hate-speech-antisemitism-on-facebook-recommendations/