Some argue that the only solution to hate speech is more speech. We believe there is a need for three things to tackle hate speech:
- the removal of content which crosses the line
- messages responding to things which don’t cross the line but are inappropriate
- messages promoting the positive values which hate seeks to undermine
Removal when things cross the line is needed because responding isn’t always safe, effective or sufficient to prevent real harm. Responding is needed, when it is safe to do so, in order to set norms and let people know what we expect in our society. Positive values need to be promoted online because the online world is not a value-free space. We need to ensure positive values are seen as part of the norm of our society. Negativity spreads more easily, and bad news has always sold. In social media we collectively influence what is spread and shared. Let’s use that power to share both responsibly and positively.
Where is the line?
There are really two lines; both concern established rules and consistency. The lower line is whatever the law says is unlawful in a given country (there are obvious exceptions). The higher line is whatever the social media platform declares to be unwanted in its terms of service.
Law is a blunt instrument and one of last resort when it comes to shaping a good society. It relies on other mechanisms being used to prevent harm, and it sits there as a backstop for when all else fails. It does serve both an educative effect, setting minimum standards, and a deterrent effect, threatening those who would breach the law with punishment.
Above the law we have what society will accept. If people cross that line, they risk becoming unpopular as others cut their connections to them; they risk being ridiculed in the press; and, if they are politicians, they risk being voted out of office. For the average person, what they risk most is being exposed as a bigot and excluded from polite company. In terms of social media, they can have their comments deleted by the people running a page, be banned from the page, have their content removed from the platform (e.g. Facebook) and ultimately have their account suspended. It’s important to note that the higher line set by the social media platform is actually a form of message “responding to things which don’t cross the [lower] line but are inappropriate”; that is, it provides a message from a social media company to a person using that platform which says “what you did may be legal, but we don’t like it, and we don’t want that here”. Such messages are important in setting the norms for society.
The first line, laws prohibiting certain content, can be said to be a form of censorship. Not all censorship, however, is bad. Australia’s prohibition on misleading and deceptive conduct in commerce is a useful form of censorship that ensures purchasers are not misled by false advertising. The law of defamation is a form of censorship that prevents false statements from being used to destroy people’s lives. Laws against incitement to violence help to keep people safe. Laws against racial vilification and religious vilification protect the public good of an inclusive society.
The second line, which covers what private companies and individuals allow or do not allow in spaces they control, is not censorship. This is because people still have the legal right to speak elsewhere. It is like someone coming into your house and saying things which are legal, but highly offensive to you. You have the right to tell them to leave. Their right to speak doesn’t create an obligation on you to listen, nor an obligation to provide them with a venue.
Discussing where the lines should be is essential to the exercise of free speech. It is amazing that some people still don’t see the hypocrisy in seeking to silence those advocating for hate speech to be prohibited either by law or by terms of service. Those who truly support freedom of speech, rather than using it as an excuse to support bigotry, believe in a free marketplace of ideas. For that concept to work, there need to be people in that marketplace advocating for changes to the breadth of those freedoms. When you trust in the marketplace, that’s how the boundaries are tested and the needed exceptions, or changes in the attitudes of society, are discovered.
Different societies have different standards. This is reflected in different laws. In some cases the standard at law and that of the social media company may be significantly different. Take the case of incitement to violence. The platform may prohibit it outright, but there may be a country where inciting violence against a particular minority is the norm and is legally protected. Let’s remember, there are countries where genocides are still occurring today. The idea of international human rights is that there is a minimum level of rights which is fundamental and can’t be altered by national laws. The right to life is the most obvious: no country has the right to arbitrarily kill its citizens as Nazi Germany did in the Holocaust. These fundamental human rights were created as a result of the Holocaust to try to prevent something like that from ever happening again.
Platforms should not seek to impose the culture of one country upon another, but they should support the system of international human rights by prohibiting things which are seen as harming these fundamental rights. Additionally, it can be argued that the principles of international law which prohibit content are more relevant to platforms than the principles which protect freedom of speech. The reason is that a platform can unilaterally host content which undermines fundamental human rights, while content it prohibits can always find another home. A prohibition by a platform is not censorship; it is just a refusal to allow the content in that specific venue.
As of right now, March 11th 2016, OHPI is critically short of funds. Any support to allow our work to continue would be greatly appreciated.