The Facebook boycott in the United States is deeply concerning. It is a symptom of populism and the modern post-fact world – not a solution to those problems. The demands attached to it in fact endanger democracy.
Action by European regulators, and Germany in particular, on hate speech led to significant improvements by Facebook over recent years. Action by Australian regulators on terrorism, in response to the Christchurch attack, led to significant improvements by Facebook in tackling extremism. As I reported to the IHRA Antisemitism Committee, the problem today is mostly on platforms like Gab, Telegram, BitChute and 4chan, not Facebook or YouTube. The mainstream platforms have made life so difficult for those promoting hate that many have simply left and moved elsewhere. That’s a sign of serious progress.
Facebook also recently moved forward with their new independent governance board. The board, made up of international experts, will now have final say on content moderation. We congratulate Prof. Nicolas Suzor from the Queensland University of Technology Law School who was appointed to this board. It’s great to see Australia represented at this level of global decision making.
Given how much progress Facebook has made, and how far ahead it is compared to other companies, we have to question the advertiser boycott at this time. The boycott is not civil society leadership on a company that is recalcitrant and has major issues, but rather grandstanding that aims to exploit public fear and anger about racism more broadly. It targets a company that, while it can surely improve, is making a real effort to do so.
One of the demands behind the current boycott, that Facebook use its own judgement to censor political campaigns, is dangerous to democracy. It would allow a private company to exert a huge influence over elections, dampening some voices while amplifying others, and doing so outside the rule of law. Facebook, while happy to remove hate speech in other circumstances, doesn’t want this power in the context of elections. Facebook’s effort to avoid taking on this role is part of the reason it is being boycotted.
As to the other examples of hate, yes, on a platform with the volume of content Facebook deals with, there will always be some examples of hate. The question is how Facebook handles them. The situation is like attacking a government that is trying hard to reduce crime because the crime rate is not yet zero. We need to measure the level of hate and the effectiveness of both human and AI efforts to respond to reports. We need civil society, not just Facebook itself, to review decisions and ensure policies, standards and training are in fact getting the right results. We need to seek constant improvements. As long as that is happening, we can ask no more.
The other basis for the campaign is that Facebook is exempting political speech by candidates and their campaigns from the usual rules. That’s a move we support. If there were a law, and the law made no exception for political campaigning, then Facebook would be required to uphold that law. Anyone upset with Facebook’s interpretation of the law, or with the law itself and its impact on democracy, could challenge it in court.
The US has no such law. In the US, hate speech is protected speech under the First Amendment to the US Constitution. That means any US law that seeks to ban or penalise hate speech will be found invalid by the US Supreme Court. In the US the law cannot force Facebook to remove hate, nor can it force Facebook to keep it online. At law, it is entirely up to Facebook.
When it comes to ordinary speech, we support Facebook’s right to remove hate and we encourage them to do so. When it comes to political speech directly relevant to an election, the equation is very different, as there is a broader public interest in having an informed electorate. Just as fake news should be suppressed, the true speech of political candidates, however distasteful, should be visible to the electorate. Only then can democracy function.