There is a tug of war going on within Facebook: the company both leads the way in mitigation efforts and persists with a corporate culture that seems to regularly make the wrong call, putting profits before people and undermining its own efforts. We need to support those positive efforts while pushing Facebook to address the problematic culture – some of which can only be done through U.S. law reform, particularly of 47 U.S.C. § 230.

The news today and yesterday that “Facebook is bad” has left me confused. Surely we all knew that companies like Facebook were causing serious harm? At least, I thought everyone knew profit was their overriding goal and that this often meant a lack of action on harmful content. In the main, the public, companies, media organisations, politicians, and governments have all been accepting this degree of harm for years. The real outrage right now, I believe, is not that Facebook is harmful, but that within Facebook there are those who chose to allow a high risk of harm when they had the means to avoid it. If Facebook sold a physical product with a known (and fixable) safety risk, such a choice would likely be seen as negligent and open it up to legal risk. The fact it faces no such risk, and no penalty for putting profits above safety, is what causes the current outrage.

Interest in this space has been growing for a while. At the start of the year the IEEE Computer Society noted this and decided to experiment with a Tech Forum on Social Media and Societal Harms. The forum, held last month, was very well attended, and the panels included experts from the UN, governments, tech platforms, academia, and industry. The discussion was fascinating, and the demand for change was at times quite strong. An on-demand viewing of the event is available.

Personally, I’ve been working on hate in social media, and particularly Facebook, since 2007. Media reports in those early years had Shimon Peres (later Israel’s President) praising Facebook for the way it could bring people together and help eliminate antisemitism. I replied warning of the way Facebook was exacerbating antisemitism (Jerusalem Post, 2008). I remember the sensationalism of an interview in 2009 where I said, “Antisemitism, part of its danger, is the way it can infect a society. It’s a virus… imagine what Hitler could have done if he had Facebook” (full video).

Holocaust denial is one of the most obvious forms of hate speech. The difficulty in addressing it highlights the attitudes taken by social media companies over time. I flagged Holocaust denial as a significant part of antisemitism 2.0 in 2008 and noted how Facebook was going out of its way to avoid removing it in an article in 2009. In 2011 I co-chaired an international working group on internet antisemitism for the Israeli Government. A senior representative of Facebook joined us via Skype. We urged Facebook to do something about Holocaust denial and were shocked when the response we received, live and then in writing, said nothing would be done. It wasn’t until 2020 that Facebook banned Holocaust denial. YouTube acted before Facebook, but only by a year. The change in attitudes was very slow, but it did happen.

Platforms and some civil society organisations focused on tweaking policies, while the work I did in 2013, 2014, and 2016 highlighted the problems which continued to grow across the major platforms. Much of the antisemitism in the 2016 report (an analysis of over 2,000 items) wasn’t removed even after 10 months. Work in 2012 found problems with racism against Indigenous Australians, and work in 2013 and 2015 highlighted problems with Islamophobia. In fact, there is a whole series of reports and articles, covering many different types of hate and stretching back almost a decade, at the Online Hate Prevention Institute.

Neither the existence of these harms nor the reluctance of platforms to solve the issues is news. Harmful content generates money: it keeps users on the site, where they see more ads, and those views earn the companies revenue. Removing the harm and reducing the outrage costs the companies – it reduces income and increases operating costs. This is why I have always pushed for regulation. Only with the threat of regulation might the cost of business as usual become greater than the cost of mitigating the harm.

It is the fact that Facebook can mitigate the harm, yet chooses not to, which is causing such concern. That this is possible is down to the protections for online services in U.S. law under Section 230 (47 U.S.C. § 230). I presented on this a couple of weeks ago in a keynote for the IEEE Industrial Application Society’s GUCON conference, highlighting the difference in position between a recent Harvard Business Review article calling for Section 230 to be changed and the position of the EFF, which strongly supports Section 230 as it is. Those in the technology space need to be willing to relinquish this “special treatment”. We need a cultural change, and yes, that might well impact profitability – just as pollution controls impact the profitability of manufacturing.

Within Facebook the culture of “move fast and break things” has been changing for the better. There are still elements of a toxic corporate culture, where revenue is pursued with unacceptable levels of risk to the public, but there are also many people, policies and projects within Facebook that are seeking to improve things. Facebook is leading the way both in toxic culture and in positive culture. The Facebook Oversight Board is a clear example of positive leadership. There is a tug of war going on within Facebook.

The challenge we have is:

  • Encouraging meaningful positive engagements – particularly those that expose problems and increase transparency, allowing safety to be improved.
  • Setting standards and expectations so there are penalties for insufficient efforts to mitigate the risk of harm.
  • Tackling the toxic culture which encourages dangerous practices, or those with a high risk of harm, whenever the profit returned is high enough – this must be addressed from the boardroom down. Given the sums of money involved, it may require criminal sanctions.

This is the sort of positive engagement we need to see going forward. Facebook, which due to its business model profits from the harmful content, is making a donation to help address and eliminate that harm. The program is not about feel-good content or educational materials for schools; it is directly addressing the harm. It is helping civil society to independently monitor for harm. It is giving government the data to understand the scope of the problem. It is giving Facebook the data to improve its automated removal efforts and the training of the staff who review reports. It is giving the impacted community a chance to comment on the impact of the problem, and on what else they might be seeing that is causing them harm. This is the strongest commitment to fix the problem of online hate that we have seen, from either technology companies or governments. We hope it will lead to similar, equally effective partnerships in the future. And yes, it is Facebook leading the way.