Online anti-Muslim hate surged following the events of October 2023 and the Israel-Gaza war. This online hate incited offline incidents, including hate crimes, reflected the hostile offline environment, and promoted a social acceptability of anti-Muslim hate that normalised this hostility. In response, the Online Hate Prevention Institute, based in Australia, which tackles all forms of online hate, and the Online Hate Task Force, based in Belgium, which tackles all forms of online religious vilification, launched the “moment project” to monitor and document this rise in hate. The project simultaneously monitored the rise in antisemitism using the same methodology.
This report provides a vital in-depth analysis of online anti-Muslim hate, as well as online racism against Palestinians and Arabs, based on data gathered between 27 October 2023 and 8 February 2024. The data is the result of 160 discrete instances of data collection, each lasting one hour. One instance was collected in turn on each of ten online platforms before a new cycle began. The platforms were: Facebook, Instagram, TikTok, X (Twitter), YouTube, Telegram, LinkedIn, Gab, Reddit, and BitChute. The collected data was categorised into one or more of 11 categories, as discussed below. We also reported the content to the platforms and monitored their response to our reports.
The report includes a discussion of the terminology of anti-Muslim hate, anti-Muslim racism, and Islamophobia. All of these terms cover a common core of hate speech, which describes the data in this report; however, each term also covers other matters, and in respect of those other matters they differ in scope. We have chosen to use the term anti-Muslim hate for this report as it is the term most directly connected with the sort of hate speech this report addresses.
The report was featured on ABC News (TV, radio, and online) on the day of its release; watch the segment from the ABC’s official YouTube channel below. Links to further media coverage are also below.
Key findings
The report presents and deconstructs 64 images of online hate speech. Almost all of the examples were anti-Muslim hate; some involved racism against Palestinians or Arabs, and some fell into both categories. The examples came from a larger pool of 1169 items of hate against Muslims, Arabs, or Palestinians, collected from Facebook, Instagram, TikTok, X (Twitter), YouTube, Telegram, LinkedIn, Gab, Reddit, and BitChute between 27 October 2023 and 8 February 2024.
The report gives the relative prevalence of this hate on each platform. This was possible because a consistent amount of time (16 hours per platform) was spent collecting data from each of the ten platforms. The results showed that the level of anti-Muslim hate was disproportionately high on X (Twitter). It was also notably higher on the minimally moderated platforms Gab and Telegram. LinkedIn had the equal-lowest level of hate, tied with YouTube and not much lower than TikTok or Instagram, but this was still higher than expected given LinkedIn’s position as a professional platform where the content people post may affect their professional standing and future employment.
Each item we collected was also placed into one or more of 11 categories of hate. Ten reflected different forms of anti-Muslim hate, with the final category covering racism against Palestinians and Arabs. The categorisation data showed that the most common categories were demonising / dehumanising Muslims, presenting Muslims as a cultural threat, and presenting Muslims as a security threat. Also very high was the “other” category (35%), a large portion of which could form a new category for slur words and imagery. A short sketch after the list below shows how the category percentages relate to the overall pool of items.
The categories are:
- Demonising / dehumanising Muslims (404 items, 35% of all items): Content that compares Muslims to animals, says or implies Muslims are inferior to other people, or presents Muslims as the devil or the devil’s agents.
- Other anti-Muslim hate (404 items, 35% of all items): Much of this category involves anti-Muslim slurs; some of it involves other specific but far less common anti-Muslim narratives.
- Muslims as a cultural threat (385 items, 33% of all items): Content that presents Muslims as a “threat to our way of life”. It includes content that claims having Muslims in society will lead to the replacement of Western liberal democratic systems with Sharia law.
- Muslims as a security risk (331 items, 28% of all items): Content that presents all Muslims as terrorists, criminals, or otherwise a danger to the safety of society.
- Xenophobia / anti-refugee (230 items, 20% of all items): Content that applies xenophobic or anti-refugee sentiments specifically to Muslims. It may, among other things, seek to cast doubt on the legitimacy of asylum claims, treat all Muslims as if they are non-citizens, or call for Muslims to be denied entry or expelled.
- Socially excluding Muslims (158 items, 14% of all items): Content that may seek to make it more difficult for Muslims to live in society and participate as a part of the broader community. This includes campaigns against Halal food and the creation of mosques.
- Anti-Muslim jokes (134 items, 11% of all items): Content presented in the form of a joke, but based on negative generalisations about Muslims, or with messages that seek to exclude Muslims. In general, these are the other themes discussed in this report, but in joke form.
- Inciting anti-Muslim violence (111 items, 9% of all items): Content that calls for Muslims or Muslim property to be harmed.
- Racism against Palestinians or Arabs (107 items, 9% of all items): Hate speech that has a clear basis in Palestinian or Arab ethnicity. It may be combined with anti-Muslim hate, or it may attack all Palestinians or all Arabs regardless of their religion.
- Muslims as dishonest (70 items, 6% of all items): This theme presents dishonesty as an inherent trait of Muslim people, or as a religious practice when dealing with non-Muslims.
- Undermining Muslim allies (15 items, 1% of all items): This content has two forms. One seeks to undermine efforts to combat anti-Muslim hate, presenting such hate as acceptable, rational, reasonable, etc. The other directly targets non-Muslim people or organisations because of their work tackling anti-Muslim hate.
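As a quick illustration of how these figures fit together: the percentages are taken against the full pool of 1169 items, and because an item can fall into more than one category, the shares sum to well over 100%. The minimal Python sketch below reproduces the rounding used above; the counts are from this report, while the variable names are our own.

```python
# Minimal sketch: reproduce the category percentages listed above.
# Counts are from this report; an item may sit in more than one
# category, so the shares sum to well over 100%.

TOTAL_ITEMS = 1169  # all items of hate collected across the ten platforms

category_counts = {
    "Demonising / dehumanising Muslims": 404,
    "Other anti-Muslim hate": 404,
    "Muslims as a cultural threat": 385,
    "Muslims as a security risk": 331,
    "Xenophobia / anti-refugee": 230,
    "Socially excluding Muslims": 158,
    "Anti-Muslim jokes": 134,
    "Inciting anti-Muslim violence": 111,
    "Racism against Palestinians or Arabs": 107,
    "Muslims as dishonest": 70,
    "Undermining Muslim allies": 15,
}

for name, count in category_counts.items():
    share = round(100 * count / TOTAL_ITEMS)  # e.g. 404 / 1169 -> 35%
    print(f"{name}: {count} items ({share}% of all items)")
```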
The report also provided the removal rate for each platform, showing how effective each platform was at removing the hate we reported to them. The figures were deeply concerning: between four and seven months after we first reported the data, the removal rate varied from a high of 54% on BitChute down to a low of 15% on YouTube. While YouTube has a poor takedown rate, it also had the equal-lowest level of anti-Muslim hate in absolute terms. More concerning are X, Telegram, and Gab, which have some of the highest rates of hate coupled with some of the lowest removal rates. This situation allows hate to rapidly accumulate and spread.
The report included a detailed discussion of ‘Islamophobia’, ‘anti-Muslim hate’, and ‘anti-Muslim racism’ as terminology. All would cover the sort of hate speech presented in this report, but the terms have different definitions and scope, and some experts use the same terms with different meanings. There is currently a lack of consensus. We use anti-Muslim hate as it most closely, and narrowly, relates to hate speech – the focus of this report. We also briefly discussed the types of criticism that are not usually anti-Muslim hate, and how to recognise when they cross the line.
Some of the hate we see makes false claims about Sharia law. We include some background on Sharia law in the report, and on how it is used around the world. This background may help readers recognise when claims about Sharia law lack substance and may in fact be promoting anti-Muslim hate, for example, by seeking to present Muslims as a cultural threat.
The report provided 18 recommendations for platforms, governments, civil society, and social media users to help society better address anti-Muslim hate and racism against Palestinians and Arabs. One of the most critical recommendations is for platforms to provide specific transparency reports on religious vilification against Muslims, and on other specific forms of hate. This should replace the current generic hate speech reports.
Further work is needed and we hope to secure the funding to carry out a new analysis, directly comparable to this one, with data gathered one year later. This will allow us to identify changes in the level of hate on each platform, the nature of that hate, and the effectiveness of the platforms in removing it. You can donate to support this work.
Media Coverage
Detailed coverage by ABC News on TV, radio, and online occurred on Sunday 4 August 2024.
- Natalie Whiting, “Online hate analysts are calling for greater eSafety powers after study finds rise in anti-Semitism and Islamophobia”, ABC News (website), 4 August 2024.
- “Online hate analysts are calling for greater eSafety powers”, ABC News (website – video), 4 August 2024.
- “Online hate analysts are calling for greater eSafety powers”, ABC News 24 (TV), broadcast 4 August 2024.
- ABC News Victoria, TV broadcast 7pm, 4 August 2024. (Available on iView)
- “Online hate analysts are calling for greater eSafety powers after study finds rise in anti-Semitism and Islamophobia”, MSN, 4 August 2024.
The video is on the ABC News channels on YouTube and Daily Motion.
Support our work
You can support the Online Hate Prevention Institute by:
- Following us on LinkedIn, Facebook, or X (Twitter)
- Joining our mailing list (see the archive here)
- Donating to support our work (you can make a general donation, or pick an issue)
Your help sharing this report with decision makers and experts is also appreciated.