The Right to Hate? How Online Abuse Silences Public Discourse

OHPI recently concluded an advanced training program for a cohort of volunteers who developed public briefings about various forms of online hate. These briefings were shared via OHPI’s Facebook and LinkedIn pages and received significant engagement. While many people responded positively through likes, shares, and affirming comments, others reacted with negativity, including laugh reactions and openly hateful comments. OHPI staff moderated these spaces, removing content that violated our community standards and platform policies.

Our team of volunteers was not surprised by the presence of online hate. Many noted that this kind of backlash is now expected when tackling topics that engage questions of gender, race, or equity. What stood out, however, was the brazen and unapologetic tone of many comments.

This briefing explores the themes evident in these responses. Rather than treating these comments as outliers, it considers them part of a broader pattern of digital backlash. Many of the comments demonstrate what the Global Internet Forum to Counter Terrorism (GIFCT) identifies as “borderline content”: material that sits just on the edge of what is legally or technically removable, but which still causes harm, exerts silencing effects, and encourages others to express prejudice with increasing boldness.

Borderline content is, unsurprisingly, difficult to regulate. It is online material that doesn’t clearly break rules or laws, but can still encourage hate or violence. It might look harmless at first, but it can slowly influence people in dangerous ways, especially towards extremism or terrorism. The goal when formulating policies around online hate should be to find fair, effective ways to limit dangerous content before it causes harm, while still protecting freedom of speech. Platforms don’t want to take down content that isn’t objectionable, but most social media users don’t want these platforms to be used to spread hate or dangerous ideas.

The comments we are dealing with in this briefing often fall into this category of borderline content. They may subtly encourage divisive narratives or glorify extremist ideologies, but they are often challenging to define and regulate uniformly, sometimes because they state subjective opinions which arguably have a place in mainstream discussion. This difficulty is amplified by the fact that many tech companies do not have the experts or staff to handle tricky cases. If a social media platform relies largely on automated AI systems when assessing content for removal, those systems are unlikely to have the capacity to differentiate between acceptable and unacceptable cases of borderline content. As a result, much of this content is liable to remain online.

It’s also worth mentioning that, while many of the comments on our recent articles were borderline, many others contained outright hate speech. It is especially striking that users were willing to make these comments from their publicly accessible personal profiles, apparently without fear of reproach.

This briefing uses a qualitative analysis approach to examine how language is used to construct power and position individuals in social interactions. In this case, we focused on what the commenters aimed to achieve with their words (be it undermining our work, reinforcing ideological dominance, or silencing others) and on the social consequences of this content. Our findings reveal three key patterns.

  1. The Brazen Right to Hate

Users asserted their “right” to make hateful or bigoted statements, reflecting a growing culture of free speech absolutism (the belief that any restriction on speech, including hate speech, is an attack on personal freedom). These comments did not simply express a hateful opinion; they defended their right to express such views as a legitimate entitlement.

For instance, several commenters implied they should be able to be racist, transphobic, or misogynistic without facing consequences. The following post claims that “Tribalism is in our DNA .. if I don’t like homosexuals ! Muslims,Aboriginals. I have the right to say so .” 

The next comment describes those who try to curb hate speech as “modern day Hitlers”, while another describes fighting hate speech as “discriminatory”.

The next comment also emphasizes an apparent right to engage in hate speech, starting with “I’ll dislike or hate whoever I want!”, before expressing explicitly hateful views about “Africans and muslims”.

The following commenters insisted they have a right to engage in hate speech because hatred is a natural human emotion.

Equating hate speech with the emotion of hatred is a strategy for defending the right to hate: surely nobody doubts that we have a right to feel however we like, and if hate speech is the same as the feeling of hatred, it seems to follow that we also have a right to engage in it.

In fact, hate speech refers to the overt act of using speech to attack people on the basis of their membership of a group in society, which is distinct from merely experiencing the emotion of hatred. One can therefore consistently advocate against hate speech without implying that the emotion of hatred should be regulated or curbed in any way. Commenters who criticise our work against hate speech by defending their right to feel hatred are misunderstanding the point of the articles they criticise.

This rhetoric echoes a wider discursive trend where hate is reframed as honesty, and accountability as censorship. As the Australian Human Rights Commission reports, this form of “weaponised free speech” is often used to mask structural harms. Such comments reflect what we have termed a brazen entitlement to hate.

  2. Inversion of Victimhood

Another common tactic was to reframe the speaker of hate speech, rather than its target, as the victim. This inversion of victimhood is a hallmark of populist and reactionary speech, in which dominant groups position themselves as under threat.

In the following comments, users ask why “they” have not focused on other types of hate, specifically hate towards anti-vaxxers and, in the following comment, those dubbed “racist”.

The following comment argues that people considered “woke” are in fact the ones most likely to engage in hateful behaviours:

These posters position themselves as victims of so-called “woke” culture and justify their hostility as a rational response. This mirrors what researchers describe as “reactive online hate”, which frames progressive discourse as oppressive and thereby legitimises backlash.

This rhetoric exploits the language of social justice to delegitimise it, an approach commonly found in research on reactionary populism. Such inversions are not just rhetorical; they actively weaken the social credibility of marginalised groups and erode public understanding of systemic inequality.

  3. Silencing Through Ridicule and Threat

Finally, many comments worked to intimidate or belittle those speaking out, aiming to silence both OHPI contributors and the communities we seek to support. 

Some comments shut down conversation point blank, refusing interaction. These posters simply told commenters who disagreed, or who felt bullied, to withdraw from the online space. The following comment suggests showing victims of cyberbullying where the “off button” is, implying that such victims should remove themselves from online spaces.

Telling opinion holders to leave an online space is not dissimilar to telling them to leave a physical one. It is not merely an invitation to end the conversation; it silences public debate. Online spaces should be considered part of the public sphere: social media and forums allow for public discourse, information sharing and the formation of political will.

It is clear these commenters are not open to hearing other perspectives or engaging in any kind of exchange, narrowing this part of the public sphere to a single world view. When participants in digital spaces feel emboldened to shut down dissenting voices entirely, public discourse contracts, and this poses a serious limitation on the functioning of an effective democracy.

Another tactic uses mockery, insults, and veiled threats to discourage public advocacy. The following comments criticise those who advocate against hate speech, and in doing so threaten to silence both victims and advocates.

Our team of volunteers reflected on the silencing effect of such comments. While some felt more defiant, others described feeling hesitant to engage publicly online out of concern for being targeted. This is consistent with research showing that such hostile climates have measurable psychological impacts, including anxiety and withdrawal from public discourse, which can deter full participation in civic life. 

We discussed OHPI’s moderation policy and largely agreed that removing hateful comments and banning those who post them helps create safer spaces. Some suggested that it might be worthwhile to engage with users expressing disagreement in non-hateful ways, but recognised this may not always be feasible, especially when dealing with coordinated campaigns or repeat offenders.

Conclusion

The online backlash to our briefings shows that hate speech is no longer whispered in shadows. It is loud, deliberate, and often proud. The brazenness of these comments is not accidental. It is a feature of a digital culture in which hate has become increasingly normalised, performative, and defiant. The power of this discourse lies not just in what is said, but in what it achieves: silencing others, reframing hate as opinion, and blurring the boundary between free speech and abuse. This underscores that social media is not a level playing field for democratic dialogue. Without meaningful moderation and intervention, platforms risk becoming tools of suppression, not conversation. Unless this “right to hate” is directly addressed, the internet will continue to be a space where the most harmful voices are the loudest.