Saturday 18 April.
Dr Andre Oboler, CEO of the Online Hate Prevention Institute (OHPI), was the guest speaker at Caulfield Shule’s Seudah Shlishit, where he addressed the growing challenge of online antisemitism and the structural factors driving it.
Drawing on more than two decades of experience combating online hate, Dr Oboler spoke about why antisemitism continues to grow online, and the role played by social media platforms, corporate incentives, and political dynamics.
He explained that while online antisemitism reflects broader societal trends, it is also amplified by the systems designed to maximise engagement.
“It is your attention that is being sold,” Dr Oboler said. “The longer you are on the platform, the more of your attention there is to sell.”
Dr Oboler highlighted how scapegoating, conspiracy narratives, and a sense of insider knowledge are used by extremist groups to attract and retain audiences. He also pointed to the role of state actors, including Iran, in spreading antisemitic propaganda and disinformation.
Platforms, profit, and the erosion of safeguards
A key focus of the talk was the role of platform design and corporate decision-making.
Dr Oboler described how early investments in trust and safety — including automated detection systems — had begun to reduce harmful content. However, these efforts have been scaled back in recent years.
According to OHPI research and reporting, major platforms have significantly reduced the amount of hate speech they remove, despite rising levels of online antisemitism.
He noted that cuts to trust and safety teams, combined with policy changes limiting automated moderation, have created an environment where harmful content is more likely to spread.
“Improved safety doesn’t generate profits,” he explained. “It costs money.”
Disinformation and real-world harm
Dr Oboler also connected online hate to real-world incidents, including coordinated disinformation campaigns following antisemitic attacks.
He referenced OHPI’s recent report into responses to the London ambulance attack, which found that a large proportion of online reactions were antisemitic or driven by conspiracy theories.
The report highlighted how tactics such as DARVO (deny, attack, reverse victim and offender) are used to shift blame onto Jewish victims and reduce public empathy.
Dr Oboler also discussed the recent Nazi videos in Queensland, and OHPI’s analysis of how such content spreads and the legal implications of its distribution.
OHPI’s ongoing work
Dr Oboler outlined OHPI’s current work, including supporting government, law enforcement, and community organisations, as well as contributing to court cases and international policy discussions.
He noted that antisemitism has been a major focus of OHPI’s recent output, with the majority of reports and articles over the past two years addressing this issue.
Prepared remarks
The following are the prepared remarks from Dr Andre Oboler’s address:
Why online hate keeps growing
There are numerous reasons online hate and online antisemitism keep growing. The antisemitism online is a reflection of the antisemitism in society, but it also feeds it.
Yes, there are those like Hamas and Iran using antisemitism as a tool, whether commissioning crimes, or spreading propaganda designed to incite hate.
I want to speak about something else: the technology, corporate culture, and political culture which provide the infrastructure and environment that serve as an accelerant.
I want to suggest to you that the business model of social media is wrong. Like the tobacco industry, the business model only works if the harm it causes is ignored.
How does social media work? Yes, it shows you content, but you don’t pay for this. You are, as is often said, the product not the customer. It is your attention that is being sold. And the more that is known about you, the higher the premium that can be charged to put specific content in front of people just like you.
There is another side to this. The longer you are on the platform, the more of your attention there is to sell. So other content is also pushed to you. Content that will keep you engaged and spending more time than you intended scrolling.
What makes content engaging? Emotional engagement. This could be done by showing you content that is too cute, or funny, or happy to walk away from — but that only goes so far. What’s far more effective is content that incites anger and eats away at you.
There is a way to do both: get people angry and feeling empowered. This is not new to social media. Scapegoating channels anger and hate, and is even more engaging. It is how neo-Nazis and other hate groups attract people. Also effective is making people feel included and special, as if they know the truth while everyone else is in the dark. So conspiracy theories spread.
The challenge posed by platforms & corporate interests
I’ve been working in this space since Facebook was new and trying to topple MySpace from dominance. For a long time, platforms pretended there wasn’t a problem. There was no transparency, and companies claimed any focus on harms would stifle innovation.
Through lobbying and pressure, both positive and negative, on institutions that would otherwise sound the alarm, the problem was pushed down the road. Even a delay was worth millions to the companies.
In the early to mid-2010s we started seeing some improvements: investment in what was known as “trust and safety”, and early efforts to build safeguards (often described as “AI”) into the products. At OHPI’s suggestion, YouTube began using digital fingerprints to detect re-uploads of videos it had already removed. Facebook started using auto-detection to remove hate speech before anyone even saw it. By 2018 Facebook was claiming that around 95% of all the hate speech it removed was detected automatically, before anyone had actually seen the content.
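As a purely illustrative aside (not OHPI’s or any platform’s actual implementation): re-upload detection of this kind can be thought of as fingerprint matching, where the platform stores a compact signature of each video it has removed and compares every new upload against that store. The sketch below uses a simple SHA-256 file hash and invented function names to show the idea; real systems rely on perceptual fingerprints that survive re-encoding, cropping, and watermarking.

```python
import hashlib

# Hypothetical store of signatures for videos already removed under policy.
removed_fingerprints: set[str] = set()

def fingerprint(path: str) -> str:
    """Return a compact signature of a video file.

    A plain SHA-256 hash only catches exact re-uploads, but it
    illustrates the store-and-compare idea behind fingerprint matching.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_removal(path: str) -> None:
    """Record the fingerprint of a video that moderators have removed."""
    removed_fingerprints.add(fingerprint(path))

def is_known_reupload(path: str) -> bool:
    """Check a new upload against the fingerprints of removed videos."""
    return fingerprint(path) in removed_fingerprints
```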
Covid disrupted this progress. Social media companies reduced investment in trust and safety, and those cuts continued even as usage and profits grew. Improved safety does not generate profits, and worse, it costs money.
Without getting too far into US politics, part of the “base” for President Trump came from the far-right. Some of those figures ended up in the White House in his first term. There is a whole chapter here we don’t have time to dive into. It brought the far-right far more into the open.
A key event came at the end of the first Trump presidency: the January 6th 2021 insurrection, after which major social media platforms purged the accounts of many insurrectionists. Many of them migrated to smaller platforms, often ones promoting themselves as “free speech” platforms, frequently with active white supremacist and Nazi groups on them.
Elon Musk bought Twitter in 2022 and decided to make Twitter, one of the mainstream platforms, into another “free speech” platform. He fired trust and safety staff. He welcomed back people who had been banned. He engaged in trolling and antisemitism himself. Let’s not forget the Nazi salute at the Trump victory celebration in January 2025.
Let’s also not forget how close Musk and Trump were in 2025. Mark Zuckerberg, wanting to align himself with this, made a critical announcement in January 2025. In a video he said that in his view 1 to 2 out of every 10 items Meta was removing were a mistake. He said these items were being removed due to the expectations of governments and communities, but that he disagreed with society’s consensus. To correct what he saw as a flaw, he announced that moving forward automated removal tools would only be used on serious violations.
According to Meta’s own transparency reports, through 2025 the amount of hate speech being removed dropped by 70% to 80% on Meta products like Facebook and Instagram. On both Facebook and Instagram, the amount of content (measured in millions of items) being removed for hate is at its lowest level since transparency reporting began. This is despite the massive rise in online antisemitism that OHPI and others have documented in recent years.
Our challenges today
OHPI has 10 staff working to address online hate. They come from government, consulting, academia, and industry.
Our work supports government, community organisations, court cases, and police. It informs and supports global thinking on tackling online antisemitism and online hate more generally.
Where does antisemitism sit in our work? Over the 2024 and 2025 financial years we published 5 reports, 4 of them on antisemitism. We published 71 articles on our website, 70% of them on antisemitism.
Just to give an idea of life at OHPI, next week we will be meeting with the Royal Commission, the AFP, DFAT, and staff in the Special Envoy to Combat Antisemitism’s office… and all of that is before the end of Tuesday.
Rabbi Rabin often shares some of the harmful content that is circulating online to raise awareness and speak out against it, something that is much appreciated. A few days ago he wrote about the horrific video from Panel House in Brisbane.
We also responded to these videos, but what we do is a little different. An analysis we provided of the videos can be found here.
We also noted that posting the video was a breach of Section 474.17 of the Commonwealth Criminal Code, which makes it an offence to “menace, harass, or cause offence” using a communication service such as the internet. We also highlighted that companies can be charged with this offence if they are involved, and in that case the fines are 5 times higher.
We’ll be raising this in our meeting with the AFP.

Read OHPI’s latest report: Australian responses to the attack on London ambulances
OHPI LinkedIn • OHPI Instagram • OHPI Facebook • Bondi Report • Ambulance Attack Responses Report.
