Ethical issues for Artificial Intelligence

On 12 August 2025 our CEO, Dr Andre Oboler, presented on a panel at the G20 Interfaith Forum in Cape Town, South Africa, on the topic of “Ethical issues for Artificial Intelligence”.

Panel Description:

Technology, and especially the fast-developing systems of Artificial Intelligence (AI), presents distinctive challenges for religious communities. These communities can also, drawing on practical experience and pertinent ethical teachings, contribute to the active global and local debates on the topic. This session will build on ongoing work and dialogue with a view to highlighting the links between AI’s potential and dangers and the G20 and IF20 priorities, especially for the most vulnerable communities.

Moderators:

  • Manisha Jain, Distinguished Engineer, Microsoft, USA
  • Carike Noeth, Southern Africa Regional Manager, Globethics, South Africa

Panelists:

  • Medlir Mema, Head – AI Governance Programme, Global Governance Institute, USA
  • Dr Andre Oboler, CEO, Online Hate Prevention Institute, Australia
  • Golan Benoni, Global CIO, IDT, USA
  • Lilia Khasanova, Executive Director, A Common Word Among the Youth (ACWAY), USA
  • Rabbi Dr Aharon Ariel Lavi, Managing Director, Ohr Torah Interfaith Center, Israel

Dr Oboler’s Prepared Remarks

I want to start by sharing some details from work my organisation, the Online Hate Prevention Institute, will soon be publishing: a report on the abuse of AI to spread hate, specifically hate against a faith-based community.

In May we were notified of an Instagram account dedicated to sharing antisemitic videos and seeking public donations to fund their campaign of hate.

The videos feature politicians such as Donald Trump and Angela Merkel expressing their subservience to Israel, Jews identified by comically exaggerated noses, and Jews smuggling in dangerous immigrants, in one case depicted as Black Africans wielding machetes.

All of this, and more, in AI-generated videos. In 21 weeks they had built a following of over 6,700 people and published 149 AI-generated videos. In 90 days they had raised US$785 to support their campaign of hate.

Working with Meta, we had their Instagram account removed. They were back with a new account within days. That account, created at the start of June, has over 17,700 followers and has published 197 AI-generated antisemitic videos as of this morning.

This user has had over 70 accounts banned across multiple platforms, including Instagram, YouTube, and X. They keep returning, and they have built systems to help their supporters find them again when their accounts are inevitably closed.

What can we learn from this?

1. AI is supposed to have guard rails to prevent it from being used to create such abuse. Those guard rails don’t work.

In one video we see a rabbi engaged in people smuggling. The generated figure is in the garb of a Catholic priest, but with the Jewish Magen David in place of a cross. That’s how easy it is to convert something into antisemitism.

The AI doesn’t understand that adding exaggerated noses and a backdrop of the Kotel, the Western Wall, produces an antisemitic stereotype.

2. We have been relying on AI to tackle online hate, but this AI typically focuses on identifying copies (including manipulated copies) of content that has already been identified manually. AI-generated content means the material is always new and sufficiently different that it does not match; the sketch below illustrates why.
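
To make that limitation concrete, here is a minimal sketch of how match-based detection typically works, using perceptual hashing. It is an illustration only: the file names and the distance threshold are hypothetical, it relies on the open-source Pillow and imagehash libraries, and real platform systems are considerably more sophisticated.

```python
# Sketch: match-based detection via perceptual hashing (illustrative only).
# Assumes Pillow and imagehash are installed; the file paths are
# hypothetical stand-ins for a hash database and a newly posted image.
from PIL import Image
import imagehash

# Hashes of content previously identified (manually) as hate material.
known_bad_hashes = [
    imagehash.phash(Image.open("previously_removed_frame.png")),
]

# Hamming-distance threshold: small distances catch re-uploads and light
# edits (crops, filters, re-encoding). A freshly generated image almost
# always lands far outside it.
MATCH_THRESHOLD = 10

def matches_known_content(path: str) -> bool:
    """Return True if the image is a (possibly manipulated) copy of
    previously identified content."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD
               for known in known_bad_hashes)

# A re-upload of the removed frame matches; a brand-new AI-generated
# frame pushing the identical stereotype produces a distant hash and
# slips through.
print(matches_known_content("newly_generated_frame.png"))
```

The matcher detects copies, not meaning, which is why a pipeline that generates fresh content for every post defeats it.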

Some other experiences

During our monitoring for antisemitism we found an image showing the McDonald’s clown, Ronald McDonald, holding a pot of human body parts. It was posted with a comment saying this is the food given to the “Israhell demon forces”. The content is clearly a blood libel, but when we traced the source of the image, it was part of a set of anti-McDonald’s memes. It is antisemitic, and a reference to the blood libel, only when posted along with the comment.

This shows AI being used to generate content that can be reframed to promote an age-old antisemitic trope.

In another example, we have been supporting a women’s group that successfully campaigned to have computer gaming sites remove games depicting rape, torture, incest, and child exploitation.

We are helping them because of the backlash they are receiving for instigating this “censorship”: death threats, rape threats, and threats of doxxing, publishing their addresses so that people near them can attack them. It is happening at scale, and it is horrific. A mass reporting campaign has led to AI, or perhaps just an algorithm, suspending their Instagram account, and then the personal accounts of their staff. The AI has no valid reason for doing this; it is responding to human signals, and people can manipulate those signals to get what they want.

My final example: in the last year we’ve seen technology platforms deliberately reduce the effectiveness of the AI they use to tackle hate. Comparing Instagram’s own transparency reports from the start of 2024 to the start of 2025, we found 2.5 million fewer items of hate detected and removed by AI.

That’s a 29% reduction in effectiveness. It is in part the result of a policy change designed to let a huge amount of hate remain online in exchange for a very small reduction in the number of items the AI wrongly removes, items staff could in any case restore. This is a step backwards.
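
For readers who want to check the arithmetic, a back-of-envelope calculation (a sketch, assuming the 29% is measured against the early-2024 total) implies roughly 8.6 million AI detections at the start of 2024, falling to about 6.1 million a year later:

```python
# Back-of-envelope check of the figures above. Assumption: the 29%
# reduction is measured against the early-2024 total.
drop = 2_500_000        # fewer items detected and removed by AI
reduction = 0.29        # reported relative reduction

baseline_2024 = drop / reduction      # implied early-2024 total
total_2025 = baseline_2024 - drop     # implied early-2025 total

print(f"Implied early-2024 removals: {baseline_2024:,.0f}")  # ~8,620,690
print(f"Implied early-2025 removals: {total_2025:,.0f}")     # ~6,120,690
```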

My recommendation?

AI is good at pattern matching. It should be used as a tool, not as a replacement for human intelligence and decision making. I don’t think we can stop the abuse of AI to cause harm, but we need better ways to respond to it, particularly at scale.