Recent Shifts in Online Safety: Social Media Bans, Location Tracking, and AI-Driven Hate

This past month has seen some major developments in the world of social media and online hate. Yesterday, Australia’s world-leading social media ban for under-16s came into effect, with the goal of helping to shield young people from harmful online content. We also recently saw X roll out its new location-display feature, as well as new data on the prevalence of harmful AI material online. This briefing analyses some of these recent developments in the tech world. 

Social Media Ban: 

From the 10th of December, the under-16s social media ban took effect, and a swathe of social media platforms are now required to block or remove younger users. The ban will affect hundreds of thousands of teenagers who previously used major social media platforms. Penalties of up to AUD 49.5 million can be issued to platforms that do not effectively enforce these restrictions. 

The intention of the ban is to protect young Australians from risks associated with social media use, including exposure to harmful content. All major social media platforms, including Facebook, X, Instagram, and TikTok, are now age restricted. 

A number of types of online platform are currently exempt from the age restrictions, including platforms that primarily host educational content, messaging services, and online games. According to the PM, “These types of online services have been excluded from the new minimum age obligations because they pose fewer social media harms to under 16s, or are regulated under different laws.” 

The exclusion of online games from the ban is questionable, given that young people can be exposed to harmful content whilst gaming just as easily as on social media. Earlier this year, for example, OHPI published a briefing about Roblox, an extremely popular online multiplayer gaming platform with 83.5 million daily users, 38.1% of whom are thirteen years old or younger. 

Our briefing explored the expanding far-right and extremist presence on the platform, which dates back to at least 2009. Extremist groups have deliberately used Roblox as a tool for recruiting young, impressionable new members. It is therefore disappointing that Roblox is listed as one of the platforms exempt from the under-16 ban, given that the intention of this ban is to help shield young people from harmful content. 

It is also worth noting that certain messaging platforms, like Telegram, are hotbeds for hateful and extremist content. In OHPI research, Telegram has consistently shown one of the highest volumes of online hate when compared with other platforms. Oftentimes, users who have been banned from more mainstream social media platforms find refuge on Telegram, and it serves as an important online meeting place for dangerous extremist communities. 

As a messaging service, Telegram has not yet been included amongst the age-restricted platforms, despite harbouring more hateful and extremist content than many of the platforms that are included. This is especially concerning given the risk that, now that the age restrictions are in effect, young people will flock to those online spaces to which they do have access. Given that Telegram is currently one of these spaces, the potential for young people to encounter harmful content may actually increase as a result of the ban if they migrate to these less regulated platforms. 

Concerns have also been raised that the social media ban will give scammers the opportunity to peddle “prove your age” scams. Social media platforms have been tasked with verifying users’ ages themselves, and can employ a variety of methods to do so. But the more data we are required to give to social media platforms, the more we risk being scammed by those posing as social media companies and asking for our details. 

In our submission to the inquiry into Social Media and Australian Society, OHPI recommended a system that protects privacy by issuing a time-limited token that verifies age without sharing a person’s identity. The Australian government has ultimately given social media platforms the discretion to verify users’ ages however they please, which means the public needs to be educated in identifying the scams that may arise as a result. 
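To illustrate the token-based approach, the sketch below shows one hypothetical way such a scheme could work: a trusted verifier issues a short-lived signed token asserting only an age claim, and a platform checks the signature and expiry without ever learning who the person is. This is a simplified illustration only, not OHPI's actual proposal; in particular, it uses a shared HMAC secret for brevity, whereas a real deployment would use public-key signatures so that platforms can verify tokens but cannot mint them.

```python
# Illustrative sketch of a privacy-preserving, time-limited age token.
# Hypothetical design: a real system would use asymmetric signatures,
# a nonce to prevent replay, and a proper token format such as JWT.
import base64
import hashlib
import hmac
import json
import time

# Secret held by the trusted age verifier (NOT by the social media platform).
ISSUER_SECRET = b"demo-secret-key"


def issue_age_token(is_over_16: bool, ttl_seconds: int = 300) -> str:
    """Issue a short-lived token carrying only an age claim, no identity."""
    claims = {"over_16": is_over_16, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(ISSUER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + signature


def check_age_token(token: str) -> bool:
    """Platform-side check: signature is valid, token is fresh, user is over 16."""
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # token was forged or tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return bool(claims["over_16"]) and claims["exp"] > time.time()
```

The point of the design is that the platform only ever sees a yes/no claim with an expiry: no name, date of birth, or ID document changes hands, so there is nothing for a scammer to phish from the platform side.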

Age Restricted Material Codes 

New age-restricted material codes, separate from the under-16 social media ban, will come into effect on the 27th of December. These codes will require search engines like Google and Bing to blur pornographic images for all users (including adults), and will redirect users seeking information about self-harm to mental health services. Adults can decide to click through and view the explicit images, and they will not have to create an account to do so. 

The intention behind these measures is to stop children being accidentally exposed to harmful material. eSafety Commissioner Julie Inman Grant has pointed out that much of young people’s initial exposure to pornography is accidental, and that these measures could therefore help reduce the amount of harmful content consumed by young people online. 

X displays user locations

Last month, X rolled out a new feature that revealed a user’s geographical location on their profile. This feature was implemented in conjunction with a number of others that were designed to encourage more authentic interaction and combat the presence of bots. Profiles also displayed when users were using VPNs that might hide their actual location. 

Concerns were raised about the accuracy of the information, and these were validated by the platform itself, which acknowledged that the displayed location may not always be correct. ABC News was listed as being based in Ireland, the ALP in the United States, and The Australian National University’s Strategic Defence Studies in India. 

In one instance, a Gaza journalist was listed as operating from Poland, which was seized upon by Israel’s foreign ministry to cast doubt on the reliability of Palestinian journalism. The journalist responded by posting a video of himself walking around Gaza, asking “Tell me if you can recognise such tents and buildings in Poland”. There were also various cases in which MAGA accounts were listed as being based in countries other than the US, like Thailand, Turkey and Nigeria. 

The changes did not have the intended effect of facilitating more authentic conversation. If anything, they have made people more suspicious and less trusting of those they interact with online, and have cast further doubt on the veracity of political commentators on social media. 

New report on AI-generated hate: 

A new report from AI Forensics has highlighted the prevalence of AI-generated content on TikTok. The report identified over 43,000 AI-generated posts that had garnered over 4.5 billion views. 

Half of the most popular accounts showed content that sexualised female bodies, including those of minors. The content also included fake news reports that pushed anti-immigrant narratives. Less than 2% of this material was identified and labelled by TikTok as AI-generated content. 

This report further demonstrates that AI has helped facilitate the spread of online hate. OHPI has previously drawn attention to this phenomenon, highlighting how AI technology can be used to create hateful content that is cartoonish and more digestible for a younger, more vulnerable audience. We have also discussed how AI can automate the dissemination of hateful content, increasing the volume of those posts on social media. 

There is an asymmetry in the powers of AI; it is very good at producing and disseminating hateful content, but not very good at identifying and removing that same content. This means that the introduction of AI has worsened online hate, and has not yet been effectively mobilised to help combat it.