Today is Safer Internet Day: a day to talk about how to keep our children and young people safer on the Internet, and to encourage responsible use of online technology.
We appreciate and understand the focus on children and young adults when it comes to online environments. They are amongst the most vulnerable in our society, and it is our duty to keep them from harm. OHPI has created a range of resources to help people combat cyberbullying. You can also view the resources built by the UK Safer Internet Centre (which first declared February 9 Safer Internet Day) or the advice, tips and guidance of the Office of the eSafety Commissioner in Australia.
However, at OHPI, we think that Safer Internet Day should go beyond cyberbullying and helping parents stay on top of technology. We need to talk about how to make the Internet safer for everyone: for men, women and communities alike. Social media platforms operate by forming groups of like-minded individuals working towards a goal. Unfortunately, too often the goal is excluding and/or harassing someone who differs from you. The difference could be gender, sexual orientation, ethnicity, race, religion or political beliefs. What is consistent is the remarkable ease with which cybermobs can be created to attack such people or communities for their perceived difference.
OHPI’s CEO Dr Andre Oboler is a distinguished speaker for the IEEE Computer Society (an academic body for software engineers). He has often highlighted the need for a focus on ethics (alongside skills) when it comes to software engineering. The safety of users should form an essential aspect of the architecture of a social media platform. Too often, social media companies are happy to put out products first, and only consider the safety of their users as complaints of abuse occur. Belatedly, efforts are made to create reporting mechanisms, which too often are simply not effective in truly addressing the extent of the abuse and the damage caused. As any engineer knows, retrofitting is always harder than building features into a system at the outset.
In a recent report, “Measuring the Hate: The State of Antisemitism on Social Media”, we tracked 2,000 items of antisemitism on Facebook, YouTube and Twitter over 10 months. All the items were reported to the platforms as hate speech. Yet, over those 10 months, only 20 percent of the items were removed. These included items inciting violence against Jews. The low rate of removal of items that are already contributing to violent action in real life (terrorists targeted Jewish institutions in France and Denmark, and there was a spate of knife attacks in Israel last year) is simply unacceptable.
Slowly but surely, national governments are taking the bull by the horns. Many countries, including Australia, have set up regulations to control cyberbullying among children. Australia is currently examining whether to bring in overarching federal legislation against revenge porn. Most significantly, the German government pressed Facebook, Twitter and YouTube into agreeing to remove hateful speech within a day of it being reported. (Germany started a criminal investigation against Facebook’s country head, to emphasise how serious it was, before the deal was struck.)
The Internet is merely a tool for us to use. Much of its effect depends on how we choose to use it. However, that does not absolve social media companies of all responsibility. They should put the safety and security of their users at the forefront when designing and developing their platforms. They should respond accurately and quickly to misuse of their platforms. They should work with communities and governments to tackle the problems their platforms create. Only then can we hope for a safer Internet for all.
Help us share this work: