Hate Speech and Illegal Speech Laws Collide With The Basis of Democracy

10/10/2019

Introduction:

“We must declare our virtual selves immune to your sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts.”

John Perry Barlow, 1996

In agreement with this quote from ‘A Declaration of the Independence of Cyberspace‘ (Barlow, 1996), this essay encourages Australia to remove the obligation for social platforms to moderate hate speech and illegal speech. Germany’s Network Enforcement Act (NetzDG) and Europe’s 2016 Code of Conduct on Countering Illegal Hate Speech Online demonstrate how web laws can come into direct conflict with the democratic right to free speech. Laws governing what can and cannot be posted on (social) media platforms can very easily become weapons of ideological oppression by way of censorship. Hate speech has been defined as “any speech, which attacks an individual or a group with an intention to hurt or disrespect based on identity” (Chetty & Alathur, 2018, p. 117). Although the removal of illegal hate speech can help reduce online acts such as cyberbullying, the cons outweigh the pros: the dire consequences of social platform censorship include the suppression of political opposition (DeNardis, 2014).

Background:

(Peter Steiner’s cartoon, as published in The New Yorker, 1993)


The New Yorker famously published a cartoon of a dog at a computer with the caption ‘On the internet, nobody knows you’re a dog’, highlighting that even in the early days of the internet, questions over online anonymity were being raised. These questions still resonate today when considering the effectiveness of hate speech and illegal speech laws on social platforms: anonymous accounts are so prevalent that minimal real-life repercussions can take place (DeNardis, 2014). Rather than eliminating offensive online behaviour, these laws appear to encourage offenders to simply hide behind anonymous or fake accounts that the government is unlikely to unmask. The chart below depicts the Pew Research Center‘s 2013 investigation into how common it is for users to attempt to hide their online behaviour or identity (Rainie, Kiesler, Kang & Madden, 2013). It provides evidence that users will employ a myriad of methods to prevent their online behaviour from being traced back to their real identities. Hate speech and illegal speech will remain an issue as long as users wishing to participate in objectionable online acts can conceal who they are. Governments have little ability to follow up on every individual offender whose identity is concealed by pseudonymous and anonymous accounts, given the sheer quantity of such accounts in use at any given time.

(Pew Research Center’s chart demonstrating the lengths people go to in order to remain anonymous online, 2013)


As far back as 1996, as previously mentioned, John Perry Barlow predicted government attempts to gain greater control over the regulation of online platforms. Since 2010, Google has released reports on internet freedom and has expressed concern over increasing government censorship of political content, even in democratic societies (DeNardis, 2014). This is disturbing, as censorship is the easiest means of suppressing political opposition and political activism. The internet gives individuals a sense of having a voice, even when their opinion is marginalised; if governments involve themselves in deciding what can and cannot be posted online, this tool for open public discourse can very quickly be eliminated, calling into question the very basis of free speech and democratic values.

Cyberliberalism had its peak in the early developmental period of the web. Its basis was that the online realm would, and should, be a society in itself that must remain untouched and unregulated by physical governments (Wenguang, 2018). The cyberliberalism movement argued that the cost and immense delays of constantly moderating and checking online content would be a waste of government funding. Furthermore, the moderation of social platforms often proved ineffective due to the mass of data that needed to be sifted through and the limited resources and staff dedicated to doing so (Wenguang, 2018). If the Australian government were to take a backseat approach to social platform moderation, it would therefore be far more economically beneficial to the nation.

Cross-cultural and individual subjectivity:

“We will spread ourselves across the Planet so that no one can arrest our thoughts.”

John Perry Barlow, 1996

Because the web can be accessed from almost anywhere in the world, a major problem for the Australian government is that its rules and regulations on hate speech and illegal speech differ from those of other nation states. Deciding where to intervene and where to stand back is extremely difficult, as every country differs in its views on freedom of speech and civil liberty (Gillespie, 2018). Countries struggle to decide whether freedom of speech includes hate speech (Ellian & Molier, 2011). Political activism in one country is seen as a political threat in another: censorship of information is generally accepted within a communist regime such as China’s, for example, and rejected in democratic nations such as Australia. Yet people living in democratic societies often unknowingly experience media censorship every day under the guise of ‘hate speech and illegal speech laws’.

It is difficult to analyse the history of hate speech without drawing attention to the cross-cultural differences in values between societies (DeNardis, 2014). In Thailand, for example, traditional emphasis is placed on protecting the reputation of the monarchy. This was clearly depicted in 2012, when the Thai government demanded that 14 YouTube videos be removed for defamation of the monarchy. The limited impact that a government can truly have on private social platforms was demonstrated by the fact that Google chose to remove only 3 of those 14 (DeNardis, 2014). Even when harsh laws are put in place to restrict political unrest or opposition, governments’ ability to actually enforce them is strikingly weak (DeNardis, 2014).

Finally, over time laws and regulations become outdated, particularly given the fast-paced technological progression of the internet and social media. Not only do laws become outdated, but so too does user vocabulary, which is constantly replaced by new youth slang or by progressive terminology: what was once considered offensive can become acceptable, and what was once acceptable can become offensive. One nation’s colloquial and common phrases may be interpreted as insulting in another. What is considered offensive therefore depends on a whole range of cultural, generational and personal factors that differ between individuals within a nation, let alone between different nations altogether (DeNardis, 2014). Defining what is offensive is far too subjective for a supposedly objective justice system to be monitoring and deciding what content should and should not be produced.

The dark side of moderating:

(Inside the tragic life of a Facebook moderator, The Verge, 2018)

“Facebook employs 15,000 moderators around the world, who typically spend six full hours a day reviewing reported content.” (The Verge, 2019)

The mental toll on employed moderators is severe. The government should not expose an additional portion of the population, serving as content moderators, to constant graphic and traumatising content when the role is proving ineffective anyway. The private sector already employs a significant number of moderators, and for the Australian government to employ even more would only widen the circle of individuals exposed to a continuous stream of traumatising reported content.

Conclusion:

This essay recommends that the Australian government take a backseat approach to hate speech and illegal speech laws and regulation on social platforms. The ambiguity of hate speech and illegal speech definitions creates both cross-cultural and internal confusion, making it extremely challenging for moderators to decide whether content is offensive or acceptable. Moreover, the potential for hate speech and illegal speech laws to cross the line into more general media censorship is a grave concern, as this directly impacts civil liberties and free speech (DeNardis, 2014). Private businesses are already investing heavily in moderation (more than 15,000 employees in Facebook’s case), so the government has no need to spend financial resources overstepping its role into media regulation territory. Additionally, given the global reach of the internet, the Australian government attempting to regulate its Australian online users in such a globally scaled arena of constant mass content production is futile and, quite frankly, a waste of time and money (Wenguang, 2018).

References: 

Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Retrieved 12 October 2019, from Electronic Frontier Foundation website: https://www.eff.org/cyberspace-independence

Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40. https://doi.org/10.1016/j.avb.2018.05.003

DeNardis, L. (2014). The Global War for Internet Governance. New Haven: Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

Ellian, A., & Molier, G. (2011). Freedom of Speech Under Attack (1st ed.). The Hague: Eleven International Publishing.

Gillespie, T. (2018). Custodians of the Internet (1st ed.). New Haven: Yale University Press.

Rainie, L., Kiesler, S., Kang, R., & Madden, M. (2013). Anonymity, Privacy, and Security Online. Washington: Pew Research Center. Retrieved from https://www.pewinternet.org/2013/09/05/anonymity-privacy-and-security-online/

Wenguang, Y. (2018). Internet intermediaries’ liability for online illegal hate speech. Frontiers of Law in China, 13(3).
