Australia should not implement laws obliging social platforms to remove hate speech and illegal speech online, except on the grounds of terrorism propagation. While Australia should prioritise criminalising the advocacy of terrorism online, implementing laws across the digital sphere against a broad range of toxic behaviour, as Germany’s Network Enforcement Act (NetzDG) and Europe’s 2016 Code of Conduct on Countering Illegal Hate Speech Online (Code of Conduct) do, can carry serious consequences for the wider socio-political framework.
Regulation based on the lists of offences these laws outline can threaten democratic principles within a web that increasingly accommodates political discourse and amplifies oppressed voices – spaces that depend on the protection of freedom of expression. It also risks media overregulation, given the blurred line between appropriate and inappropriate digital content and the ambiguity of online terminology.
On the other hand, the increase in terrorism advocacy online and the rise of pseudonymous and anonymous user accounts provide an unrestrained space for hate speech and illegal speech – a trend that does call for legal action to regulate toxic behaviour in the digital sphere.
While legal action to prevent the propagation of terrorist activity online should be implemented, hate speech and illegal speech laws as enshrined in Germany and Europe should not be adopted in Australia at the risk of undermining democratic values.
A Brief Look Into History
There is growing disillusionment with the online space as it evolves from a simple communication and entertainment service into a public sphere of social influence and political catalysis. The rejection of any authoritative engagement in the digital sphere was voiced in Barlow’s controversial 1996 text, A Declaration of the Independence of Cyberspace, which reflects the political libertarian position that legal constraints should not be applied to this free web of communications.
Regardless of this debate, Germany in 2018 and Europe in 2016 each enshrined laws to compel the removal of hate speech and illegal speech online. A major reason behind this criminalisation is the view that the web magnifies the discord surrounding hate crimes and political tension into toxic behaviour outside the digital sphere (Laub, 2019).
Germany’s NetzDG builds on the country’s history of implementing some of the world’s toughest hate speech laws in response to its Nazi past (Laub, 2019). The propagation of fake news and racist material online following Germany’s recent influx of refugees catalysed this legal action (“Germany approves”, 2017), amid fears that hateful speech online encouraged offline toxic behaviour.
Likewise, Europe saw a need for legal regulation online as terrorist attacks reached high levels by 2016, reflected in EU Commissioner Věra Jourová stating in a 2016 press release that the “recent terror attacks have reminded us of the urgent need to address illegal online hate speech”.
Democratic Threats To Adopting Hate Speech And Illegal Speech Laws
Criminalising hate speech and illegal speech online in the way Germany and Europe do can threaten the democratic principle of freedom of speech, as cyberspace increasingly becomes an influential space for socio-political discourse and for empowering the oppressed (Fuchs, 2014, p. 97). This can be seen in how Facebook and Twitter were used to catalyse the Arab Spring protests of 2011 and the Occupy Wall Street hashtag movement (Kanalley, 2017), showing how deeply the modern digital space is now integrated into the socio-political framework.
— Occupy Wall Street (@OccupyWallStNYC), July 15, 2011
A Twitter post contributing to the Occupy Wall Street protests. Retrieved from Twitter
The internet should be viewed as a public sphere where freedom of speech must be protected and subsidised public access maintained as the digital web transitions into a powerful modern space for socio-political influence (Abbate, 2017, p. 12). A blanket regulation of hate speech and illegal speech on the web risks media overregulation, which is already a problem in Australia’s media landscape given the strong ownership and control restrictions underlying its media policies; Australia is seen as one of the leading Western countries on media regulation (Miller, 2019).
“The problem of balancing the regulation of hate speech with freedom of speech is one that no one at the conference seemed to have a solution to.”
Nick Miller, 2019
A key problem with enshrining laws like Germany’s and Europe’s is the ambiguity of their terminology, which blurs what counts as appropriate content, and the subjectivity of what defines hate speech and illegal speech online. Europe’s Code of Conduct was criticised for the vagueness of its terminology, reflected in a UN report that found no consensus among states on the meaning of the terms “incitement”, “hatred”, or “hostility” (“Striking a balance”, 2012), and in the EU’s inconsistent definition of “hate speech” (Harmful Speech Online, 2017, p. 25).
Another issue leading to potential overregulation is that Germany’s NetzDG does not provide a publicly accessible database through which users can dispute flagged content, risking the removal of legitimate content (Harmful Speech Online, 2017, p. 25). Because laws regulating the online space are not limited by geographic borders, the creation of a “blacklist database” risks censorship creep (Citron, 2018, p. 1069). Given that Australia’s media landscape already struggles to limit overregulation, and that the digital sphere increasingly underpins the socio-political framework, hate speech and illegal speech laws as enshrined in Germany and Europe should not be adopted in Australia.
A YouTube video on how social media companies threaten democracy. Retrieved from YouTube
Terrorism Spreading Online And The Growth In Pseudonymous & Anonymous Accounts
However, the rise of terrorism advocacy through social media and the increase in pseudonymous and anonymous accounts call for strict legal regulation of the digital sphere in Australia. Germany’s NetzDG and Europe’s Code of Conduct work to prevent such toxic behaviour from spreading across the web by regulating hateful and illegal speech. Terrorist content can be deliberately disseminated across the web to advocate and publicise terrorist behaviour (Hossain, 2018, p. 3), a practice labelled performance crime.
This can be seen in the Facebook live-streaming of the Christchurch shooting in 2019, and in the extremist group ISIS promoting its agenda. Social media has also provided ISIS with a key recruitment tool, drawing at least 30,000 foreign fighters through this digitalised space (Brooking & Singer, 2016). A crucial aspect of this problem is that other terrorist and extremist groups will likely follow in ISIS’s footsteps in exploiting the web to advocate their agendas.
An additional key issue is the increased use of pseudonymous and anonymous user accounts on social platforms, and how this can be linked to an increase in toxic behaviour online. Hiding one’s true identity online can trigger a process of deindividuation, reducing users’ fear of consequences and sense of responsibility (Rösner & Krämer, 2016, p. 2). An example is ‘The Fappening’ incident in 2014, the widespread distribution of nude celebrity photographs on Reddit (Massanari, 2017). The ability to create multiple pseudonymous or anonymous accounts under loose moderation policies created a “seemingly leaderless, amorphous quality” (Massanari, 2017, p. 333), allowing this content to spread rapidly. This may have been prevented under Germany’s and Europe’s online laws, which both require reported content to be dealt with within 24 hours, forcing social media firms to act quickly. Such laws could prevent the propagation of hateful and illegal speech online in Australia.
What does this mean for the ordinary internet user?
Not adopting hate speech and illegal speech laws in Australia, with the exception of terrorism advocacy, may seem to disadvantage ordinary internet users by pivoting attention away from their personal grievances towards the prioritisation of terrorism risks. However, the community guidelines of each social media platform already provide effective procedures for confronting these problems. Facebook, for instance, received only 600 NetzDG takedown requests in the first six months of 2018, compared with the 2.5 million pieces of content it removed for violating its community guidelines (Echikson & Ibsen, 2018). Facebook outlines specific categories of objectionable content, showing its efforts to tackle hate speech and illegal speech online for the ordinary internet user:
- Hate Speech
- Violence and Graphic Content
- Adult Nudity and Sexual Activity
- Sexual Solicitation
- Cruel and Insensitive
Australia should not adopt hate speech and illegal speech laws as enshrined in Germany’s Network Enforcement Act and Europe’s 2016 Code of Conduct on Countering Illegal Hate Speech Online, given the threats they pose to the democratic value of freedom of expression and the risk of media overregulation – issues that warrant particular attention in Australia’s media landscape.
However, the web’s widening socio-political reach can drive exponential growth in the influence of terrorism advocacy online. Criminalising online behaviour that advocates terrorism should thus be legally implemented in Australia as an exception, as it ties directly to risks to Australia’s safety and security. Laws criminalising terrorism advocacy online, applied together with pre-existing community guidelines, would be the most effective way for Australia to prevent serious crimes emerging from the digital sphere while protecting the integrity of democratic values.
Abbate, J. (2017). What and where is the Internet? (Re)defining Internet histories. Internet Histories: Digital Technology, Culture and Society, 1(2), 12-13. doi: 10.1080/24701475.2017.1305836. Retrieved from https://www-tandfonline-com.ezproxy1.library.usyd.edu.au/doi/pdf/10.1080/24701475.2017.1305836?needAccess=true
Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Retrieved 27 February 2017, from Electronic Frontier Foundation website: https://www.eff.org/cyberspace-independence
Brooking, E. T. & Singer, P. W. (2016). War Goes Viral. The Atlantic Monthly, 318(4). ISSN: 1072-7825. Retrieved from https://www.theatlantic.com/magazine/archive/2016/11/war-goes-viral/501125/
Citron, D. K. (2018). Extremist Speech, Compelled Conformity, and Censorship Creep. Notre Dame Law Review, 93, 1035-1071. Retrieved from https://ssrn.com/abstract=2941880
Echikson, W. & Ibsen, D. (2018, November 23). Germany’s new anti-hate speech law needs teeth if it has any hope of stamping it out online. Euronews. Retrieved from https://www.euronews.com/2018/11/23/germany-s-new-anti-hate-speech-law-needs-teeth-if-it-has-any-hope-of-stamping-it-out-onlin
Fuchs, C. (2014). Social Media and the Public Sphere. Triple C, 12(1), 57-101. Retrieved from http://www.triple-c.at
Germany approves plans to fine social media firms up to €50m. (2017, June 30). The Guardian. Retrieved from https://www.theguardian.com/media/2017/jun/30/germany-approves-plans-to-fine-social-media-firms-up-to-50m
Harmful Speech Online. (2017). Perspectives on Harmful Speech Online. Retrieved from https://cyber.harvard.edu/sites/cyber.harvard.edu/files/2017-08_harmfulspeech.pdf
Hossain, S. M. (2018). Social Media and Terrorism: Threats and Challenges to the Modern Era. South Asian Survey, 22(2), 136-155. doi: 10.1177/0971523117753280. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0971523117753280?journalCode=sasa
Kanalley, C. (2017). Occupy Wall Street: Social Media’s Role In Social Change. The Huffington Post. Retrieved from https://www.huffingtonpost.com.au/2011/10/06/occupy-wall-street-social-media_n_999178.html?ri18n=true
Laub, Z. (2019, June 7). Hate Speech on Social Media: Global Comparisons. Council on Foreign Relations. Retrieved from https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons
Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. doi: 10.1177/1461444815608807. Retrieved from https://journals.sagepub.com/doi/full/10.1177/1461444815608807
Miller, N. (2019, July 12). Australia leads the Western world on media restrictions: UN rapporteur. Sydney Morning Herald. Retrieved from https://www.smh.com.au/national/australia-leads-the-western-world-on-media-restrictions-un-rapporteur-20190712-p526ko.html
Rösner, L. & Krämer, N. C. (2016). Verbal Venting in the Social Web: Effects of Anonymity and Group Norms on Aggressive Language Use in Online Comments. Social Media + Society, 2(3), 1-4. Retrieved from https://journals.sagepub.com/doi/full/10.1177/2056305116664220
Striking a balance between freedom of expression and the prohibition of incitement to hatred. (2012, October). United Nations Human Rights newsletter. Retrieved from https://www.ohchr.org/EN/NewsEvents/Pages/Strikingabalancebetweenfreedomofexpression.aspx