Illegal Hate Speech Online – Should Australia outsource regulation to Social Platforms?

Governed by Google and Regulated by Reddit!

The Dangerous Power Play of Social Platforms as Moderators of Free Expression. Image: The Economist, © Selman Hosgör, The Economist Newspaper Ltd (Sept 6th 2018), All Rights Reserved.

Introduction

The sheer size of the internet, with its anonymity and connectivity, makes it a hotbed for extremist views, harassment and hate speech. The complexity of regulating content extends beyond the capacity of traditional law enforcement. Whilst an imperative exists to monitor online content, who exactly should be responsible for governing illegal hate speech online? This essay provides a brief overview of the debate over whether Australia should follow in the footsteps of the EU Code of Conduct on Countering Illegal Hate Speech Online and Germany's Network Enforcement Law (NetzDG). It concludes that outsourcing regulation to social platforms is fraught with danger: technological limitations, potential bias, unfettered power, and the political, economic and social agendas of the platforms themselves all compromise transparency and freedom of expression.

A Brief History of Online Hate Speech

No universal definition of hate speech exists. Many diverse definitions prevail, yet a common recommendation by the Council of Europe in 1997, embraced by many nations, suggests hate speech includes “all forms of expression which spread, incite, promote or justify racial… or religious hatred or intolerance”. In the digital age, this has been extended to include fake news, where false information is spread with intent to harm. The Wharton School’s “How Can Social Media Firms Tackle Hate Speech?” podcast provides an insight into the role of social platforms as moderators.

Social Media Demon. Image: Vox Political, © Mike Sivier, Vox Political (April 5th 2019), All Rights Reserved.

What is the EU Code of Conduct?

Following the 2015 Paris terrorist attacks, including the siege of the Bataclan concert hall, the EU Code of Conduct was agreed with Facebook, Microsoft, Twitter and YouTube to counter the spread of illegal hate speech online, with Instagram, Google+, Snapchat and Dailymotion joining in 2018. Monitoring under the Code has seen 72% of notified hate speech removed within 24 hours.

Germany as a Regulatory Model

The NetzDG law came into full effect on 1 January 2018 as an extension of Germany’s Volksverhetzung (“incitement to hatred”) provisions of the criminal code, in response to increased far-right propaganda surrounding Merkel’s decision to open Germany’s borders to migrants. The law requires social platforms with more than two million registered users in Germany to remove obvious instances of hate speech and abusive content within 24 hours, or face fines of up to 50 million euros.

Can Germany Fix Facebook? Image: © Andrey Popov, Shutterstock, Facebook, Zak Bickel, The Atlantic, Some Rights Reserved.

What is Australia currently doing about online hate speech?

In 2019, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 was passed in response to the mass shootings in Christchurch, New Zealand, carried out by an Australian white nationalist who live-streamed the massacre and posted a hate-filled manifesto to the online forum 8chan. The legislation holds social media platforms accountable for abhorrent violent posts, such as videos showing murders, rapes, kidnappings or terrorist acts, to ensure, in the words of Australia’s Prime Minister Scott Morrison, that “these platforms should not be weaponised”. Penalties for failing to remove such material expeditiously include fines of up to 10% of a company’s annual turnover and up to three years’ imprisonment for executives. A limitation of Australia’s existing framework is that the Racial Discrimination Act 1975 covers only race-based hate speech and excludes hate speech on religious grounds.

The YES argument:

  • Online hate speech is too dangerous to ignore.

Online hate speech is more detrimental than offline hate speech, as the Internet’s global reach exposes it to a far larger audience. Social platforms, due to their extensive popularity, pose a particular risk (Oboler 2014). Incitement to terrorism, harassment by online trolls and propaganda from political trolls mean an imperative exists to protect Australian social media users from the direct harm of speech that incites violence.

  • Social Platforms are responsible for monitoring illegal content

Social platforms control the bulk of the world’s information flows and have the power to shape opinions. They therefore have a corporate responsibility and obligation to monitor and restrict dangerous, racist and illegal hate speech. An evaluation of the EU Code of Conduct initiative revealed that platforms have doubled their notifications, increased bot identification and improved their algorithms; an initiative that could produce similar results if implemented in Australia.

Regulatory Approaches. Image: © Dahrendorf Forum Working Paper No. 6 (Dec 28th 2018), All Rights Reserved.

The NO argument:

  • Social Media Platform Bias & Agendas

Outsourcing regulation and moderation to social media platforms risks allowing the platforms to decide which online speech to control based upon their own political, economic or social agendas. As Tusikov and Haggart (2019b) argue, platforms pressured into a rapid response may interpret the rules themselves, handing unprecedented control to these companies with limited transparency and little accountability for the social problems involved (Cobbe 2019). This opens the door to exploitation, self-interest and societal bias, exposing the subjective nature of censorship, particularly when it is exercised by profit-making organisations that may seek to sway public opinion for their own political or economic gain.

  • Censorship gives undue Power to Social Platforms

Social platforms already wield a great deal of power in society. By placing the right to censor in the hands of these corporations, we give them even further-reaching control over our lives. As Cowan (2019) acknowledges, “expecting Facebook to stop the spread of fake news by fact checking a user’s news feed, we give Facebook the power to subjectively determine truth. By asking Youtube to pre-vet content, we give them the power to determine what thoughts are and aren’t acceptable”. Censorship should not be controlled by social platforms that already hold immense economic and political power and that can profit by deciding what content is made available to the public.

If Facebook became a Digital Censor. Image: © Niko Efstathiou, Pro Journo Davos 2017, Medium, Some Rights Reserved.

  • Freedom of Expression

Suppression of online content restricts our individual right to freedom of expression. Whilst the Australian Constitution does not protect free speech as strongly as the USA’s First Amendment, Australia is a democratic nation with an implied constitutional freedom of political communication and obligations under the International Covenant on Civil and Political Rights 1966. The recent Violent Material bill imposed on social media platforms in Australia has angered proponents of free speech and Australian media companies. Advocates for free speech suggest that counter-speech, rather than censorship, is the most effective means of tackling racist and radicalised rhetoric.

  • Limitations of Technology

Social platforms are largely ineffective at regulating hate speech because they rely upon algorithms to detect and filter hateful terms, such as racist slurs or incitements to terrorism. Automation and AI tools have a high error rate, and racist trolls increasingly bypass these systems by using encryption and their own invented code words, evading the platforms’ automatic filters and rendering them ineffective.
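To illustrate why simple keyword filtering is so easy to defeat, the minimal Python sketch below (using an invented blocklist and invented example posts, not any platform’s actual system) shows an exact-match filter catching a listed term but missing the same sentiment expressed in substituted code words of the kind Tingle (2016) describes, where trolls adopted ordinary words such as “googles” and “skypes” as coded slurs.

```python
# A minimal sketch of a keyword-blocklist filter (hypothetical; real platform
# classifiers are machine-learning based, but face an analogous evasion problem).

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens standing in for real slurs


def is_flagged(post: str) -> bool:
    """Return True if any word in the post exactly matches a blocklisted term."""
    words = (word.strip(".,!?") for word in post.lower().split())
    return any(word in BLOCKLIST for word in words)


# A post using a listed term is caught by the filter...
print(is_flagged("they are all slur_a"))    # True

# ...but the same sentiment in substituted code words ("googles", "skypes")
# sails straight past the exact-match blocklist.
print(is_flagged("they are all googles"))   # False
```

Machine-learning classifiers are more robust than this toy example, but adversarial misspellings, coded language and shifting slang create the same cat-and-mouse dynamic at scale.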

Facebook Algorithms to Detect Hate Speech. Image: © Michael Kan, PCMag, All Rights Reserved.

Implications

Outsourcing the regulation of hate speech to social platforms in Australia may seem like a credible option; however, the EU Code of Conduct and Germany’s NetzDG laws appear to be backfiring, with the outsourcing of “free speech to commercial enterprises resulting in many platforms hitting delete by default to avoid fines” (Roxborough 2018). Australia’s new Violent Material bill has been criticised as a “knee jerk” response, and both sentencing offenders and enforcing penalties against companies such as Facebook, which are not based in Australia, may prove difficult. Additionally, allowing social platforms to determine what speech constitutes a violation lacks transparency and offers no judicial scrutiny. With no “nuanced understanding of context, culture and law” (Human Rights Watch 2018), social platforms may place self-interest ahead of the social good and, when faced with short periods to review content, may elect to sacrifice free expression rather than risk hefty fines.

“Many platforms hit delete by default to avoid fines” – Roxborough 2018

Conclusion

Whilst it is acknowledged that the EU and German models have produced gains in reducing online hate speech, and that reducing illegal hate speech, propaganda and violence online is desirable, I oppose the introduction of similar laws in Australia because of the significant risks posed by placing regulation in the hands of social platforms. Such laws would hand an enormous degree of power over the censorship of speech to ‘for profit’ companies with their own biases and agendas, compounded by limited transparency and no judicial oversight. The suppression of speech is open to abuse of power as social platforms decide what constitutes hate speech; combined with the potential for inconsistent intervention and the limitations of algorithms, this poses significant risks to an open internet. Australia should instead seek a balance between Germany’s strict laws and the USA’s free-speech protections, one that does not consolidate or compound the immense power already held by social media companies.

To Break a Hate-Speech Detection Algorithm, Try ‘Love’. Image: Wired, © Casey Chin, Some Rights Reserved.

Hyper-textual Article Reference List:

ABC News. (2015). “Paris attacks: More than 120 killed in concert hall siege, bombings and shootings; suspected terrorists dead”. Retrieved from <https://www.abc.net.au/news/2015-11-14/paris-attacks-120-dead-in-shootings-explosions/6940722>.

Australian Human Rights Commission. (2013). “Freedom of information, opinion and expression”, Rights and Freedoms. Retrieved from <https://www.humanrights.gov.au/our-work/rights-and-freedoms/freedom-information-opinion-and-expression>.

Bradshaw, S. and P.N. Howard. (2017). “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation”, Computational Propaganda Research Project. Samuel Woolley and Philip N. Howard, (Eds). Working Paper 2017.12. Oxford, UK. 37 pp. Retrieved from <https://comprop.oii.ox.ac.uk/research/troops-trolls-and-trouble-makers-a-global-inventory-of-organized-social-media-manipulation/>.

Cheik-Hussein, M. (2019). “The ‘chilling’ unintended consequences of Australia’s new social media laws”, AdNews. Retrieved from <http://www.adnews.com.au/news/the-chilling-unintended-consequences-of-australia-s-new-social-media-laws>.

Cobbe, J. (2019). “Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance”, SSRN. Retrieved from <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3437304>. DOI: 10.2139/ssrn.3437304.

Cowan, S. (2019). “It is easier to control guns than thoughts”, The Centre for Independent Studies. Retrieved from <https://www.cis.org.au/commentary/articles/it-is-easier-to-control-guns-than-thoughts/>.

De La Baume, M. (2017). “Angela Merkel defends open border migration policy”, Politico. Retrieved from <https://www.politico.eu/article/angela-merkel-defends-open-border-migration-refugee-policy-germany/>.

Echikson, W. and Knodt, O. (2018). “Germany’s NetzDG: A key test for combatting online hate”, CEPS Research Report. No. 2018/09. Retrieved from <https://www.ceps.eu//system/files/RR%20No2018-09_Germany’s%20NetzDG.pdf>.

European Commission. (2017). “Countering online hate speech – Commission initiative with social media platforms and civil society shows progress”, European Commission – Press Release. Retrieved from <https://europa.eu/rapid/press-release_IP-17-1471_en.htm>.

Farrar, T. (2019). “Fake News and Social Censorship: An Overview”, Government Europa. Retrieved from <https://www.governmenteuropa.eu/fake-news-and-social-censorship/94016/>.

Grattan, M. (2019). “Morrison flags new laws to stop social media platforms being ‘weaponised”, Computer World. Retrieved from <https://www.computerworld.com.au/article/659270/morrison-flags-new-laws-stop-social-media-platforms-being-weaponised/>.

Heffernan, V. (2018). “Ich Bin Ein Tweeter”, Wired. Retrieved from <https://www.wired.com/story/germany-twitter-social-media-trolling/>.

Human Rights Committee. (1966). “International Covenant on Civil and Political Rights”, United Nations Human Rights Office of the High Commissioner. Retrieved from <https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx>.

Human Rights Watch. (2018). “Germany: Flawed Social Media Law”. Retrieved from <https://www.hrw.org/news/2018/02/14/germany-flawed-social-media-law>.

Jones, D. and S. Benesch. (2019). “Combating Hate Speech Through Counterspeech”, Berkman Klein Centre for Internet and Society at Harvard University. Harvard University Press. Retrieved from <https://cyber.harvard.edu/story/2019-08/combating-hate-speech-through-counterspeech>.

Jourová, V. (2016). “Code of Conduct on countering illegal hate speech online: First Results on Implementation”, European Commission. Retrieved from <https://www.sisumma.com/wp-content/uploads/2016/12/factsheet-code-conduct-8_40573.pdf>.

Jourová, V. (2019). “Code of Conduct on countering illegal hate speech online: Fourth Evaluation Confirms Self-Regulation Works”, European Commission. Retrieved from <https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/countering-illegal-hate-speech-online_en#howitperforms>.

Kan, M. (2019). “Facebook Taps Next-Gen AI To Help It Detect Hate Speech”, PCMag. Retrieved from <https://www.pcmag.com/news/368104/facebook-taps-next-gen-ai-to-help-it-detect-hate-speech>.

Karakeva, S. (2018). “Monitoring and Tagging Hate Speech in Social Media”, Connecting the Dots: The Future of Collective Management. Retrieved from <http://ifrro.org/sites/default/files/datascouting_sophia_karakeva_oct2018.pdf>.

Law Council of Australia. (2019). “Livestream laws could have serious unintended consequences, chilling effect on business”, Law Council Media Releases. Retrieved from <https://www.lawcouncil.asn.au/media/media-releases/livestream-laws-could-have-serious-unintended-consequences-chilling-effect-on-business>.

Matsakis, L. (2018). “To Break a Hate-Speech Detection Algorithm, Try ‘Love’”, Wired. Retrieved from <https://www.wired.com/story/break-hate-speech-algorithm-try-love/>.

Muno, D. (2014). “Racial Discrimination Act: The Two Minute Version”, Amnesty International. Retrieved from <https://www.amnesty.org.au/racial-discrimination-act-the-two-minute-version/>.

Network Enforcement Act. (2017) “Act to Improve Enforcement of the Law in Social Networks”. Article 1. (July 12th 2017). [Act]. Retrieved from <https://www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/Dokumente/NetzDG_engl.pdf?__blob=publicationFile&v=2>. 

Oboler, A. (2014). “Legal Doctrines Applied to Online Hate Speech”, Computers and Law Journal. Retrieved from <http://www.austlii.edu.au/au/journals/ANZCompuLawJl/2014/4.pdf>.

Parliament of Australia. (2019). “Violent Material Bill 2019”, Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019. [Act]. Retrieved from <https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=s1201>.

Rodriguez, J. (2009). “Hate Speech”, Council of Europe. Retrieved from <https://rm.coe.int/168071e53e>.

Roxborough, S. (2018). “Why an Ambitious New Online Anti-Hate Speech Law Is Backfiring in Germany”, Hollywood Reporter. Retrieved from <https://www.hollywoodreporter.com/news/why-an-ambitious-new-online-anti-hate-speech-law-is-backfiring-germany-1074232>.

Stein, J. (2016). “How Trolls Are Ruining the Internet”, Time Magazine. Retrieved from <https://time.com/4457110/internet-trolls/>.

Suzor, N. (2019). “What do we mean when we talk about transparency in content moderation?”, Digital Social Contract. Retrieved from <https://digitalsocialcontract.net/meaningful-transparency-can-help-social-media-platforms-fight-perceptions-of-bias-and-conspiracies-2dac062d3e97>.

Tingle, R. (2016). “Googles, Skypes and Yahoos: Racist trolls have made up their own slang so they can use vile slurs online without being caught by automatic filters”, Daily Mail UK. Retrieved from <https://www.dailymail.co.uk/news/article-3821215/Googles-Skypes-Yahoos-Racist-trolls-slang-make-vile-slurs-online-without-caught-automatic-filters.html>.

Tusikov, N. and B. Haggart. (2019a). “Stop outsourcing the regulation of hate speech to social media”, The Conversation. Retrieved from <https://theconversation.com/stop-outsourcing-the-regulation-of-hate-speech-to-social-media-114276>.

Tusikov, N. and B. Haggart. (2019b). “It’s time for a new way to regulate social media platforms”, The Conversation. Retrieved from <https://theconversation.com/its-time-for-a-new-way-to-regulate-social-media-platforms-109413>.

Vice News. (2019, April 5th). “In response to the Christchurch terror attack, Australia’s parliament fast-tracked new laws seeking to punish social media platforms and their executives for failing to remove violent videos, “expeditiously.” #VICENewsTonight“. [Twitter Post]. Retrieved from <https://twitter.com/vicenews/status/1113947447938682883?s=20>.

Yaraghi, N. (2018). “Regulating free speech on social media is dangerous and futile”, Brookings. Retrieved from <https://www.brookings.edu/blog/techtank/2018/09/21/regulating-free-speech-on-social-media-is-dangerous-and-futile/>.

Multimedia Reference List:

Chin, C. (2018). “To Break a Hate-Speech Detection Algorithm, Try ‘Love’”, Wired. [Image]. Retrieved from <https://www.wired.com/story/break-hate-speech-algorithm-try-love/>.

The Economist. (2018). “How Social-Media Platforms Dispense Justice”. [Image]. Retrieved from <https://www.economist.com/business/2018/09/06/how-social-media-platforms-dispense-justice>.

Efstathiou, N. (2017). “If Facebook became a Digital Censor”, Pro Journo Davos, Medium. [Image]. Retrieved from <https://davos.projourno.org/if-facebook-became-a-digital-censor-a8da6d2e09f7>.

Carroll, J. and D. Karpf. (2018). “How Can Social Media Firms Tackle Hate Speech?”, Knowledge@Wharton. [Podcast]. Retrieved from <https://knowledge.wharton.upenn.edu/article/can-social-media-firms-tackle-hate-speech/>.

Goldzweig, R., M. Wachinger, D. Stockmann and A. Römmele. (2018). “Dahrendorf Forum IV”, Working Paper No. 6. London School of Economics and Political Ideas. [Table]. Retrieved from <https://www.dahrendorf-forum.eu/wp-content/uploads/2018/12/Beyond-Regulation_Final.pdf>.

Kan, M. (2019). “Facebook Algorithms to Detect Hate Speech”, PCMag. [Image]. Retrieved from <https://www.pcmag.com/news/368104/facebook-taps-next-gen-ai-to-help-it-detect-hate-speech>.

The New York Times. (2019). “How the New Zealand Gunman Used Social Media | NYT News”, Youtube. [Video]. Retrieved from <https://www.youtube.com/watch?v=zueWcZrvKiw>.

Popov, A. and Z. Bickel. (2017). “Can Germany Fix Facebook?”, The Atlantic. [Image]. Retrieved from <https://www.theatlantic.com/international/archive/2017/11/germany-facebook/543258/>.

Sivier, M. (2019). “Social Media Demon”, Vox Political. [Image]. Retrieved from <https://voxpoliticalonline.com/2019/04/05/plan-for-social-media-regulator-will-attack-the-symptom-of-harmful-content-not-the-cause/>.
