A Moderate Approach to Moderation is Not Enough

Being Human by inSight. All Rights Reserved.

Vivienne Guo thinks that social platforms have failed at content moderation.

Academic Tarleton Gillespie (2018, p. 5) tells us that social media platforms try to replicate “utopian notions of community and democracy”, but the reality is much grimmer. With the genesis of social media platforms came community anxieties around the way that content is governed online. Who decides what content is harmful? What are the economic implications of social platforms? Who removes harmful content?

These anxieties are built on the amorphous nature of the digital world. Although it is often argued that the Internet and the offline world are entirely separate realms, as in dramatic manifestos such as John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, this framing falls short of revealing the intricate ways in which the offline and online are connected.

It should always be the priority of social media platforms to protect the communities that have put so much trust in them from hate speech and dangerous content. Yet this debate is deeply complicated. To understand why, we must examine the history of content moderation on social platforms, and the economic and socio-cultural nuances and barriers that come into play.

2 Social Media Tools by jrhode CC BY-NC-SA 2.0

early history of content moderation

There is a common misconception that social platforms each had a single defining moment in their history that established a set of fundamental rules to regulate their pockets of cyberspace. This is untrue; content moderation policies are unfinished and always adapting through what is essentially trial and error. Removal decisions are made in an ad hoc fashion, shaped by pressure from the media, civil society groups, governments, or individual users.

However, as academic Kate Klonick (2018) points out, we need to be aware that “even when these platforms have been made aware of a problematic content moderation policy, they don’t always modify their policies, even when they say they will… But learning the history of these policies, alongside the systems that enforce them, is a crucial part of advocating effectively for change.”

 

who decides what is harmful?

Klonick (2018) gives us a grim answer: “At least for now, and for the foreseeable future, online speech is in the hands of private corporations.”

Certain categories of content are prohibited across most platforms. These include:

  • Sexual content (nudity, sex, pornography)
  • Graphic content (self-harm, violence, obscenity)
  • Harassment (abuse, trolling, threats, hate speech)
  • Illegal activity

 

3 The Terror of War by Nick Ut. All Rights Reserved.

But who decides what is ‘harmful’? Certainly, there are Western-centric power structures in place on these supposedly utopian social platforms. Gillespie (2018) draws attention to the Western-centric moderation of social platforms, part of the enduring neo-colonial legacy of European colonialism. For example, the photo of ‘Napalm Girl’ was removed from Facebook for nudity, despite its enormous cultural and political significance.

The world rose up in protest, and the suits listened. Facebook Vice President Justin Osofsky had this to say:

“These decisions aren’t easy. In many cases, there’s no clear line between an image of nudity or violence that carries global and historic significance and one that doesn’t. Some images may be offensive in one part of the world and acceptable in another…” (Osofsky as cited in Gillespie, 2018, p. 3)

In contrast to Osofsky, Gillespie attempts to provide a more careful interrogation of cultural acceptability:

“When is an image of the human body artistic, educational, or salacious? Are representations of fictional violence merely entertaining or psychologically harmful? Does discussing a dangerous behavior help those who suffer, or tempt them to act? Or, as in the Terror of War photo, does the fact that something is newsworthy supersede the fact that it is also graphic?” (Gillespie, 2018, p. 10)

Gillespie’s investigation makes it clear: human nuance is needed when deciding what constitutes ‘harmful’ content. We must make use of cultural intelligence in an increasingly diverse world, or moderation will only reproduce the power structures of European colonialism by positioning Western culture as the superior cultural and moral authority.

 

4 Capitalism by Teen Vogue. All Rights Reserved.

economic implications of social platforms

Osofsky’s statement, for all its vagueness, highlights one very important fact: at the end of the day, social platforms are private corporations (Gillespie, 2018). The capitalist motivations that underpin social platforms may be nefarious, but they do set soft power structures and obligations in motion. Ultimately, it is in the best interest of social media corporations to keep their platforms as clean and marketable to the community as they possibly can.

 

 

the implications of human vs machine moderators

Platforms such as Facebook use both human moderators and machine moderators such as algorithms. There are key points to be made for – and against – both:

 

Human moderators:

  • Work for smaller communities
  • Can facilitate speech
  • Recognise context
  • Understand cultural nuance

Machine moderators:

  • Work at large scale
  • Filter speech and images
  • Poor context recognition
  • Lack cultural sensitivity
  • Can be tricked

5 As Facebook Shows Its Flaws, What Might A Better Social Network Look Like? by Chris Nickels. CC BY 2.0

Undoubtedly, moderation requires a lot of labour and resources. This burden is somewhat reduced by automated machine moderators, but these machines lack key understandings of cultural nuance and context. For example, I was recently put in charge of making a Facebook page for a campaign against racism, and was barred from using the word ‘racism’ because it was deemed offensive. Meanwhile, the word ‘chink’ is not prohibited on Facebook despite being a Sinophobic slur, because it also means a quiet high-pitched sound or a small opening.
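To make that failure mode concrete, here is a minimal, hypothetical sketch (in Python, not any platform’s actual system) of keyword-based moderation. Because it matches words rather than meaning, it removes an anti-racism post for mentioning ‘racism’ while passing a sentence whose wording it has no entry for:

```python
# Hypothetical keyword blocklist; real platforms use far larger lists and
# machine-learned classifiers, but the context-blindness is similar in kind.
BLOCKED_TERMS = {"racism", "violence"}

def naive_moderate(post: str) -> str:
    """Flag a post if it contains any blocked term, ignoring intent and context."""
    words = {word.strip(".,!?").lower() for word in post.split()}
    return "REMOVED" if words & BLOCKED_TERMS else "ALLOWED"

print(naive_moderate("Join our campaign against racism"))    # REMOVED: a false positive
print(naive_moderate("A chink of light through the door"))   # ALLOWED
# The filter cannot tell that the first post opposes racism, and it waves the
# second sentence through even though the same word is also used as a slur.
```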

The woeful inadequacies of machine moderators mean that we will always need human moderators to back them up. Thus, we will keep seeing an outsourcing of labour to developing countries whose people work for next to nothing; again, an example of social platforms reproducing neo-colonialist power structures. Gillespie (2018, p. 9) highlights the hidden labour that upholds the platforms we use every day without a second thought, citing a 2014 Wired report by Adrian Chen that documents “the experiences of Filipino workers who [scrub] U.S. social platforms for dollars a day.” And the moderation of traumatic content will no doubt lead to vicarious trauma for the people who have to view it in order to remove it; it already does.

‘It’s the worst job and no-one cares’ – BBC Stories

 

The treatment of human moderators needs to be drastically improved before we can even move on to discussing whether obligations to remove harmful content can be enforced.

 

should it be applied to an australian context?

I find the wording of the above question just a tad bizarre. In short: yes. There is a dire need for a heightened emphasis on removing problematic content on social platforms. Subconsciously, we know that the internet doesn’t exist on its own; it augments volatile anti-social behaviour that already exists in the offline world (Bakardjieva, 2011). Enter ‘toxic techno-cultures’, as dubbed by academic Adrienne Massanari. Under this blanket term hide many dark phenomena: Facebook Live killings and suicides, violent ideological movements such as the incel movement on platforms like Reddit, and rampant hate speech on social platforms. The question we should be asking is not so much whether social platforms should have this obligation enshrined in legislation, but whether such an enforceable obligation is even possible.

It’s all well and good to say that Australia should have a strong, zero-tolerance policy for harmful content on social platforms. However, this is complicated by the internet’s transnational nature. One fundamental question stands at the centre of this debate: how does a country regulate a cyberworld and social platforms that exceed its physical borders?

Jeffrey Rosen (2008) answers this question, providing a 2007 example of nation-specific content blocking:

“[Deputy general counsel of Google, Nicole Wong] decided that Google, by using a technique called I.P. blocking, would prevent access to videos that clearly violated Turkish law, but only in Turkey. For a time, her solution seemed to satisfy the Turkish judges, who restored YouTube access. But last June, as part of a campaign against threats to symbols of Turkish secularism, a Turkish prosecutor made a sweeping demand: that Google block access to the offending videos throughout the world, to protect the rights and sensitivities of Turks living outside the country. Google refused, arguing that one nation’s government shouldn’t be able to set the limits of speech for Internet users worldwide.”
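The ‘I.P. blocking’ Rosen describes amounts to geolocation-based gating: the same content exists globally, but requests traced to a particular country are refused. A rough, hypothetical sketch of that idea follows (the country lookup is a stand-in stub; real systems query GeoIP databases):

```python
# Hypothetical per-country restrictions: video_123 is withheld in Turkey only.
RESTRICTED_IN = {"video_123": {"TR"}}

def country_of(ip_address: str) -> str:
    """Stub geolocation lookup; a real service would resolve the IP via a GeoIP database."""
    return "TR" if ip_address.startswith("88.") else "US"

def can_serve(video_id: str, ip_address: str) -> bool:
    """Serve the video unless the requester's country is on its restriction list."""
    return country_of(ip_address) not in RESTRICTED_IN.get(video_id, set())

print(can_serve("video_123", "88.10.0.1"))    # False: blocked for a Turkish IP
print(can_serve("video_123", "203.0.113.5"))  # True: still visible elsewhere
```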

A key precedent raised by this case is that the laws of one nation do not govern the Internet at large, only conduct within that nation’s geographical borders. The perceived obligation to remove harmful content from social platforms is already enshrined in certain laws, including German law and the European Commission’s 2016 Code of Conduct on Countering Illegal Hate Speech Online.

In Australia, we have the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019. This recent amendment to the Criminal Code places the onus of removing dangerous content on individual social media corporations, mirroring similar legislation by the European Commission. But realistically, the law is flawed. It uses the ambiguous lens of a “reasonable person” to define what falls within standards of acceptability, without asking what makes a person reasonable. Like machine moderators, it largely ignores cultural, social, political and contextual nuance.

 

where does this leave us?

Undeniably, the history of content moderation is a complex one, but one conclusion should be clear: we should absolutely implement enforceable policies and obligations to remove harmful content from social platforms, which are frequented by so many of us from all walks of life. But while the law is an integral part of implementing enforceable policies of harmful content removal, I will reiterate once again: human nuance cannot be replaced by machine moderators or the frigid hand of the law. Social media corporations, which as capitalist entities will ultimately profit immensely from improving their moderation policies and cleaning up their platforms, bear the onus of making sure that those policies are representative of human nuance. As social media users, we have not only a vested interest in the way harmful content is moderated but also a responsibility to make sure that human moderators are given the dignity of fair, compassionate working conditions and pay.

 

 

 

REFERENCES
