Who Decides What We See? Automated Content Moderation, Algorithms, and TikTok

A moral discussion on the future of AI and the power we give to it

Sunniva Ottestad, #ARIN2610

11.10.2019

“We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before” – John Perry Barlow, A Declaration of the Independence of Cyberspace, 1996

The Scale of Online Social Space and the Need for Content Moderation

One of the core functions of social media is grounded in the idealistic days of the early internet: platforms were designed to connect people online, extending participation and social connection (Gillespie, 2018). Yet with the rise of the platforms, we are also increasingly seeing the very chaos that created the need for them in the first place, along with a parallel need for moderation.

Online community spaces connect people, but the scale they have achieved also gives their perils increased visibility: many users misuse the opportunity of speaking to a wide audience by posting content that can generally be considered harmful. For this reason, content moderation has become an issue of growing debate, and users are increasingly interested in what happens behind the scenes at tech giants like Facebook, Twitter and Instagram.

In 2014, WIRED published an article by Adrian Chen titled The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed. The article became crucial in sparking a debate about the human labour that goes into content moderation every day, and it opened a larger discussion about how the platforms we use are regulated, and the ethics involved.

Automated content moderation has the potential to be of huge benefit, yet we still have a long way to go, and several things need to be taken into consideration first. Most important are the ethics and transparency of the tech giants and the way they moderate, as well as a careful consideration of how much power we want to give to algorithms.

A Human-Machine Process

With the amount of user-generated content (UGC) that platform users post, purely human-led removal of harmful content has become impossible. A report by Cambridge Consultants outlines this in detail: content moderation on social media, as of 2019, is a process where AI generally plays an integral part in the pre-moderation phase (where content is screened by automated systems before publication), while post- or reactive moderation happens when published content is flagged by users and reviewed by human workers (Cambridge Consultants, 2019).
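
To make this division of labour concrete, here is a minimal sketch of how such a pipeline might be wired together. It is written in Python, and every name in it (the classifier stand-in, the review queue, the thresholds) is a hypothetical illustration rather than any platform's actual system: automated pre-moderation screens a post before publication, and user flags push published content back to a human review queue.

```python
# Minimal sketch of a human-machine moderation pipeline.
# All names and thresholds are hypothetical illustrations.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    text: str
    flags: int = 0  # number of user reports after publication

HUMAN_REVIEW_QUEUE: List[Post] = []

def score_content(post: Post) -> float:
    """Stand-in for an automated classifier returning a 'harm' probability."""
    banned_terms = {"beheading", "gore"}  # illustrative only
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, hits * 0.8)

def pre_moderate(post: Post) -> str:
    """Pre-moderation: automated check before the post is published."""
    score = score_content(post)
    if score > 0.9:
        return "removed"            # high confidence: remove automatically
    if score > 0.5:
        HUMAN_REVIEW_QUEUE.append(post)
        return "held for review"    # uncertain: escalate to a human moderator
    return "published"

def reactive_moderate(post: Post, report_threshold: int = 3) -> None:
    """Post-/reactive moderation: enough user flags route content to humans."""
    if post.flags >= report_threshold:
        HUMAN_REVIEW_QUEUE.append(post)

print(pre_moderate(Post("a_user", "holiday photos from Oslo")))  # -> "published"
```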

Hashtag moderation is another approach, where certain hashtags or phrases considered harmful are simply banned, yet users tend to find ways around this, as seen in the battle over pro-eating-disorder communities on Instagram (Gerrard, 2018).
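
As a toy illustration of why a simple hashtag ban is easy to circumvent, consider the sketch below; the blocklist and the misspelled variant are hypothetical stand-ins for the kind of workarounds Gerrard (2018) documents.

```python
# Toy illustration of hashtag moderation and its obvious weakness.
# The blocklist and variant spellings are hypothetical examples.

BLOCKED_HASHTAGS = {"#thinspiration"}   # illustrative only

def is_blocked(hashtag: str) -> bool:
    return hashtag.lower() in BLOCKED_HASHTAGS

print(is_blocked("#thinspiration"))   # True: exact match is caught
print(is_blocked("#thynspiration"))   # False: a small misspelling slips through
```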

AI moderation is rapidly improving at removing harmful content online. Machine learning, and in particular deep learning, is behind the recent breakthroughs in the field: computers learn to moderate by discovering patterns in data and iteratively adjusting towards a desired outcome (Cambridge Consultants, 2019: 5). Deep neural networks, which enable deep learning, are increasing the complexity of these systems, and algorithm development increasingly depends on them (Cambridge Consultants, 2019: 5).
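
To make “discovering patterns in data and iteratively adjusting towards a desired outcome” a little more concrete, here is a minimal training loop for a toy harmful-content classifier. It assumes PyTorch, and the tiny network, random features and random labels are hypothetical stand-ins; a production moderation model would be far larger and trained on labelled text, images or video.

```python
# Minimal sketch of training a deep-learning content classifier:
# the network adjusts its weights iteratively to fit labelled examples.

import torch
import torch.nn as nn

# Toy data: 8-dimensional feature vectors (e.g. from a text embedder),
# with labels 1 = harmful, 0 = acceptable. Randomly generated here.
features = torch.randn(64, 8)
labels = torch.randint(0, 2, (64, 1)).float()

model = nn.Sequential(          # a small "deep" network: stacked layers
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(100):        # iterative adjustment towards the desired outcome
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()             # compute how each weight contributed to the error
    optimizer.step()            # nudge weights to reduce that error
```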

Societal confidence in AI systems is, however, low (Cambridge Consultants, 2019: 6). People generally tend to be more critical when machines make mistakes, and several controversies have been related to AI and bias.

Community Guidelines and the Challenge in Automated Content Moderation

To understand why automated content moderation is such a relevant topic of discussion, we need to take one step back, to the more basic debate on how content moderation should be done in general.

One of the largest challenges in content moderation is the set of morals and standards behind the decisions to hide or remove posts. Facebook, for example, drew a lot of controversy in 2016 for suspending users who posted pictures of topless Aboriginal women.

This is an interesting example because, whether it was an automated decision or not, it exposed the difficulty of judging community standards, as well as the scrutiny over how standards differ between cultures. Scrolling Facebook last night, I came across a similar example:

Image property: Humans of New York

The reactions this post received were very egalitarian, defending the cultural purpose and meaning of the image. All of this boils down to a tension platform companies face with the physical world: when Zuckerberg insisted that Facebook is not a “media company”, scholars argued that this framing is both strategic and inaccurate, as it removes liability and social obligation (Gillespie, 2018). Social media companies are undeniably influential in our lives, and mistakes in moderation on topics like nudity stick with people.

Community guidelines on social media platforms are also in continuous flux, putting human content moderators in a difficult position to make accurate decisions, in a job that is already extremely challenging given the content that gets flagged. Media coverage of how difficult these jobs are does, however, earn the moderators some sympathy, and people tend to forgive their mistakes.

Automated content moderation could relieve these workers of some of that burden, but even with the research in the field and its fast-paced advances, there is a persistent problem of trust in the systems. The hardest issue for automated content moderation is deciphering cultural and emotional significance and context (Cambridge Consultants, 2019), which keeps humans in the process. In addition, users are in general becoming more sceptical towards social media companies, which in turn calls for strong transparency, and ideally explainability, in whatever automated content moderation strategies are adopted in the future.

 

From Content Moderation to User Interface and… TikTok

Another important aspect of this is how we, as users, are actually able to engage with a platform without seeing harmful content. This leads to a discussion of algorithmic recommendation, and how user interfaces are structured.

Algorithmic recommendation is how platforms sort content, whether for personalisation or to surface popular content across groups. Gillespie argues that these systems are of increasing cultural importance (Gillespie, 2016), a claim that only grows truer, as seen in how groups can form online (Massanari, 2017).

Discussing algorithmic recommendation provides perspective not only on how automated content moderation is developing, but also on how it extends into our personalised experience of the platforms, and its cultural value.

An incredibly interesting example of this is TikTok, a video-sharing app popular with teenagers across the globe, which bases its content suggestions entirely on algorithmic recommendation. Its parent company is ByteDance, a Chinese firm and one of the leading companies in artificial intelligence. TikTok has been downloaded over a billion times since it launched in 2017, and ByteDance is valued at more than seventy-five billion dollars, which is groundbreaking for a startup.

TikTok has also drawn attention repeatedly since it came out, for curious advertising strategies as well as concerns about racism and sexually explicit content. Last month a leak revealed that the app instructs moderators to censor content that mentions Tiananmen Square, Tibetan independence, and Falun Gong, a religious group banned in China. These revelations came partly in the light of the Hong Kong protests, where a search for the city, still, does not reveal any of the tension present in the streets:

Screenshot #1 from my own TikTok search
Screenshot #2 – Seems nice..

If we do a quick Google search we see that the protests are still happening:

Screenshot #3

The app is thus under fire for allegedly advancing Chinese foreign policy. Community guidelines are often culturally established, whether with overt intention or not, and the guidelines for China seem written with the general purpose of according with the socialist system. In response to this criticism, TikTok has stated that it is taking a more localised approach, including local moderators and local moderation policies.

TikTok’s algorithm provides “personalised information flows” with “large scale AI models” (according to TikTok’s website), powering a “Discover” page and a “For You” page through a machine-learning system that analyses user behaviour on each individual video, creating a never-ending feed bound to keep you interested if you use it for long enough.
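
As a rough sketch of what such engagement-driven “For You” ranking could look like, consider the toy example below. The signals, weights and topics are all hypothetical; beyond the marketing description quoted above, TikTok’s actual model is not public. The point is simply that every second of watch time becomes a signal that shapes what comes next.

```python
# Hedged sketch of engagement-driven feed ranking: behaviour on each video
# updates topic affinities, which then rank candidate videos. All weights,
# signals and topics are hypothetical.

from collections import defaultdict
from typing import Dict, List, Tuple

affinity: Dict[str, float] = defaultdict(float)  # one user's topic affinities

def record_interaction(topic: str, watch_fraction: float, liked: bool) -> None:
    """Update affinities from behaviour on a single video (weights are made up)."""
    affinity[topic] += 0.7 * watch_fraction + 0.3 * liked

def for_you_feed(candidates: List[Tuple[str, str]], n: int = 5) -> List[str]:
    """Rank candidate (video_id, topic) pairs by the learned affinities."""
    ranked = sorted(candidates, key=lambda c: affinity[c[1]], reverse=True)
    return [video_id for video_id, _ in ranked[:n]]

# Example: watching dance videos to the end pushes more dance into the feed.
record_interaction("dance", watch_fraction=1.0, liked=True)
record_interaction("news", watch_fraction=0.1, liked=False)
print(for_you_feed([("v1", "news"), ("v2", "dance"), ("v3", "dance")]))
```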

Discussing this example is relevant because it takes into consideration both how an app hides and promotes content based on algorithms, and how politics play an important role in community guidelines. It raises the question of Western values in Western tech companies, and of how content moderation needs to be considered at local levels; thus also how companies like Facebook need to take liability for content moderation. If we are to implement and expand automated content moderation, we first need to recognise the problems content moderation already faces in terms of values and culture.

There’s no simple conclusion, but…

The discussion on automated content moderation is an increasingly complex one, as it in essence touches on freedom of speech, government intervention and the liability of tech companies. We, as users, also have an obligation to stay interested in this discussion, because companies are currently weighing the cost of human content moderation against automated content moderation. In its simplest form, this is a choice about how much power we want to give to algorithms feeding us content that will inevitably hold our attention for longer, and about whether we can trust computers to distinguish harmful content from, say, activist content. Lastly, tech companies need to own up to their obvious cultural influence and take responsibility at local levels in order to achieve their own goals of inclusiveness and diversity on their platforms.

Literature:

Alexander, J. (2019). Indian Lawmakers call for TikTok ban, alleging spread of ‘cultural degradation’ among teens. The Verge. Retrieved from: https://www.theverge.com/2019/2/13/18223590/tiktok-india-ban-legislation-government-teens

Alexander, L. (2016) Facebook’s censorship of Aboriginal bodies raises troubling ideas of ‘decency’. The Guardian. Retrieved from: https://www.theguardian.com/technology/2016/mar/23/facebook-censorship-topless-aboriginal-women

Barlow, J. P. (1996, February 8). A Declaration of the Independence of Cyberspace. Retrieved 27 February 2017, from Electronic Frontier Foundation website: https://www.eff.org/cyberspace-independence

Binns R., Veale M., Van Kleek M., Shadbolt N. (2017) Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation. In: Ciampaglia G., Mashhadi A., Yasseri T. (eds) Social Informatics. SocInfo 2017. Lecture Notes in Computer Science, vol 10540. Springer, Cham. DOI: https://doi.org/10.1007/978-3-319-67256-4_32

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. https://doi.org/10.1177/1461444812440159

Chen, A. (2014) The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed. Wired. Retrieved from: https://www.wired.com/2014/10/content-moderation/

DeNardis, L. (2014). The global war for internet governance. Chapter three: Setting Standards for the Internet. https://doi.org/10.12987/yale/9780300181357.001.0001

Geiger, S. R. (2016) Bot-based collective blocklists in Twitter: the counterpublic moderation of harassment in a networked public space, Information, Communication & Society, 19:6, 787-803, DOI: 10.1080/1369118X.2016.1153700

Gerrard, Y. (2018) Beyond the Hashtag: Circumventing content moderation on Social Media. New Media and Society. 20(12), 4492–4511. https://doi.org/10.1177/1461444818776611

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). New Haven: Yale University Press. ISBN: 9780300235029

Gillespie, T. (2016). #trendingistrending: when algorithms become culture. In Algorithmic cultures: essays on meaning, performance and new technologies (pp. 61–82). Abingdon, Oxon: Routledge, an imprint of the Taylor & Francis Group.

Greenberg, A. (2016). Inside Google’s Internet Justice League and its AI-powered War on Trolls. Wired. Retrieved from: https://www.wired.com/2016/09/inside-googles-internet-justice-league-ai-powered-war-trolls/

Harwell, D. & Romm, T. (2019). TikTok’s Beijing roots fuel censorship suspicion as it builds a huge US audience. The Washington Post. Retrieved from: https://www.washingtonpost.com/technology/2019/09/15/tiktoks-beijing-roots-fuel-censorship-suspicion-it-builds-huge-us-audience/

Hern, A. (2019) Revealed: Catastrophic Effects of Working as a Facebook Moderator. The Guardian. Retrieved from: https://www.theguardian.com/technology/2019/sep/17/revealed-catastrophic-effects-working-facebook-moderator

Hern, A. (2019) Revealed: how TikTok censors videos that do not please Beijing. The Guardian. Retrieved from: https://www.theguardian.com/technology/2019/sep/25/revealed-how-tiktok-censors-videos-that-do-not-please-beijing

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Cambridge Consultants, Ofcom (2019). Use of AI in Online Content Moderation. Internet Research. Retrieved from: https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/online-content-moderation

Sanneh, K. (2014) Censoring Twitter. The New Yorker. Retrieved from: https://www.newyorker.com/culture/cultural-comment/censoring-twitter

Seyfert, R., & Roberge, J. (2016). Algorithmic cultures: essays on meaning, performance and new technologies. Abingdon, Oxon: Routledge, an imprint of the Taylor & Francis Group.

Solon, O. (2017). To Censor or Sanction Extreme Content? Either Way, Facebook Can’t Win. The Guardian. Retrieved from: https://www.theguardian.com/news/2017/may/22/facebook-moderator-guidelines-extreme-content-analysis

Steinmetz, K. (2019). Inside Instagram’s War on Bullying. Time Magazine. Retrieved from: https://time.com/5619999/instagram-mosseri-bullying-artificial-intelligence/

Thompson, N. (2017). Instagram Unleashes an AI System to Blast Away Nasty Comments. Wired. Retrieved from: https://www.wired.com/story/instagram-launches-ai-system-to-blast-nasty-comments/

Tolentino, J. (2019). How TikTok Holds our Attention. The New Yorker. Retrieved from: https://www.newyorker.com/magazine/2019/09/30/how-tiktok-holds-our-attention

Vincent, J. (2018). Instagram is using AI to detect bullying in photos and captions. The Verge. Retrieved from: https://www.theverge.com/2018/10/9/17954658/instagram-ai-machine-learning-detect-filter-bullying-comments-captions-photos
