Automated content moderation

Picture illustrating content moderation

Image: Trending Topics 2019, Flickr, All rights reserved

Introduction

The following essay critically discusses the use of automated content moderation systems and their possible effects, both positive and negative. The first section defines the term automated content moderation, traces the history of the practice, and explains why it is important yet difficult to carry out. It then discusses how governments and the general public may benefit from it, since these systems can detect inappropriate content that might otherwise create social disturbance. The second section presents findings on the problems that applying automated content moderation to media sites may cause. The last section discusses, from my personal perspective, how this innovation has affected me in multiple ways, based on the distinctive content moderation policies of my home country, China. The conclusion is that automated content moderation is a major and necessary function for securing the user experience and reducing the social disturbance that inappropriate content, such as terrorist material, may cause on digital media platforms. However, it has to be regulated; otherwise it may become a tool for companies to gain financial benefits, and governments may also use it to stop providing neutral and unbiased information to the public.

 


Picture showing multiple digital media platforms
Image: Ranking Rescuer Toronto, Flickr, All rights reserved

The definition of automated content moderation

First of all, digital media platforms allow people to make direct contact with others and provide opportunities for users to contribute content. Content moderation can be defined as applying a series of limits to that content: material is removed from the platform when it becomes unacceptable.

The problem emerges as digital media platforms grow.

As the importance of digital media platforms in modern society continues to grow, so does their influence over the public, especially on social media. It is considered crucial to ensure that content cannot be used for negative purposes. As the online environment and the related legal regulations mature, people have become more aware of the need to control what they share, because it can lead to negative or even worse situations (Gupta et al., 2018).

Content moderation is usually supervised and conducted by the companies that run the platforms. They design automatic algorithms that detect material that may include nudity, terrorism, hate speech, and other violent content. Once the platform detects inappropriate information, it automatically deletes the content, and the account may be penalized. According to Gillespie (2018), it is considered the platform's responsibility to protect its users and to ensure the platform is presented in the best way, so that more users are attracted to participate.
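To make this mechanism concrete, below is a minimal sketch of such a detect-and-act pipeline. The category names, the 0.9 threshold, and the classify() stub are all assumptions made for illustration; real platforms rely on trained machine-learning models and far more elaborate policies.

```python
# Illustrative detect-and-act moderation pipeline. All names, categories,
# and thresholds here are hypothetical, not any platform's real system.

from dataclasses import dataclass

BANNED_CATEGORIES = ("nudity", "terrorism", "hate_speech", "graphic_violence")


@dataclass
class Post:
    post_id: str
    author_id: str
    text: str


def classify(text: str) -> dict:
    """Stand-in for a trained classifier: one score per category."""
    # A real system would run an ML model here; this stub only flags an
    # obviously violent phrase so the example stays self-contained.
    score = 1.0 if "attack them" in text.lower() else 0.0
    return {category: score for category in BANNED_CATEGORIES}


def delete_post(post_id: str) -> None:
    print(f"deleted post {post_id}")


def penalize_account(author_id: str) -> None:
    print(f"penalized account {author_id}")  # e.g. a warning or suspension


def moderate(post: Post, threshold: float = 0.9) -> str:
    scores = classify(post.text)
    worst = max(scores, key=scores.get)
    if scores[worst] >= threshold:
        delete_post(post.post_id)
        penalize_account(post.author_id)
        return f"removed ({worst})"
    return "allowed"


print(moderate(Post("p1", "u1", "Good morning, everyone")))  # -> allowed
```

The design point is the order of operations: classification happens first, and the delete and penalize actions fire only when a category score crosses the threshold.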

It is complicated to define the boundaries of public speech.

Online anonymity allows users to publish content without facing real-world punishment, and with the explosive growth of users on digital platforms it has become easier to influence others. For example, ISIS gained a great deal of attention by posting terrorist content on social media and successfully recruited over 30,000 foreign fighters (Brooking & Singer, 2016). This illustrates how challenging it is for platforms to decide the boundaries of public speech: different standards may apply in different regions depending on cultural background and social policy. It is therefore a complicated process to set regulations without depriving individuals of freedom of speech.

 

The benefits of automated content moderation

Well-known media platforms usually use visual recognition tools capable of recognizing videos or images that contain weapons, child pornography, or animal torture, as well as hate speech against specific individuals or groups based on nationality, gender, or race. Moderating such content brings several benefits (a simplified sketch of an image-screening step follows the list below):

  • Such content may influence others, especially teenagers, to imitate dangerous or illegal behavior with serious consequences; removing it limits that risk.
  • Online hate speech easily provokes conflict and argument between groups; hate speech targeting vulnerable groups exacerbates societal tensions.
  • Content moderation can also stop fake news before it creates chaos among the public. For example, during the SARS (severe acute respiratory syndrome) outbreak in China, rumors that eating more salt could effectively prevent the disease caused millions of citizens to buy as much salt as they could before they realized it was false news created and spread by business owners. Fake news causes the public to lose trust in the media and creates social panic, and the rapid development of digital media tools has accelerated its spread; content moderation can effectively stop such news from being propagated.
  • Moreover, governments also benefit from automated content moderation once they gain access to regulate content by the standards they create, as it allows them to reduce the likelihood of societal concerns.
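As a rough illustration of the visual recognition step mentioned above, the sketch below gates an image before it is published. The label set, the 0.95 threshold, and the classify_image() stub are hypothetical stand-ins for a real vision model, not any platform's actual API.

```python
# Illustrative pre-upload image screen; every name and value here is an
# assumption for demonstration purposes only.

UNSAFE_LABELS = ("weapon", "child_abuse", "animal_torture")


def classify_image(image_bytes: bytes) -> dict:
    """Stand-in for a vision model returning a confidence per label."""
    # A production system would run a trained vision model here.
    return {label: 0.0 for label in UNSAFE_LABELS}


def screen_upload(image_bytes: bytes, threshold: float = 0.95) -> bool:
    """Return True to allow the upload, False to block it."""
    scores = classify_image(image_bytes)
    return all(score < threshold for score in scores.values())


print(screen_upload(b"\x89PNG..."))  # -> True: the stub scores everything 0.0
```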

 

Problems with automated content moderation

Furthermore, there is a critical problem with applying automated content moderation: the automatic system may not be reliable enough to make punishment decisions, and its mistakes can affect users in negative ways.

Unreliable automated tools can create errors

  • First, language-processing tools perform best only within the specific domains they were built for; they may be unreliable when other languages are involved. When non-English speakers rely on tools to translate their content into English, the results can be disappointing: translation tools often have lower accuracy, and a false translation of a phrase that is hard to rephrase can create serious problems for the poster. For example, The Guardian reported that a Palestinian man was questioned by Israeli police because his “good morning” post in Arabic on Facebook was machine-translated into “attack them” in Hebrew (Hern, 2017).

Automated decisions can create negative consequences for users

  • The automated system is not reliable enough, and relying on it can create negative consequences for users while risking further marginalization of minority groups that already face discrimination. Platforms such as Twitter and Facebook automatically delete posts or block user accounts once the system detects inappropriate content. However, when the automated detection system makes a mistake, the impact can be substantial for users who depend heavily on social media as their main tool for communicating with family and friends, or for professional bloggers. Losing access to these platforms affects them in multiple aspects, since the platforms are deeply interwoven with their economic and social lives (West, 2018). A simple safeguard against such mistakes is sketched below.
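One common safeguard is to act automatically only when the system is nearly certain and to route borderline cases to human moderators. The sketch below illustrates the idea; the two thresholds are assumed values chosen for illustration, not any platform's real policy.

```python
# Minimal human-in-the-loop safeguard: act automatically only on
# near-certain cases, defer the rest. Thresholds are assumptions.

def decide(violation_score: float,
           act_threshold: float = 0.98,
           review_threshold: float = 0.80) -> str:
    if violation_score >= act_threshold:
        return "remove_automatically"
    if violation_score >= review_threshold:
        return "queue_for_human_review"  # a human moderator makes the call
    return "leave_up"


# A post the model is only 85% sure about is never deleted outright,
# avoiding the kind of false positive described above.
print(decide(0.85))  # -> queue_for_human_review
```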

 

Personal reflection on automated content moderation in daily life

Picture showing the Sina Weibo application
Image: NeochaEDGE, Flickr, All rights reserved.

China, my home country, has strict standards for content moderation; the government has more power over platform decisions, and the rules limit people's freedom of public expression. When a national emergency occurs, dissenting opinions and expression may be blocked under government pressure in order to promote public safety. However, after such censorship people are left in a weaker position to speak, with no legal protections guaranteeing freedom of speech (Langvardt, 2018).

NowThis World (2015), “How Strict Are China’s Censorship Laws?”, retrieved from YouTube

To be more specific, the Chinese social media platform Weibo (an application similar to Twitter) automatically blocks potentially negative comments about major political leaders and democracy protests, along with other sensitive content. Negative political commentary about the nation is mostly not allowed either, and offending accounts may be deleted. Besides, the author and original source of all content on these websites is traceable (Sun, 2017).

As the public becomes more accepting of content moderation norms on media platforms, its freedom of expression and capacity for critical thinking are eroded. Moreover, when the government has the power to decide what information the public receives, it creates information inequality.

Conclusion

In conclusion, monitoring online content is considered vital given the rapid development of digital media platforms over the years. Governments and platform companies agree on using automated content moderation so that inappropriate content on social media is well regulated. The sections above have also noted the difficulties of creating standards for content moderation, such as cultural and social differences. Automated content moderation can benefit users in multiple ways, but it is not fully accurate and can negatively affect users by mistake. Finally, I discussed the influence of China's content moderation policies on my personal life, which deprive me of freedom of expression on certain issues, especially political ones.

 

Reference list:

Hern, A. (2017, October 24). Facebook translates ‘good morning’ into ‘attack them’, leading to arrest. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/oct/24/facebook-palestine-israel-translates-good-morning-attack-them-arrest

Brooking, E. T., & Singer, P. W. (2016, November). War goes viral: How social media is being weaponized. The Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/2016/11/war-goes-viral/501125/

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). New Haven, CT: Yale University Press.

Gupta, D., Sen, I., Sachdeva, N., Kumaraguru, P., & Buduru, A. (2018). Empowering First Responders through Automated Multimodal Content Moderation. 2018 IEEE International Conference on Cognitive Computing (ICCC), 1–8. https://doi.org/10.1109/ICCC.2018.00008

Langvardt, K. (2018). Regulating Online Content Moderation. Georgetown Law Journal, 106(5), 1353–1388. Retrieved from http://search.proquest.com/docview/2100378890/

Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059

Roberts, M. E. (2018). Introduction: Porous censorship. In Censored: Distraction and diversion inside China’s Great Firewall (pp. 1–17). Princeton, NJ: Princeton University Press.

Sun, W. (2017, June 28). China bans streaming video as it struggles to keep up with live content. The Conversation. Retrieved from https://theconversation.com/china-bans-streaming-video-as-it-struggles-to-keep-up-with-live-content-80008

 

 
