The Automation of Moderation

Is Artificial Intelligence the End of Internet Antisocial Behaviour?

Automated moderation is the use of artificial intelligence (AI) software to assist in the moderation of online activities. This means that antisocial online behaviours, such as terrorist recruitment, hate speech and the sharing of violence or nudity, can be prevented, controlled or eliminated.

This technology has arisen to address the problems of human moderation, and its development has benefited some people while disadvantaging others. The technology itself is also imperfect, carrying both strengths and flaws. As such, humans will remain relevant in this field, since some weaknesses of automated moderation can only be resolved by people. Despite this, the continued development of automated moderation is still largely beneficial to society: further development will reduce these flaws, and the burden on human moderators will be alleviated to some extent.

The development of automated moderation

Artificial intelligence has existed as a concept for decades, having been written about in works such as Alan Turing's Computing Machinery and Intelligence and Pamela McCorduck's Machines Who Think. However, it wasn't until the 21st century that the concept really took off in execution.

Through the development of AI, automated moderation has arisen. The technology generally works off large databases and indexes to scan, filter and remove online content that is deemed inappropriate for the public domain. In the 1990s, filtering software emerged for parents and employers to block inappropriate websites, and by 2006 Facebook's automated moderation software was reporting more offensive photos than humans were (Gillespie, 2018). As our knowledge of the technology increases, further advancements are made in the field, allowing for greater speed and accuracy when analysing the harms of online content. Some believe that, given the advancements of the last five years, complete automated moderation could one day be possible.
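To make the database-and-index idea concrete, the sketch below shows one very simplified way such a filter might work: comparing a hash of uploaded content against an index of items already judged inappropriate, plus a keyword blocklist for text. The hash value, blocklist terms and function names are purely illustrative assumptions, not any platform's actual system; real services rely on perceptual hashing and far larger shared databases.

```python
import hashlib

# Hypothetical index of hashes for content already judged inappropriate.
# Real systems use perceptual hashes and large shared databases; this
# sketch uses a plain SHA-256 digest purely for illustration.
KNOWN_BANNED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

# Hypothetical keyword blocklist for text posts.
BANNED_KEYWORDS = {"exampleslur", "examplethreat"}


def is_flagged(content: bytes, text: str = "") -> bool:
    """Return True if the content matches the hash index or the blocklist."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_BANNED_HASHES:
        return True
    return any(word in text.lower() for word in BANNED_KEYWORDS)


print(is_flagged(b"some uploaded image bytes", "a harmless caption"))  # False
```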

Why is this technology necessary?

The development of AI-based moderation is incredibly important to maintaining user safety on internet platforms, as it will relieve the pressures of human moderation. Whilst there is disagreement as to the extent to which automated moderation can take over, it is still important that some of the workload is relieved.

This is due to the incredibly poor conditions under which human content moderators work, and the effects their work has on their mental health, as they often go through hundreds of posts a day, much of which consists of pornographic, violent or hateful material.

In the following video by The Verge, the mental toll of moderation is evident. Workers at Cognizant, a company that moderates for Facebook, are paid only $15 per hour to look through up to 400 posts per day, for six hours a day. Much of this content is graphic, consisting of violence, abuse and the death of both humans and animals. This takes a serious toll on the mental health of moderators, with PTSD and an inability to sleep both very real consequences. The negative experience is compounded by unhygienic working conditions and the pressure of achieving and maintaining an accuracy score of 98%.

The benefits and possibilities

Automated moderation has become increasingly accurate at detecting certain types of antisocial online content. One such example of this is in the realm of counter terrorism.

Terrorist organisations, such as ISIS, have begun using social media and the internet to recruit people and to execute acts of terror. In 2014, ISIS used Twitter to spread news of its movements and successes, creating an image that it was stronger than it actually was. This was achieved through an app that posted tweets on behalf of users about the ISIS invasion, producing 40,000 tweets per day. ISIS also used choreographed execution videos and images designed to spread virally, enhancing the perception of a movement that was strong and beyond control. This contributed to the ISIS capture of Mosul: the city was left uncontested by the Iraqis, despite their helicopters, tanks and 25,000 troops facing only 1,500 ISIS fighters armed with small weapons, because the Iraqis were so disoriented by the news of the ISIS movements that they surrendered the city (Brooking & Singer, 2016).

Pro-ISIS Twitter accounts created by year: another example of how Twitter was used for terrorist recruitment.

Source: Brookings Institute via Forbes

Terrorism in liberal democracies works by undermining the sense of security in the governments it targets, and so national security is focused on the prevention of these activities. There are two main tactics: the first is to protect infrastructure through physical security, and the second is to reduce recruitment and radicalisation so that terrorist plans never come to fruition. This is where moderation comes in, to reduce the level of terrorist recruitment occurring online (McKendrick, 2019).

Through visual recognition, AI can detect the presence of weapons, bodily fluids and facial expressions that indicate violence. It can also determine whether structural damage has been caused by natural disaster or by man-made causes, and can analyse audio to detect gunshots and other non-verbal yet violent sounds. Because AI can detect violence in these ways, terrorist content can be identified and removed without human exposure. This allows for a less traumatic experience for human moderators and a safer online environment. It is also a political benefit, as controlling the terrorist threat provides a sense of safety for nations and their citizens.
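As a rough illustration of how such signals might be combined, the sketch below assumes two hypothetical, pre-trained detectors (one scoring weapons in video frames, one scoring gunshot-like audio) and fuses their outputs into a single flag. The detector functions and the threshold are placeholders invented for this example; a real pipeline would use trained neural networks and many more signals.

```python
from dataclasses import dataclass


def weapon_score(frame: bytes) -> float:
    # Placeholder: a real system would run a trained visual classifier here.
    return 0.0


def gunshot_score(audio: bytes) -> float:
    # Placeholder: a real system would run an audio event detector here.
    return 0.0


@dataclass
class ModerationResult:
    violence_score: float
    flagged: bool


def assess_clip(frames: list, audio: bytes, threshold: float = 0.8) -> ModerationResult:
    """Fuse visual and audio signals into one violence score and compare to a threshold."""
    visual = max((weapon_score(f) for f in frames), default=0.0)
    audible = gunshot_score(audio)
    score = max(visual, audible)  # take the strongest single signal
    return ModerationResult(score, score >= threshold)


print(assess_clip([b"frame-1", b"frame-2"], b"audio-track"))
```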

Why all the haters though?

Despite the increasing speed and accuracy with which online content can be removed, automated moderation also has flaws.

One of the main flaws of automated moderation is that it is prone to mistakes, and there are antisocial behaviours that AI is less capable of moderating. Hate speech and fake news can go undetected, as the technology has no understanding of context, sarcasm or irony. Whilst the technology can keep improving, the software will never entirely understand human culture, and the nuances in material can therefore lead to false positives and false negatives.
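The toy example below (not any platform's real filter, and with an invented blocklist) shows how context-blind keyword matching produces both kinds of error: a post merely reporting on violence is flagged, while coded hostility that avoids the listed words slips through.

```python
# A toy illustration of why keyword matching fails without context.
BLOCKLIST = {"attack", "kill"}


def naive_flag(post: str) -> bool:
    """Flag a post if any of its words appear in the blocklist."""
    words = set(post.lower().split())
    return bool(words & BLOCKLIST)


# False positive: a factual news report is flagged.
print(naive_flag("Police report: protesters were not under attack"))  # True

# False negative: coded or sarcastic hostility slips through.
print(naive_flag("Certain people should simply 'go back where they came from'"))  # False
```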

As such, it is believed that humans will always need to be involved in the process, so that decisions can be made based on cultural nuance, context and constantly updating policies.

Moderators have the power to determine what is and isn’t allowed on the internet, bringing to light the question of free speech.

Source: Taylor Angelo

This also raises the question of who gets to determine what is broadcast and what is not. Some depictions of violence have historical and political significance (Gillespie, 2018), and their removal through automated moderation could be a detriment to society, particularly as it would allow governments to censor content that shows them in a negative light, such as brutality against unarmed protestors. Thus, human moderators are required to make judgements in the grey areas, as seen in the retention or removal of depictions of violence depending on context.

Benefits to society, detriments to platforms

Automated moderation presents a great possibility for effective and accurate content moderation; however, platforms have little incentive to remove the content it detects. Social media platforms are companies that profit from advertising, and as such, are profit-focussed in their actions.

Whilst combining AI, which returns a large number of false positives, with human moderation may be a more effective and accurate solution, it is also more expensive. Not only would the software costs, combined with the cost of human labour, reduce company profits, but there is also little incentive for companies to remove content, because platforms such as Facebook still make a great deal of money from the advertisements that surround it. As such, the removal of terrorist content would be highly beneficial to society, but would be a detriment to the ability of social platforms to make large profits.
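A minimal sketch of what such a combined system might look like is shown below. The thresholds and routing function are assumptions for illustration, not any platform's actual policy: the model acts automatically only on clear-cut cases and routes everything uncertain, including likely false positives, to a human review queue.

```python
# Minimal sketch of human-in-the-loop moderation with assumed thresholds.
def route(post_id: str, ai_score: float,
          remove_above: float = 0.95, ignore_below: float = 0.10) -> str:
    """Decide whether a post is removed, left up, or queued for a human."""
    if ai_score >= remove_above:
        return f"{post_id}: removed automatically"
    if ai_score <= ignore_below:
        return f"{post_id}: left up"
    return f"{post_id}: sent to human review queue"


print(route("post-001", 0.97))  # removed automatically
print(route("post-002", 0.45))  # sent to human review queue
print(route("post-003", 0.02))  # left up
```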

Therefore, the development of automated moderation brings benefits to society, if used in conjunction with human moderation, but has a negative effect on social media companies.

So is this the future?

Overall:

  • Content moderation is very important for protecting online users from harmful material and toxic behaviours.
  • AI could minimise the stresses on human moderators by dealing with certain types of content.
  • However, the technology is not currently able to relieve the burdens of human moderation entirely.
  • A continued need for humans exists as the software is not able to understand cultural nuances. This may change in the future as the technology continues to improve.
  • AI's potential is evident in the ability of visual and audio recognition software to detect signs of violence, enabling more effective moderation of terrorist recruitment and content.
  • There is a lack of economic incentive for companies to implement improved AI software, which means the optimal combination of both human and automated moderation is not currently used.

Therefore, despite its flaws in accuracy, the technology is largely beneficial to society and to human moderation teams, and it should continue to be developed.

Reference List

Augis, C. (2019). Evolution of AI: Past, present, future. Retrieved from https://medium.com/datadriveninvestor/evolution-of-ai-past-present-future-6f995d5f964a

Brooking, E. T., & Singer, P. W. (2016). War goes viral. The Atlantic Monthly, 318, 70. Retrieved from http://ezproxy.library.usyd.edu.au/login?url=https://search-proquest-com.ezproxy1.library.usyd.edu.au/docview/1858228044?accountid=14757

Buni, C., & Chemaly, S. (n.d.). The secret rules of the internet. Retrieved from https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech

Dickson, B. (2019). Human help wanted: Why AI is terrible at content moderation. Retrieved from https://www.pcmag.com/news/369398/human-help-wanted-why-ai-is-terrible-at-content-moderation

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. Retrieved from http://search.ebscohost.com.ezproxy1.library.usyd.edu.au/login.aspx?direct=true&db=nlebk&AN=1834401&site=ehost-live

Leetaru, K. (2019). The problem with AI-powered content moderation is incentives not technology. Retrieved from https://www.forbes.com/sites/kalevleetaru/2019/03/19/the-problem-with-ai-powered-content-moderation-is-incentives-not-technology/#2cedae7455b7

McKendrick, K. (2019). Artificial intelligence prediction and counterterrorism. Retrieved from https://www.chathamhouse.org/sites/default/files/2019-08-07-AICounterterrorism.pdf

The Verge. (2019). Inside the traumatic life of a Facebook moderator. Retrieved from https://youtu.be/bDnjiNCtFk4

Vincent, J. (2019). AI won't relieve the misery of Facebook's human moderators. Retrieved from https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms
