
Facebook Moderation: Meaning and Methods

Christopher Frawley
Technology
26th October 2020

Note: The views and opinions expressed in blog/editorial posts are those of the author. They do not purport to reflect the views or opinions of Misbar.

Moderation and content review are essential to maintaining any public internet forum. Without them, the potential for malcontents to spread misinformation, hatred, and disturbing content is nearly endless. Facebook, by far the largest social media platform with 2.7 billion monthly active users, has come under fire in recent years over its system of moderation. Because of its importance and prominence, we at Misbar decided to examine the methods Facebook uses to moderate its content and explain why that moderation matters.

Why Does Social Media Require Moderation?

The most prominent advantage of social media is also its greatest drawback: anyone can use it to say anything. While most people use a platform like Facebook to connect with friends and family, share pictures, and so on, it is also an invaluable tool for gathering like-minded individuals in an easily accessible meeting place. That is not usually a problem, but things can quickly spiral out of control when extremists congregate and form echo chambers of dangerous rhetoric and misinformation. Then there are the agents of chaos, who simply wish to upset as many people as possible with graphic imagery and offensive statements.

According to this New Yorker article, Facebook originally relied on a loose collection of rules about what it deemed unacceptable on its platform. Examples included deleting nudity and not allowing people to “say nice things about Hitler.” As the platform grew exponentially after its launch in 2004, a more formal document was needed to spell out what is and isn’t acceptable to post.

These days, Facebook uses a comprehensive list of Community Standards as the constitution governing its content. The introduction names the values the company considers most important to the Facebook community (authenticity, safety, privacy, and dignity) before listing in detail the material it deems improper. Such items include hate speech, violent/graphic content, adult nudity, sexual activity, sexual solicitation, and cruelty/insensitivity, all grouped under “Objectionable Content.” Also covered are misrepresentation, inauthentic behavior, false news, and manipulated media; in other words, misinformation. While Facebook claims to understand the grave responsibility of handling misinformation, its actions in this area have been widely criticized.

At the more extreme end of the spectrum are radical groups who use Facebook as a meeting place. White supremacists in particular have been a notorious thorn in Facebook’s side. Because the guidelines are publicly posted and somewhat vague, it is relatively easy to skirt the rules, and if one group is disbanded and its users are blocked, it is easy to create new accounts and start new groups.

How Does Moderation Actually Work?

There are several different kinds of moderation, including pre-moderation, post-moderation, reactive moderation, and user-dependent moderation. Of these techniques, pre-moderation (which screens content before it is actually posted) is the most proactive, but also the least viable given the sheer volume of user-generated content on Facebook.
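To make the distinction concrete, here is a minimal, hypothetical sketch of pre- versus post-moderation in Python. The policy list, function names, and keyword check are invented for illustration; they do not reflect Facebook's actual systems or classifiers.

```python
# Hypothetical sketch of pre- vs. post-moderation; not Facebook's real pipeline.

BANNED_TERMS = {"example_slur", "graphic_violence_marker"}  # placeholder policy list

def violates_policy(text: str) -> bool:
    """Naive keyword check standing in for a real policy classifier."""
    return any(term in text.lower() for term in BANNED_TERMS)

def pre_moderate(post: str) -> bool:
    """Pre-moderation: screen BEFORE publishing; only clean posts go live."""
    return not violates_policy(post)

def post_moderate(published_posts: list[str]) -> list[str]:
    """Post-moderation: publish first, then sweep and remove violations afterwards."""
    return [p for p in published_posts if not violates_policy(p)]

if __name__ == "__main__":
    queue = ["hello friends", "post containing example_slur"]
    print([p for p in queue if pre_moderate(p)])  # violating post never appears
    print(post_moderate(queue))                   # same result, but only after exposure
```

Even this toy example shows the trade-off: pre-moderation prevents any exposure but requires reviewing every post before it appears, which is exactly why it scales so poorly.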

In order to understand how content is moderated, it is important to understand the almighty algorithm. Since it is impossible to review every piece of user content manually, Facebook and other social media platforms rely on algorithms to sort everything. According to Roger McNamee, a former associate of Mark Zuckerberg:

“Facebook’s algorithms promote extreme messages over neutral ones, which can elevate disinformation over information, conspiracy theories over facts. Like-minded people can share their views, but that can also block out any fact or perspective with which they disagree.”
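To see why engagement-driven ranking tends to behave this way, consider a toy sketch (purely illustrative, not Facebook's actual ranking code) in which a feed is ordered solely by predicted engagement, with no penalty for inaccuracy or extremity.

```python
# Toy illustration of engagement-only ranking; posts and scores are invented.

posts = [
    {"text": "Local bake sale this weekend",        "predicted_engagement": 0.02},
    {"text": "Balanced summary of a policy debate", "predicted_engagement": 0.05},
    {"text": "Outrage-bait conspiracy claim",       "predicted_engagement": 0.40},
]

# Rank strictly by expected clicks, comments, and shares.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f'{post["predicted_engagement"]:.2f}  {post["text"]}')
# The conspiracy claim lands at the top: the dynamic McNamee describes.
```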

In addition to employing their own moderators, social media platforms like Facebook rely on third-party companies such as Cogito for dedicated moderation of specific categories of content (in this case, erotic and explicit material).

Consequences & Complications

Seldom considered in this controversy are the moderators themselves, whose job requires them to witness extremely graphic content on a regular basis. Social media moderators have the unpleasant and often thankless task of viewing as much negative content as they can and deciding what deserves the ban-hammer. The work takes a real toll on their emotional and mental health. So great was the toll on Facebook’s moderators that in May of this year the company agreed to a $52 million settlement of a class-action lawsuit the moderators had filed in 2018.

In light of the pandemic, it has become more important than ever to curtail potentially dangerous medical misinformation. In August, Facebook removed an incredible 7 million posts and flagged 98 million more that contained COVID-19 misinformation. The amount of misinformation allowed to spread was due in part to a shortage of on-hand moderators, which also caused a spike in violations of other Community Standards. Again, this was a reactive measure: by the time these posts were taken down, millions of people could have viewed them and acted on inaccurate medical advice.

As a nominally neutral platform, Facebook has the dual responsibility of giving everyone a voice while also curtailing misuse of its site. The company has frequently been accused of bias by all sides of the political spectrum; in trying to please everyone, it pleases very few. It is in Facebook’s best interest to remain as neutral as possible, considering the sheer size and diversity of its audience.

An Impossible Task

Given the enormous ratio of users to moderators on major social media sites, attempting to review absolutely everything is a virtually impossible task. Additionally, Facebook (like other prominent social media platforms) must confront a very important question: what do you do when social media has become the most important political platform? What is the appropriate response when a major, democratically elected public figure uses Facebook or Twitter or Instagram to further their agenda through misinformation and deceit?

By allowing misinformation to germinate on its platform, Facebook risks causing real harm to people. Yet if it were to begin banning public figures, many would undoubtedly see it as an act of censorship. As of right now, there is no clear answer that everyone can live with.

Despite Facebook’s claims that it is fighting hate speech, misinformation, and the like, several massive problems remain. For starters, it hasn’t been working: a team of civil rights experts, commissioned by Facebook itself in 2018 to audit the platform, recently concluded that the company has not made significant progress in tackling the alarming levels of hate speech rampant on the site.

Secondly, Facebook’s hate speech problem is strictly an ethical one, not an economic one. Engagement is, and always has been, its main priority. If an extremist uses the platform and attracts lots of attention, and other users see that content alongside advertisements, Facebook is quite literally profiting from potentially dangerous rhetoric. Let’s not forget that Facebook is a publicly traded company with a market value of roughly $753 billion, and that Mark Zuckerberg, its CEO, is one of the wealthiest people in the world, with a net worth of $104.5 billion. It is not a humanitarian organization; it exists to make money.

Conclusion

Facebook is experiencing a very turbulent year. On one hand, it has expanded its business and revenue considerably. On the other, the FTC is currently investigating the company for allegedly maintaining an illegal monopoly. That, combined with its enormous user base and the havoc that base can potentially cause, has created an unprecedented powder keg waiting to go off. Through Facebook and other social media platforms, we can see the conflict inherent to free speech: allowing people to express themselves as freely as they want, while also recognizing when they go too far. This ever-shifting line is important for all of us to understand if social media is to remain the public forum of information and discourse.
