The Leaked Facebook Content Guidelines Show One Thing: No Policy Can Ever Completely Monitor User Behaviour

The Guardian has recently revealed some of the policy guidelines followed by Facebook content moderators. They often seem arbitrary and inconclusive, and it is perhaps not surprising that Facebook was keen to keep them under wraps. For example, the rulebook allows a user to post a comment such as “fuck off and die”, but “someone shoot Trump” should be deleted. Photos of non-sexual physical abuse and bullying of children should only be deleted if there is a sadistic or celebratory element to them. “Handmade” art showing sexuality is allowed, while “digital” art is prohibited. Facebook has constructed a spectrum of problematic content: some things pass through its filters and others don’t. But how do individual moderators make such decisions? How is something classified as credible or not credible?

What this comes down to is a perception of threatening or disturbing content as either too generic to pose any risk, or not credible enough to have serious consequences. Content may be disagreeable or even violent, for example, without necessarily violating Facebook’s so-called ‘community standards’. This is where the problems with Facebook’s approach become evident.

Facebook is a big data platform of content accumulation: it has 2 billion users worldwide, and an estimated 1.3m posts are shared every minute. One would think this makes content moderation difficult, if not impossible. To police the sometimes fine line between free speech and hate speech or otherwise problematic content, Facebook has teams of moderators across the world who, as The Guardian notes, are employed by subcontractors and have to scan through hundreds of disturbing images in each shift. This not only has consequences for their mental health, as the recent court case brought against Microsoft by content moderators demonstrated, but also shows that they are being asked to achieve an almost impossible task: spending a few seconds per image or piece of content to determine whether it should be censored. This approach is wrong. Facebook would do well to hire more moderators who are properly trained and can spend sufficient time on each case, rather than having to make decisions in split seconds.

The leaked policy guidelines also illustrate Facebook’s approach: design a set of generic rules that can then be applied to individual cases. Any policy is, in essence, a set of rules and guidelines that apply to a group of people (in this case Facebook users) in order to establish structures and manage content. But Facebook has remained rather opaque about how much agency individual moderators have in deciding about a particular user post, for example. Facebook should release its full moderation policies and enter into a dialogue with users and experts, rather than drafting a set of rules it has tried to keep secret.

There is another significant aspect of the internal rulebook. One of the leaked documents states that “people use violent language to express frustration online” and feel “safe to do so” on Facebook.

“They feel that the issue won’t come back to them and they feel indifferent towards the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face.”

These sentences show that Facebook managers regard the platform as a container which can hold users’ frustrations, aggression, hatred and violent language. Facebook is seen as a safe space in which all opinions can be expressed and users can vent their anger. This is in line with research that has examined Facebook user posts in more detail. I researched posts that criticised the platform for various reasons (e.g. disappearing content, unwanted harassment), and what they all had in common was a sense among users that Facebook was not doing enough to help them with their particular problems. I argued that Facebook can be seen as a platform where users can get things off their chests, a function the platform itself facilitates. The quotes above illustrate that Facebook seems to see itself in a similar fashion. This may be true to some extent. But when it comes to violent language that expresses the desire to kill someone, the consequences may be very real, and the question is whether such statements should be moderated and in what ways.

“We should say that violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective language such as ‘I’m going to kill you’ or ‘Fuck off and die’ is not credible and is a violent expression of dislike and frustration,” state the policy documents leaked to The Guardian.

Expressions such as “most often” or “reasonable ground” expose the vagueness and opacity of Facebook’s moderation policies. Facebook now needs to release its policies in full and enter into a constructive dialogue about whether and how they should be changed. The same goes for the working conditions of the people who moderate its content.

Beginning a discussion about moderation policies would mean that Facebook recognises and values its users and staff beyond mere economic metrics.