The publication by The Guardian of Facebook’s ‘moderation’ documents reveals the complexity of the rules Facebook has developed to guide its moderators as to what should and should not be on the site. What is interesting about these revelations is how Facebook attempts to position itself as a platform that isn’t responsible for what people post on the site, and will only intervene when its community rules have been breached.
Facebook has about two billion users, who use the site almost like an online town square. In the real world, public discourse is usually subject to laws relating to freedom of expression and offending public morals, determined by our governments. Western democratic governments are generally subject to rules that hold them to account when they overstep the mark, ensuring that the government does not arbitrarily decide who speaks and what is said; a government cannot shut down a newspaper because it doesn’t like what it says. Facebook has no such restrictions.
The rules Facebook has developed are strikingly intricate. For instance, Facebook doesn’t let users suggest that someone should shoot Trump, but it allows users to post “unless you stop bitching I’ll have to cut your tongue out,” because the latter is not considered a credible threat; the reasoning is that people use violent language online in a facetious way, and a threat is only treated as credible when more specific language is used. Facebook passes judgments on plenty of such scenarios, from child abuse, to sexual content, to violence.
These judgments have included allowing a man to live-stream himself killing his 11-month-old daughter, and removing pictures of mothers breastfeeding. Although Facebook’s policy is to allow pictures of breastfeeding, it has only 4,500 moderators to apply these complex rules to the posts of two billion users, and it is understandable that under such pressure mistakes happen, and a breastfeeding photo gets mistaken for prohibited nudity.
The reasons why the rules are complex are themselves complex, but part of it is that Facebook’s users come from many countries, with many different laws and regulations, and Facebook wants to make sure that no user comes across material that would make them reconsider using the site again. So, although we may think nipples are fine, a fundamentalist Christian may think very differently. Facebook understood this, and calculated that it would be more beneficial to cater to the fundamentalist’s sensibilities than to the preferences of liberal Europeans.
We are increasingly living our public lives in the online sphere, a sphere controlled by a company with no human rights obligations, no obligation to act fairly and no appeal mechanism if you are unhappy with a decision. In theory, Facebook could block you for no reason and be under no obligation to reinstate you, or even to offer an explanation why. What these documents show is a company creating law without accountability, keeping those rules from us, and presiding over what we can and can’t say. By interacting in a private/public space, we are giving up our own ability to determine the rules of engagement, and submitting to rules determined by a company. If we don’t like it, we can leave, sure. But no one leaves, because interacting virtually is now as important to us in creating community as interacting in real life. We are given no choice, and little avenue for redress. Whatever improvements Facebook makes can also be taken away, without any consequences for Facebook.
Is the restriction of our rights to expression the price we pay for being part of a global community, or is there a better way of organising ourselves online, one where the actions of the site owner are subject to some form of accountability? Are we willing to sacrifice the ability to create our own public space for the convenience of communicating over Facebook? These are the questions that these documents raise.