Oversight Board overturns Meta’s original decision in ‘Reclaiming Arabic words’ case (2022-003-IG-UA)
The Oversight Board has overturned Meta’s original decision to remove an Instagram post which, according to the user, showed pictures of Arabic words that can be used in a derogatory way towards men with “effeminate mannerisms.” The content was covered by an exception to Meta’s Hate Speech policy and should not have been removed.
About the case
In November 2021, a public Instagram account which describes itself as a space for discussing queer narratives in Arabic culture posted a series of pictures in a carousel (a single Instagram post that can contain up to 10 images with a single caption). The caption, written in both Arabic and English, explained that each picture shows a different word that can be used in a derogatory way towards men with “effeminate mannerisms” in the Arabic-speaking world, including the terms “zamel,” “foufou,” and “tante/tanta.” The user stated that the post intended “to reclaim [the] power of such hurtful terms.”
Meta initially removed the content for violating its Hate Speech policy but restored it after the user appealed. After being reported by another user, Meta then removed the content again for violating its Hate Speech policy. According to Meta, before the Board selected this case, the content was escalated for additional internal review which determined that it did not, in fact, violate the company’s Hate Speech policy. Meta then restored the content to Instagram. Meta explained that its initial decisions to remove the content were based on reviews of the pictures containing the terms “z***l” and “t***e/t***a.”
The Board finds removing this content to be a clear error which was not in line with Meta’s Hate Speech policy. While the post does contain slur terms, the content is covered by an exception for speech “used self-referentially or in an empowering way,” as well as an exception which allows the quoting of hate speech to “condemn it or raise awareness.” The user’s statements that they did not “condone or encourage the use” of the slur terms in question, and that their aim was “to reclaim [the] power of such hurtful terms,” should have alerted the moderator to the possibility that an exception may apply.
For LGBTQIA+ people in countries which penalize their expression, social media is often one of the only means to express themselves freely. The over-moderation of speech by users from persecuted minority groups is a serious threat to their freedom of expression. As such, the Board is concerned that Meta is not consistently applying exceptions in the Hate Speech policy to expression from marginalized groups.
The errors in this case, in which three separate moderators determined that the content violated the Hate Speech policy, indicate that Meta’s guidance to moderators assessing references to derogatory terms may be insufficient. The Board is concerned that reviewers may lack the capacity or training needed to prevent the kind of mistake seen in this case.
Providing guidance to moderators in English on how to review content in non-English languages, as Meta currently does, is innately challenging. To help moderators better assess when to apply exceptions for content containing slurs, the Board recommends that Meta translate its internal guidance into dialects of Arabic used by its moderators.
The Board also believes that to formulate nuanced lists of slur terms and give moderators proper guidance on applying exceptions to its slurs policy, Meta must regularly seek input from minorities targeted with slurs on a country and culture-specific level. Meta should also be more transparent around how it creates, enforces, and audits its market-specific lists of slur terms.
The Oversight Board’s decision
The Oversight Board overturns Meta’s original decision to remove the content.
As a policy advisory statement, the Board recommends that Meta:
- Translate the Internal Implementation Standards and Known Questions into dialects of Arabic used by its content moderators. Doing so could reduce over-enforcement in Arabic-speaking regions by helping moderators better assess when exceptions for content containing slurs are warranted.
- Publish a clear explanation of how it creates its market-specific slur lists. This explanation should include the processes and criteria for designating which slurs and countries are assigned to each market-specific list.
- Publish a clear explanation of how it enforces its market-specific slur lists. This explanation should include the processes and criteria for determining precisely when and where the slurs prohibition will be enforced: whether it applies to posts originating geographically from the region in question, to posts originating outside but relating to that region, and/or to all users in the region regardless of the geographic origin of the post.
- Publish a clear explanation of how it audits its market-specific slur lists. This explanation should include the processes and criteria for removing slurs from or keeping slurs on Meta’s market-specific lists.
For further information:
To read the full decision, click here.
To read a synopsis of public comments for this case, please click the attachment below.