New Decision Sets Out Nuanced Approach to Allow Content Criticizing State Actions Through Nationality-Based Criminal Allegations
September 25, 2024
The Board has reviewed three cases together, all containing criminal allegations made against people based on nationality. In overturning one of Meta’s decisions to remove a Facebook post, the Board has considered how these cases raise the broader issue of how to distinguish content that criticizes state actions and policies from attacks against people based on their nationality. In making recommendations to amend Meta’s Hate Speech policy and address enforcement challenges, the Board has opted for a nuanced approach that works for moderation at scale, with guardrails to prevent negative consequences. As part of the relevant Hate Speech rule, Meta should develop an exception for narrower subcategories that use objective signals to determine whether the target of such content is a state or its policies, or a group of people.
About the Cases
In the first case, a Facebook post described Russians and Americans as “criminals,” with the user calling the latter more “honorable” because they admit their crimes in comparison with Russians who “want to benefit from the crimes” of Americans. This post was sent for human review by Meta’s automated systems, but the report was automatically closed, so the content remained on Facebook. Three months later, when Meta selected this case to be referred to the Board, Meta’s policy subject matter experts decided the post did violate the Hate Speech Community Standard and removed it. Although the user appealed, Meta decided the content removal was correct following further human review.
For the second case, a user replied to a comment made on a Threads post. The post was a video about the Israel-Gaza conflict and included a comment saying, “genocide of terror tunnels?” The user’s reply stated: “Genocide … all Israelis are criminals.” This content was sent to human review by Meta’s automated systems and then removed for violating the Hate Speech rules.
The third case concerns a user’s comment on an Instagram post in which they described “all Indians” as “rapists.” The original Instagram post shows a video in which a woman is surrounded by men who appear to be looking at her. Meta removed the comment under its Hate Speech rules.
All three cases were referred to the Board by Meta. The challenges of handling criminal allegations directed at people based on nationality are particularly relevant during crises and conflict, when they “may be interpreted as attacking a nation’s policies, its government or its military rather than its people,” according to the company.
Key Findings
The Board finds that Meta was incorrect to remove the Facebook post in the first case, which mentions Russians and Americans, because there are signals indicating the content is targeting countries rather than citizens. Meta does not allow “dehumanizing speech in the form of targeting a person or group of persons” based on nationality by comparing them to “criminals,” under its Hate Speech rules. However, this post’s references to crimes committed by Russians and Americans are most likely targeting the respective states or their policies, a conclusion confirmed by an expert report commissioned by the Board.
In the second and third cases, the majority of the Board agrees with Meta that the content did break the rules by targeting persons based on nationality, with the references to “all Israelis” and “all Indians” indicating people are being targeted. There are no contextual clues that either Israeli state actions or Indian government policies were being criticized in the content. Therefore, the content should have been removed in both cases. However, a minority of the Board disagrees, noting that content removal in these cases was not the least intrusive means available to Meta to address the potential harms, and that Meta therefore failed to satisfy the principles of necessity and proportionality in removing the content.
On the broader issue of policy changes, the Board believes a nuanced and scalable approach is required to protect relevant political speech without increasing the risk of harm against targeted groups. First, Meta should identify specific and objective signals that would reduce both wrongful takedowns and the amount of harmful content left up.
Without providing an exhaustive list of signals, the Board determines that Meta should allow criminal allegations when directed at a specific group likely to serve as a proxy for the state, such as police, military, army, soldiers, government and other state officials. Another objective signal would relate to the nature of the crime being alleged, such as atrocity crimes or grave human rights violations, which can be more typically associated with states. This would mean that posts in which certain types of crime are linked to nationality would be treated as political speech criticizing state actions and remain on the platform.
Additionally, Meta could consider linguistic signals to distinguish between political statements and attacks against people based on nationality. While such distinctions will vary across languages, making the context of posts even more critical, the Board suggests the presence or absence of certain words could be such a signal. For example, a word such as “all” (“all Americans commit crimes”) could indicate the user is making a generalization about an entire group of people, rather than their nation state.
Having a more nuanced policy approach will present enforcement challenges, as Meta has pointed out and the Board acknowledges. The Board notes that Meta could create lists of actors and crimes very likely to reference state policies or actors. One such list could include police, military, army, soldiers, government and other state officials. For photos and videos, reviewers could look for visual clues in content, such as people wearing military uniform. When such a clue is combined with a generalization about criminality, this could indicate the user is referring to state actions or actors, rather than comparing people to criminals.
The Board urges Meta to seek enforcement measures aimed at user education and empowerment when limiting freedom of expression. In response to one of the Board’s previous recommendations, Meta has already committed to sending notifications to users of potential Community Standard violations. The Board considers this implementation an important step towards user education and empowerment on Meta’s platforms.
The Oversight Board’s Decision
The Oversight Board overturns Meta’s decision to take down the content in the first case, requiring the post to be restored. For the second and third cases, the Board upholds Meta’s decisions to take down the content.
The Board recommends that Meta:
- Amend the Hate Speech Community Standard, specifically the rule that does not allow “dehumanizing speech in the form of comparisons to or generalizations about criminals” directed at people based on nationality, to include an exception along the following lines: “Except when the actors (e.g., police, military, army, soldiers, government, state officials) and/or crimes (e.g., atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court) imply a reference to a state rather than targeting people based on nationality.”
- Publish the results of internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech policy. Results should be provided in a way that allows these assessments to be compared across languages and/or regions.
Note:
In July 2024, as part of its update on a policy forum about speech using the term “Zionist,” Meta stated it had referred cases to the Board to seek guidance on “how to treat comparisons between proxy terms for nationality (including Zionists) and criminals (e.g. ‘Zionists are war criminals’).” The Board takes this opportunity to clarify that none of the three cases reviewed as part of this decision includes the term “Zionists,” nor does the decision discuss use of the term.
For Further Information
To read public comments for this case, click here.