Oversight Board Overturns Meta's Original Decisions in United States Posts Discussing Abortion Cases

The Oversight Board has overturned Meta’s original decisions to remove three posts discussing abortion and containing rhetorical uses of violent language as a figure of speech. While Meta acknowledges its original decisions were wrong and none of the posts violated its Violence and Incitement policy, these cases raise concerns about whether Meta’s approach to assessing violent rhetoric is disproportionately impacting abortion debates and political expression. Meta should regularly provide the Board with the data that it uses to evaluate the accuracy of its enforcement of the Violence and Incitement policy, so that the Board can undertake its own analysis.

About the Cases

The three abortion-related pieces of content considered in this decision were posted by users in the United States in March 2023.

In the first case, a user posted an image of outstretched hands, overlaid with the text, “Pro-Abortion Logic” in a public Facebook group. The post continued, “We don’t want you to be poor, starved or unwanted. So we’ll just kill you instead.” The group describes itself as supporting the “sanctity of human life.”

In the other two cases, both users’ posts related to news articles covering a proposed bill in South Carolina that would apply state homicide laws to abortion, meaning the death penalty would be allowed for people getting abortions. In one of these posts, on Instagram, the image of the article headline was accompanied by a caption referring to the South Carolina lawmakers as being “so pro-life we’ll kill you dead if you get an abortion.” The other post, on Facebook, contained a caption asking for clarity on whether the lawmakers’ position is that “it’s wrong to kill so we are going to kill you.”

After Meta’s automated systems, specifically a hostile speech classifier, identified the content as potentially harmful, all three posts were sent for human review. Across the three cases, six out of seven human reviewers determined the posts violated Meta’s Violence and Incitement Community Standard because they contained death threats. The three users appealed the removals of their content. When the Board selected these cases, Meta determined its original decisions were wrong and restored the posts.

Key Findings

The Board concludes that none of the three posts can be reasonably interpreted as threatening or inciting violence. While each uses some variation of “we will kill you,” expressed in a mock first-person voice to emphasize opposing viewpoints, none of the posts expresses a threat or intent to commit violence. In these three cases, six out of seven human moderators made mistakes in the application of Meta’s policies. The Board has considered different explanations for the errors in these cases, which may represent, as Meta’s responses suggest, a small and potentially unavoidable subset of mistaken decisions on posts. It is also possible that the reviewers, who were not from the region where the content was posted, failed to understand the linguistic or political context, and to recognize non-violating content that used violent words. Meta’s guidance may also be lacking, as the company told the Board that it does not provide any specific guidance to its moderators on how to address abortion-related content as part of its Violence and Incitement policy.

Discussion of abortion policy is often highly charged and can include genuine threats, which Meta prohibits. It is therefore important that Meta ensures its systems can reliably distinguish between such threats and non-violating, rhetorical uses of violent language.

Since none of these cases is ambiguous, the errors suggest there is scope for improvement in Meta’s enforcement processes. While such errors may limit expression in individual cases, they can also create cyclical patterns of censorship, as machine-learning models trained on present-day abusive content risk reproducing the mistakes and biases embedded in past enforcement decisions. Additionally, these cases show that mistakenly removing content that does not violate Meta’s rules can disrupt political debate over the most divisive issues in a country, thereby complicating a path out of division.

Meta has not provided the Board with sufficient assurance that the errors in these cases are outliers, rather than being representative of a systemic pattern of inaccuracies.

The Board believes that relatively simple errors like those in these cases are likely areas in which emerging machine learning techniques could lead to marked improvements. It is also supportive of Meta’s recent improvement to the sensitivity of its violent speech enforcement workflows. However, the Board expects more data to assess Meta’s performance in this area over time.

The Oversight Board’s Decision

The Oversight Board overturns Meta’s original decisions to remove three posts discussing abortion.

The Board recommends that Meta:

  • Provide the Board with the data that it uses to evaluate the enforcement accuracy of its Violence and Incitement policy. This information should be sufficiently comprehensive to allow the Board to validate Meta’s argument that the types of errors in these cases are not the result of any systemic problems with Meta’s enforcement processes.

For Further Information

To read the full decision, click here.

To read a synopsis of public comments for this case, click here.