Multiple Case Decision

Rhetorical Threats Against Authorities

4 cases included in this bundle

Overturned

IG-NCZ5E9W0

Case about violence and incitement on Instagram

Platform: Instagram
Topic: Freedom of expression, Governments, Politics
Standard: Violence and incitement
Location: Italy
Date: Published on January 29, 2026
Overturned

FB-FSSEO67W

Case about violence and incitement on Facebook

Platform: Facebook
Topic: Freedom of expression, Governments, Politics
Standard: Violence and incitement
Location: Ethiopia
Date: Published on January 29, 2026
Overturned

FB-K4GVOGGB

Case about violence and incitement on Facebook

Platform: Facebook
Topic: Freedom of expression, Governments, Politics
Standard: Violence and incitement
Location: Pakistan
Date: Published on January 29, 2026
Overturned

FB-XYOQF1VK

Case about violence and incitement on Facebook

Platform: Facebook
Topic: Freedom of expression, Governments, Politics
Standard: Violence and incitement
Location: Ukraine
Date: Published on January 29, 2026

Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement.

Summary

Four users appealed Meta’s decision to remove their Facebook and Instagram posts containing rhetorical threats against authorities. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored all four posts.

About the Cases

In July and August 2025, four users from different countries (Ethiopia, Pakistan, Ukraine and Italy) posted content containing rhetorical threats against authorities, which Meta initially removed for violating its Violence and Incitement Community Standard.

In the first case, a user commented in Amharic under a Facebook post featuring photos of Ethiopian Prime Minister Abiy Ahmed with Chinese Premier Li Qiang. The post also includes Prime Minister Ahmed’s message on the 55th anniversary of Ethio-China relations and the resulting benefits of their cooperation. The comment expressed support for Ethiopia’s claim to the Port of Assab, located in Eritrea. The user also mentioned that they hoped for peace and collaboration between Ethiopia and Eritrea, while stating that “This Nazi Isaias” [Eritrean President Isaias Afwerki] “should be eliminated.” In their appeal to the Board, the user mentioned that their post does not include “sensitive” words or statements.

The second case involves a Facebook post in Urdu criticizing corruption in public sector recruitment in Balochistan, a province of Pakistan. The post includes a photo of a person walking barefoot in the desert alongside a donkey, highlighting that poverty in the province is “increasing day by day.” The user also accuses government officials of accepting bribes to hire unqualified candidates for government jobs and states: “May Allah damn these corrupt and conscienceless officers who sell away our rights for money.” Meta originally translated this statement into English as “May Allah drown those bastards, shameless and unscrupulous officers who sell our rights on us for money.” The caption ends with a crying face emoji. In their statement to the Board, the user explained that this is “an important and serious issue in Pakistan,” and that they only write about the reality of their country.

In the third case, a user posted in Ukrainian a photo of two women wearing face masks, with a caption addressing Ukrainian Members of Parliament, politicians and the president over what they described as mistakes in responding to the Russian offensive, with “catastrophic” consequences, claiming that Ukrainians had paid with their lives. The post states, “you will be judged and beaten harshly – not by your words but by your actions.” In their appeal to the Board, the user stated that Meta was wrong because the company “did not take into account the wording and context.” They also stated that they used “information-raising statements to show people the problems of the corrupt government” but “did not call for violence.” The user further mentioned that criticism of political parties and politicians is an important component of democracy and that removing this post “will negatively affect the state of civil society in Ukraine,” as it will deprive people of knowing “the truth about the government and those stealing their money.”

Finally, in the fourth case, an Instagram user commented in Italian on a carousel of photos from an art event where performers played soccer using a replica of Spanish dictator Francisco Franco’s head as the ball. The comment states that this should be repeated “with the head of the big bald that someone is still revering today,” the words “big bald” alluding to Italian dictator Benito Mussolini. The comment then adds that they would send those people to Piazzale Loreto, the site where Mussolini's corpse was publicly displayed in April 1945. In their appeal to the Board, the user stated that their post had “nothing against [the] community or Instagram,” labeling it as a “friendly antifa comment.”

Under the Violence and Incitement Community Standard, Meta prohibits “threats of violence that could lead to death” or “serious injury” and “coded statements where the threat of violence is not clearly articulated, but the threat is veiled or implicit.” However, the policy rationale states that the company tries to “consider the language and context in order to distinguish casual or awareness-raising statements from content that constitutes a credible threat to public or personal safety.” The policy rationale also highlights that Meta “considers additional information such as a person’s public visibility and the risks to their physical safety” to determine whether threats are credible.

After the Board brought these cases to Meta’s attention, the company concluded that none of the four pieces of content violated its Violence and Incitement policy and that the removals were incorrect. The company determined that the statements addressing authorities are rhetorical expressions of criticism, disdain, or disapproval, not credible threats.

In the first case, Meta concluded that the term “eliminated” should be interpreted as a call for President Afwerki’s removal from office rather than a threat of violence, as discussions about his potential removal have been a recurring topic within regional political discourse. In the second case, Meta concluded that the user’s statement – originally translated into English by Meta as “drown those bastards, shameless and unscrupulous officers,” but later updated by the company to “damn these corrupt and conscienceless officers” – is best understood as an expression of strong disdain (in the form of a prayer to God) toward local leaders. In the third case, Meta decided that the statement “you will be judged and beaten harshly – not by your words but by your actions” is best interpreted as a critique of Ukrainian leaders. Finally, in the fourth case, Meta identified the reference to Piazzale Loreto as a threat signal, given the historical incident of violence at that location, but found that the comment does not pose an imminent risk of violence toward Mussolini sympathizers. Meta then concluded that the comment did not meet the criteria to be classified as a veiled threat. The company therefore restored all four pieces of content to its platforms.

Board Authority and Scope

The Board has authority to review Meta’s decisions following appeals from the users whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).

Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users.

Significance of Cases

This bundle highlights the overenforcement of Meta’s Violence and Incitement policy against rhetorical statements and how the company’s shortcomings in distinguishing between credible and non-credible threats of violence continue to restrict political speech.

The Board has repeatedly emphasized the importance of differentiating credible threats of violence that could lead to offline harm from rhetorical threats of violence that are used to express disdain, disapproval, criticism, or resentment towards political regimes or figures in power, as well as the need to safeguard the latter. For instance, in the Iran Protest Slogan decision, the Board determined that a widely used protest slogan – which translates literally as a call for the death of Iran's Supreme Leader Ayatollah Khamenei – was used rhetorically to express disapproval. Additionally, in the Statement About the Japanese Prime Minister decision, the Board highlighted that “the threat against a political leader [former Japanese Prime Minister Fumio Kishida] was intended as non-literal political criticism calling attention to alleged corruption, using strong language.”

In those two cases, the Board issued recommendations that are relevant to these cases. Firstly, the Board recommended that Meta “amend the Violence and Incitement Community Standard to (i) explain that rhetorical threats like ‘death to X’ statements are generally permitted, except when the target of the threat is a high-risk person; (ii) include an illustrative list of high-risk persons, explaining they may include heads of state; (iii) provide criteria for when threatening statements directed at heads of state are permitted to protect clearly rhetorical political speech in protest contexts that does not incite to violence” (Iran Protest Slogan, recommendation no. 1). Secondly, the Board recommended that Meta “update [the company’s] internal guidelines for at-scale reviewers about calls for death using the phrase ‘death to’ when directed against high-risk persons,” specifically to “allow posts that, in the local context and language, express disdain or disagreement through non-serious and casual ways of threatening violence” (Statement About the Japanese Prime Minister, recommendation no. 2).

Meta has reported progress towards the implementation of both recommendations, explaining that the company has been “committed to conducting policy development related to [its] approach to ‘calls for death’” and “refining definitions and work across [the] Violence and Incitement policy” (Meta’s H1 2025 Report on the Oversight Board – Appendix).

Also, to ensure that potential veiled threats are more accurately assessed, the Board recommended that Meta “produce an annual assessment of accuracy for this problem area,” including a “specific focus on false negative rates of detection and removal for threats against human rights defenders, and false positive rates for political speech” (Content Targeting Human Rights Defender in Peru, recommendation no. 2). Implementation is currently in progress. In its initial response to the Board on this recommendation, Meta explained that “conducting an ‘accuracy’ assessment is challenging as the final assessment is the result of complex factors that may be specific to a regional, historical, or otherwise situational context,” but emphasized that the company will work to “refine how content is surfaced for veiled threats assessment.” The Board believes that full implementation of these recommendations would help decrease the number of enforcement errors under the Violence and Incitement policy by making the company’s assessment of whether threats are credible more nuanced and context-focused. It would also allow Meta to more readily identify shortcomings in policy enforcement related to veiled threats and to allocate resources to improve accuracy rates where needed.

Decision

The Board overturns Meta’s original decisions to remove the four pieces of content. The Board acknowledges Meta’s correction of its initial errors once the Board brought the cases to Meta’s attention.
