Multiple Case Decision
Comments Calling for Ethnic Cleansing
December 9, 2025
A user appealed Meta’s decisions to leave up three Facebook comments that called for the ethnic cleansing of Albanians in Kosovo.
3 cases included in this bundle
FB-U3X0XVOY
Case about violence and incitement on Facebook
FB-C7C98IEV
Case about violence and incitement on Facebook
FB-DC3DA56G
Case about violence and incitement on Facebook
Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement.
Summary
A user appealed Meta’s decisions to leave up three Facebook comments that called for the ethnic cleansing of Albanians in Kosovo. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and removed all three comments.
About the Cases
This case is a bundle of three pieces of content. In June 2025, a Facebook user posted a photo of the construction of the National Library of Kosovo, in Pristina, the country's capital. Several users commented on the post. The first comment that the reporting user brought before the Board tagged the reporting user and talked about a “good ethnic cleanse, as Milošević started,” which would be [the] end of the “fake state” and the “turkoschiptar [hereafter “t************”] myth” about the “greater turkoalbania.” Slobodan Milošević was President of Serbia and subsequently President of the Federal Republic of Yugoslavia, of which Kosovo was then a part.
The reporting user posted a comment in response, pointing out that the posting user appeared to be celebrating ethnic cleansing and genocide and glorifying war crimes. The reporting user asserted that Milošević was responsible for the murder, sexual assault, and displacement of many people.
The second case is a comment from the posting user in response to the reporting user. They tagged the reporting user again, claiming “you are victimizing yourself” and that “Milošević was too soft.” They went on to state, “We should go with full ethnic cleanse as you did to Serbian people back from wwII [World War Two].” They also said, “That’s the only way” since “you are not civilized people.”
Once again, the reporting user posted a comment in response, accusing the posting user of calling for genocide and being ignorant of history.
The third case is another comment from the posting user in response to the reporting user. The comment read, “We’ll get back to Serbian Kosovo and Metohia sooner or later” and “NATO [the North Atlantic Treaty Organization] wouldn’t always be there for you.” Metohia, also spelt Metohija, is the southwestern region of Kosovo. The comment added, “You are a disgrace to civilization” and that they “know very well who t************ are”.
The reporting user appealed to the Board three separate times, once for each of the posting user’s comments, against Meta’s original decisions to leave up the three comments. In his statements to the Board, the user explained that the first comment uses “inflammatory and derogatory language that targets specific ethnic groups” with terms that “promote a narrative of ethnic superiority.” He also drew attention to the fact that content like this “can contribute to an escalation of ethnic and national tensions, especially in regions with a history of conflict.” In relation to the second comment from the posting user, the reporting user mentioned that calling for a “full ethnic cleanse,” with references to past conflicts, “incites hostility and violence against a targeted group.” According to the user, “leaving such statements unmoderated could lead to real-world harm.” Finally, in relation to the third comment from the posting user, the reporting user emphasized that t************ “is a derogatory, historically charged ethnic slur used to insult and dehumanize Albanians. It is widely recognized as hate speech in the context of the Balkan ethnic tensions.”
Meta initially left up each of the three pieces of content. However, under Meta’s Violence and Incitement Community Standard, the company removes “language that incites or facilitates violence and credible threats to public or personal safety. This includes violent speech targeting a person or group of people on the basis of their protected characteristic(s) or immigration status.” The company takes down “threats of violence that could lead to death (or other forms of high-severity violence).” Threats of violence are defined as “statements or visuals representing an intention, aspiration, or call for violence against a target, and threats can be expressed in various types of statements such as statements of intent, calls for action, advocacy, expressions of hope, aspirational statements and conditional statements.” Moreover, under the Hateful Conduct Community Standard, the company removes “content that describes or negatively targets people with slurs.” The policy defines slurs as “words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic, often because these words are tied to historical discrimination, oppression and violence.”
After the Board brought these cases to Meta’s attention, the company determined that all three pieces of content should not have been left up on the platform. Meta explained that the first piece of content “calls for a resumption of the ethnic cleansing begun by Slobodan Milošević,” which was “articulated explicitly in a later portion of the conversation by the same user.” Therefore, Meta concluded that this comment amounts to a threat of high-severity violence against Albanians. Meta also considered that the second case violated the Violence and Incitement policy, highlighting that “here, the user calls for ethnic cleansing, which qualifies as a threat of high-severity violence targeting Albanians.” Finally, the company concluded that the third comment was violating as well, since it referenced a “historical incident of violence against Albanians in Kosovo” which, in light of the “earlier threat of violence by the same user,” also constitutes a “threat of high-severity violence against Albanians.” The company then removed the three pieces of content from Facebook.
Board Authority and Scope
The Board has authority to review Meta’s decision following an appeal from the user who reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users.
Significance of Cases
The three pieces of content in this bundle provide examples of the underenforcement of Meta’s Violence and Incitement Community Standard at scale. On three separate occasions, the company’s content moderation systems initially failed to identify and remove clear threats of high-severity violence targeting a specific group of people on the basis of a protected characteristic, in these cases, Albanians. Nevertheless, after the Board flagged the content, Meta reviewed it again and its specialized teams considered the full context of the longer conversation to establish that the individual comments were violating.
The Board has repeatedly raised concerns about the underenforcement of death threats in general and, more specifically, of content targeting groups that have historically been and continue to be discriminated against (see, for instance, Statements Targeting Indigenous Australians and Post in Polish Targeting Trans People). The Board notes the history of conflict in Kosovo and the recent escalation of tensions following events in the region in May and September 2023, and February and November 2024, as well as the emphasis the reporting user placed on the potential for comments like the ones in this bundle to lead to offline violence. In the Knin Cartoon decision, when reviewing a video on Facebook that depicted ethnic Serbs as rats, the Board concluded that content targeting an ethnic group is dehumanizing and hateful if it is “celebrating past acts of discriminatory treatment,” “especially in a region that has a recent history of ethnic conflict.”
In that same decision, the Board recommended that Meta should clarify “the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood” (Knin Cartoon, recommendation no. 1). Even though the recommendation was issued in relation to Meta’s Hate Speech (now Hateful Conduct) policy, the Board believes it is relevant in this case, as the posting user did not explicitly refer to Albanians in any of the three comments. In response to that recommendation, Meta reported that the company “added new language to the introduction to [its] Community Standards on [its] Transparency Center that clarifies [its] approach to content that uses ambiguous or implicit language and requires additional context to identify as violating. The update clarifies that, in an instance where additional context enables [Meta] to reasonably interpret that content violates [its] Community Standards, [Meta] may remove said content” (Meta’s Q4 2022 Quarterly Update on the Oversight Board). The Board considered the recommendation to be partially implemented because the policy update was not adopted under the Hate Speech policy and because it lacked language addressing implicit references to protected groups.
The Board also noted that, after it brought this bundle to Meta’s attention, the company assessed all three comments in this case, together with the comments left by the reporting user, as part of a longer conversation thread. This was relevant for establishing that all three comments were calling for ethnic cleansing and a “return to a ‘Serbian Kosovo,’” a reference to a historical incident of violence against Albanians in Kosovo and therefore a threat of high-severity violence against them. While this approach is consistent with the Board’s guidance for Meta on how to address policy enforcement on its platforms, it was not followed by the company’s at-scale reviewers. In the Poem About Political Protest in Argentina decision, the Board highlighted the importance of providing at-scale content reviewers with the full context of a post as a way of improving enforcement accuracy. Though that case focused on policy enforcement against carousels (posts with multiple images), the same holds for conversation threads, as these cases illustrate.
Additionally, the Board has issued recommendations to increase its understanding of and overall transparency around Meta’s enforcement accuracy and approach to measuring it:
- “In order to inform future assessments and recommendations to the Violence and Incitement policy, and enable the Board to undertake its own necessity and proportionality analysis of the trade-offs in policy development, Meta should provide the Board with the data that it uses to evaluate its policy enforcement accuracy. This information should be sufficiently comprehensive to allow the Board to validate Meta’s arguments that the type of enforcement errors in these cases are not a result of any systemic problems with Meta’s enforcement processes” (United States Posts Discussing Abortion, recommendation no. 1).
- Meta “should improve its transparency reporting to increase public information on error rates by making this information viewable by country and language for each Community Standard.” The Board underscored that “more detailed transparency reports will help the public spot areas where errors are more common, including potential specific impacts on minority groups, and alert Facebook to correct them” (Punjabi Concern Over the RSS in India, recommendation no. 3).
In response to both recommendations, Meta shared a confidential “summary of enforcement data ... including an overview of enforcement accuracy data” for the Violence and Incitement and the Dangerous Organizations and Individuals policies (Meta’s H1 2025 Report on the Oversight Board). The Board considered both recommendations as omitted or reframed, since Meta’s responses did not address the recommendations’ core objectives. In the instance of the first recommendation, this was because the Board asked Meta for the data the company uses to assess the accuracy of policy enforcement, but Meta shared only the results of the assessment itself. In the second instance, this was because the Board asked Meta to make this enforcement data public and to break it down by country and language. Therefore, the Board concluded that the goals of each recommendation were not achieved.
Finally, the Board has also made a recommendation regarding Meta’s audits of its slurs list, to make sure it is up to date across all regions: “When Meta audits its slur lists, it should ensure it carries out broad external engagement with relevant stakeholders. This should include consulting with impacted groups and civil society” (Criticism of EU Migration Policies and Immigrants, recommendation no. 3). Meta reported progress on the implementation of this recommendation. The company explained it “regularly engage[s] with stakeholders, including civil society, to maintain accurate lists of slurs across global regions, and [is] working to formalize this process at an early stage of [its] annual audit process” (Meta’s H1 2025 Report on the Oversight Board). Meta stated that it is “forming a working group to identify how to formalize best practices and requirements for ensuring that stakeholders are included in the next audit.” According to Meta, new updates will be provided in the future.
The Board believes that fully implementing the recommendations mentioned above would further strengthen Meta’s ability to reduce the underenforcement of threats targeting groups of people on the basis of a protected characteristic. First, it would enhance reviewers’ ability to spot implicit references to protected groups and remove harmful content. Second, it would allow the company to refine its approach to measuring and comparing accuracy data across languages and regions, to better allocate resources to improve accuracy rates where needed. Third, public reporting on the accuracy of reviews under the Violence and Incitement policy would increase transparency and generate engagement with Meta that has the potential to lead to further improvements. Finally, consolidating best practices and requirements for engaging with stakeholders to update its slur list would improve Meta’s ability to respond more quickly to evolving trends in the usage of words with the potential to cause harm.
Decision
The Board overturns Meta’s original decision to leave up the three pieces of content. The Board acknowledges Meta’s correction of its initial errors once the Board brought the cases to Meta’s attention.