Overturned

Comment Stating Romani “Should Not Exist”

Type of Decision: Summary
Policies and Topics: Discrimination, Marginalized communities, Race and ethnicity
Community Standard: Hate speech
Region/Countries: Poland
Platform: Facebook
Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement.

Summary

A user appealed Meta’s decision to leave up a Facebook comment targeting Romani people, stating they “should not exist.” After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the comment.

About the Case

In March 2025, a Facebook user commented on a post discussing the migration history of the Romani people (also known by other names, such as Roma), an ethnic group that originated in northern India, migrated worldwide and today lives predominantly in Europe. The comment stated that the Romani people “should not exist” and are “only [a] problem for the world.”

A user appealed to the Board against Meta's original decision to leave up the comment. The appealing user highlighted the history of persecution against the Romani people in Europe and the discrimination that they face in modern times, such as “segregated education, employment discrimination, housing discrimination, healthcare discrimination, political and legal discrimination, media and social stigma, police brutality, extreme poverty and hate crimes.”

Prior to January 7, 2025, when Meta announced changes to its Hate Speech (now Hateful Conduct) Community Standard, the policy prohibited “statements denying existence (including but not limited to: ‘[protected characteristic(s) or quasi-protected characteristic] do not exist’, ‘no such thing as [protected characteristic(s) or quasi-protected characteristic]’ or ‘[protected characteristic(s) or quasi-protected characteristic] shouldn’t exist.’).”

While this prohibition was removed from the public-facing language of the Hateful Conduct policy, the company still removes “calls for exclusion or segregation when targeting people based on protected characteristics,” such as race and ethnicity. This prohibition covers “general exclusion,” defined as “calling for the general exclusion or segregation, such as ‘No X allowed!’” In the information it provided to the Board, Meta also explained that other violating examples include statements such as “no more” or “a world without” members of a protected characteristic group.

After the Board brought this case to Meta’s attention, the company determined that the content violated the Hateful Conduct policy and that its original decision to leave up the comment was incorrect. Meta considered that the user violated the policy because the statement that Romanis “should not exist” is a call for their general exclusion. The company then removed the content from Facebook.

Board Authority and Scope

The Board has authority to review Meta's decision following an appeal from the user who reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1).

When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users.

Significance of Case

The content in this case provides an example of hateful speech against the Romani people that is not subtle or coded, expressly stating that they “should not exist.” The case is another example of underenforcement of clear hateful speech (see, for instance, Comment Targeting People with Down Syndrome, Statements Targeting Indigenous Australians, Post in Polish Targeting Trans People) that highlights concerning shortcomings in Meta’s enforcement of its policies against hateful conduct on its platforms.

The United Nations Human Rights Council has long acknowledged that Romanis “have faced, for more than five centuries, widespread and enduring discrimination, rejection, social exclusion and marginalization all over the world, in particular in Europe, and in all areas of life.” The Council also recognized “the existence of anti-Gypsyism [“Gypsy” being a term used to refer to the Romanis and considered by some to be pejorative] as a specific form of racism and intolerance, leading to hostile acts ranging from exclusion to violence against Roma communities.” Similarly, according to a comprehensive study developed by the United Nations Special Rapporteur on Minority Issues, the Romani people face risks of violence, threats to their collective identity, challenges in living conditions and obstacles to effective social participation. More recently, a resolution from the European Parliament, as well as a survey of 10 European countries by the European Union Agency for Fundamental Rights, highlight the discrimination and violence faced by Romani people, as well as the difficulties they encounter in areas such as housing, education, health and employment.

The Board has repeatedly raised concerns about underenforcement of content targeting groups that have historically been and continue to be discriminated against. In the Statements Targeting Indigenous Australians decision, the Board highlighted Meta’s error in not taking down content calling for the exclusion of such a group. In the Alleged Crimes in Raya Kobo decision, the Board recommended that Meta “should rewrite [its] value of ‘Safety’ to reflect that online speech may pose risk to the physical security of persons and the right to life, in addition to the risks of intimidation, exclusion and silencing” (recommendation no. 1). Meta has demonstrated implementation of this recommendation through published information.

The Board has also issued recommendations aimed at improving Meta’s policy enforcement to reduce errors. For instance, the Board has recommended that Meta should “share [with the public] the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech [now Hateful Conduct] policy […] in a way that allows these assessments to be compared across languages and/or regions” (Criminal Allegations Based on Nationality, recommendation no. 2). In its initial response to the Board, Meta reported that it will implement this recommendation in part. Meta stated that, while the company “will continue to share data on the amount of hate speech content addressed by [its] detection and enforcement mechanisms in the Community Standards Enforcement Report (CSER),” data on the accuracy of its enforcement on a global scale will be shared confidentially with the Board. The recommendation was issued in September 2024; implementation is in progress, with data yet to be shared with the Board.

The Board urges Meta to significantly improve its accuracy rates in detecting and removing content that clearly violates its Hateful Conduct Community Standard across languages and/or regions. The Board previously highlighted in the Posts Displaying South Africa’s Apartheid-Era Flag decision that “in 2018, Meta cited the failure to remove hate speech from Facebook in crisis situations like Myanmar as motivation for increasing reliance on automated enforcement.” In the same decision, the Board explained that “in many parts of the world, users are less likely to engage with Meta’s in-app reporting tools for a variety of reasons, making user reports an unreliable signal of where the worst harms could be occurring.” The Board further concluded in that decision that it is “crucial that Meta considers fully how the effects of any changes to automated detection of potentially violating content, both for under- and overenforcement, may have uneven effects globally, especially in countries experiencing current or recent crises, war or atrocity crimes.”

The Board believes that fully implementing recommendation no. 2 from the Criminal Allegations Based on Nationality decision mentioned above would further strengthen the company’s ability to reduce underenforcement of harmful content impacting vulnerable groups. It would allow Meta to compare accuracy data across languages and/or regions, allocating resources to improve accuracy rates where needed. Moreover, public reporting on the accuracy of reviews under the Hateful Conduct policy would increase transparency and generate engagement with Meta that has the potential to lead to further improvements.

Decision

The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention.