Stop "Algospeak" Including Emojis Used for Hate Speech

The Oversight Board overturns Meta's original decisions to keep up two pieces of content that use emojis to express hate, discrimination and harassment towards Black people by comparing them to monkeys. The Board has called on Meta to prevent hateful and discriminatory targeting of groups by improving its automated and human moderation to comprehensively account for “algospeak,” including emojis. This should include ensuring that training data for automated policy enforcement is regionally appropriate and up to date, coordinating efforts to proactively disrupt hateful campaigns, and actively monitoring emoji content that incites discrimination and hostility during major sporting events, such as the FIFA (International Federation of Association Football) World Cup.

About the Cases

These cases address two posts made in May 2025 using monkey emojis to refer to Black people.

In the first case, a user in Brazil posted a short video on Facebook featuring a scene from the movie The Hangover, in which two characters argue, dubbed in Portuguese, claiming ownership of a monkey. Text overlaying the video names the characters as the Spanish football (soccer) clubs “Barcelona” and “Real Madrid.” Additional overlay text refers to boys rising to prominence in Brazilian football. The caption consists of a monkey emoji. The post was viewed over 22,000 times and 12 people reported it.

The second case involves a comment posted in response to a video on an Instagram account in Ireland. In the video, the user expresses indignation after witnessing a racist incident on the street and the caption calls to reject racism in Ireland. Another user’s comment says they do not support the message, rather they want the situation to “blow up” and “to have some glorious fun with all the [monkey emojis] & out in the street.” The comment additionally included several monkey, laughing and praying emojis, and underscored “glorious days ahead.” The original post was viewed over 4,000 times and 62 people reported the comment.

Meta’s automated systems and – after user appeals – human reviewers left both posts up. Users then appealed to the Board. After the Board selected these cases for review, Meta determined its initial decisions were wrong and removed the posts in July 2025 for violating the company’s Hateful Conduct Community Standard.

Coded language using turns of phrase or emojis (known as “algospeak”) can convey dehumanizing or hateful messages while bypassing automated content moderation systems.

Key Findings

The Board is concerned about the accuracy of enforcement of the Hateful Conduct policy, especially in assessing emojis used as algospeak. Classifiers identified the content but took no action. Meta says reviewers should consider all aspects of the content, such as imagery, captions and text overlays, as well as factors beyond the immediate content, including the main post and comments. Meta also explained that its classifiers are trained on datasets of reported and labeled examples, including cases where emojis are used in potentially violating ways. Nevertheless, both automated and human reviews failed to accurately assess the posts.

Meta should improve automated detection of violative emoji use by periodically auditing its training data. Enforcement processes should always direct content to reviewers with appropriate language and regional expertise.

Responding to the Board’s questions, Meta stated that after the company’s January 7, 2025, announcement, large language models (LLMs) are now more widely integrated as an additional review layer, including for content that may violate the Hateful Conduct policy. According to Meta, the LLMs do not replace existing models, but provide a second opinion on enforcement decisions, focusing on content that has been flagged for removal. In these cases, LLMs were not involved in the review process.

The Board finds that both posts violate the Hateful Conduct Community Standard prohibiting dehumanizing comparisons to animals. Both posts utilize the monkey emoji to target Black people on the basis of their protected characteristic.

Keeping the posts up is also inconsistent with Meta’s human rights responsibilities, as emojis seeking to dehumanize and incite discrimination or hostility towards protected characteristic groups should be subject to removal. It is necessary and proportionate to remove both posts.

Both posts represent forms of algospeak used to express hate, discrimination and harassment towards specific protected characteristic groups, and illustrate how emojis can be used to urge others to take discriminatory and potentially hostile action.

The Brazilian post was made in the context of widely documented systemic racism and hostility in football, particularly targeting Black players. The comment in the Irish case was shared in the context of rising racial discrimination and Afrophobia in Ireland.

To better coordinate its efforts and protect people who may not be directly named but are the implicit targets of hateful campaigns, Meta should develop a framework to harmonize its existing measures to proactively disrupt such campaigns, especially those involving the use of emojis. Meta should ensure that its time-sensitive mitigation efforts, whether through its Integrity Product Operations Center or another risk mitigation system, include active monitoring of content with emojis that incites targeted discrimination or hostility in the lead-up to, during and in the immediate aftermath of major sporting events, e.g., the 2026 FIFA World Cup.

The Oversight Board’s Decision

The Board overturns Meta's original decision to keep up both pieces of content.

The Board also recommends that Meta:

  • Audit its training data for automated systems used in Hateful Conduct policy enforcement and ensure the data is updated periodically to include examples of emoji content in all languages, violating uses of emojis and new instances of hateful emoji use.
  • Harmonize its existing efforts to proactively disrupt hateful campaigns, especially those involving the use of emojis, to better protect people who are not directly named but are the implicit targets of hateful campaigns.
  • Ensure that its time-sensitive mitigation efforts, whether through its Integrity Product Operations Center or another risk mitigation system, include active monitoring of content with emojis that incites targeted discrimination or hostility in the lead-up to, during and in the immediate aftermath of major sporting events, such as the FIFA World Cup.

The Board reiterates the importance of its relevant previous recommendation that Meta:

  • Provide users with an opportunity for self-remediation comparable to the post-time friction intervention created as a result of recommendation no. 6 in the Pro-Navalny Protests in Russia decision. If this intervention is no longer in effect, the company should provide a comparable product intervention.

Further Information

To read public comments for this case, click here.
