Public Comments Portal

Emojis Targeting Black People

October 16, 2025 Case Selected
October 30, 2025 Public Comments Closed
February 10, 2026 Decision Published
Upcoming Meta implements decision

Comments


Country
United States
Language
English
Attachments
Oversight-Board-public-comment.docx
Name
Tal-Or Cohen Montemayor
Organization
CyberWell
Country
Israel
Language
English

Executive Summary
CyberWell submits this public comment to the Oversight Board to address the use of emojis and antisemitic code words to target protected characteristic groups on social media. We analyze the code words “juice” and “tiny hat”, as well as the 🧃, 👃, 🤑, 🐷, 🐀, 🐒, 😈, 👿, and 👹 emojis, as key symbols used to promote hate speech toward Jews in English and Arabic across Meta’s platforms (Facebook and Instagram). We also recommend strategies for Meta to address such content when moderating at scale.

CyberWell’s Mission
As a nonprofit dedicated to eradicating online Jew-hatred by driving enforcement and improvement of informed platform policies, CyberWell provides guidance on “algospeak” emojis used to spread hate and evade moderation.

According to the Board’s statement, it seeks public comments on "The use of emojis, such as the monkey emoji or other coded language to target protected characteristic groups on social media […]". Similar to cases involving monkey symbols referring to Black people, CyberWell contributes expertise on how users promote antisemitic hate speech in English and Arabic on Meta’s platforms through code words (“juice” and “tiny hat”) and emojis (🧃, 👃, 🤑, 🐷, 🐀, 🐒, 😈, 👿, 👹).

In line with the Board's priority of addressing “Hate Speech Against Marginalized Groups”, we offer solutions rooted in moderation best practices that balance freedom of expression with Meta's duty to protect users. Our analysis provides frameworks to help Meta prevent antisemitic content.

Introduction to Hateful Code Words and Emojis
Regarding Cases 2026-001-FB-UA and 2026-002-IG-UA, we identified that users employ code words and emojis to refer to Jews in a hateful manner, both to evade moderation and to amplify hate. These references fall into four categories: animals, devils, proxies, and classic antisemitic tropes. These examples most frequently violate Tier 1 of Meta’s Hateful Conduct policy, with some violating Tier 2 of the Hateful Conduct policy and Tier 2 of Meta’s Bullying and Harassment policy.

Jews as Animals
Depicting Jews as animals is not a new phenomenon. Antisemites have long used such comparisons to dehumanize Jews and question their morality. Portraying Jews as animals increased during World War II, when the Nazis widely published propaganda depicting Jews as rats and other vermin to justify genocide against them. Today, users on social media apply animal emojis (🐷, 🐀, 🐒) as code words for Jews to perpetuate antisemitic rhetoric.

The use of animal emojis in this context violates Tier 1 of Meta’s Hateful Conduct policy covering dehumanizing speech such as: “Animals in general or specific types of animals that are culturally perceived as inferior (including but not limited to: Black people and apes or ape-like creatures; Jewish people and rats […])”. Some cases also violated Tier 2 of Meta’s Bullying and Harassment policy covering: “Dehumanizing comparisons (in written or visual form) to or about: Animals and insects”.

Pig Emoji
For centuries, antisemites have dehumanized Jews by comparing them to pigs. In English, the insult is primarily pejorative. In Arabic, it references the Quran, specifically Surah Al-Ma’idah 5:60, interpreted by some as referencing Jews. As a result, posts in Arabic frequently use the pig emoji (🐷) to describe Jews. Users pair 🐷 with the Jewish Star of David emoji (✡️) or insert 🐷 in the middle of the word, يـهـ🐷ـود (“Je🐷w”). The 🐷 emoji also appears across Meta's platforms to demonize Jews and Zionists in discussions about Israel.

In this Facebook post below, the user inserts a pig emoji in the Arabic word Jewish, “يـهـ🐷ـودي”, (“Jewish”), while describing the rabbi in their video: “A Jew🐷ish rabbi performs a pleading prayer to stop Ir🇮🇷anian miss🚀iles 😂”.

In this Instagram reel below, the user inserts the 🐷 emoji alongside the ✡️ emoji and the severe Arabic insult “يهود القبلة”, which translates to “the Jews of the Qibla”. “يهود القبلة” is a religious slur that uses the term “Jew” as a metaphor to describe Muslims who betray Islam.

In this Facebook post, a user responds to Israeli military actions in Gaza and characterizes Israelis collectively as “Hitler ISIS Zionist Israeli terrorists”. The user employs the 🐷 emoji alongside other derogatory emojis, such as the 😈 emoji, to claim that Israelis are evil beings who use Judaism’s religious texts to carry out acts of terror and destruction.

Rat Emoji
During World War II and the Holocaust, Nazi propaganda frequently compared Jews to rats to depict them as subhuman. This comparison persists online, where users exploit the rat emoji (🐀) to dehumanize Jews.

In this Instagram post, the 🐀 emoji describes the Rothschild family, a well-known Jewish family often invoked as a symbol of economic success and to advance coded accusations that Jews dominate global political and economic spheres. The post leverages the Rothschilds to perpetuate harmful stereotypes about Jewish financial control.

In this Facebook post, the user demonizes Jews by promoting the Khazarian myth that modern-day Jews descended from Khazars and are thus not “real Jews”. In both their caption and their comment, the user places the 🐀 emoji alongside the devil emoji (😈) to dehumanize Jews as subhuman and as “devil worshippers”.

Monkey Emoji
In Arabic, “monkey” is a common insult. In Surah Al-Baqarah 2:65, the Quran compares “those of you who broke the Sabbath” to apes, and some interpretations link this verse to Jews. On social media, the monkey emoji (🐒) is used to dehumanize Jews; in Arabic, it carries antisemitic connotations similar to those of the 🐷 emoji.

In this Facebook post in Arabic, the user dehumanizes Jews and Israelis by comparing them to monkeys, inserting the monkey emoji within the word “ إسـ🐒ـرائيليون يهـ ـود” (“Jewish Israelis”) to mock Jews and Israelis as subhuman. The title translates to: “150 activists, many of them Jewish Is🐒raelis, breached the Gaza border fence in protest of the blockade […]”.

In another Facebook post in Arabic, a user uses emojis to contrast Muslims and Jews: while the white heart and dove emojis represent Muslims, the 🐖 and 🐒 emojis dehumanize Jews. The user’s video compares footage of Israel’s 2024 electronic device attacks in Lebanon with footage of Israelis running to bomb shelters. The title reads: “The difference between Muslims 🤍🕊️ and between Jews 🐖🐒 […]”.

Jews as Devils
The antisemitic allegation that Jews are evil and demonic traces its roots to literal interpretations of New Testament texts. For instance, Revelation 2:9 and 3:9 refer to Jews as the “Synagogue of Satan”. While these verses are not inherently antisemitic, they are often misused to demonize Jews, portraying them as evil co-conspirators with the devil. In addition, religious iconography from the Middle Ages depicted Moses with horns, imagery that evolved into the misconception that Jews have devil horns.
CyberWell’s research shows that on social media, devil-like emojis (😈, 👿, 👹) often appear in English posts that vilify Jews through biblical interpretations. In Arabic posts, users apply these emojis more generally to promote harmful stereotypes about Jews. In both cases, using the 😈, 👿, and 👹 emojis to describe Jews violates Tier 1 of Meta’s Hateful Conduct policy addressing: “Dehumanizing speech in the form of comparisons to or generalizations about animals, pathogens, or other sub-human life forms, including: Subhumanity (including but not limited to: savages, devils, monsters)”.

In one Facebook post, a user promotes an antisemitic interpretation equating Jews with the “Synagogue of Satan”. The post includes text referencing the phrase, a 😈 emoji, and an image depicting the Star of David, a central symbol of Judaism. The placement of the 😈 emoji after the phrase “Synagogue of Satan” serves as a coded reference to Jews.

In an Arabic Instagram post, a user demonizes Jews as followers of the Antichrist, a figure who opposes Christ and whose arrival signals the end of the world. The video claims that the Antichrist’s followers will be Jews from Isfahan, implying Jews worship the devil. The user refers to the Jewish messiah as the “Antichrist” and writes: “The soldiers of the antichrist 😈 (The Jews of Isfahan) if you like the content share with your friends🔥 best regards”.

Proxies for Jews
Antisemites often use code words and emojis as proxy terms for Jews, both to evade content moderation and to signal meaning to users who recognize the coded symbolism. The code words “tiny hat” and “juice”, as well as the 🧃 and 👃 emojis, violate several sections of Tier 1, as well as Tier 2, of Meta’s Hateful Conduct policy. Tier 2 states that violative content includes “Insults, including those about: Character, including but not limited to allegations of cowardice, dishonesty, basic criminality, and sexual promiscuity or other sexual immorality”.

Tiny Hat
CyberWell recently identified a viral trend across Meta’s platforms where users employ “tiny hat” as a derogatory term for Jews. “Tiny hat” refers to the Jewish yarmulke, a symbol of religious observance worn by Jewish men.

In one Instagram post, a user utilizes “#SmallHats”, a variation of “tiny hat”, alongside the 😈 emoji and the hashtag “#SynagogueOfSatan”. The user shares several images of Jews allegedly associated with sexual deviance to portray Jews as collectively immoral, predatory, and evil.

On Facebook, another user uses “tiny hat” to claim Jews are demonic individuals who steal land from Gaza and who belong to the Illuminati, a secret society that conspiracy theorists believe seeks global domination.

“Juice” and juice box emoji
In English, the code word “juice” phonetically resembles “Jews”, and antisemites exploit it to avoid moderation while spreading antisemitic rhetoric. A related variation, the juice box emoji (🧃), appears in content that dehumanizes Jews or accuses them of world control. Users also apply this emoji when promoting or selling antisemitic merchandise across Meta's platforms.

This Instagram post uses the 🧃 emoji and the word “Juice” as code words for Jews to amplify conspiracy theories about Jewish global control.

On Instagram, a user uses the 🧃 emoji to sell merchandise featuring the same emoji. Alongside the 🧃 emoji, their caption includes “Noticing” and “Noticer”, terms implying that people “notice” alleged Jewish power or influence.

Nose Emoji
The antisemitic portrayal of Jews with hooked noses emerged in the 12th century to characterize Jews as ugly, and was later weaponized by the Nazis. Today, online users continue to evoke this stereotype by using the 👃 emoji as coded language for Jews.

In one Facebook post, a user shares an image depicting a flyer with the hooked nose symbol referring to Jews. The caption pairs the 👃 emoji with the antisemitic hashtag “#TheNoticing” to amplify the trope of Jewish control.

Tropes
CyberWell found that users often deploy the money-mouth face emoji (🤑) to promote false claims about Jewish greed and control over the economy. These antisemitic tropes date back to Medieval literature and iconography that villainized Jews for moneylending. Today, online posts frequently use the 🤑 emoji to promote conspiracy theories about Jews and the Rothschild family allegedly controlling the global economy, often alongside other emojis that demonize Jews. Using 🤑 in this context violates Tier 1 of Meta’s Hateful Conduct policy prohibiting: “Harmful stereotypes historically linked to intimidation or violence […] claims that Jewish people control financial, political, or media institutions […]”.

In this Instagram post, a user promotes Holocaust distortion by claiming the Rothschild family funded it, and uses the 🤑 emoji with the 👹 and ✡️ emojis to reinforce this antisemitic messaging.

Comments
In online comment sections, antisemitism is sometimes conveyed through emojis alone and without text. For example, users may reply to posts with 🐷 or 🧃 to invoke antisemitic code words and stereotypes without explicitly writing direct references to Jews. Users understand these coded meanings and deliberately exploit them to sustain antisemitic conversations. Because meaning depends on context or on the targeted individual’s identity, comments sections pose significant challenges for content moderators. CyberWell therefore recommends that Meta assess emoji use within the context of the original post, particularly in news items about Jews, Israelis, or Israel, or toward users identifying as Jewish or Israeli in their bio or in the comments.
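This contextual assessment could be sketched in code. The following is a minimal illustration only; the function, keyword lists, and thresholds are hypothetical assumptions for this comment and do not reflect Meta's actual systems:

```python
# Hypothetical sketch of CyberWell's contextual recommendation: an
# emoji-only comment is escalated for human review only when the parent
# post or the targeted user's profile supplies Jewish/Israeli context.
# The emoji and context term lists are illustrative, not exhaustive.

CODED_EMOJIS = {"🐷", "🐀", "🐒", "🧃", "👃", "🤑", "😈", "👿", "👹"}
CONTEXT_TERMS = {"jew", "jewish", "israel", "israeli", "يهود"}

def comment_needs_review(comment: str, post_text: str, author_bio: str) -> bool:
    """Flag a coded-emoji comment when the surrounding context matches."""
    # No coded emoji at all: nothing to escalate.
    if not any(e in comment for e in CODED_EMOJIS):
        return False
    # Check the post text and the author's bio for contextual signals.
    context = (post_text + " " + author_bio).lower()
    return any(term in context for term in CONTEXT_TERMS)
```

The same emoji under an unrelated post would not be flagged, which is the point of the contextual approach: the signal comes from the pairing of symbol and target, not the symbol alone.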

In one Instagram post, the Israeli news source “Ynet Global” shares footage of Israeli civilians fleeing a Houthi drone strike on Eilat. In the comments, one user posts the 🐷 and 🐀 emojis alongside the 🇮🇱, 💩, 😂, and 👍 emojis, referring to Israelis in a derogatory manner.

In another post on Instagram, Jewish influencer Lizzy Savetsky posts a family photo in celebration of Rosh Hashanah, the Jewish New Year. In response, two users post 🐀 and 👿 emojis to target her. Savetsky’s post references her Jewish observance and her bio reads: “Proud Jewish Woman on a Mission✡️🙏🏼🇮🇱”.

Linguistic Variations
Emojis used to promote antisemitic content often take on distinct meanings across languages. In Arabic-language posts, animal emojis sometimes allude to Quranic interpretations that compare Jews to animals; the 🐒 emoji, for instance, frequently appears in Arabic posts that promote antisemitic content but rarely in English posts. English-language posts, meanwhile, often apply emojis in reference to New Testament interpretations, as seen in posts equating Jews with the “Synagogue of Satan”. While some antisemitic emoji use draws from religious or cultural narratives, other uses express general insults or slurs targeting Jews. These variations illustrate how religious and cultural narratives shape online antisemitism, and they underscore the importance of contextual understanding for content moderators tasked with flagging emoji-based hate speech.

Recommendations
CyberWell recognizes that the symbols referenced in the Board’s cases share similarities with those used by antisemites online. We offer the following recommendations in response to the Board’s questions:

I. Enhance detection mechanisms for emoji-related antisemitism that appear in both image and text
CyberWell urges Meta to flag and remove cases where emojis appear in both image and text. This includes identifying antisemitic messaging embedded in memes, captions, comments, and reposts, where emojis and code words are often used to disguise or reinforce hate speech. Integrating both visual and textual analysis will help Meta address antisemitic content more effectively.

II. Flag posts that include combinations of emojis and certain keywords that have a high probability of antisemitic messaging
CyberWell encourages Meta to flag and remove posts that include the following keyword combinations, which have a high likelihood of promoting antisemitism in violation of Tier 1 and Tier 2 of Meta’s Hateful Conduct policy, as well as Tier 2 of Meta’s Bullying and Harassment policy. These keyword combinations include:
"🧃" OR “Juice” AND "Synagogue of Satan"
"🧃" OR “Juice” AND "#thenoticing" OR “#noticer” OR “#noticing”
"🧃" OR “Juice” AND "Zionist"
"👃" AND "🧃"
"👃" AND "#thenoticing"
"🤑" AND" Rothschild"
"🐷" AND "✡️"
"🐷" AND "Zionist"
"🐷" AND “يـهــود”
"🐷" AND “يـهـ🐷ـودي”
"🐀" AND "Zionist"
“🐀” AND “Jew”
“🐒” AND “يـهــود”
“🐒” AND “ إسـ🐒ـرائيليون”
"😈" OR "👿" OR "👹" AND "Synagogue of Satan"
"😈" OR "👿" OR "👹" AND "Jew"
"😈" OR "👿" OR "👹" AND “Zionist”
"😈" OR "👿" OR "👹" AND "Khazar"
"😈" OR "👿" OR "👹" AND “يـهــود”
“Tiny Hat” AND “Jew”
“Tiny Hat” AND “Synagogue of Satan”
“Tiny Hat” AND “Zionist”
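The keyword combinations above amount to simple boolean rules, which could be implemented along the following lines. This is a minimal sketch under stated assumptions: the rule encoding, function name, and matching logic are illustrative only, and real enforcement would need tokenization, Unicode normalization, and context checks beyond substring matching:

```python
# Illustrative rule-based flagger for the emoji/keyword combinations.
# Each rule is a (left, right) pair of sets; a post is flagged when it
# contains at least one token from each side. A subset of the rules is
# shown; the structure generalizes to the full list.

RULES = [
    ({"🧃", "juice"}, {"synagogue of satan"}),
    ({"🧃", "juice"}, {"#thenoticing", "#noticer", "#noticing"}),
    ({"🤑"}, {"rothschild"}),
    ({"🐷"}, {"✡️", "zionist", "يهود"}),
    ({"😈", "👿", "👹"}, {"synagogue of satan", "jew", "zionist", "khazar"}),
    ({"tiny hat"}, {"jew", "synagogue of satan", "zionist"}),
]

def flag_for_review(text: str) -> bool:
    """Return True if the text matches any (left, right) rule pair."""
    lowered = text.lower()  # case-insensitive matching; emojis unaffected
    for left, right in RULES:
        if any(tok in lowered for tok in left) and any(tok in lowered for tok in right):
            return True
    return False
```

Encoding the combinations as data rather than hard-coded conditions would let trust-and-safety teams update the rule list without code changes as new coded terms emerge.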

III. Ensure consistency across languages when responding to antisemitic hate speech that uses specific emojis and code words to target Jews
CyberWell recommends that Meta respond consistently across posts in different languages that use emojis and code words to target Jews. While the same emojis are often used to promote antisemitism across various languages, they may carry different connotations depending on cultural and linguistic usage. CyberWell therefore recommends that Meta’s detection and moderation systems account for these contextual differences by employing keyword and emoji-based combinations for scalable, accurate enforcement.

IV. Address the use of emojis and code words in the comments section
CyberWell recommends that comments containing the keyword combinations mentioned above be flagged for review. We recognize that users sometimes post isolated emojis in the comments section to evade detection when referring to Jews. In these instances, Meta should analyze the associated posts to identify whether such comments contribute to broader antisemitic content. This can be achieved by prioritizing moderation of posts or news items about Jews, Israelis, or Israel, or of users self-identifying as Jewish or Israeli. This contextual approach to moderation would also allow Meta to detect coordinated or coded hate speech that might otherwise go unnoticed when emojis are used in isolation.

V. Ensure that human moderators are trained to identify emoji-based antisemitism
CyberWell recommends that Meta provide dedicated training for human moderators to recognize how emojis are used to convey antisemitic messages, both explicitly and implicitly. Moderators should also be trained to identify behavior patterns in comment sections designed to evade automated detection. By strengthening moderators’ understanding of these evolving tactics, Meta can ensure more consistent and accurate enforcement of its hate speech policies.
COMMENT WITH LINKS AND IMAGES SHARED SEPARATELY

Name
Audrey Rosenberg
Country
United States
Language
English
Attachments
Audrey-Rosenbergs-Meta-Response.docx

Online racial discrimination has evolved rapidly in the past few years, from overt slurs to coded and visual language known as “algospeak”. The use of emojis, especially the monkey emoji, has become a common tool to dehumanize Black individuals while avoiding detection by automated moderation systems; other emojis, such as the eggplant emoji, are likewise abused in inappropriate contexts. This pattern is especially visible in sports discussions on Meta platforms such as Instagram and Facebook, and in the comment sections of major outlets such as ESPN, Sky Sports, and Marca, where Black athletes are routinely targeted with racist emojis and coded remarks after matches. Since social media is now a primary way of communicating, there has to be stronger moderation on online platforms to minimize hate speech. These digital attacks may seem subtle, but they inflict real psychological harm and reinforce structural racism. Under the Universal Declaration of Human Rights (UDHR), all individuals are entitled to dignity, equality, and respect, values directly violated by hate speech, whether explicit or disguised. I'm 18 years old, and my generation understands that using the monkey emoji in this way is racist. Most older generations recognize it too, and when they don’t, younger users often call it out and educate them. There should be no excuse for using innocent emojis in a discriminatory way. The cultural awareness exists; what’s missing is consistent enforcement by the platforms themselves.
Meta must take greater responsibility to address the evolving forms of hate speech that its platforms enable. The company should strengthen its algorithms to detect algospeak and emoji-based harassment, while collaborating with anti-racism organizations, digital rights groups, and sports associations like FIFA and UEFA to study online racial abuse. It should also issue transparent reports showing how moderation systems adapt to new coded forms of hate, and invest in culturally competent moderation teams trained to interpret regional and linguistic nuance. Tackling this issue is not just a matter of content moderation; it is a matter of upholding human rights. Ignoring coded hate allows it to thrive, and Meta has both the power and the responsibility to ensure its platforms do not become spaces where discrimination hides behind innocent emojis.

Name
Mbene Amar
Country
United States
Language
English

It started with a pattern I couldn’t ignore: under every post about Sadio Mané, rows of 🥷 emojis appeared like digital graffiti. At first glance, harmless. But in context, it was unmistakable: a racial slur disguised as code. Users had begun using the ninja emoji as a substitute for the n-word, a way to slip hatred past the platform’s moderation filters. It was a quiet mutation of racism, one that revealed how discrimination online doesn’t disappear when silenced by algorithms; it simply learns to speak in new symbols.
What makes the ninja emoji so unsettling in this context is how smoothly it slips into old ideas about Black men, the kind that have been recycled for generations. The black outfit, the hidden face, the suggestion of danger: all of it feeds the stereotype of Blackness as something to be feared or contained. Furthermore, people know “ninja” sounds almost identical to the n-word, and they use that to their advantage, a way to say it without saying it, to keep the hate but dodge the filters.
Part of why this kind of racism spreads so easily in sports spaces is that the environment already blurs the line between passion and hostility. Fans feel entitled to say anything under the excuse of rivalry. Add anonymity, and it becomes even easier to turn a player’s identity into a target. Online, people convince themselves it’s just part of the “game,” that dropping a 🥷 in a comment section isn’t serious, but it is. It keeps the same old ideas about Black athletes alive: that they’re built for entertainment, not respect. And because it’s disguised as humor, the hate goes unchallenged, buried under likes and laughing emojis.
Social media companies like Meta have a responsibility that goes beyond deleting slurs: they have to understand the ways people adapt hate to survive moderation. When a symbol like the 🥷 emoji becomes a racial code, it’s not just a glitch in the system; it’s a sign that the system is behind. The company’s human rights duty isn’t limited to removing explicit hate speech but extends to anticipating how discrimination evolves online. Relying on user reports after harm is done isn’t enough. Meta should invest in cultural and linguistic monitoring teams who can recognize these shifts early, especially in global contexts where slang and emoji use differ. Addressing hate speech means learning its language before it spreads, not waiting until it trends.

Country
United States
Language
English

I personally think people should be able to say what they want, and banning emojis will just make people find more creative ways to bypass restrictions. I also think there should be a limit — like, if an account is constantly spreading hate and using emojis that way, then maybe their account should be restricted. But if someone only posts once or twice, I think it’s fine, since everyone has the right to freedom of speech.

Plus, using emojis to label a group of people really depends on the context. Just because a Jew is connected to a juice box or a Muslim to a bomb doesn’t automatically mean something bad — sometimes it’s just a replacement for a word. Also, if apps like Meta keep getting more restrictive and start banning or limiting speech more and more, people will eventually have enough and move to another app that doesn’t censor as much, like X. According to X, their daily users went up by 10 million from Q2 to Q3, so it’s clear people want a place to freely share their opinions without issues.

Name
Katelynn Ngo
Country
United States
Language
English

It is important to recognize that emojis used in cultural contexts are fluid symbols. Though they are supposed to be universal and direct, their meanings are by no means fixed and can differ based on cultural setting, language, or even daily circumstances. This subtlety is often overlooked by automated systems, since the same emoji can convey humor or hostility depending on who is using it and against whom. For instance, the monkey emoji can be playful in informal contexts, but it is deeply dehumanizing when used against a specific ethnic group. In multilingual regions like Brazil and across Europe, people can combine emojis with local slang or words to express a coded message with racist undertones. Because cultural difference compounds the ambiguity of emojis, many algorithms are unable to detect new iterations of discriminatory language and let hateful content slip through the cracks. It is also not hard to see that the meaning of racialized hate depends on intent, timing, and context, all of which algorithmic systems are poorly equipped to decipher.

Since these algorithmic systems are trained to detect word-symbol patterns, they end up overlooking subtle or context-based expressions of hate. The speed of online comments highlights this issue even further. Emojis meant to express frustration or enthusiasm can look identical to those used with racial prejudice. Without understanding who is being attacked and in what context, these systems cannot differentiate between hate speech and harmless language. To meet their human rights responsibilities, platforms like Meta should adopt culturally adaptive policies and collaborate with local communities. Doing so would make their detection mechanisms accurately reflect the reality of multilingual online spaces and hold emoji-based forms of racism accountable.

Name
Julia Upchurch
Country
United States
Language
English
Attachments
META-Oversight-Board-Submission-.pdf
Name
Ved Shetty
Organization
Indiana University
Country
United States
Language
English

Racism in sports, especially football, keeps showing up online in ways that are hard for algorithms to catch. A lot of it happens right after a game: when a Black player misses a penalty or makes a mistake, social media posts get flooded with monkey emojis, bananas, or coded words that everyone knows are racist, even if the system doesn’t flag them.

This is very common in Europe and Brazil. Black players talk about getting hundreds of these comments after every match. People think using an emoji instead of a slur makes it harmless, but it carries the same meaning. It’s a way to insult without saying the “bad” word that gets a post deleted. Fans use inside jokes, wordplay, or different spellings to make hate seem like “banter.” It affects players’ mental health and normalizes racism in online sports talk.

Meta’s recent change, focusing on only “high-severity” automated detections and leaving the rest to user reports, is risky. In sports, hate spreads fast. By the time someone reports it, hundreds of people may have joined in. Most victims don’t want to report every single comment under their own posts. They just stop reading them or leave social media altogether.

The current moderation system also misses context. A monkey emoji on a wildlife post is fine, but under a Black athlete’s photo right after a game, everyone understands what it means. Automated systems should be able to combine text, timing, and context clues. For example, if a cluster of the same emoji appears after a match involving certain players, that’s a strong signal of hate behavior.
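The burst pattern described above, a cluster of the same emoji appearing shortly after a match, could be detected with a simple counting heuristic. The following is an illustrative sketch only; the emoji set, time window, and threshold are assumptions for this submission, not parameters from any real moderation system:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical burst heuristic: count occurrences of high-risk emojis in
# comments posted within a window after a match ends. A large count of
# the same emoji is treated as a signal of a coordinated pile-on.

HIGH_RISK = {"🐒", "🐵", "🍌"}  # illustrative set of coded-abuse emojis

def burst_score(comments, match_end, window_minutes=60):
    """Tally high-risk emoji occurrences in comments inside the window.

    `comments` is a list of (posted_at, text) tuples.
    """
    cutoff = match_end + timedelta(minutes=window_minutes)
    counts = Counter()
    for posted_at, text in comments:
        if match_end <= posted_at <= cutoff:
            for emoji in HIGH_RISK:
                if emoji in text:
                    counts[emoji] += 1
    return counts

def is_suspected_pile_on(comments, match_end, threshold=20):
    """Flag when any single high-risk emoji crosses the threshold."""
    return any(n >= threshold for n in burst_score(comments, match_end).values())
```

A single monkey emoji under a wildlife post scores nothing here; dozens of identical emojis under one athlete's post within an hour of a match is exactly the timing-plus-context signal the paragraph above describes.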

Another problem is that moderation doesn’t always work equally across languages. In Brazil and Spain, racial slurs often use slang or spelling changes that AI systems don’t recognize. Meta should work more closely with local experts and anti-racism groups to update its detection lists and improve training data. Relying only on English-language moderation misses a lot of abuse.

Human rights responsibility means more than just removing illegal posts. Platforms have a duty to create a safe environment for participation. When racism spreads freely in the comment sections, it discourages Black athletes, journalists, and fans from speaking up.

Suggestions / Recommendations:

Combine emoji use, timing, and player identity context to detect coded racial abuse automatically.

Add human moderators trained in sports-related hate speech in key languages (Portuguese, Spanish, English, etc.).

Partner with football clubs, player unions, and anti-racism NGOs to create faster reporting channels.

Test moderation systems after big games or tournaments. These are peak times for abuse.

Treat repeated emoji-based targeting as coordinated harassment.

Make it easier for athletes to filter or hide racist comments automatically without having to report each one.

Social media has the power to bring fans together, but if it keeps allowing coded racism to slip through, it divides people instead. A monkey emoji might look small, but when thousands of them appear under one player’s post, the message is loud and clear.

Name
James Cyrynowski
Country
Canada
Language
English

I welcome the Oversight Board’s focus on racial discrimination and hate speech on Meta’s platforms. I am not including any third-party personal data in this submission.

1) Prevalence, forms, and impact (Brazil, Ireland, Spain, and Europe)

Across Meta products, hate targeting protected groups—especially Black people—remains common and unevenly enforced. In multilingual contexts (Portuguese, Spanish, Catalan, Basque, Irish, English), abuse often appears as:
• Dehumanizing tropes (animal comparisons), stereotypes about migration or crime, and “ironic” memes that launder slurs.
• Coordinated pile-ons from newly created or low-reputation accounts.
• “History debates” that minimize slavery/colonialism and slide into veiled calls for violence.

When such content persists, it chills participation by those targeted, pushes people out of civic and cultural spaces, and undermines trust in Meta’s rules. Regional civil-society groups repeatedly document spikes tied to news cycles and sports events.

Recommendations
• Resource moderation in the relevant languages and dialects; partner with local NGOs and researchers in Brazil, Ireland, Spain, and across Europe to refresh risk signals quarterly.
• Track and publish disaggregated enforcement metrics by language/region (prevalence, action rates, false-negative rates, and time-to-action).

2) Emojis and coded language (and post-Jan 7, 2025)

Emojis, numeronyms, and euphemisms are used to evade detection (e.g., animal/food emojis to dehumanize; numeric codes/dog whistles; deliberate misspellings and spacing). These often bypass keyword filters and sometimes human review when context is missed.

How this evades algorithms
• Abuse expressed with symbol + target (emoji + name/hashtag) rather than explicit slurs.
• Code-switching across languages within a thread.
• Image/video overlays where text sits inside memes or screenshots.
• Temporal coordination (bursts around matches or news) that outpace review queues.

Post-announcement (Jan 7, 2025) asks
• Explain what changed on Jan 7, 2025, including policy scope for emojis/symbols and any classifier updates; share pre/post impact metrics.
• Add a reporting reason for “coded hate/emoji use,” allow reporters to attach thread context, and show clearer rationale in decisions.

Detection & enforcement improvements
• Treat certain symbol–target pairings as high-risk when intent is dehumanizing or harassing.
• Use context bundles (preceding replies, linked images, and account reputation) for borderline cases.
• Maintain a public changelog of newly recognized dog whistles, with civil-society input, and enable human-in-the-loop escalation for event spikes.
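The pairing recommendation above can be sketched as a simple co-occurrence heuristic: a comment is flagged as high-risk only when a dehumanizing symbol appears alongside a mention of a targeted account, rather than on the symbol alone. The symbol set, handle format, and threshold logic are illustrative assumptions, not Meta's actual classifiers:

```python
# Hypothetical example set of dehumanizing emojis; a production system
# would draw on a maintained, civil-society-informed changelog.
HIGH_RISK_SYMBOLS = {"🐒", "🐀", "🐷"}

def pairing_risk(comment: str, targeted_handles: set[str]) -> bool:
    """True when a high-risk symbol co-occurs with a mention of a target.

    The symbol alone (e.g. a monkey emoji on a zoo post) is not flagged;
    the symbol–target pairing is what signals likely dehumanizing intent.
    """
    has_symbol = any(s in comment for s in HIGH_RISK_SYMBOLS)
    has_target = any(f"@{h}" in comment for h in targeted_handles)
    return has_symbol and has_target

targets = {"athlete_official"}  # hypothetical protected account
print(pairing_risk("great match @athlete_official 🐒", targets))  # True
print(pairing_risk("love the zoo 🐒", targets))                   # False
```

In practice such a signal would route the comment into the context-bundle review described above rather than trigger automatic removal, preserving counterspeech and benign uses.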

3) Human-rights responsibilities and best practices

Under the UN Guiding Principles on Business and Human Rights, Meta should conduct ongoing due diligence to prevent, mitigate, and remedy harms from hate speech.

Recommendations
• Risk assessments ahead of predictable surges (elections, football tournaments), with temporary friction: reply cooldowns, restricted first-time comments on high-risk Pages, and rate limits on newly created accounts.
• Graduated penalties for recidivism and coordinated harassment; require completion of an education module before reinstatement.
• Accessible remedy: faster appeals for targets of hate, a Creator/athlete “rapid response” lane, and clearer escalation paths.
• Transparency: publish per-language accuracy/over-removal metrics; enable independent audits and red-teaming of hate-detection systems.
• Protect counterspeech and context: preserve content that documents or condemns hate while removing abusive use.

4) Online racism in sports (football) and impact on Black athletes

Abuse reliably spikes before, during, and after matches, concentrating on athletes’ accounts and official team pages. The harms include mental-health impacts, silencing of athletes’ voices, and deterrence from using interactive features.

Recommendations
• Event-based moderation: on match days, proactively increase review staffing, enable stricter filters on athlete/team pages, and default to hiding comments from brand-new accounts until review.
• Partnerships: formal protocols with leagues, clubs, and player unions to share signals (e.g., anticipated flashpoints), plus a dedicated contact for teams to trigger heightened protections.
• User tools: one-tap bulk reporting for patterned emoji abuse; opt-in filters that collapse comments with high-risk symbol–target pairings.

Conclusion
Coded hate and inconsistent enforcement continue to harm Black people and other protected groups on Meta’s platforms. By clarifying emoji/symbol policy (including the Jan 7, 2025 changes), investing in regional expertise, and adopting event-based, context-aware enforcement with transparent metrics, Meta can better protect users’ dignity and equal participation while preserving space for legitimate expression and counterspeech.

Case Description

The Board selected these cases to explore the use of “algospeak” and online racial discrimination in sports. “Algospeak” is the use of coded language or emojis to convey dehumanizing or hateful messages in order to bypass automated content moderation systems. The Board also aims to assess the enforcement of such evolving forms of expression, both by human moderators and automated systems, particularly following Meta’s announcement on January 7, 2025 that it is changing its automated policy-violation detection systems. The company stated that it will “continue to focus these systems on tackling illegal and high-severity violations,” while relying on user reports to address “less severe policy violations.” The cases are relevant to one of the Board’s seven strategic priorities, Hate Speech Against Marginalized Groups.

The Board would appreciate public comments that address: 

  • The prevalence, forms and impact of racial discrimination and hate speech, both online and offline, particularly targeting Black people in Brazil, Ireland, Spain and the rest of Europe.  
  • The use of emojis, such as the monkey emoji or other coded language to target protected characteristic groups on social media, including in sports-related conversations. Comments can also address ways in which such content could potentially bypass algorithms designed to flag harmful content, and content moderation challenges, particularly after Meta’s announcement on January 7, 2025.  
  • Views on human rights responsibilities of social media companies and best practices in identifying and responding to hate speech, including to address use of emojis to communicate specific ideas and/or evade moderation.  
  • The prevalence of online racism and racial discrimination in discussions about sports, especially in football (soccer), and its impact on Black athletes.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.  

Public Comments 

If you or your organization feel you can contribute valuable perspectives that can help with reaching a decision on the cases announced today, you can submit your contributions using the button below. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 Pacific Standard Time (PST) on Thursday 30 October. 

What’s Next 

Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their decision, we will post it on the Decisions page.