Case Description
The Oversight Board will address the three cases below together, choosing either to uphold or overturn Meta’s decisions on a case-by-case basis.
Meta has referred three cases to the Board, all involving symbols often used by hate groups, as defined under the Dangerous Organizations and Individuals policy, but which can also have other uses.
In the first case, an image posted to Instagram in April 2016 showed a blonde woman with the bottom half of her face covered by a scarf. The words “Slavic Army” and a kolovrat symbol were superimposed over the face covering. While a kolovrat is a type of swastika and both are used by neo-Nazis, the symbol may also be used by some pagans, without apparent extremist intent. In the post’s caption, the user expressed pride in being Slavic, stating the kolovrat is a symbol of faith, war, peace, hate and love. The user hoped that their “people will wake up” and also stated they would follow “their dreams to the death.”
In the second case, a carousel of selfie photographs posted to Instagram in October 2024 showed a blonde woman in various poses, wearing an iron cross necklace and a T-shirt printed with an AK-47 assault rifle and the words “Defend Europe.” The words on the T-shirt are printed in Fraktur, a typeface associated with Nazis and neo-Nazis. The caption contained the Odal (or Othala) rune, part of the runic alphabet used across many parts of Europe until it was replaced by the Latin alphabet in the seventh century. The Odal rune was appropriated by the Nazis and is now used by neo-Nazis and other white supremacists to represent ideas connected to what they describe as the “Aryan race.” The post’s caption also contained the hashtag #DefendEurope as well as a text-based image of a rifle. Defend Europe is a slogan used by white supremacists and other extremist organizations opposing immigration. It is also the name of an organization Meta designates as a hate group under its Dangerous Organizations and Individuals policy.
The third case also concerns a carousel of images. Posted in February 2024, the images are drawings of an Odal rune wrapped around a sword with a quotation about blood and fate by Ernst Jünger, a German author and soldier who fought in the First and Second World Wars. The caption repeats the quotation before sharing a selective early history of the rune, without mentioning its Nazi and neo-Nazi appropriation. The caption concludes by describing the rune as being about “heritage, homeland, and family” and stating that prints of the image are for sale.
The content in the first two cases was only removed after Meta’s subject matter experts reviewed the posts in November 2024, as part of the process of referring the cases to the Board. At that time, Meta also determined that the third post did not breach any of its rules.
In referring these cases to the Board, Meta states they are particularly difficult as the symbols may not explicitly violate the company’s policies but still promote dangerous organizations and individuals. The symbols and others like them are used by members of these groups to identify themselves and to show support for the groups’ objectives. This is a key issue that Meta’s Dangerous Organizations and Individuals policy seeks to address. However, Meta is concerned that prohibiting these symbols entirely could limit discussions of history, linguistics and art.
The Board selected these cases to assess whether Meta’s approach to moderating symbols that may promote dangerous organizations also respects users’ freedom of expression. These cases fall within the Board’s strategic priority of Hate Speech Against Marginalized Groups.
The Board would appreciate public comments that address:
- How Meta should treat symbols with different meanings when reviewing at scale, where the review by the company’s subject matter experts is limited.
- The significance and prevalence of both the Odal/Othala rune and the kolovrat, particularly on social media.
- To what degree pagan and runic symbols in general have been appropriated by white supremacists and neo-Nazis, and the extent to which they are still used in non-extremist settings.
- Ways in which neo-Nazi and extremist content is disguised to bypass content moderation on social media.
As part of its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.
Public Comments
If you or your organization feel you can contribute valuable perspectives that can help with reaching a decision on the cases announced today, you can submit your contributions using the button below. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 Pacific Standard Time (PST) on Thursday 27 February.
What’s Next
Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their decision, we will post it on the Decisions page.
Comments
I am a frequent user of Instagram. People write things like "jas all the Jews" and "k.ill all the Jews", "n.azis were right", "zio bitch", and so on. The platform doesn't flag this hate speech because it is misspelled, etc. However, there are hate symbols that can be moderated, and I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to do away with human fact-checking, the only effective approach is an AI moderation policy that acknowledges the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
Please see the following ADL reference on online hate symbols:
https://www.adl.org/resources/hate-symbols/search
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the white pointed hood, are widely used to recruit and radicalize individuals online, often embedded in posts that appear harmless at first glance. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is no coincidence: these are signals designed to spread hate.
Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
Thank you
All symbols that can be interpreted as hate symbols should be restricted.
CyberWell submits this public comment to the Oversight Board to address the misuse and abuse of seemingly innocuous symbols. We analyze the inverted red triangle as a key example of a symbol on Meta that has been weaponized to incite violence against Jews and Israelis. Additionally, we offer suggestions on how Meta should treat symbols with different meanings when moderating at scale, where expert review is limited. Furthermore, we call on Meta to recognize that, even when a symbol associated with a DOI is used in an acceptable manner, related comments should automatically be flagged for review to detect potential glorification or support for terrorism. Strengthening these enforcement mechanisms is crucial to preventing symbols, like the red triangle, from being used to incite harm.
Introduction
As a nonprofit organization committed to eradicating online Jew-hatred by driving the enforcement and improvement of digital platforms’ community guidelines and safety policies, CyberWell considers it important to provide guidance on the subject of seemingly innocuous symbols adopted by Dangerous Organizations. According to the Board’s statement, the Board “prioritizes cases that have the potential to affect lots of users around the world”. Similar to the Board’s cases regarding extremist and neo-Nazi symbols, CyberWell would like to contribute our knowledge by using the inverted red triangle, a symbol associated with a designated DOI, as an illustrative example.
Since October 2023, the inverted red triangle has frequently been used as a rallying cry at flashpoints of antisemitic and violent attacks against Jewish communities and institutions worldwide, both in public spaces and online.
We seek to offer solutions rooted in content moderation best practices that balance freedom of expression with Meta’s obligation to protect its users, with adequate responses and protocols to prevent the spread of further violence against Jews.
Inverted Red Triangle: History & Terrorist Ties
Since the Israel-Hamas war broke out in October 2023, CyberWell has detected a significant increase in the use of the inverted red triangle on social media platforms, especially on Meta. Today, the inverted red triangle is a known Hamas propaganda symbol used to identify Jewish and Israeli targets for execution and attack. In addition to its modern connotations, it is critical to recognize that the inverted red triangle was historically used in Nazi concentration camps to dehumanize and categorize prisoners, specifically political prisoners, during the Holocaust. Especially following the October 7 Hamas terror attacks, its use to promote and glorify violence cannot be overlooked. It has been employed to mark Israeli targets for elimination, incite demonstrations advocating for harm against Jews, and facilitate real-world antisemitic hate crimes.
Importantly, Hamas was designated by the US State Department as a Foreign Terrorist Organization (FTO) in 1997. According to Meta's Dangerous Organizations and Individuals Tier 1 Policy, which encompasses entities listed on the aforementioned FTO list, Meta removes content that glorifies, supports, or represents Tier 1 entities, their leaders, founders, or prominent members.
The red triangle highlights the challenge Meta faces in moderating symbols with dual meanings at scale, as it can represent Palestinian support, serve as a Hamas propaganda tool for targeting Jews and Israelis, or simply be used to point something out. CyberWell urges Meta to adopt a context-driven approach when reviewing posts featuring this symbol, particularly when expert review is limited. To support this, we are providing evidence of how the inverted red triangle can incite antisemitism on Meta and offering recommendations to address such cases.
Examples of the Red Triangle on Meta
Hamas Propaganda
As of October 12, 2024, CyberWell identified that Hamas videos feature a modified version of their ‘Military Media’ logo, directly incorporating the inverted red triangle.
[SEE EMAILED PDF VERSION FOR IMAGES]
Example 1: https://www.facebook.com/watch/?v=480303751625108
This video was recently removed from Facebook following CyberWell’s report requesting it be taken down. The video, which spread Hamas propaganda, showed the step-by-step execution of 23-year-old Jewish Israeli citizen Yonatan Deutch. Below, we have included a screenshot showing the red triangle symbol being used to mark the civilian before he was fatally shot. The use of the red triangle in this context not only incites violence against Jews and Israelis, but also serves to glorify Hamas-perpetrated killings, directly violating Meta’s Dangerous Individuals and Organizations (DOI) policy.
Red Triangle Used to Incite Violence & Endorse Hamas
CyberWell also identified users who use the red triangle to advocate for or glorify violence against Jews and to express support for Hamas.
Example 1: https://www.instagram.com/t_u_g_b_a_k/p/C6hIokvK1GM/
This Instagram post features the red triangle symbol in both the image and the user’s description and further mentions the word Intifada, a term historically referring to violent uprisings targeting Jewish and Israeli civilians. By glorifying violence, this post encourages hostility toward Jews and Israelis and aligns with rhetoric that has historically incited real-world violence.
Example 2: [SEE EMAILED VERSION FOR POST]
In this Instagram post, the user praises the “resistance”, portraying it as justified. In this context, “resistance” refers to Hamas and its leaders, as indicated by the photo frame featuring former Hamas leader, Yahya Sinwar, who is known as the orchestrator of the October 7 terrorist attacks in Israel. The user also includes the red triangle in their description. Together, these elements glorify Hamas' violent actions.
Red Triangle Used in Comment Sections
The presence of red triangles in comment sections can serve as a coded signal of support for Hamas, violating policies without any typed words. As such, the red triangle is often deliberately placed in the comment sections of social media posts to escalate and promote violence.
Example 1: https://www.instagram.com/p/DFNXitJBFfy/
In this Instagram post, a user comments “bulls eye” followed by multiple red triangles. CyberWell frequently detects similar comments. This comment indicates support for Hamas’ actions depicted in the video, which includes footage of terrorists shooting Israeli soldiers.
Example 2: https://www.instagram.com/p/DF0kQ6WMimu/
This Instagram post includes a Hamas propaganda video depicting a former Israeli hostage. In the comments section, a user suggests that a Hamas military wing is the bravest army in the world, accompanied by multiple red triangles. The combination of this comment and the video's content demonstrates clear support for and glorification of Hamas' actions, including the kidnapping of Israeli civilians.
Coded Language and News Coverage on Meta
CyberWell identified several tactics used by extremists to bypass content moderation, sometimes employed simultaneously. One widely used tactic involves coded language to conceal extremist and neo-Nazi content from detection on social media. CyberWell aims to highlight specific examples to further illustrate how this is being employed.
A common tactic used to evade content moderation is the insertion of dots or special characters within the names of designated terrorist organizations, making them harder for automated detection systems to recognize. For example, the Al-Qassam Brigades, Hamas’ military wing, is written in Arabic as كتائب القسام. To bypass detection, users alter the text by adding dots, resulting in كـ.ـتـ.ـائب الـ.ـقـ.ـسام. Similarly, the military wing of Palestinian Islamic Jihad (PIJ), known in Arabic as سرايا القدس, is manipulated to appear as سـ.ـرايا القـ.ـدس to avoid moderation. These minor text modifications enable extremist content to circulate undetected by automated systems.
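To make this concrete, here is a minimal Python sketch of how a moderation pipeline might normalize text before matching it against a denylist, so that inserted dots, zero-width characters, and tatweel fillers no longer defeat detection. The denylist entries mirror the examples above; the function names and matching logic are illustrative assumptions, not a description of Meta's actual systems.

import unicodedata

# Illustrative denylist (the two names discussed above); not Meta's
# actual configuration.
DENYLIST = ["كتائب القسام", "سرايا القدس"]

def normalize(text: str) -> str:
    """Remove characters commonly inserted to break up banned strings:
    punctuation (e.g. dots), format/control characters (e.g. zero-width
    joiners), and the Arabic tatweel (U+0640) used as filler."""
    text = unicodedata.normalize("NFKC", text)
    return "".join(
        ch for ch in text
        if unicodedata.category(ch)[0] not in ("P", "C") and ch != "\u0640"
    )

def matches_denylist(post_text: str) -> bool:
    """True if any denylisted name survives in the normalized post text."""
    cleaned = normalize(post_text)
    return any(normalize(name) in cleaned for name in DENYLIST)

# The dotted spelling from the example above still matches after normalization.
assert matches_denylist("كـ.ـتـ.ـائب الـ.ـقـ.ـسام")

The point of the sketch is that the obfuscation is cheap to undo: stripping the inserted characters restores the original string, so exact-match detection works again.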
While coded language helps content bypass detection by moderation systems, presenting DOI-related material under the pretense of news coverage exploits a significant policy loophole. Meta’s DOI policy categorizes “support” in various ways, one of which is defined as “Channeling information or resources, including official communications, on behalf of a designated entity or event. E.g., Directly quoting a designated entity without a caption that condemns, neutrally discusses, or is a part of news reporting”.
The lack of a clear definition of what qualifies as legitimate news reporting within the DOI policy creates an opportunity for DOI supporters to promote extremist content while evading enforcement. CyberWell observed that some media accounts on Instagram share hundreds of videos produced and distributed by registered FTOs directly, without including any editorial context or condemnation. Instead, these posts are often framed with vague captions such as: “Scenes published by [DOI name] of the attack on...”, allowing terrorist propaganda to circulate unchecked.
To illustrate the scale of this issue, CyberWell reported 60 incidents of violative content through Meta’s trusted partner channel. However, only 18 were removed, even after appeals. This demonstrates a critical gap in enforcement, where content that directly amplifies the messaging of designated terrorist organizations remains accessible due to policy loopholes.
The following examples contain posts from news accounts:
Example 1: https://www.instagram.com/p/DFQVy0JBi-w/?igsh=MXFtZHc3ZjFwYjlrZQ%3D%3D
In this Instagram post, the user writes the name of the al-Qassam Brigades in a distorted manner in the post description, with dots between the letters, to avoid detection by the platform. The videos show former Israeli hostages prior to their release in Gaza.
Example 2: https://www.instagram.com/p/DFDM0ugqphv/
This Instagram post promotes Palestinian Islamic Jihad propaganda. This video of a sniper attack was shared by the user unedited and without any condemnation under the title: “Al-Quds Brigades shows footage they say is of their fighters shooting an Israeli soldier east of Gaza City before the ceasefire agreement went into effect”. In addition, in the post description, the user writes the name of the Al-Quds Brigades in a distorted manner, with dots between the letters, to avoid detection by the platform.
Red Triangle Use in Real-World Events
Importantly, the red triangle has spread beyond Meta, appearing in antisemitic rhetoric on other social media platforms as well, raising ongoing concerns about the reach of harmful content on Meta featuring this symbol.
But beyond the digital space, this symbol has also been used in real-world attacks against Jewish homes and synagogues, and in calls for violence, as depicted below.
Example 1: A photo showing the vandalized home of Anne Pasternak, the Jewish director of the Brooklyn Museum in New York, marked with the red triangle in June 2024.
https://forward.com/fast-forward/622549/red-triangle-inverted-hamas-symbol-brooklyn/
Example 2: This photo captures a pro-Palestinian protester vandalizing a monument in Washington, D.C., in July 2024. The graffiti includes the phrase “Hamas is coming”, which can be interpreted as a threat, and is accompanied by the red triangle symbol.
https://www.timesofisrael.com/man-charged-over-hamas-is-coming-graffiti-during-july-anti-netanyahu-protests-in-dc/
Example 3: During a protest at the University of Minnesota, pro-Palestinian demonstrators wrote “Victory to Al-Aqsa Flood” on the ground as part of a campus encampment, alongside the red triangle symbol. This phrase promotes antisemitic rhetoric by glorifying the killing of Jews and Israelis, as it directly references Hamas' October 7 attacks, which were code-named Operation Al-Aqsa Flood. When paired with the red triangle, a known Hamas propaganda symbol used to target Jewish individuals, the message serves as an endorsement of Hamas and a call for further violence against Jewish communities.
https://minndakjcrc.org/news/university-of-minnesota-appears-incapable-of-connecting-antisemitism-on-campus-with-pro-hamas-encampment/
CyberWell’s Response to Meta’s Decision on the Presented Cases
CyberWell recognizes Meta’s reasoning for removing the content in two of the three cases while leaving the third online. All three contain symbols and rhetoric linked to extremist ideologies, but the key distinction appears to be the presence of explicit incitement. The first two cases included calls to action, such as “wake up” and “Defend Europe”, along with references to assault weapons or designated hate groups, signaling a forward-looking intent to incite violence. The third case, which Meta allowed to remain, seemed focused on distorting historical narratives rather than promoting immediate action.
CyberWell suggests the Board consider an approach similar to its past ruling on the term “Shaheed”, where content was removed only if it met at least one of three conditions: a weapon depiction, advocacy for weapons use, or reference to a designated event. Since at least one case in this review meets the weapon depiction standard, we recommend expanding these criteria to include explicit calls to action, references to extremist groups, or clear incitement to violence. By refining enforcement based on context and intent, Meta can better distinguish between neutral discussion of symbols and extremist content, ensuring consistent moderation while mitigating potential harm.
In conclusion, by adopting the suggested enforcement strategies, Meta can effectively curb the use of multi-meaning symbols to promote violent extremism, ensuring that hate speech and terrorist propaganda are swiftly removed from its platforms.
Recommendations
CyberWell recognizes that the symbols referenced in the Board’s cases share similarities with those used by DOI-affiliated groups. In addition to the symbols highlighted by the Board, a key example is the inverted red triangle, which has been heavily used by Hamas to promote terror. While we acknowledge that this symbol can have various interpretations, it has been widely used to express support for Hamas and their violent actions.
We further offer several specific recommendations in relation to the Board’s questions:
I. Flagging Hateful Symbols with an Additional DOI Name/Symbol/Thematic Context
CyberWell strongly encourages Meta to flag and remove cases where a DOI’s name appears in a post alongside the red triangle, as this demonstrates a clear pattern of support for a designated terrorist organization. The same principle should apply to other symbols linked to extremist or terrorist groups. Users frequently post single symbols as a tactic to signal support for extremist entities while avoiding detection.
In addition, to effectively detect when a symbol is being used in a harmful context, we recommend that Meta implement a context-based approach combined with layered symbol analysis. This method involves assessing whether a specific symbol appears alongside another symbol or within a particular thematic context. For instance, if the red inverted triangle is used in a post related to the Israel-Hamas conflict, the post should be removed at scale. Since determining context can sometimes be challenging without subject matter experts, CyberWell can provide guidance to help enforce these measures effectively while ensuring that free speech is not excessively restricted. A minimal sketch of how such co-occurrence rules might be expressed follows the two examples below.
Example 1: Red Inverted Triangle (🔻) and Ninja Emoji (🥷)
Although the ninja emoji is traditionally linked to themes of stealth, secrecy, or cultural identity, CyberWell has identified that it has been repurposed to spread Hamas propaganda. This is likely because it visually resembles Hamas operatives and spokespeople, including Abu Obeida, who is commonly known as the “masked one”. When these two symbols appear together in a Meta post, they provide a clearer indication of harmful intent, helping to identify and mitigate extremist content. While each symbol may have independent meanings, CyberWell has found that, when used in combination, they consistently signify explicit support for Hamas.
Example 2: Othala Rune and Kolovrat and/or paired with weapon symbols
The aforementioned example was taken from the Board’s announcement of its latest cases. When these two symbols are used together in a post, it is highly likely that the content expresses support for an extremist ideology or group.
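As promised above, the following Python sketch shows one hypothetical way to express such co-occurrence rules as a small table that routes posts to human review when designated symbols appear together. The rule entries mirror the two examples above; the names and labels are our illustrative assumptions, not Meta's actual policy configuration.

# Illustrative rule table only; drawn from the two examples above.
CO_OCCURRENCE_RULES = [
    # (symbols that must all appear, label routed to human review)
    ({"🔻", "🥷"}, "possible Hamas support"),            # red triangle + ninja emoji
    ({"ᛟ", "kolovrat"}, "possible extremist signaling"),  # Odal rune + kolovrat
]

def flag_for_review(detected_symbols: set) -> list:
    """Return a review label for every rule whose symbols all co-occur
    in a post; an empty list means no rule fired."""
    return [
        label
        for required, label in CO_OCCURRENCE_RULES
        if required <= detected_symbols
    ]

# A post containing both the inverted red triangle and the ninja emoji is
# routed to human review rather than silently approved.
print(flag_for_review({"🔻", "🥷", "✊"}))  # ['possible Hamas support']

Posts matching a rule need not be removed automatically; routing them to reviewers preserves legitimate single-symbol uses while catching the combinations that, as noted above, consistently signal support.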
II. Flagging Comments on Meta’s Posts
CyberWell recommends that, even when a symbol associated with a DOI is used in a seemingly legitimate or neutral context, the comments containing such symbols should be automatically flagged for review to detect any expressions of support or glorification of terrorism. CyberWell has identified numerous instances where users have used the red triangle in comment sections to express support for Hamas and target openly Jewish users.
III. The Issue of Neo-Nazi and Other Extremist Content Bypassing Moderation
Coded Language: Many social media users use coded language to disguise speech that would violate DOI policies. We urge Meta to address these evasion techniques and develop effective solutions for them. It is crucial to ensure that extremist content is accurately identified and removed from social media lest it promote real-world harm.
News Coverage: CyberWell urges Meta’s policy team to establish specific criteria for what qualifies as legitimate news coverage to prevent DOI supporters from exploiting this loophole. Such criteria may include an elaborate framework of what constitutes sufficient editorial changes by the user.
Read the full comment: https://cyberwell.org/wp-content/uploads/2025/02/OSB-Public-Comment-DOI-Symbols-2025-website.pdf
I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. These symbols are widely used to recruit and radicalize individuals online, often embedded in posts that appear harmless at first glance. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.
Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. These symbols are widely used to recruit and radicalize individuals online, often embedded in posts that appear harmless at first glance. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.
Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
Sincerely,
Orli Moyal
This is a sample of what we wrote:
I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. These symbols are widely used to recruit and radicalize individuals online, often embedded in posts that appear harmless at first glance. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.
Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
This is a sample of what we wrote:
I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. These symbols are widely used to recruit and radicalize individuals online, often embedded in posts that appear harmless at first glance. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.
Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
This is a sample of what we wrote:
I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. These symbols are widely used to recruit and radicalize individuals online, often embedded in posts that appear harmless at first glance. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.
Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
This is a sample of what we wrote:
I urge Meta’s Oversight Board to take decisive action to address the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech; it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.
Particularly in light of Meta’s decision to end human fact-checking, any effective AI-driven moderation policy must acknowledge the reality of how these symbols function today.
Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.
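To make this rule concrete, here is a minimal sketch of how it could be enforced automatically. Everything in it is a hypothetical illustration: the RESTRICTED_SYMBOLS list, the violates_symbol_rule function, and the pipeline they imply are assumptions for this comment, not Meta’s actual designation lists or systems.

```python
# A minimal sketch of the rule above, assuming a hypothetical moderation
# pipeline. The entries and names are illustrative only; they do not
# reflect Meta's actual designation lists or enforcement systems.

# Symbols and hashtags widely documented as hate-group identifiers.
RESTRICTED_SYMBOLS = {
    "\u16df",         # Odal/Othala rune
    "#defendeurope",  # slogan of a designated hate group
}

def violates_symbol_rule(post_text: str) -> bool:
    """Return True if the post contains any restricted symbol or hashtag."""
    text = post_text.casefold()  # case-insensitive matching
    return any(symbol in text for symbol in RESTRICTED_SYMBOLS)

# Example: a caption like the one in the second case would be flagged.
caption = "Heritage, homeland, and family \u16DF #DefendEurope"
print(violates_symbol_rule(caption))  # True
```

Even a list this simple would have flagged the rune and the hashtag in the second case’s caption; the difficult policy question is what belongs on the list, not whether enforcement is technically feasible.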
Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the pointed white hood worn by the Ku Klux Klan. White fabric is not inherently hateful, but shaped into that hood there is no question it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context; it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.
These symbols, like the pointed white hood, are widely used to recruit and radicalize individuals online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy to evade moderation while ensuring the intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence; these are signals designed to spread hate.
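If Meta maintains that some of these symbols are too ambiguous for a flat ban, the same logic can still be automated: score each symbol together with the signals it co-occurs with, rather than judging the glyph in isolation. The sketch below is a hypothetical heuristic; every signal name, weight, and threshold is an assumption made for illustration, not a description of any real classifier.

```python
# Hypothetical co-occurrence heuristic: an ambiguous symbol is treated as
# violating only when it appears alongside other extremist indicators.
# Signal names, weights, and the threshold are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "appropriated_symbol": 2.0,   # e.g., kolovrat, Odal rune
    "designated_slogan": 3.0,     # e.g., "Defend Europe"
    "fraktur_font": 1.0,          # typeface associated with Nazi aesthetics
    "weapon_imagery": 1.0,        # e.g., rifle prints or glyphs
    "nationalist_rhetoric": 1.5,  # e.g., calls for one's people to "wake up"
}

VIOLATION_THRESHOLD = 3.5  # combined weight at which a post is restricted

def combined_score(detected_signals: set[str]) -> float:
    """Sum the weights of the detected signals; unknown names score zero."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in detected_signals)

def should_restrict(detected_signals: set[str]) -> bool:
    return combined_score(detected_signals) >= VIOLATION_THRESHOLD

# The rune alone scores 2.0 and passes; the rune plus a designated slogan
# scores 5.0 and is restricted, reflecting that the surrounding signals,
# not the glyph in isolation, carry the extremist message.
print(should_restrict({"appropriated_symbol"}))                      # False
print(should_restrict({"appropriated_symbol", "designated_slogan"})) # True
```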
Extremist groups have adapted to content moderation by cloaking their ideology in the language of history, art, and cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This tactic lets hate speech flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.