Public Comments Portal

Symbols Adopted by Dangerous Organizations

February 13, 2025: Case Selected
February 27, 2025: Public Comments Closed
June 12, 2025: Decision Published
Upcoming: Meta Implements Decision

Comments


Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates.

While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk.

Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Meta’s decision must create a policy that acknowledges the reality of how these hate symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations.

A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface.

This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups are disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous.

If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed.

Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality.

If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Organization
Centre for Advanced Studies in Cyber Law and Artificial Intelligence
Country
India
Language
English
Attachments
Comments-Hate-Symbols.docx
Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Country
United States
Language
English

Hello Meta Team,
I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Aaron Rubin
Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Sarah Feder
Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.
Sarah Feder

Name
Sarah Feder
Country
United States
Language
English

Restrict all hate symbols!

Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Marina Kogan
Country
United States
Language
English

This is a sample of what we wrote:

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Joshua Brown
Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Todd Garber
Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Miriam Kosowsky
Country
United States
Language
English

Meta must take decisive action in addressing the use of extremist antisemitic symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE STOPPED.

These symbols, like the white pointed hood, are used to evade moderation while ensuring their intended audience recognizes the underlying extremist message.
Extremists use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.
KEFFIYEH photos and hashtags: these scarves were first used by the terrorist Arafat to signal armed resistance.
The following, in any format (with or without spaces or hashtags), should also not be allowed on your platform:
Rape is Resistance
From the River to the Sea
From the Sea to the River
Resistance by any Means

Name
Magen David
Country
United States
Language
English

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Sincerely,
Magen D.

Name
Ella
Country
United States
Language
English

I ask Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Name
Violet Moore
Country
Canada
Language
English

This is a sample of what we wrote:

I urge Meta’s Oversight Board to take decisive action in addressing the use of extremist symbols on its platforms, particularly as antisemitism and white supremacist activity continue to rise at alarming rates. While freedom of expression is an important value, it is critical to recognize that the First Amendment applies to government restrictions on speech—it does not obligate private companies like Meta to allow harmful content that puts marginalized communities at risk. Meta has both the right and the responsibility to take a stand against the use of symbols that have been widely co-opted by hate groups, prioritizing the safety of those impacted.

Particularly in light of Meta’s decision to do away with human fact checking, the only effective approach for an AI moderation policy is to establish a policy that acknowledges the reality of how these symbols function today.

Namely: IF A SYMBOL HAS BEEN CO-OPTED BY A HATE GROUP, ITS USE SHOULD BE RESTRICTED.

Some symbols may have had multiple meanings in the past, but their modern use is so heavily tied to hate that they can no longer be separated from their hateful connotations. A clear example is the white hood worn by the Ku Klux Klan. While not inherently a hateful object, in the context of a pointed white hood, there is no question that it represents white supremacy, terrorism, and violence. No one today could reasonably argue that such an image is being used in a neutral, historical, or cultural context—it is a well-known hate symbol. The same principle applies to the kolovrat and the Odal rune. While they have historical origins, their dominant usage today is as identifiers of white supremacist and neo-Nazi movements.

These symbols, like the white pointed hood, are widely used for recruitment and radicalization online, often embedded in posts that appear innocuous on the surface. This is a deliberate strategy used to evade moderation while ensuring their intended audience recognizes the underlying extremist message. In the cases under review, these symbols are paired with nationalist rhetoric, references to white identity, and other indicators of extremist ideology. This is not coincidence—these are signals designed to spread hate.

Extremist groups have adapted to content moderation by disguising their ideology under the guise of history, art, or cultural heritage. They use specific fonts, hashtags, and coded language to create plausible deniability while still reinforcing white supremacist beliefs. This method allows hate speech to flourish unchecked, making it even more dangerous. If Meta does not act, it will continue providing extremists with a platform to spread and normalize their ideology.

Meta has full authority as a private company to take action against content that endangers marginalized communities. If it is not going to use human reviewers to assess intent, then the policy must be simple: symbols that are widely recognized as hate symbols should not be allowed. Freedom of expression does not mean freedom from consequences, and it certainly does not mean allowing hate to thrive under the false banner of neutrality. If Meta does not prioritize the safety of those most impacted by these symbols, then it is not remaining neutral—it is making a choice to enable harm.

Case Description

The Oversight Board will address the three cases below together, deciding on a case-by-case basis whether to uphold or overturn Meta’s decisions.

Meta has referred three cases to the Board, all involving symbols often used by hate groups, as defined under the Dangerous Organizations and Individuals policy, but which can also have other uses.

In the first case, an image posted to Instagram in April 2016 showed a blonde woman with the bottom half of her face covered by a scarf. The words “Slavic Army” and a kolovrat symbol were superimposed over the face covering. While a kolovrat is a type of swastika and both are used by neo-Nazis, the symbol may also be used by some pagans, without apparent extremist intent. In the post’s caption, the user expressed pride in being Slavic, stating the kolovrat is a symbol of faith, war, peace, hate and love. The user hoped that their “people will wake up” and also stated they would follow “their dreams to the death.”

In the second case, a carousel of selfie photographs posted to Instagram in October 2024 showed a blonde woman in various poses, wearing an iron cross necklace and a T-shirt printed with an AK-47 assault rifle and the words “Defend Europe.” The Fraktur font on the T-shirt is a typeface associated with Nazis and neo-Nazis. The caption contained the Odal (or Othala) rune, part of the runic alphabet used across many parts of Europe until it was replaced by the Latin alphabet in the seventh century. The Odal rune was appropriated by the Nazis and is now used by neo-Nazis and other white supremacists to represent ideas connected to what they describe as the “Aryan race.” The post’s caption also contained the hashtag #DefendEurope as well as a text-based image of a rifle. Defend Europe is a slogan used by white supremacists and other extremist organizations opposing immigration. It is also the name of an organization Meta designates as a hate group under its Dangerous Organizations and Individuals policy.

The third case also concerns a carousel of images. Posted in February 2024, the images are drawings of an Odal rune wrapped around a sword with a quotation about blood and fate by Ernst Jünger, a German author and soldier who fought in the first and second world wars. The caption repeats the quotation before sharing a selective early history of the rune, without mentioning its Nazi and neo-Nazi appropriation. The caption concludes by describing the rune as being about “heritage, homeland, and family” and stating that prints of the image are for sale.

The content in the first two cases was removed only after Meta’s subject matter experts reviewed the posts in November 2024, as part of the referral of these cases to the Board. At that time, Meta also determined that the third post did not breach any of its rules.

In referring these cases to the Board, Meta states they are particularly difficult as the symbols may not explicitly violate the company’s policies but still promote dangerous organizations and individuals. The symbols and others like them are used by members of these groups to identify themselves and to show support for the groups’ objectives. This is a key issue that Meta’s Dangerous Organizations and Individuals policy seeks to address. However, Meta is concerned that prohibiting these symbols entirely could limit discussions of history, linguistics and art.

The Board selected these cases to assess whether Meta’s approach to moderating symbols that may promote dangerous organizations also respects users’ freedom of expression. These cases fall within the Board’s strategic priority of Hate Speech Against Marginalized Groups.

The Board would appreciate public comments that address:

  • How Meta should treat symbols with different meanings when reviewing at scale, where the review by the company’s subject matter experts is limited.
  • The significance and prevalence of both the Odal/Othala rune and the kolovrat, particularly on social media.
  • To what degree pagan and runic symbols in general have been appropriated by white supremacists and neo-Nazis, and the extent to which they are still used in non-extremist settings.
  • Ways in which neo-Nazi and extremist content is disguised to bypass content moderation on social media.

As part of its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.

Public Comments

If you or your organization feel you can contribute valuable perspectives that can help with reaching a decision on the cases announced today, you can submit your contributions using the button below. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 Pacific Standard Time (PST) on Thursday, February 27, 2025.

What’s Next

Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their decision, we will post it on the Decisions page.