To read this decision in Sorani Kurdish, click here.
بۆ خوێندنەوەی ئەم بڕیارە بە زمانی کوردیی سۆرانی، کرتە لێرە بکە.
The Oversight Board has overturned Meta’s original decision to leave up a Facebook post that mocks a target of gender-based violence. While Meta has since recognized this post broke its rules on Bullying and Harassment, the Board has identified a gap in Meta’s existing rules which seems to allow content that normalizes gender-based violence by praising, justifying, celebrating or mocking it (for example, in cases where the target is not identifiable, or the picture is of a fictional character). The Board recommends that Meta undertake a policy development process to address this gap.
About the case
In May 2021, a Facebook user in Iraq posted a photo with a caption in Arabic. The photo shows a woman with visible marks of a physical attack, including bruises on her face and body. The caption begins by warning women about making a mistake when writing to their husbands. The caption states that the woman in the photo wrote a letter to her husband, which he misunderstood, according to the caption, due to the woman’s typographical error. According to the post, the husband thought the woman asked him to bring her a “donkey,” while in fact, she was asking him for a “veil.” In Arabic, the words for “donkey” and “veil” look similar (“حمار” and “خمار”). The post implies that because of the misunderstanding caused by the typographical error in her letter, the husband physically beat her. The caption then states that the woman got what she deserved as a result of the mistake. There are several laughing and smiling emojis throughout the post.
The woman depicted in the photograph is an activist from Syria whose image has been shared on social media in the past. The caption does not name her, but her face is clearly visible. The post also includes a hashtag used in conversations in Syria supporting women.
In February 2023, a Facebook user reported the content three times for violating Meta’s Violence and Incitement Community Standard. If content is not reviewed within 48 hours, the report is automatically closed, as it was in this case. The content remained on the platform for nearly two years and was not reviewed by a human moderator.
The user who reported the content appealed Meta’s decision to the Oversight Board. As a result of the Board selecting this case, Meta determined that the content violates the Bullying and Harassment policy and removed the post.
The Board finds that the post violates Meta’s policy on Bullying and Harassment as it mocks the serious physical injury of the woman depicted. As such, it should be removed.
However, this post would not have violated Meta’s rules on Bullying and Harassment if the woman depicted had not been identifiable, or if the same caption had accompanied a picture of a fictional character. This indicates to the Board that there is a gap in existing policies that seems to allow content that normalizes gender-based violence. According to Meta, a recent policy development process on praise of violent acts focused heavily on identifying any existing enforcement gaps in treating praise of gender-based violence under various policies. As part of that process, Meta considered the policy on the issue of mocking or joking about gender-based violence. Meta informed the Board that the company determined that the Bullying and Harassment policy generally captures this content. However, as noted in the examples above, the Board finds that existing policies and their enforcement do not necessarily capture all relevant content. This case also raises concerns about how Meta is enforcing its rules on bullying and harassment. The content in this case, which included a photograph of a Syrian activist who had been physically attacked and was reported multiple times by a Facebook user, was not reviewed by a human moderator. This may indicate that Meta does not prioritize this type of violation for review.
The Oversight Board’s decision
The Oversight Board overturns Meta’s original decision to leave up the content.
The Board recommends that Meta:
- Undertake a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence.
- Clarify that in the Bullying and Harassment Community Standard the term “medical condition” includes “serious physical injury.”
* Case summaries provide an overview of the case and do not have precedential value.
Full case decision
1. Decision summary
The Oversight Board overturns Meta’s original decision to leave up a Facebook post that mocks a target of gender-based violence. Meta has acknowledged that its original decision was wrong, and that the content violates its policy on Bullying and Harassment. The Board recommends that Meta undertakes a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence. The Board understands that Meta is conducting a policy development process which, among other issues, is considering how to address praise of gender-based violence. This recommendation is in support of a more thorough approach to limiting the harms caused by the normalization of gender-based violence.
2. Case description and background
In May 2021, a Facebook user in Iraq posted a photo with a caption in Arabic. The photo shows a woman with visible marks of a physical attack, including bruises on her face and body. The caption begins by warning women about making mistakes when writing to their husbands. The caption states that the woman in the photo wrote a letter to her husband, which the husband misunderstood, according to the caption, due to the woman’s typographical error in writing the letter. According to the post, the husband thought the woman asked him to bring her a “donkey,” while in fact, she was asking him for a “veil.” In Arabic, the words for “donkey” and “veil” look similar (“حمار” and “خمار”). The caption then mocks the situation and concludes that the woman got what she deserved as a result of the mistake. There are several laughing and smiling emojis throughout the post.
According to several sources, the woman depicted in the photograph is a Syrian activist who had been imprisoned by the regime of Bashar Al-Assad and later beaten by individuals believed to be affiliated with the regime. Her image has been shared on social media in the past. The caption does not name her, but her face is clearly visible. The post also includes a hashtag which, according to experts consulted by the Board, is primarily used by pages and groups in Syrian conversations supporting women. The post had about 20,000 views and under 1,000 reactions.
In February 2023, a Facebook user reported the content three times for violating the Violence and Incitement Community Standard. The reports were closed without human review, leaving the content on the platform. Meta told the Board it considers a series of signals to determine how to prioritize content for human review, which includes the virality of the content and how severe the company considers the violation type. If content is not reviewed within 48 hours, the report is automatically closed. In this case, the content remained on the platform for nearly two years before it was first reported. After it was reported, it was not reviewed by a human reviewer within 48 hours and thus the report was automatically closed.
The user who reported the content appealed Meta’s decision to the Oversight Board. As a result of the Board selecting this case, Meta determined that the content violates the Bullying and Harassment policy and removed the post.
The Board notes the following context in reaching its decision in this case. This content was posted by a user in Iraq. According to the World Health Organization, some 1.32 million people in Iraq are estimated to be at risk of different forms of gender-based violence. The majority are women and adolescent girls. Despite repeated calls by women’s groups to pass legislation in Iraq to combat domestic violence, a draft law remains stalled, and the current penal code allows for husbands to punish their wives as an exercise of a legal right and provides for lower sentencing for murder when connected to an ‘honour killing.’
The activist depicted in the photograph is from Syria. According to the United Nations, “[o]ver a decade of conflict in Syria has had a significant gendered impact on women and girls.” As many as 7.3 million Syrians, overwhelmingly women and girls, require services related to gender-based violence. An inadequate national legal framework and discriminatory practices are barriers to women’s protection and hinder effective accountability for violence against them (UN Syria Report, pages 5-9). The UN reports widespread impunity from prosecution for gender-based violence and stigmatization of victims or survivors of gender-based violence, leading to ostracization and further restrictions on participation in public life. The regime has targeted women associated with the opposition, subjecting them to torture and sexual abuse.
According to a study conducted by UN Women, nearly half (49 per cent) of women internet users in eight nations in the League of Arab States reported feeling unsafe from online harassment. The same study found “33 per cent of women who experienced online violence report[ed] that some or all of their experiences of online violence moved offline.” Online violence was defined as including receiving unwanted images or symbols with sexual content; annoying phone calls; inappropriate or unwelcome communications; and receiving insulting and/or hateful messages. According to a UN Secretary-General report, online violence “impedes women’s equal and meaningful participation in public life through humiliation, shame, fear and silencing. This is a ‘chilling effect,’ whereby women are discouraged from actively participating in public life” (A/77/302, para. 22).
3. Oversight Board authority and scope
The Board has authority to review Meta’s decision following an appeal from a person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation.
When the Board selects cases like this one, where Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram.
4. Sources of authority and guidance
The following standards and precedents informed the Board’s analysis in this case:
I. Oversight Board decisions:
The most relevant previous decisions of the Oversight Board include:
II. Meta’s content policies:
The Bullying and Harassment Community Standard aims to prevent individuals being targeted on Meta’s platform through threats and different forms of malicious contact. According to Meta’s policy rationale, such behavior “prevents people from feeling safe and respected on Facebook.” The Community Standard is divided into tiers, with more protection provided for private individuals and limited scope public figures than for public figures.
When the content was reviewed by Meta and the Board began its review, Tier 4 of the Bullying and Harassment Community Standard prohibited targeting private individuals or limited scope public figures with “content that praises, celebrates or mocks their death or serious physical injury.” The Community Standard defines limited scope public figures as “individuals whose primary fame is limited to their activism, journalism, or those who become famous through involuntary means.” Meta made this definition public in response to the Board’s recommendation in Pro-Navalny protests in Russia, 2021-004-FB-UA. Meta’s internal guidance for content moderators defines “mocking” as “an attempt to make a joke about, laugh at, or degrade someone or something.”
On June 29, Meta updated the Community Standard. Under Tier 1 of the current policy, everyone is protected from “Celebration or mocking of [their] death or medical condition.”
The Board’s analysis was informed by Meta’s commitment to “Voice,” which the company describes as “paramount,” and its values of “Safety” and “Dignity.”
III. Meta’s human rights responsibilities
The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs.
The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards:
- The right to freedom of opinion and expression: Article 19, International Covenant on Civil and Political Rights (ICCPR); General Comment No. 34, Human Rights Committee, 2011; UN Special Rapporteur on freedom of opinion and expression, reports: A/HRC/38/35 (2018) and A/74/486 (2019); and Joint Declaration on Freedom of Expression and Gender Justice, Special Rapporteurs on freedom of opinion and expression of the United Nations (UN), the Organization for Security and Co-operation in Europe (OSCE), the Organization of American States (OAS), and the African Commission on Human and Peoples’ Rights (ACHPR) (2022).
- The right to non-discrimination: Article 2, para. 1, ICCPR; Article 1 and Article 7 (non-discrimination in participation in the political and public life of the country), Article 4(1) and Article 5(a), Convention on the Elimination of All Forms of Discrimination against Women (CEDAW); Article 8, Declaration on Human Rights Defenders (right to effective access, on a non-discriminatory basis, to participation in the conduct of public affairs); Resolution 35/18, Human Rights Council, 2017; Special Rapporteur on the situation of human rights defenders, Report: A/HRC/40/60 (2019); and Special Rapporteur on violence against women, its causes and consequences, Report: A/HRC/38/47 (2018).
5. User submissions
Following a Facebook user’s report of the content and appeal to the Oversight Board, both the user who created the content and the user who reported it were sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. Neither user submitted a statement.
6. Meta’s submissions
In its decision rationale, Meta explained that the content should have been removed from Facebook for violating the Bullying and Harassment policy.
Meta’s regional team identified the woman depicted as a known Syrian activist who had been jailed for her activism. According to Meta, the photograph in the post shows the activist after she was beaten by individuals affiliated with the regime of Bashar Al-Assad.
Based on the policy in place at the time, Meta stated that it considers the content to be mocking the serious physical injury of the woman depicted, and therefore, it violates the policy. Meta considers the woman depicted in the photograph to be a limited scope public figure, as her primary fame is limited to her activism. Meta understands the content to be joking about her injuries and implying that she “brought them upon herself due to ‘karma’.” According to Meta, the content also “makes up” a story about her having written a poorly worded letter, implying she lacks intelligence, when in reality she suffered these injuries as a result of a violent attack.
Following the update in the policy, Meta told the Board the content remains violating, and the update does not impact the substantive protection provided by the policy to the woman depicted. According to Meta, “[t]he update to the Bullying and Harassment policy was intended to streamline the policy. The update did not change the protections afforded limited scope public figures, like the woman identified in the case content. The relevant line under which [Meta] removed this content was initially in Tier 4 of the policy, but as a result of the update it is now part of Tier 1.”
The Board asked Meta 11 questions in writing. Questions related to Meta policies addressing content depicting gender-based violence, how Meta enforces the Bullying and Harassment policy, and any research on depictions of gender-based violence on social media and offline harms. Ten questions were answered and one question, asking for regional enforcement data for the Bullying and Harassment policy, was not answered.
7. Public comments
The Oversight Board received 19 public comments for this case. Three comments were submitted from the Middle East and North Africa, two from Central and South Asia, two from Asia Pacific and Oceania, three from Europe, eight from the United States and Canada, and one from Latin America and the Caribbean.
The submissions covered the following themes: cyber harassment and targeting of women activists and public figures, the serious consequences of the digital dimension of gender-based violence on the safety, physical and psychological health and dignity of women, and the difficulty of bringing online violence against women to the attention of content moderators who may not understand the relevant regional dialect.
To read public comments submitted for this case, please click here.
8. Oversight Board analysis
The Board examined whether this content should be removed by analyzing Meta's content policies, human rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance, providing recommendations on how Meta’s policies and enforcement processes can better respect Meta’s human rights responsibilities.
The Board selected this appeal because it offers the opportunity to explore how Meta’s policies and enforcement address content that targets women human rights defenders and content that mocks gender-based violence, issues the Board is focusing on through its strategic priority of gender.
8.1 Compliance with Meta’s content policies
I. Content rules
The Board finds that the post violates Meta’s policy on Bullying and Harassment, both under the policy in force at the time and under the updated policy, and should be removed. The caption of the post, read in conjunction with the image, violates Meta’s policy because it mocks the serious physical injury or medical condition of the woman depicted.
Meta’s internal guidance for content moderators defines “mocking” as “an attempt to make a joke about, laugh at, or degrade someone or something.” In the form of a joke, the content implies the woman depicted deserved to be physically attacked for making a typographical error in her request to her husband.
According to Meta, the internal guidance provided to content moderators defines “medical condition” to include “serious injury.” Prior to the update on June 29, the public-facing policy prohibited mocking the “serious physical injury” of a private individual or a limited scope public figure, while the internal guidance provided to moderators prohibited mocking their “medical condition.” The Board asked Meta about the discrepancy in the terms used in the public-facing policy and the internal guidance. Meta acknowledged the discrepancy and amended its public-facing policy to use the same term used in its internal guidance.
The Board finds the post has multiple plausible interpretations: the woman may be targeted as a human rights defender, as a survivor of abuse, or both. The different interpretations are analyzed further below. Regardless of the interpretation, the gender of the depicted person, or the gendered nature of the mocking, the policy is violated in this case so long as the depicted person is identifiable.
II. Enforcement action
The Board is concerned about potential challenges in the enforcement of this Community Standard. First, the content in this case, which included a photograph of a Syrian activist who had been physically attacked and was reported multiple times by a Facebook user, was not reviewed by a human moderator. This may indicate that this type of violation is not prioritized for review. Second, the Board is concerned that enforcement of the policy, especially when it requires analyzing an image together with a caption, is challenging for Arabic language content. As the Board has previously explained, Meta relies on a combination of human moderators and machine learning tools referred to as classifiers to enforce its Community Standards (see Wampum belt, 2021-012-FB-UA). In this case, Meta informed the Board that the company has a classifier targeting Bullying and Harassment for “General Language Arabic.”
The Board notes that the independent human rights due diligence report published by BSR, which Meta commissioned in response to the Board’s recommendation in an earlier case, identified problems in Meta’s enforcement in Arabic, attributing them in part to inadequate sensitivity to different dialects of Arabic. The Board is concerned, based on the report’s findings and the lack of enforcement in this case, that there may be challenges with both the proactive and reactive paths for effective enforcement of this policy in the region. The lack of transparency on auditing of the classifiers enforcing this policy also concerns the Board.
8.2 Compliance with Meta’s human rights responsibilities
The Board finds that Meta’s initial decision to leave up the post was inconsistent with its human rights responsibilities as a business.
Freedom of expression (Article 19 ICCPR)
Article 19, para. 2, of the International Covenant on Civil and Political Rights (ICCPR) provides that “everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.” Article 19 provides for broad protection of expression, including expression which people may find offensive (General Comment 34, para 11).
While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, necessity, and proportionality (Article 19, para. 3, ICCPR). The Board has acknowledged that while the ICCPR does not create obligations for Meta as it does for states, Meta has committed to respect human rights as set out in the UNGPs (A/74/486, paras. 47-48). Meta’s policies prohibit specific forms of discriminatory and hateful expression, absent a requirement that each individual piece of content incite direct and imminent violence or discrimination. The Special Rapporteur on free expression has noted that on social media, “the scale and complexity of addressing hateful expression presents long-term challenges” (A/HRC/38/35, para. 28). The Board, drawing upon the Special Rapporteur’s guidance, has previously explained that such prohibitions would raise concerns if imposed by a government, particularly if enforced through criminal or civil sanctions. As the Board noted in its Knin Cartoon, Depiction of Zwarte Piet, and South African Slurs decisions, Meta can regulate such expression, demonstrating the necessity and proportionality of its actions due to the harm that results from the accumulation of content.
I. Legality (clarity and accessibility of the rules)
Any restriction on freedom of expression should be accessible and clear enough in scope, meaning and effect to provide guidance to users and content reviewers as to what content is and is not permitted on the platform. Lack of clarity or precision can lead to inconsistent and arbitrary enforcement of the rules (General Comment No. 34, para. 25; A/HRC/38/35, para. 46).
The Board notes that Meta made changes to the Community Standard on June 29, aligning the terminology used in its public-facing Bullying and Harassment Community Standard and the internal guidance provided to content moderators. Prior to this change, the Community Standard prohibited content mocking a “serious physical injury” while the internal guidance prohibited mocking a “medical condition.” According to Meta, “medical condition” is the broader term. The Board welcomes this change, as the use of different terminology may lead to confusion and inconsistent enforcement. However, the Board is concerned that it may not be clear to users that “medical condition” includes “serious physical injury” and recommends that Meta make this clear to its users.
II. Legitimate aim
State restrictions on freedom of expression must pursue a legitimate aim, which includes the protection of the rights of others. The Human Rights Committee has interpreted the term “rights” to include human rights as recognized in the ICCPR and more generally in international human rights law (General Comment No. 34, para. 28).
The Board finds that Meta’s Bullying and Harassment policy is directed towards the legitimate aim of respecting the rights of others, including the right to equality and non-discrimination, and to freedom of expression. Among other aims, these policies seek the legitimate aim of preventing the harms resulting from bullying and harassment, discrimination on the basis of sex or gender, and respecting the freedom of expression and access to Meta’s platform for those targeted by this expression. These aims are linked, as according to the Joint Declaration on Freedom of Expression and Gender Justice, “online violence against women has particular significance for freedom of expression” and “social media platforms have an obligation to ensure that online spaces are safe for all women and free from discrimination, violence, hatred and disinformation.”
III. Necessity and proportionality
The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34).
Applied to the company, the Board finds that Meta’s policy, under which this content should have been removed, constitutes a necessary and proportionate response to protect users from targeted online bullying and harassment. The policy respects the equal right to freedom of expression of women human rights defenders, who are often forced off the platform through such harassment. The UN Special Rapporteur on violence against women has reported that “[w]omen human rights defenders, journalists and politicians are directly targeted, threatened, harassed, or even killed for their work. They receive online threats, generally of a misogynistic nature, often sexualized and specifically gendered. The violent nature of these threats often leads [women] to suspend, deactivate or permanently delete their online accounts, or to leave the profession entirely” (A/HRC/38/47, para. 29).
The Special Rapporteur on the situation of human rights defenders has identified similar practices and harms specifically for women: “Women human rights defenders are often subjected to online harassment, violence and attacks, which include sexual violence, verbal abuse, sexuality baiting, doxing...and public shaming. Such abuse occurs in comments on news articles, blogs, websites and social media. The online terror and slander to which women are subjected can also lead to physical assault” (A/HRC/40/60, para. 45). According to a study conducted by UN Women, 70 per cent of women activists and human rights defenders in Arab states reported feeling unsafe from online harassment. In this sense, women who take part in public life through activism or by running for office are disproportionately targeted online, which can lead to self-censorship and even withdrawal from public life.
The Board finds this post mocks the depicted woman by using a gendered joke to belittle her, implying she deserved to be physically attacked. Such online harassment is widespread and leads to women being silenced and shut out of public life. It can also be accompanied by physical attacks. Removal of this content is therefore necessary, as less restrictive means would not prevent her image with a joke meant to belittle her from being disseminated.
The Board also finds that this post is addressed to women and girls more broadly. It normalizes gender-based violence by implying that physically attacking women is funny and that men are entitled to use violence. The use of the hashtag also indicates the intention of the user to reach a broader group of women than the individual depicted. The Special Rapporteur on violence against women draws the connection between targeting women in public life and intimidation aimed at women more broadly: “In addition to the impact on individuals, a major consequence of online and ICT [Information and Communication Technology] facilitated gender-based violence is a society where women no longer feel safe either online or offline, given the widespread impunity for perpetrators of gender-based violence” (A/HRC/38/47, para. 29; see also A/HRC/RES/35/18, para. 4, urging states to address gendered stereotypes that are a root cause of violence against women and discrimination).
The Board is concerned that Meta’s existing policies do not adequately address content that normalizes gender-based violence by praising it or implying it is deserved. The content analyzed in this case was dealt with under the Bullying and Harassment policy, but this policy is not always adequate to limit the harm caused by material that, by referring more generally to gender-based violence, exacerbates discrimination and the exclusion of women from the public sphere online or offline. This same post would not violate the Bullying and Harassment policy if the woman depicted were not identifiable, or if the same caption had accompanied a picture of a fictional character. According to Meta, this content does not violate the Hate Speech policy because it “does not target a person or people on the basis of their protected characteristic.” This indicates to the Board that there is a gap in existing policies that seems to allow discriminatory content, including content that normalizes gender-based violence, to remain and be shared on the platform.
According to Meta, a recent policy development process on praise of violent acts focused heavily on identifying any existing enforcement gaps in treating praise of gender-based violence under various policies. As part of that process, Meta considered the policy on the issue of mocking or joking about gender-based violence. Meta informed the Board that the company determined that the Bullying and Harassment policy generally captures this content. However, as noted in the examples above, the Board finds that existing policies and their enforcement do not necessarily capture all relevant content.
According to the UN Special Rapporteur on violence against women, “[v]iolence against women is a form of discrimination against women and a human rights violation falling under CEDAW” (A/HRC/38/47, para. 22). Online violence against women includes “any act of gender-based violence against women that is committed, assisted or aggravated in part or fully by the use of ICT…against a woman because she is a woman, or affects women disproportionately” (A/HRC/38/47, para. 23). The Committee on the Elimination of Discrimination against Women (the UN body of independent experts monitoring implementation of the Convention), in General Recommendation No. 35, called on states to adopt preventive measures, including by encouraging social media companies to strengthen self-regulatory mechanisms “addressing gender-based violence against women that takes place through their services and platforms” (CEDAW/C/GC/35, para. 30(d)).
Content that normalizes gender-based violence by praising it or implying it is logical or deserved validates violence and seeks to intimidate women, including women who seek to take part in public life (see Public Comment by Digital Rights Foundation, PC-11226). The cumulative message this content delivers is that violence is acceptable and can be used to punish transgressions of gender norms. While academic studies showing causation are limited, multiple studies have shown a correlation between the normalization of gender-based violence and the increased occurrence of such violence.
In multiple previous cases, the Board has recognized how certain content that may be discriminatory (Depiction of Zwarte Piet, 2021-002-FB-UA) or hateful (Knin cartoon, 2022-001-FB-UA; South African Slurs, 2021-011-FB-UA) can be removed due to its cumulative effect, without the need to show that each piece of content can cause direct and imminent physical harm. The Board has also noted that the accumulation of harmful content creates an environment in which acts of discrimination and violence are more likely (Depiction of Zwarte Piet, 2021-002-FB-UA).
The UN Special Rapporteur on violence against women has called attention to the important role social media plays in addressing gender-based violence and to the need to shape ICT policies and practices with an understanding of the “broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the Internet and other ICT” (A/HRC/38/47, para. 14). Gendered stereotypes promote violence and inadequate responses to it, further perpetuating such discrimination. In several cases, the Committee on the Elimination of Discrimination against Women has found that when state authorities act on gendered stereotypes in their decision-making, the state fails to effectively prevent or address gender-based violence (see Belousova v. Kazakhstan; R.P.B. v. Philippines; Jallow v. Bulgaria, para. 8.6; and L.R. v. Republic of Moldova).
In the context of a broader environment of gender-based discrimination and violence, Meta has a responsibility not to exacerbate threats of physical harm or the suppression of women’s speech and participation in society. Content like the post in this case normalizes gender-based violence by denigrating women and by trivializing, excusing, or encouraging both public aggression and domestic abuse. The cumulative effect of such content, the harm it causes to women’s rights, and the environment of impunity it perpetuates all contribute to a heightened risk of offline violence, self-censorship, and the suppression of women’s participation in public life. Therefore, the Board recommends that Meta undertake a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through the praise, justification, celebration or mocking of such violence.
9. Oversight Board decision
The Oversight Board overturns Meta's original decision to leave up the content.
A. Content policy
1. To ensure clarity for users, Meta should explain that the term “medical condition,” as used in the Bullying and Harassment Community Standard, includes “serious physical injury.” While the internal guidance explains to content moderators that “medical condition” includes “serious physical injury,” this explanation is not provided to Meta’s users.
The Board will consider this recommendation implemented when the public-facing language of the Community Standard is amended to include this clarification.
2. The Board recommends that Meta undertake a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through the praise, justification, celebration or mocking of such violence. The Board understands that Meta is conducting a policy development process which, among other issues, is considering how to address praise of gender-based violence. This recommendation is in support of a more thorough approach to limiting the harms caused by the normalization of gender-based violence.
The Board will consider this recommendation implemented when Meta publishes the findings of this policy development process and updates its Community Standards.
The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members.
For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world.