Overturned
Reclaimed Term in Drag Performance
April 23, 2025
Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement.
Summary
A user appealed Meta’s decision to remove an Instagram post featuring a drag performance and a caption that included a word, designated by Meta as a slur, being used in a reclaimed, positive, self-referential context. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.
About the Case
In May 2024, a user posted a video to Instagram of themselves wearing a red, glittery outfit and performing in a drag show. The caption underneath the video mentioned other Instagram users, acknowledging them for their support and participation. The post also included a thank-you note to another user for providing the sound production for the show. In the caption, the user referred to themselves as a “faggy martyr.”
The user who posted the video appealed Meta’s removal of the post to the Board, explaining that they are a queer, trans drag performer and that they are speaking about themselves in the caption of the video. They emphasized that they included the word “faggy” (a diminutive version of the “fag” slur, hereafter “f***y” and “f-slur”) in their post description because it is a “reclaimed colloquial term that the queer community ... uses all the time.” The user added that they consider this term a joyous self-descriptor of which they are proud. The user concluded their appeal to the Board by stating the importance of keeping the post up, as it helps them book more performances.
Under Meta’s Hateful Conduct Community Standard, Meta removes slurs, “defined as words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic,” in most contexts, “because these words are tied to historical discrimination, oppression, and violence.” Although Meta announced changes to the language and enforcement of the company’s Hate Speech policy, now the Hateful Conduct policy, on January 7, 2025, the “f-slur” remains on Meta’s list of designated slurs.
The company allows slurs when used self-referentially and in an expressly positive context. These exceptions remain in place following Meta’s January 7 policy update. In this case, the user posted a video of their own performance, praised it in the caption and referred to themselves as “f***y.” While “f***y” is a slur, in this context it was being used “self-referentially or in an empowering way.”
After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Hateful Conduct policy and that its original decision to remove the content was incorrect because, in the post, the “f-slur” was used both self-referentially and in an explicitly positive context. The company then restored the content to Instagram.
Board Authority and Scope
The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users.
Significance of Case
This case demonstrates ongoing issues with Meta’s ability to enforce exceptions to its Hateful Conduct (formerly Hate Speech) policy for the use of slurs in self-referential and/or empowering speech. This summary decision highlights the impact of wrongful removals on the visibility and livelihoods of queer performers, as the user appealing Meta’s decision indicated. Researchers have noted for many years the potential for disproportionate errors in the moderation of reclaimed speech by queer communities and the harm caused by the resulting mistaken removals.
In the Reclaiming Arabic Words case, the Board found Meta had also over-enforced its hate speech policies against the self-referential use of slurs, impacting Arabic-speaking LGBTQIA+ users. In that case, three moderators mistakenly determined the content violated the Hate Speech policy (as it then was), raising concerns that enforcement guidance to reviewers was insufficient. The Board also highlighted it expects Meta to be “particularly sensitive to the possibility of wrongful removal” of this type of content “given the importance of reclaiming derogatory terms for LGBTQIA+ people in countering discrimination.”
The Board has issued recommendations aimed at reducing errors in Meta’s enforcement of exceptions to the Community Standards. For example, the Board has recommended that Meta “conduct accuracy assessments focused on Hate Speech policy allowances that cover artistic expression and expression about human rights violations (e.g., condemnation, awareness raising, self-referential use, empowering use)” (Wampum Belt, recommendation no. 3). The company performed an accuracy assessment and provided the Board with enforcement precision metrics for the Hate Speech (now Hateful Conduct) policy. The Board categorizes the recommendation as implemented, as demonstrated through published information. Enforcement errors may occur in at-scale content moderation. However, the Board encourages Meta to continue to improve its ability to accurately detect content where over-enforcement and under-enforcement pose heightened risks for vulnerable groups.
On January 7, 2025, Meta announced that it was committed to reducing mistakes in the enforcement of its policies, in particular to protect speech. Through its summary decisions, the Board highlights enforcement errors the company has made, often indicating areas where Meta can make further improvements based on prior Board decisions and recommendations.
Decision
The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention.