Overturned
Post Honoring Lesbian Relationships
April 28, 2026
A user appealed Meta’s decision to remove an image that was part of a carousel honoring lesbian relationships.
Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement.
Summary
A user appealed Meta’s decision to remove an image that was part of a carousel honoring lesbian relationships. Two of the other images in the carousel included a Meta-designated slur used to refer to lesbians in Brazil. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the content.
About the Case
In September 2025, a user posted an Instagram carousel in Brazilian Portuguese honoring the lesbian love stories of older generations of women, whom they called “grandmothers.” The case content is one of the carousel images, with text explaining that long-term romantic relationships between older generations of women, or “grandmothers,” are usually described as “very close friendships” even when the women lived together for decades as companions. Across the nine images of the carousel, the user explains that “grandmas” were pushed to hide their relationships, like many lesbians of their generation, due to societal pressure and the risks of stigmatization and forced institutionalization. The post criticizes the common practice of mischaracterizing older generations of lesbian women as “spinsters” and their romantic relationships as “old friendship[s]” or “very close relationship[s],” emphasizing that such narratives render them invisible and erase them from history. Two of the carousel images include the term “sapatão,” a slur used to refer to lesbians in Brazil. Meta initially removed the case content under the Hateful Conduct policy.
In their appeal to the Board, the user explained that lesbians like herself use the slur “to reclaim it, stripping away its negative connotation and giving it a new meaning.” The user added that, “It’s a post about community, pride, and respect.”
Under Meta's Hateful Conduct Community Standard, the company removes “content that describes or negatively targets people with slurs.” Slurs are defined as “words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic, often because these words are tied to historical discrimination, oppression and violence.” However, the policy rationale explains that slurs are allowed when “used self-referentially or in an empowering way” when “the speaker's intention is clear.”
After the Board brought this case to Meta’s attention, the company determined that the case content did not violate the Hateful Conduct policy, and that its original decision to remove it was incorrect because the image “does not include any text that qualifies as a slur”; rather, other images in the carousel included a Meta-designated slur. The company also explained that the broader context represented in the carousel “suggests that the term is presented in an explicitly positive manner.” The company then restored the content to Instagram.
Board Authority and Scope
The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users.
Significance of Case
The case highlights Meta’s repeated errors in two areas: enforcing exceptions to its Hateful Conduct policy for slurs used self-referentially or in an empowering way; and moderating content posted in carousels.
As the Board noted in the Reclaiming Arabic Words decision: “The over-moderation of speech by users from persecuted minority groups is a serious and widespread threat to their freedom of expression.” The same enforcement error was observed in both the Reclaimed Term in Drag Performance and Heritage of Pride decisions.
The Board has issued recommendations aimed at reducing errors in Meta’s Hateful Conduct policy enforcement. For instance, the Board has recommended that Meta should "share [with the public] the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech [now Hateful Conduct] policy […] in a way that allows these assessments to be compared across languages and/or regions" (Criminal Allegations Based on Nationality, recommendation no. 2). In its initial response to the Board, Meta reported that the company will implement this recommendation in part. Meta stated that, while the company "will continue to share data on the amount of hate speech content addressed by [its] detection and enforcement mechanisms in the Community Standards Enforcement Report (CSER)," data on the accuracy of its enforcement on a global scale will be confidentially shared with the Board. The company later revised its response and stated that "As part of [its] efforts to change how [it] enforce[s] [its] policies to reduce mistakes, [it is] relying more on reports from users instead of proactive detection for many violation types, including Hateful Conduct (formerly Hate Speech). [The company] will conduct [its] assessment at a later time in order to allow teams to fully implement these changes and provide more accurate information on [its] enforcement" (Meta’s H1 2025 Report [Appendix] on the Oversight Board). This recommendation was issued in September 2024. Implementation remains in progress, with data yet to be shared with the Board.
The present case also illustrates Meta’s ongoing challenges in moderating content involving carousels. The case content was removed despite not containing a slur; the word was featured in other images in the carousel. In the Poem About Political Protest in Argentina decision, the Board noted the moderators’ failure to understand the context because the review system did not display the full carousel. The Board therefore reiterates its previous recommendation that: “Meta should develop an integrated process for ensuring that, when a content type [such as carousels] is introduced or significantly updated, the company's procedures and tooling allow for moderation in line with the company’s human rights responsibilities” (Poem About Political Protest in Argentina, recommendation no. 2). In its initial response to the Board, Meta stated that its “risk review processes ensure effective assessment, compliance and continuous improvement across [its] products, with transparency provided through regular reporting and maintained commitments to human rights and increased transparency.” The company has not, however, provided the Board with visibility on the details of this process. In effect, Meta described implementation as work it already does, without publishing information to demonstrate it.
The Board believes that fully implementing recommendation no. 2 from the Criminal Allegations Based on Nationality decision mentioned above would further strengthen the company’s ability to reduce overenforcement of speech from vulnerable groups. Moreover, the implementation of recommendation no. 2 from the Poem About Political Protest in Argentina decision would enhance Meta’s ability to anticipate and mitigate risks associated with new content types.
Decision
The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention.