Overturned

Elon Musk Satire

A user appealed Meta’s decision to remove an Instagram post containing a fictional “X” thread that satirically depicts Elon Musk reacting to a post containing offensive content. The case highlights Meta’s shortcomings in accurately identifying satirical content on its platforms.

Type of Decision

Summary

Policies and Topics

Topic
Freedom of expression, Humor
Community Standard
Dangerous Organizations and Individuals

Region/Countries

Location
United States

Platform
Instagram

This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not consider public comments and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement.

Case Summary

A user appealed Meta’s decision to remove an Instagram post containing a fictional “X” (formerly Twitter) thread that satirically depicts Elon Musk reacting to a post containing offensive content. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.

Case Description and Background

In July 2023, a user posted an image on Instagram containing a fictional X thread that does not resemble X’s layout. In the thread, a fictitious user posted several inflammatory statements such as: “KKK never did anything wrong to black people,” “Hitler didn’t hate Jews,” and “LGBT are all pedophiles.” The thread featured Elon Musk replying to the user’s post by stating “Looking into this…” This Instagram post received fewer than 500 views.

The post was removed for violating Meta’s Dangerous Organizations and Individuals policy, which prohibits representation of and certain speech about the groups and people the company judges as linked to significant real-world harm. Meta designates both the Ku Klux Klan (KKK) and Hitler as dangerous entities under this policy. In certain cases, Meta will allow “content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.”

In their appeal to the Board, the user emphasized that the post was not intended to endorse Hitler or the KKK, but rather to “call out and criticize one of the most influential men on the planet for engaging with extremists on his platform.”

After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Dangerous Organizations and Individuals policy and that its removal was incorrect. The company then restored the content to Instagram.

Board Authority and Scope

The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).

Where Meta acknowledges that it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users.

Case Significance

This case highlights Meta’s shortcomings in accurately identifying satirical content on its platforms. The Board has previously issued recommendations on Meta’s enforcement of satirical content, urging the company to “make sure that it has adequate procedures in place to assess satirical content and relevant context properly. This includes providing content moderators with: (i) access to Facebook’s local operation teams to gather relevant cultural and background information; and (ii) sufficient time to consult with Facebook’s local operation teams and to make the assessment. Facebook should ensure that its policies for content moderators incentivize further investigation or escalation where a content moderator is not sure if a meme is satirical or not,” (Two Buttons Meme decision, recommendation no. 3). Meta reported that it had implemented this recommendation but has not published information demonstrating this, so implementation cannot be verified.

Furthermore, this case illustrates Meta's challenges in interpreting user intent. Previously, the Board has urged Meta to communicate to users how they can clarify the intent behind their posts, particularly in relation to the Dangerous Organizations and Individuals policy. Meta partially implemented the Board's recommendation to “explain in the Community Standards how users can make the intent behind their posts clear to Facebook… Facebook should provide illustrative examples to demonstrate the line between permitted and prohibited content, including in relation to the application of the rule clarifying what ‘support’ excludes,” (Ocalan’s Isolation decision, recommendation no. 6).

The Board emphasizes that full adoption of these recommendations, along with Meta publishing information demonstrating their successful implementation, could reduce the number of enforcement errors involving satirical content on Meta’s platforms.

Decision

The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention.
