Overturned
Praise be to God
November 16, 2023
A user appealed Meta’s decision to remove their Instagram post, which contains a photo of them in bridal wear, accompanied by a caption that states “alhamdulillah.”
This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement.
Case summary
A user appealed Meta’s decision to remove their Instagram post, which contains a photo of them in bridal wear, accompanied by a caption that states “alhamdulillah,” a common expression meaning “praise be to God.” After the Oversight Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.
Case description and background
In June 2023, an Instagram user in Pakistan posted a photo of themselves in bridal wear at a traditional pre-wedding event. The caption accompanying the post stated “alhamdulillah,” which is an expression used by many people in Muslim and Arab societies meaning “praise be to God.” The post received fewer than 1,000 views.
The post was removed for violating the company’s Dangerous Organizations and Individuals policy. This policy prohibits content that contains praise, substantive support, or representation of organizations or individuals that Meta designates as dangerous.
In their statement to the Board, the user emphasized that the phrase “alhamdulillah” is a common cultural expression used to express gratitude and has no “remote or direct links to a hate group, a hateful nature or any association to a dangerous organization.” The Board views the phrase as protected speech under Meta’s Community Standards, consistent with freedom of expression and the company’s value of “Voice.”
The user stressed the popularity of the phrase by stating, “this is one of the most popular phrases amongst the population of 2+ billion Muslims on the planet... if this is the reason the post has been removed, I consider this to be highly damaging for the Muslim population on Instagram and inherently somewhat ignorant.”
After the Board brought this case to Meta’s attention, the company determined that the content “did not contain any references to a designated organization or individuals,” and therefore did not violate its Dangerous Organizations and Individuals policy. Subsequently, Meta restored the content to Instagram.
Board authority and scope
The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors, and increase fairness for people who use Facebook and Instagram.
Case significance
This case highlights Meta’s inconsistent application of its Dangerous Organizations and Individuals policy, which can lead to wrongful removals of content. Enforcement errors such as these undermine Meta’s responsibility to treat users fairly.
Previously, the Board has issued recommendations relating to the enforcement of Meta’s Dangerous Organizations and Individuals policy. The Board has recommended that Meta “evaluate automated moderation processes for enforcement of the Dangerous Organizations and Individuals policy” (Öcalan’s isolation decision, recommendation no. 2). Meta declined to implement this recommendation. Additionally, the Board has recommended that Meta “enhance the capacity allocated to the High-Impact False Positive Override system across languages to ensure that more content decisions that may be enforcement errors receive additional human review” (Mention of the Taliban in news reporting decision, recommendation no. 7). Meta stated this was work it already does but did not publish information to demonstrate this. Lastly, the Board has recommended that Meta publish “more comprehensive information on error rates for enforcing rules on ‘praise’ and ‘support’ of dangerous individuals and organizations, broken down by region and language” in its transparency reporting (Öcalan’s isolation decision, recommendation no. 12). Meta declined to implement this recommendation.
In this case, there was no mention of an organization or individual that might be considered dangerous. The Board has noted in multiple cases that problems of cultural misunderstanding and errors in translation can lead to improper enforcement of Meta’s policies. The Board has also issued recommendations relating to the moderation of Arabic content. The Board has recommended that Meta “translate the Internal Implementation Standards and Known Questions,” internal guidance for content moderators, “to Modern Standard Arabic” (Reclaiming Arabic words decision, recommendation no. 1). Meta declined to implement this recommendation.
The Board reiterates that full implementation of the recommendations above will help decrease enforcement errors under the Dangerous Organizations and Individuals policy, reducing the number of users affected by wrongful removals.
Decision
The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error after the Board brought the case to the company’s attention.