
Oversight Board Issues First Expedited Decisions About Israel-Hamas Conflict


December 2023

Today, the Oversight Board has published its first expedited decisions in two cases about the Israel-Hamas conflict. These cases, which were announced on December 7, were decided on an accelerated timeline of 12 days. The Board is required to deliver expedited decisions within 30 days.

The Oversight Board selected these cases because of the importance of freedom of expression in conflict situations. Both cases are representative of the types of appeals users in the region have been submitting to the Board since the October 7 attacks. One case concerns video posted to Facebook of an Israeli woman begging her kidnappers not to kill her as she is taken hostage during the terrorist raids on Israel on October 7. The other case involves a video posted to Instagram showing what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City during Israel’s ground offensive in the north of the Gaza Strip. The post shows Palestinians, including children, killed or injured.

In both cases, the Board overturned Meta’s original decision to remove the content from its platforms but approved the company’s subsequent decision to restore the posts with a warning screen.

You can read the Board’s full decisions here.

Key Findings

The Board’s decisions examined several aspects of Meta’s performance during the crisis that affected freedom of expression.

In reaction to an exceptional surge in violent and graphic content on its platforms after the October 7 attacks, Meta temporarily lowered the confidence thresholds for the automatic classification systems (classifiers) which identify and remove content violating its Violent and Graphic Content, Hate Speech, Violence and Incitement, and Bullying and Harassment policies. These measures applied to content originating in Israel and Gaza across all languages.

This meant that Meta used its automated tools more aggressively to remove content that might violate its policies. While this reduced the likelihood that violating content would evade detection, particularly where capacity for human review was limited, it also increased the likelihood that Meta would mistakenly remove non-violating content related to the conflict.
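To illustrate this mechanism at a high level, the following is a minimal, hypothetical Python sketch of how lowering a classifier’s confidence threshold shifts the trade-off described above. The threshold values, scores and function names are assumptions for illustration only and do not reflect Meta’s actual systems or settings.

    # Illustrative only: a toy automated-moderation check. Thresholds, scores
    # and names are hypothetical assumptions, not Meta's actual systems.
    DEFAULT_THRESHOLD = 0.90  # assumed normal operating point
    CRISIS_THRESHOLD = 0.75   # assumed temporarily lowered crisis threshold

    def auto_action(classifier_score: float, threshold: float) -> str:
        """Remove content automatically when the classifier's confidence that
        it violates a policy meets or exceeds the threshold; otherwise keep it."""
        return "remove" if classifier_score >= threshold else "keep"

    # A post scored at 0.80 (for example, graphic footage shared to raise
    # awareness) is kept at the default threshold but removed at the lowered
    # one, with no human review in either case.
    print(auto_action(0.80, DEFAULT_THRESHOLD))  # keep
    print(auto_action(0.80, CRISIS_THRESHOLD))   # remove

Lowering the threshold catches more genuinely violating posts, but it also sweeps in borderline, non-violating posts of the kind at issue in the Al-Shifa Hospital case.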

The Al-Shifa Hospital case shows that insufficient human oversight of automated moderation during crisis response can lead to the incorrect removal of speech that may be of significant public interest. Both the initial decision to remove this content and the rejection of the user’s appeal were taken automatically based on a classifier score, without any human review. This may have been exacerbated by Meta’s crisis response of lowering the removal threshold for content under the Violent and Graphic Content policy following the October 7 attacks.

In both cases reviewed by the Board, Meta demoted the content to which it had applied warning screens, excluding it from being recommended to other Facebook or Instagram users, even though the company had determined that the posts were intended to raise awareness.

The Hostages Kidnapped From Israel case notes that Meta’s initial policy position on October 7 was to remove “third-party imagery depicting the moment of [designated] attack on visible victims,” in accordance with its Dangerous Organizations and Individuals policy. The Board agrees with this initial position and with the principle that protecting hostages’ safety and dignity should be Meta’s default approach. However, the Board finds that Meta’s later decision to allow such content with a warning screen, when shared for the purposes of condemning, awareness-raising, news reporting or calling for release, was justifiable. Indeed, given the fast-moving circumstances, and the high costs to freedom of expression and access to information of removing this kind of content, Meta should have moved more quickly to adapt its policy.

Meta confirmed that on or around October 20, it began allowing hostage-taking content from the October 7 attacks, but initially only from accounts on the company’s cross-check lists. This was not expanded to all accounts until November 16, and only for content posted after this date.

This highlights some of the Board’s previous concerns about cross-check, including the unequal treatment of users, the lack of transparent criteria for inclusion, and the need for Meta’s cross-check lists to better represent users whose content is likely to be important from a human-rights perspective. The use of the cross-check program in this way also contradicts how Meta has described and explained the program’s purpose: as a mistake-prevention system, not a program that provides certain privileged users with more permissive rules.

Finally, Meta has a responsibility to preserve evidence of potential human-rights violations and violations of international humanitarian law. Even when content is removed from Meta’s platforms, it is vital to preserve such evidence in the interest of future accountability. While Meta explained that it retains all content that violates its Community Standards for a period of one year, the Board urges that content specifically related to potential war crimes, crimes against humanity, and grave violations of human rights be identified and preserved in a more enduring and accessible way for purposes of longer-term accountability.

In addition, the Board reiterated the urgent need to act on the following content moderation guidance:

  • Earlier this year, Meta committed to developing a protocol to preserve content that could assist investigations of atrocity crimes or human rights violations. While Meta reported that it is in the final stages of developing this protocol, the company should promptly demonstrate its approach in this area during the current conflict (Recommendation no. 1 in the Armenian Prisoners of War Video case).
  • Content identified by Media Matching Service banks should be reassessed for whether it is violating when it is found to be frequently and successfully appealed (Recommendation no. 1 in the Colombia Police Cartoon case).
  • The Board advised Meta to notify users when automation is used to take enforcement action against their content. Meta reports that it has completed the global rollout of this recommendation, but the company still needs to provide the Board with evidence of implementation (Recommendation no. 3 in the Breast Cancer Symptoms and Nudity case).

Note: The Oversight Board is an independent organization that examines Meta's decisions to remove or leave up content on Facebook and Instagram in a selected number of emblematic cases. The Board reviews and, where necessary, reverses the company's decisions. The Board’s decisions are binding on Meta.
