Overturned
Hostages Kidnapped From Israel
December 19, 2023
The Board overturns Meta’s original decision to remove the content from Facebook. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities.
1. Summary
The case involves an emotionally powerful video showing a woman, during the October 7 Hamas-led terrorist attack on Israel, begging her kidnappers not to kill her as she is taken hostage and driven away. The accompanying caption urges people to watch the video to better understand the horror that Israel woke up to on October 7, 2023. Meta’s automated systems removed the post for violating its Dangerous Organizations and Individuals Community Standard. The user appealed the decision to the Oversight Board. After the Board identified the case for review, Meta informed the Board that the company had subsequently made an exception to the policy line under which the content was removed and restored the content with a warning screen. The Board overturns Meta’s original decision and approves the decision to restore the content with a warning screen, but disapproves of the associated demotion that bars the content from being recommended. This case and Al-Shifa Hospital (2023-049-IG-UA) are the Board’s first cases decided under its expedited review procedures.
2. Case Context and Meta’s Response
On October 7, 2023, Hamas, a designated Tier 1 organization under Meta’s Dangerous Organizations and Individuals Community Standard, led unprecedented terrorist attacks on Israel from Gaza that killed an estimated 1,200 people, and resulted in roughly 240 people being taken hostage (Ministry of Foreign Affairs, Government of Israel). Israel immediately undertook a military campaign in Gaza in response to the attacks. Israel’s military action has killed more than 18,000 people in Gaza as of mid-December 2023 (UN Office for the Coordination of Humanitarian Affairs, drawing on data from the Ministry of Health in Gaza), in a conflict where both sides have been accused of violating international law. Both the terrorist attacks and Israel’s subsequent military actions have been the subjects of intense worldwide publicity, debate, scrutiny, and controversy, much of which has taken place on social media platforms, including Instagram and Facebook.
Meta immediately designated the events of October 7 a terrorist attack under its Dangerous Organizations and Individuals policy. Under its Community Standards, this means that Meta would remove any content on its platforms that “praises, substantively supports or represents” the October 7 attacks or their perpetrators. It would also remove any perpetrator-generated content relating to such attacks and third-party imagery depicting the moment of such attacks on visible victims.
In reaction to an exceptional surge in violent and graphic content posted to its platforms following the terrorist attacks and military response, Meta put in place several temporary measures, including lowering the confidence thresholds at which the automatic classification systems (classifiers) for its Hate Speech, Violence and Incitement, and Bullying and Harassment policies identify and remove content. Meta informed the Board that these measures applied to content originating in Israel and Gaza across all languages. Lowering the thresholds meant that content was removed automatically even when the classifier was less confident that it violated Meta’s policies. In other words, Meta used its automated tools more aggressively to remove content that might be prohibited. Meta did this to prioritize its value of safety, with more content removed than would have been under the higher confidence thresholds in place prior to October 7. While this reduced the likelihood that Meta would fail to remove violating content that might otherwise evade detection or where capacity for human review was limited, it also increased the likelihood of Meta mistakenly removing non-violating content related to the conflict.
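To illustrate the trade-off described above, the following minimal sketch models threshold-based automated enforcement. The classifier, score values and thresholds are hypothetical assumptions for illustration only; they do not reflect Meta’s actual systems or settings.

```python
# Illustrative sketch only: a simplified model of threshold-based automated
# enforcement. The scores and thresholds below are hypothetical.

def enforcement_action(violation_score: float, removal_threshold: float) -> str:
    """Return an action for a post given a classifier confidence score.

    violation_score: the classifier's confidence (0.0-1.0) that the post
    violates a policy such as Violence and Incitement.
    removal_threshold: the minimum confidence required for automatic removal.
    """
    if violation_score >= removal_threshold:
        return "remove"  # removed automatically
    return "keep"        # left up, or queued for human review

# Lowering the threshold (e.g. from a hypothetical 0.90 to 0.75) converts
# borderline content from "keep" to "remove": fewer violating posts slip
# through, but more non-violating posts about the conflict are removed by mistake.
borderline_score = 0.80
print(enforcement_action(borderline_score, 0.90))  # keep
print(enforcement_action(borderline_score, 0.75))  # remove
```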
When its escalation teams assessed videos as violating the Violent and Graphic Content, Violence and Incitement, or Dangerous Organizations and Individuals policies, Meta relied on Media Matching Service banks to automatically remove matching videos. This approach raised the concern of over-enforcement, including people facing restrictions on or suspension of their accounts following multiple violations of Meta’s content policies (sometimes referred to as “Facebook jail”). To mitigate that concern, Meta withheld “strikes” that would ordinarily accompany automatic removals based on the Media Matching Service banks (as Meta announced in its newsroom post).
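The sketch below illustrates, in general terms, how a Media Matching Service-style bank can automatically remove re-uploads of previously escalated videos while withholding strikes. The hashing scheme, function names and data structures are illustrative assumptions; Meta’s actual matching technology is more sophisticated (for example, it tolerates re-encoding) and is not public in this form.

```python
# Illustrative sketch only: a simplified model of bank-based matching and
# removal without strikes. Not Meta's actual implementation.

import hashlib

# A "bank" of fingerprints for videos that escalation teams assessed as violating.
violating_bank: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    # Real systems use perceptual hashes that survive re-encoding; an exact
    # cryptographic hash is used here only to keep the example simple.
    return hashlib.sha256(video_bytes).hexdigest()

def bank_video(video_bytes: bytes) -> None:
    """Add an escalated, violating video's fingerprint to the bank."""
    violating_bank.add(fingerprint(video_bytes))

def review_upload(video_bytes: bytes) -> dict:
    """Automatically remove uploads that match a banked video, without a strike."""
    if fingerprint(video_bytes) in violating_bank:
        return {"action": "remove", "apply_strike": False}  # strike withheld
    return {"action": "allow", "apply_strike": False}

bank_video(b"escalated-violating-video")
print(review_upload(b"escalated-violating-video"))  # removed, no strike applied
print(review_upload(b"unrelated-video"))            # allowed
```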
Meta’s changes to the classifier confidence thresholds and its strike policy are limited to the Israel-Gaza conflict and are intended to be temporary. As of December 11, 2023, Meta had not restored confidence thresholds to pre-October 7 levels.
3. Case Description
This case involves a video of the October 7 attacks depicting a woman begging her kidnappers not to kill her as she is taken hostage and driven away on a motorbike. The woman is seen sitting on the back of the vehicle, reaching out and pleading for her life. The video then shows a man, who appears to be another hostage, being marched away by captors. The faces of the hostages and those abducting them are not obscured and are identifiable. The original footage was shared broadly in the immediate aftermath of the attacks. The video posted by the user in this case, approximately one week after the attacks, includes overlaid text stating “Israel is under attack” and the hashtag #FreeIsrael, and names one of the hostages. In a caption accompanying the video, the user states that Israel was attacked by Hamas militants and urges people to watch the video to better understand the horror that Israel woke up to on October 7, 2023. At the time of writing, both people being abducted in the video were still being held hostage.
An instance of this video was placed in a Media Matching Service bank. Meta initially removed the post in this case for violating its Dangerous Organizations and Individuals policy, which prohibits third-party imagery depicting the moment of designated terror attacks on visible victims under any circumstances, even if shared to condemn or raise awareness of the attack. Meta did not apply a strike. The user then appealed Meta’s decision to the Oversight Board.
In the immediate aftermath of the October 7 terrorist attacks, Meta strictly enforced its policy on videos showing the moment of attack on visible victims. Meta explained this was due to concerns about the dignity of the hostages as well as the use of such videos to celebrate or promote Hamas’ actions. Meta added videos depicting moments of attack on October 7, including the video in this case, to Media Matching Service banks so future instances of identical content could be removed automatically.
Meta told the Board that it applied the letter of the Dangerous Organizations and Individuals policy to such content and issued consolidated guidance to reviewers. On October 13, the company explained in its newsroom post that it had temporarily expanded the Violence and Incitement policy to remove content that clearly identified hostages when the company was made aware of it, even if the content was shared to condemn the attacks or raise awareness of the hostages’ situation. The company affirmed to the Board that these policies applied equally to both Facebook and Instagram, although similar content has been reported to have appeared widely on the latter platform, indicating there may have been less effective enforcement of this policy there.
The Violence and Incitement Community Standard generally allows content that depicts kidnappings and abductions in a limited number of contexts, including where the content is shared for informational, condemnation, or awareness-raising purposes, or by the family as a plea for help. However, according to Meta, when it designates a terrorist attack under its Dangerous Organizations and Individuals policy, and that attack includes hostage-taking of visible victims, Meta’s rules on moment-of-attack content override the Violence and Incitement Community Standard. In such cases, the Violence and Incitement allowances for informational, condemning or awareness-raising sharing of moment-of-kidnapping videos do not apply and the content is removed.
However, as events developed following October 7, Meta observed online trends indicating a change in the reasons why people were sharing videos featuring identifiable hostages at the moment of their abduction. Families of victims were sharing the videos to condemn and raise awareness, and the Israeli government and media organizations were similarly sharing the footage, including to counter emerging narratives denying the October 7 events took place or denying the severity of the atrocities.
In response to these developments, Meta implemented an exception to its Dangerous Organizations and Individuals policy, while maintaining its designation of the October 7 events. Subject to operational constraints, moment-of-kidnapping content showing identifiable hostages would be allowed with a warning screen in the context of condemning, raising awareness, news reporting, or a call for release.
Meta told the Board that the rollout of this exception was staggered and did not reach all users at the same time. On or around October 20, the company began to allow hostage-taking content from the October 7 attacks. Initially it did so only for accounts included in the “Early Response Secondary Review” program (commonly known as “cross-check”), given concerns about operational constraints, including uncertain human review capacity. The cross-check program guarantees additional human review whenever specific entities post content that is identified as potentially violating Meta’s content policies and requiring enforcement. On November 16, Meta determined it had capacity to expand the allowance of hostage-taking content to all accounts and did so, but only for content posted after this date. Meta has informed the Board, and explained in its public newsroom update, that the exception it is currently making is limited to videos depicting the moment of kidnapping of the hostages taken in Israel on October 7.
After the Board identified this case, Meta reversed its original decision and restored the content with a “mark as disturbing” warning screen. This restricted the visibility of the content to people over the age of 18 and removed it from recommendations to other Facebook users.
4. Justification for Expedited Review
The Oversight Board’s Bylaws provide for expedited review in “exceptional circumstances, including when content could result in urgent real-world consequences,” and decisions are binding on Meta (Charter, Art. 3, section 7.2; Bylaws, Art. 2, section 2.1.2). The expedited process precludes the level of extensive research, external consultation or public comments that would be undertaken in cases reviewed on ordinary timelines. The case is decided on the information available to the Board at the time of deliberation and is decided by a five-member panel without a full vote of the Board.
The Oversight Board selected this case and one other, Al-Shifa Hospital (2023-049-IG-UA), because of the importance of freedom of expression in conflict situations, which has been imperiled in the context of the Israel-Hamas conflict. Both cases are representative of the types of appeals users in the region have been submitting to the Board since the October 7 attacks and Israel’s subsequent military action. Both cases fall within the Oversight Board’s crisis and conflict situations priority. Meta’s decisions in both cases meet the standard of “urgent real-world consequences” that justifies expedited review, and accordingly the Board and Meta agreed to proceed under the Board’s expedited procedures.
In its submissions to the Board, Meta recognized that “the decision on how to treat this content is difficult and involves competing values and trade-offs,” welcoming the Board’s input on this issue.
5. User Submissions
The author of the post stated in their appeal to the Board that the video captures real events and aims to “stop terror” by showing the brutality of the October 7 attack, in which the hostages were captured. The user was notified of the Board’s review of their appeal.
6. Decision
The Board overturns Meta’s original decision to remove the content from Facebook. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities. However, the Board also concludes that Meta’s demotion of the restored content, by excluding it from being recommended, does not accord with the company’s responsibilities to respect freedom of expression.
6.1 Compliance With Meta’s Content Policies
The Board finds Meta’s initial decision to remove the content was in line with its Dangerous Organizations and Individuals policy at the time, prohibiting “third-party imagery depicting the moment of [designated] attacks on visible victims.” Restoring the content, with a warning screen, also complied with Meta’s temporary allowance to permit such content when shared for the purposes of condemning, awareness-raising, news reporting or calling for release.
6.2 Compliance With Meta’s Human-Rights Responsibilities
The Board agrees with Meta’s initial policy position, on October 7, to remove “third-party imagery depicting the moment of [designated] attacks on visible victims,” in accordance with the Dangerous Organizations and Individuals policy. Protecting the dignity of hostages and ensuring they are not exposed to public curiosity should be Meta’s default approach. In exceptional circumstances, however, when a compelling public interest or the vital interest of hostages requires it, temporary and limited exceptions to this prohibition can be justified. In the specific circumstances of this case, as Meta recognized in restoring the content and adding a warning screen to it after the Board had selected it, the content should be allowed. The Board finds that Meta’s decision to temporarily change its initial approach, allowing such content with a warning screen when shared for purposes of condemning, awareness-raising, news reporting or calling for release, was justifiable. Moreover, this change was justifiable earlier than November 16, as it became clear that Meta’s strict enforcement of the Dangerous Organizations and Individuals policy was impeding expression aimed at advancing and protecting the rights and interests of the hostages and their families. Given the fast-moving circumstances, and the high costs to freedom of expression and access to information of removing this kind of content, Meta should have moved more quickly to adapt its policy.
As the Board stated in the Armenian Prisoners of War Video case, the protections for freedom of expression under Article 19 of the International Covenant on Civil and Political Rights (ICCPR) “remain engaged during armed conflicts, and should continue to inform Meta’s human rights responsibilities, alongside the mutually reinforcing and complementary rules of international humanitarian law that apply during such conflicts.” The UN Guiding Principles on Business and Human Rights impose a heightened responsibility on businesses operating in a conflict setting ("Business, human rights and conflict-affected regions: towards heightened action," A/75/212).
When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. In doing so, the Board attempts to be sensitive to how those rights may apply differently to a private social media company than to a government. Nonetheless, as the UN Special Rapporteur on freedom of expression has stated, while companies do not have the obligations of governments, “their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (report A/74/486, para. 41).
Legality requires that any restriction on freedom of expression should be accessible and clear enough to provide guidance as to what is permitted and what is not. As applied to this case, Meta’s Dangerous Organizations and Individuals rule prohibiting third-party imagery depicting the moment of designated terror attacks on visible victims, regardless of the context in which it is shared, is clear. In addition, Meta publicly announced on October 13, through a newsroom post, that it would remove all such videos. While Meta subsequently changed its approach, first on October 20 by allowing such content (with a warning screen) when shared by entities benefiting from the ERSR program for informational or awareness-raising purposes, and again on November 16 by expanding that allowance to all users, the company did not announce this change publicly until December 5. This was after the Board had identified this case for review, but before it publicly announced on December 7 that it was taking the case. Throughout the conflict, the rules that Meta has applied have changed several times but have not been made fully clear to users. It is also not clear under which policy the warning screen is imposed, as neither the Dangerous Organizations and Individuals nor Violence and Incitement policies provide for the use of warning screens. The Board encourages Meta to address these legality concerns by clarifying publicly the basis and scope of its current policy regarding content relating to the hostages taken from Israel on October 7, and its relation to the more general policies at issue.
Under Article 19, para. 3 of the ICCPR, expression may be restricted for a defined and limited list of reasons. The Board has previously found that the Dangerous Organizations and Individuals and the Violence and Incitement policies pursue the legitimate aim of protecting the rights of others (See Tigray Communication Affairs Bureau and Mention of the Taliban in News Reporting).
The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34).
The Board finds Meta’s initial decision to remove all content depicting visible hostages was necessary and proportionate to achieve the aims of protecting the safety and dignity of hostages and to ensure Meta’s platforms were not used to further the violence of October 7 or to encourage further degrading and inhumane treatment of hostages. The Board considered international humanitarian law, the digital context during the first week of the conflict, and the mitigation measures Meta undertook to limit the impact of its decision to remove all such content.
International humanitarian law, in both international and non-international armed conflicts, identifies the special vulnerability of hostages in an armed conflict and provides protections to address those heightened risks; the taking of hostages is prohibited by the Geneva Conventions (Common Article 3, Geneva Conventions; Articles 27 and 34, Geneva Convention IV). According to the ICRC’s commentary, the definition of hostage taking in the International Convention against the taking of hostages, namely the seizure or detention of a person, accompanied by the threat to kill, injure or continue to detain that person in order to compel a third party to do or to abstain from doing any act as an explicit or implicit condition for his or her release, should be used to define the term under the Geneva Conventions. Common Article 3 also prohibits “outrages upon personal dignity, in particular humiliating and degrading treatment.” Article 27 of Geneva Convention IV protects protected persons, including hostages, from inhumane and degrading treatment, including from insults and public curiosity. The sharing of hostage videos can serve as an integral part of a strategy to threaten a government and the public and can promote the continuing detention and degradation of hostages, an ongoing violation of international law. Under such circumstances, permitting the dissemination of images of violence, mistreatment and ongoing vulnerability can promote further violence and degrading treatment.
Videos of hostages began circulating on social media platforms simultaneously with the October 7 attack. According to reporting, throughout the first week videos were broadcast by Hamas and the Palestinian Islamic Jihad, raising grave concerns that further livestreams and videos of executions or torture could follow. Under such circumstances, Meta’s decision to prohibit all such videos on its platforms was reasonable and consistent with international humanitarian law, its Community Standards, and its value of safety. Industry standards, such as commitments laid out in the Christchurch Call, require companies to react rapidly and effectively to harmful content shared in the aftermath of violent extremist attacks. However, the Christchurch Call also emphasizes the need to respond to such content in a manner consistent with human rights and fundamental freedoms.
At the same time, Meta took measures to limit the potential adverse impact of its decision to remove all such imagery by deciding not to apply a strike to users who had their content removed under this policy, mitigating the detrimental effects of the strict policy on users who may have been posting the content for purposes such as condemning, raising awareness or reporting on events. The Board finds that it can be reasonable and helpful for Meta to consider calibrating or dispensing with strikes to mitigate the negative consequences of categorical rules strictly applied at scale, thereby making a restriction on expression more proportionate to the aim being pursued.
The Geneva Conventions prohibit exposing protected persons, including hostages, to public curiosity as it constitutes humiliating treatment. There are narrow exceptions to this prohibition, which the Board analyzed in the Armenian Prisoners of War Video decision, requiring a reasonable balance to be struck between the benefits of public disclosure of materials depicting identifiable prisoners, “given the high value of such materials when used as evidence to prosecute war crimes, promote accountability, and raise public awareness of abuse, and the potential humiliation and even physical harm that may be caused to the persons in the shared materials.” Such exceptional disclosure requires a compelling public interest or that the vital interests of the hostage be served.
For these reasons, any decision to make a temporary exception to restrictions on content showing identifiable hostages or prisoners of war must be assessed on the particular facts relating to those hostages or prisoners of war and their rights and interests, and must be continuously reviewed to ensure it is narrowly tailored to serve those rights and interests and does not become a general exception to the rules aimed at protecting the rights and dignity of protected persons. The facts in this case provided strong signals that such disclosure was in the vital interest of the hostages. Within the first week following the October 7 attack, families of the hostages also began to organize and share videos as part of their campaign to call for the release of hostages and to pressure various governments to act in the best interest of the hostages. The video used in the post in this case was also part of a campaign by the family of the woman depicted. In addition, from approximately October 16, the Israeli government began showing video compilations to journalists to demonstrate the severity of the October 7 attacks. There were reports of narratives denying the atrocities spreading in the weeks following October 7. Given this, it was reasonable for Meta to conclude that the company must not silence the families of hostages and frustrate the work of news organizations and other entities to investigate and report on the facts. It can be crucial to securing the future safety of the hostages for families and authorities to see a hostage alive, to be able to assess their physical condition, and even to identify the kidnappers. This is particularly important while Meta lacks a transparent and effective mechanism for preserving such content (see further discussion below). In short, given the changes in the digital environment in the weeks following the events of October 7, Meta was justified in making a temporary exception to its policies, limited to the hostages taken in Israel on October 7.
The Board also concludes that Meta took too long to roll out the application of this exception to all users. Meta was also too slow to announce this temporary change to the public. On October 20, Meta began allowing such videos with a warning screen when shared for informational and awareness-raising purposes, limited to those entities on the cross-check ERSR list. On November 16, almost four weeks after the initial allowance was introduced and nearly a month and a half into the conflict, Meta extended that allowance to all users. On December 5, Meta finally announced through a newsroom article that it had made a change to its policy prohibiting videos depicting hostages at the moment of attack. While the Board finds the concept of a staged rollout of changes to the policy reasonable in principle, it concludes that the company should have reacted to changing circumstances more quickly.
After Meta changed its initial approach and introduced the allowance, it still applied a warning screen to the content. As the Board has concluded in previous cases, applying a warning screen can in certain circumstances be a proportionate measure, even though it has a negative impact on the reach of the content, because it preserves users’ ability to share the content and gives viewers a choice of whether to see disturbing material (see Armenian Prisoners of War Video). The Board finds that excluding content raising awareness of potential human-rights abuses, conflicts, or acts of terrorism from recommendations is not a necessary or proportionate restriction on freedom of expression, in view of the very high public interest in such content. Warning screens and removal from recommendations serve separate functions, and should in some instances be decoupled, in particular in crisis situations. Removing content from recommendation systems means reducing the reach that this content would otherwise get. The Board finds this practice interferes with freedom of expression in disproportionate ways insofar as it applies to content that is already limited to adult users and that is posted to raise awareness, condemn, or report on matters of public interest such as the development of a violent conflict.
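As an illustration of the decoupling the Board describes, the sketch below models the warning screen, the age restriction and recommendation eligibility as independent treatments. The field names are hypothetical and do not represent Meta’s data model; the point is only that a warning screen need not automatically entail exclusion from recommendations.

```python
# Illustrative sketch only: independent content treatments. Field names are
# hypothetical and not Meta's actual systems.

from dataclasses import dataclass

@dataclass
class ContentTreatment:
    warning_screen: bool   # viewer must click through a "mark as disturbing" screen
    adults_only: bool      # visible only to users aged 18 and over
    recommendable: bool    # eligible to be surfaced by recommendation systems

# Coupled outcome in this case: the warning screen also removed the post
# from recommendations.
coupled = ContentTreatment(warning_screen=True, adults_only=True, recommendable=False)

# Decoupled outcome consistent with the Board's reasoning: the warning screen
# and age gate remain, but public-interest content can still be recommended.
decoupled = ContentTreatment(warning_screen=True, adults_only=True, recommendable=True)

print(coupled)
print(decoupled)
```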
The Board is also concerned that Meta’s rapidly changing approach to content moderation during the conflict has been accompanied by an ongoing lack of transparency that undermines effective evaluation of its policies and practices, and that can give it the outward appearance of arbitrariness. For example, Meta confirmed that the exception permitting the sharing of imagery depicting visible victims of a designated attack for informational or awareness-raising purposes is a temporary measure. However, it is unclear whether this measure is part of the company’s Crisis Policy Protocol or was improvised by Meta’s teams as events unfolded. Meta developed the Crisis Policy Protocol in response to the Board’s recommendation no. 18 in the Former President Trump’s Suspension case. According to the company, it is meant to provide Meta with a framework for anticipating and responding to risks consistently across crises. The lack of transparency on the protocol means neither the Board nor the public knows whether the policy measure used in this case (i.e., allowing content violating the letter of the relevant rule under the Dangerous Organizations and Individuals policy for raising awareness and condemnation purposes with a warning screen) was developed and evaluated prior to this conflict, what the exact scope of the temporary policy measure is (e.g., whether it applies to videos depicting hostages in detention, after the October 7 attack), the criteria for its use, the circumstances under which the measure will no longer be necessary, and whether Meta intends to resume removing all such content once the temporary measure ends. The Board reemphasizes its concern about the lack of timely and effective notification to users and the public of these ad hoc crisis measures. The Board has previously held that Meta should announce such exceptions to its Community Standards, “their duration and notice of their expiration, in order to give people who use its platforms notice of policy changes allowing certain expression” (see Iran Protest Slogan, recommendation no. 5, which Meta has partially implemented). The lack of transparency can also have a chilling effect on users who may fear their content will be removed and their account penalized or restricted if they make a mistake. Finally, given the baseline general prohibition on allowing hostages to be exhibited and the very exceptional circumstances under which this can be relaxed, prompt and regular notice and transparency regarding the exact scope and time limitations of the exceptions help to ensure that they will remain as limited as possible.
Moreover, the company first began allowing entities benefiting from its cross-check program to share videos of hostages with a warning screen for informational or awareness-raising purposes before expanding this allowance to all users. Adopting an intermediate step to ease into a more permissive temporary policy appears reasonable given the context, allowing the company to test the effects of the change on a more limited scale before implementing it broadly. However, doing so through the use of the cross-check program also highlights anew some of the problems that the Board had previously identified in its policy advisory opinion on the subject. These include unequal treatment of users, lack of transparent criteria for the cross-check lists, the need to ensure greater representation of users whose content is likely to be important from a human-rights perspective, such as journalists and civil society organizations, and an overall lack of transparency around how cross-check works. The use of the cross-check program in this way also contradicts how Meta has described and explained the purpose of the program, as a mistake-prevention system rather than a program that provides certain privileged users with more permissive rules. Meta has indicated that it continues to work on implementing most of the recommendations the Board made in that policy advisory opinion, but neither the Board nor the public has sufficient information to evaluate whether the reliance on the cross-check list during the conflict was in line with Meta’s human-rights responsibilities or was likely to lead to a disparate impact, privileging one market or one group of speakers over another.
Finally, Meta has a responsibility to preserve evidence of potential human-rights violations and violations of international humanitarian law, as also recommended in the BSR report (recommendation no. 21) and advocated by civil society groups. Even when content is removed from Meta’s platforms, it is vital to preserve such evidence in the interest of future accountability (see Sudan Graphic Video and Armenian Prisoners of War Video). While Meta explained that it retains all content that violates its Community Standards for a period of one year, the Board urges that content specifically related to potential war crimes, crimes against humanity, and grave violations of human rights be identified and preserved in a more enduring and accessible way for purposes of longer-term accountability. The Board notes that Meta has agreed to implement recommendation no. 1 in the Armenian Prisoners of War Video case. This called on Meta to develop a protocol to preserve and, where appropriate, share with competent authorities, information to assist in investigations and legal processes to remedy or prosecute atrocity crimes or grave human-rights violations. Meta has informed the Board that it is in the final stages of developing a “consistent approach to retaining potential evidence of atrocity crimes and serious violations of international human rights law” and expects to provide the Board with a briefing about its approach soon. The Board expects Meta to fully implement the above recommendation.
*Procedural note:
The Oversight Board's expedited decisions are prepared by panels of five members and are not subject to majority approval of the full Board. Board decisions do not necessarily represent the personal views of all members.