Oversight Board Upholds Meta's Decision in Altered Video of President Biden Case
The Oversight Board has upheld Meta’s decision to leave up a video that was edited to make it appear as though U.S. President Joe Biden is inappropriately touching his adult granddaughter’s chest, and which is accompanied by a caption describing him as a “pedophile.” The Facebook post does not violate Meta’s Manipulated Media policy, which applies only to video created through artificial intelligence (AI) and only to content showing people saying things they did not say. Since the video in this post was not altered using AI and shows President Biden doing something he did not do (rather than saying something he did not say), it does not violate the existing policy. Additionally, the alteration of this video clip is obvious and therefore unlikely to mislead the “average user” as to its authenticity, which, according to Meta, is a key characteristic of manipulated media. Nevertheless, the Board is concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes). Meta should reconsider this policy quickly, given the number of elections in 2024.
About the Case
In May 2023, a Facebook user posted a seven-second video clip, based on actual footage of President Biden, taken in October 2022, when he went to vote in person during the U.S. midterm elections. In the original footage, he exchanged “I Voted” stickers with his adult granddaughter, a first-time voter, placing the sticker above her chest, according to her instruction, and then kissing her on the cheek. In the video clip, posted just over six months later, the footage has been altered so that it loops, repeating the moment when the president’s hand made contact with his granddaughter’s chest to make it look like he is inappropriately touching her. The soundtrack to the altered video includes the lyric “Girls rub on your titties” from the song “Simon Says” by Pharoahe Monch, while the post’s caption states that President Biden is a “sick pedophile” and describes the people who voted for him as “mentally unwell.” Other posts containing the same altered video clip, but not the same soundtrack or caption, went viral in January 2023.
A different user reported the post to Meta as hate speech, but this was automatically closed by the company without any review. They then appealed this decision to Meta, which resulted in a human reviewer deciding the content was not a violation and leaving the post up. Finally, they appealed to the Board.
The Board agrees with Meta that the content does not violate the company’s Manipulated Media policy because the clip does not show President Biden saying words he did not say, and it was not altered through AI. The current policy only prohibits edited videos showing people saying words they did not say (there is no prohibition covering individuals doing something they did not do) and only applies to video created through AI. According to Meta, a key characteristic of “manipulated media” is that it could mislead the “average” user to believe it is authentic and unaltered. In this case, the looping of one scene in the video is an obvious alteration.
Nevertheless, the Board finds that Meta’s Manipulated Media policy is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent. In short, the policy should be reconsidered.
The policy is too narrow, applying only to video content, to content altered or generated by AI, and to content that makes people appear to say words they did not say. Meta should extend the policy to cover audio as well as content that shows people doing things they did not do. The Board is also unconvinced of the logic of making these rules dependent on the technical measures used to create content. Experts the Board consulted, and public comments, broadly agreed that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content. Therefore, the policy should not treat “deep fakes” differently to content altered in other ways (for example, “cheap fakes”).
The Board acknowledges that Meta may put in place necessary and proportionate measures to prevent offline harms caused by manipulated media, including protecting the right to vote and participate in the conduct of public affairs. However, the current policy does not clearly specify the harms it is seeking to prevent. Meta needs to provide greater clarity on what those harms are and needs to make revisions quickly, given the record number of elections in 2024.
At present, the policy also raises legality concerns. Currently, Meta publishes this policy in two places: as a standalone policy and as part of the Misinformation Community Standard. There are differences between the two in their rationale and exact operational wording. These need to be clarified and any errors corrected.
At the same time, the Board believes that in most cases Meta could prevent the harm to users caused by being misled about the authenticity of audio or audiovisual content through less restrictive means than removal of content. For example, the company could attach labels to misleading content to inform users that it has been significantly altered, providing context on its authenticity. Meta already uses labels as part of its third-party fact-checking program, but if such a measure were introduced to enforce this policy, it should be carried out without reliance on third-party fact-checkers and across the platform.
The Oversight Board’s Decision
The Oversight Board has upheld Meta’s decision to leave up the post.
The Board recommends that Meta:
- Reconsider the scope of its Manipulated Media policy to cover audio and audiovisual content, content showing people doing things they did not do (as well as saying things they did not say) and content regardless of how it was created or altered.
- Clearly define in a single unified Manipulated Media policy the harms it aims to prevent, beyond users being misled, such as preventing interference with the right to vote and to participate in the conduct of public affairs.
- Stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and could mislead. Such a label should be attached to the media (for example, at the bottom of a video) rather than the entire post and be applied to all identical instances of that media on Meta’s platforms.
For Further Information
To read the full decision, click here.
To read a synopsis of public comments for this case, please click the attachment below.