Identify and Label AI-Manipulated Audio and Video at Scale

The Oversight Board has overturned Meta’s decision not to label a likely manipulated audio clip of two Iraqi Kurdish politicians discussing rigging parliamentary elections, posted less than two weeks before polls opened in a highly contested and polarized election. The Board requires Meta to label the content.

The Board is concerned that, despite the increasing prevalence of manipulated content across formats, Meta’s enforcement of its manipulated media policy is inconsistent. Meta must prioritize investing in technology to identify and label manipulated audio and video at scale so that users are properly informed.

As this case shows, Meta’s failure to automatically apply a label to all instances of the same manipulated media is incoherent and unjustifiable.

Additionally, Meta should make labels for manipulated media available in local languages already supported on its platforms. This should, at the least, form part of Meta’s electoral integrity efforts.

About the Case

Less than two weeks before Iraqi Kurdistan’s parliamentary elections, in October 2024, a popular media outlet affiliated with one of the region’s main political parties, the Kurdistan Democratic Party (KDP), shared a two-minute audio clip on its Facebook page. The post’s caption in Sorani Kurdish alleges the audio is a “recorded conversation” between brothers Bafel and Qubad Talabani, members of the region’s other main political party, the Patriotic Union of Kurdistan (PUK), about their “sinister plans” to rig the October 2024 elections. In the audio, two men speak with an English voiceover (with Sorani Kurdish and English subtitles). One man says a “minimum of 30 seats” have been guaranteed to the PUK but they must “get rid” of the “11 seats” the KDP allegedly “has always been using to their advantage.” The other man agrees, emphasizing the need to make it appear that those seats have been legitimately won, since people are aware – yet cannot prove – that the PUK is supported by Baghdad and their “neighbor.” The media outlet’s Facebook page has about 4,000,000 followers. The post has received about 200,000 views.

Two users reported the content for misinformation, but Meta closed the reports without review. After one of those users appealed to Meta, the company upheld its decision based on a classifier score. The user then appealed to the Oversight Board.

Meta identified other posts containing the audio clip on the Facebook and Instagram pages of the same media outlet and the KDP Facebook page. After consulting with a news media outlet based outside of Iraqi Kurdistan and a Trusted Partner to review the possibility of the audio being digitally created, Meta labeled some of the posts, but not the content in this case. The label applied to other posts with the same audio states: “This content may have been digitally created or altered to seem real.”

Key Findings

When it comes to identifying AI-created or manipulated content on its platforms, Meta told the Board that it is only able to automatically identify and label static images, not video or audio content. Given the company’s expertise and resources and the wide usage of Meta’s platforms, it must prioritize investing in technology to identify and label manipulated video and audio at scale.

Meta’s failure to deploy the tools it has to automatically apply the “AI Info” label to all instances of the same manipulated media is incoherent and unjustifiable. In the Altered Video of President Biden case, Meta committed to implementing the Board’s recommendation that, for manipulated media that does not violate other Community Standards, the company should apply a label to “all identical instances of that media on the platform.” Meta’s claim in this case that it does not automatically apply the “High Risk” label to content containing the audio contradicts this recommendation. “AI Info” and “High Risk” are the informative labels Meta uses for manipulated media.

The Board notes there are reliable indicators, including technical signals, that the clip was digitally created. It meets the requirements of manipulated media under Meta’s Misinformation policy. Placing a “High Risk” label on it is consistent with Meta’s policies and human rights responsibilities. The audio was posted during a highly contested electoral period in a region with a history of irregular elections. This increases the audio’s ability to influence electoral choices and harm electoral integrity. Placing an informative label on the case content, instead of removing the content altogether, satisfies the requirements of necessity and proportionality.

The Board is concerned that Meta’s manipulated media labels are not available in Sorani Kurdish, despite Sorani Kurdish being one of the in-app languages available to Facebook users. To ensure users are informed when content is digitally created or altered, making the labels available in local languages already supported on Meta’s platforms should, at the least, form part of its electoral integrity efforts.

The Board is also concerned by the company’s reliance on third parties for technical assessments of likely manipulated content. Meta should consider developing this expertise internally.

The Board notes that the issue in this case concerns whether the audio is real or fake, rather than whether what is said in the audio is true. Given that labeling the audio as likely digitally created or altered would similarly alert users to questions about its accuracy, the Board finds the application of the Misinformation policy on manipulated media to be sufficient. However, the Board is concerned that Meta did not have Kurdish-language fact-checkers available to review content during the election as part of its election integrity measures.

The Oversight Board’s Decision

The Oversight Board overturns Meta’s decision not to label the content, requiring the post to be labeled.

The Board also recommends that Meta:

  • Apply a relevant label to all content with the same manipulated media, including all posts containing the manipulated audio in this case.
  • Ensure that the informative labels for manipulated media on Facebook, Instagram and Threads are displayed in the same language the user has selected for the platform.

Further Information

To read public comments for this case, click here.
