Deceptive AI on Social Media During Conflicts Is a Growing Threat. Here Is How Platforms Should Meet the Challenge

An Open Letter to Tech Platforms

By Evelyn Aswad, Paolo Carozza, Pamela San Martin and Helle Thorning-Schmidt, Oversight Board Co-Chairs 

As missiles are exchanged again across the Middle East, we are flooded with images, videos and reports from the region. While we try to make sense of these events, we must also ask ourselves a troubling question – is what we are seeing on our screens authentic? Or could it be AI-generated content designed to deceive and manipulate us? AI-generated videos and fabricated satellite imagery, deployed alongside misleading and false claims about the conflict, have already drawn hundreds of millions of views.

Social media is one of the main channels through which deceptive AI content spreads, especially during conflicts. Such fake content can cause real harm, inciting further violence and fueling conflict. We believe tech platforms are not currently doing enough to help users identify whether content is AI-generated or authentic.

That is why the Oversight Board is recommending Meta adopt a series of new measures to address deceptive AI-generated content during conflicts, and strongly urging other platforms and AI companies to do the same. These include committing to robust third-party provenance standards, and making more investments in technical capacity and tools to review, identify and label AI-generated content as “high risk,” so users can make informed judgements. 

The Growth of Deceptive AI Videos During Conflicts 

The Israel-Iran conflict of June 2025, when AI-generated content proliferated with massive reach, was an inflection point. The use of deceptive AI content on social media to influence opinion was itself dubbed a “soft war.” The BBC reported that just three deceptive AI-generated videos from that conflict received more than 100 million views across multiple platforms.

Deceptive AI-generated content has become a hallmark of conflict, with serious consequences. In the ongoing war in Sudan, for example, reports have debunked viral AI-generated footage appearing to show a woman and child about to be killed, and highlighted that such imagery is deployed to manipulate public perception on social media.

At the moment, there are no consistent indicators that help audiences understand the source of what they are seeing. And not all AI-assisted or AI-generated material is deceptive or harmful – it could be political, artistic or satirical speech, for example. Removing or limiting the reach of non-harmful AI content would impinge on social media users’ freedom of expression – a freedom that is our ultimate guarantee of robust, resilient and free societies. In conflicts, free expression can save lives.

The Board’s Latest Case Decision on AI 

Our latest decision, which involved an AI-generated video that circulated during the 2025 Israel-Iran conflict, makes recommendations to Meta – many of them applicable across the industry – on how to deal with deceptive AI-generated content during conflicts and beyond. Indeed, given that content moves quickly across platforms, it is important that other platforms adopt such measures.

The case involved the viral circulation of an AI-generated video posted on Facebook by a page misrepresenting itself as a credible news source. The video showed extensive damage to buildings, surrounded by plumes of smoke and rubble, with text claiming it showed the northern Israeli city of Haifa. The video was very similar to one that first appeared on TikTok, which fact-checkers at Agence France-Presse rated as fake. Even so, it was quickly reshared on multiple platforms, including Facebook, Instagram and X.

Labeling, demoting and even removing deceptive AI-generated content can be necessary, depending on how much harm it threatens – for example, whether it risks putting people in harm’s way or inciting imminent violence. But such actions must be strictly guided by clear norms that protect freedom of expression. In conflicts, people need to be able to discern when others are trying to influence them with deceptive AI content, rather than never seeing that content at all. In this case, the Board overturned Meta’s decision to leave up the content without a label reading “High Risk AI.”

Yet labels that merely say AI helped make a piece of content will only go so far, since soon almost all content on social media platforms will have been touched by AI in some way.

Our Recommendations 

Commit to Provenance Standards: Social media platforms should consistently and comprehensively employ clear, accessible, cross-industry methods to show the history of content – how it was made and adapted (also called provenance). That way, users are informed about what they are seeing and can decide how to interpret and treat it accordingly. Provenance standards that verifiably record the origin and editing history of a digital asset already exist, such as the one maintained by the Coalition for Content Provenance and Authenticity (C2PA).

All social media platforms, no matter how big or small, should agree on and implement a common provenance standard – such as the C2PA. They should do this at scale and ensure that the credentials are clearly and consistently visible and accessible to users.  
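
To make this concrete, the sketch below shows what a platform-side provenance check could look like. It reads a C2PA manifest store that has already been exported to JSON (for example with the open-source c2patool utility) and reports who signed the content and whether it is declared AI-generated via the IPTC “trainedAlgorithmicMedia” digital source type. This is a minimal sketch, not a reference implementation: the file name is hypothetical, and the field names follow recent c2patool output, so they may differ across versions.

    import json

    # IPTC digital source type that C2PA uses to mark fully AI-generated media.
    AI_SOURCE_TYPE = (
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    )

    def summarize_manifest_store(path: str) -> None:
        """Summarize a C2PA manifest store exported to JSON,
        e.g. `c2patool media.jpg > manifest.json` (layout may vary)."""
        with open(path) as f:
            store = json.load(f)

        # The store names an "active" manifest among possibly many.
        active_id = store.get("active_manifest")
        manifest = store.get("manifests", {}).get(active_id, {})

        print("Claim generator:", manifest.get("claim_generator", "unknown"))
        signer = manifest.get("signature_info", {}).get("issuer", "unknown")
        print("Signed by:", signer)

        # Walk the c2pa.actions assertion looking for an AI-generation flag.
        declared_ai = False
        for assertion in manifest.get("assertions", []):
            if assertion.get("label", "").startswith("c2pa.actions"):
                for action in assertion.get("data", {}).get("actions", []):
                    if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                        declared_ai = True

        print("Declared AI-generated:", declared_ai)

    summarize_manifest_store("manifest.json")  # hypothetical exported file

Crucially, a check like this can only confirm what a manifest declares; the absence of a manifest proves nothing, which is why the detection and human-review measures below still matter.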

Invest in New Technical Tools: Social media and AI firms need to invest in developing the technical capacity and tools to better detect AI-generated audio, audio-visual and image content. Better detection would enable broader preservation of provenance information and more accurate labeling of AI content across platforms.

Ensure Human Intervention When Needed: Even with widely adopted content credentials, users can strip out indicators of AI manipulation, for instance by taking a screenshot of content. That is why nuanced, human intervention is still needed. It can take different forms: third-party fact-checking; user-led community notes programs; and platforms’ own in-depth human review. Social media companies need to support these reliably. Non-governmental organizations making public comments as part of the Board’s most recent case expressed similar views.
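
To illustrate why embedded credentials do not survive a screenshot, here is a small sketch using the Pillow imaging library (file names hypothetical). It copies only the pixels of an image, as a screen capture does, and shows that embedded metadata is left behind; the same applies to C2PA manifests, which live in the file container rather than in the pixels. Pillow does not read C2PA data, so EXIF tags stand in for embedded credentials here.

    from PIL import Image  # Pillow

    def reencode_like_a_screenshot(src: str, dst: str) -> None:
        """Copy only the pixels of an image, the way a screenshot does.
        Anything stored in the file container (EXIF tags, and likewise
        C2PA manifests) is not carried over to the new file."""
        with Image.open(src) as original:
            pixels_only = Image.new(original.mode, original.size)
            pixels_only.putdata(list(original.getdata()))
            pixels_only.save(dst)

        with Image.open(src) as before, Image.open(dst) as after:
            print("metadata tags before:", len(before.getexif()))
            print("metadata tags after: ", len(after.getexif()))  # typically 0

    reencode_like_a_screenshot("original.jpg", "reencoded.jpg")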

Platforms Must Be Vigilant 

As the risk of human rights abuses is heightened during conflicts, platforms need to be extra vigilant. Conflicts often produce restrictive and asymmetric information environments, which deceptive AI content can make worse.

Social media platforms need to provide automated, technical and human-led solutions to limit the harmful impacts of AI content intended to deceive, while upholding people’s freedom of expression. We urge all platforms to hold themselves accountable to their users and give them the information they need to distinguish authentic content from deceptive AI, especially in situations where the stakes could not be higher.
