Board to Address AI-Generated Content in Israel-Iran Conflict

Today, the Board is announcing a new case for consideration. As part of this, we invite people and organizations to submit public comments by using the button below.

Case Selection

As we cannot hear every appeal, the Board prioritizes cases that have the potential to affect many users around the world, are of critical importance to public discourse or raise important questions about Meta’s policies.

The case that we are announcing today is:

AI-Generated Video in Israel-Iran Conflict

2026-004-FB-UA

User Appeal

Submit a public comment using the button below

To read this announcement in Hebrew, click here.

To read this announcement in Farsi, click here.

On June 15, 2025, a Facebook user posted a 13-second video to a page self-identifying as a news source. The video depicted alleged damage to buildings in Haifa, Israel, during the 12-day conflict (June 13 - June 25, 2025) between Israel and Iran. English text overlaid on the video read “Live now - Haifa,” alongside the posting date. The video appeared to be the same as one identified by independent fact-checkers as AI-generated, reportedly originating on TikTok. A caption in English strung together headline-style phrases linked to the conflict as well as disjointed terms and hashtags, without following a clear narrative. These included claims that Iran was launching a “big attack” on Israel and that the Israeli war cabinet was in a bunker, as well as references to scores of missiles, the downing of aircraft, global political figures, ongoing conflicts, including in Gaza, a nuclear deal, wildfires and hashtags for unfreezing accounts. The caption also mentioned Israeli news sources warning of an imminent attack. The post was viewed over 700,000 times.

Six users reported the content a total of nine times for terrorism, violence, fraud and scams. However, the reports were not prioritized for human review. On the same day the content was posted, a Meta classifier estimated that the content contained misinformation and flagged the post to third-party fact-checkers, who did not rate the content.

One of the reporting users appealed Meta’s decision to leave the content on the platform to the Board. Meta confirmed to the Board that, in its view, the post did not violate the Misinformation Community Standard because it did not “directly contribute to the risk of imminent physical harm” or “directly contribute to interference with the functioning of political processes.” However, after the Board selected the case, Meta disabled three accounts linked to the page due to signals of engagement abuse and inauthenticity, making the page and the content unavailable on the platform. The Board nevertheless decided to pursue the case because of its implications for important policy and enforcement issues and practices.

The Board selected this case to address the moderation of likely AI-generated content that may undermine information integrity and erode public trust in the context of armed conflict. The case provides an opportunity to evaluate Meta’s human and automated moderation of AI-generated content, including in conflict situations. It will also allow the Board to investigate how best to address such material in the information environment while respecting freedom of expression and access to information.

This case falls within the Board’s Crisis and Conflict Situations and Automated Enforcement of Policies and Curation of Content strategic priorities.

The Board would appreciate public comments that address:

  • The role that AI-generated mis/disinformation played in the Israel-Iran June 2025 conflict, including in media and public discourse.
  • Research on the prevalence and impact of AI-generated mis/disinformation on social media platforms in general, and during armed conflicts in particular, and the incentives and motivations for the creation and sharing of such content.
  • Challenges in accurately detecting, labeling or fact-checking AI-generated content, in particular in the context of coordinated mis/disinformation campaigns, and the effectiveness of policy, product and enforcement responses.
  • The human rights responsibilities of social media companies to address any adverse impacts of AI-generated misrepresentations, especially during armed conflict, on the information environment, while respecting freedom of expression and ensuring users’ access to information.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to this case.

Public Comments

If you or your organization feel you can contribute valuable perspectives that will help the Board reach a decision on the case announced today, you can submit your contributions using the button below. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 Pacific Standard Time (PST) on Tuesday 2 December.

What’s Next

Over the next few weeks, Board Members will be deliberating this case. Once they have reached their decision, we will post it on the Decisions page.