Meta Should Move Faster and Bolder on AI Content
14 May 2026
By Khaled Mansour, Oversight Board Co-Chair

Every morning, hundreds of millions of people reach for their phones. They check in with family and friends on social media. Increasingly, they also linger over news and commentary. And alongside the genuine, they encounter a rising tide of deceptive content.
Iranian missiles flatten Tel Aviv. American soldiers are paraded before Iranian cameras. Skyscrapers in the UAE collapse in fireballs. None of it happened. Much of it goes viral, reaching hundreds of millions of viewers.
This fabricated or misleading content is produced in seconds by widely available AI apps. It serves two ends at once: the psychological operations of warring parties, and the income of individual content creators. The companies that supply both the creation tools and the distribution platforms can — and should — help slow this flow and lessen its harmful impact. That content, after all, is sustained in part by their own engagement-driven business model.
On 10 March 2026, the Oversight Board called on Meta to do more to help users identify deceptive AI-generated content during conflicts: improve labeling, clarify its policies and consolidate all of this into a comprehensive new policy. The specific case the Board considered involved a fabricated video purporting to show an attack on Israel during the Israel-Iran conflict of June 2025.
Deepfakes designed to deceive proliferate on technology platforms in wartime. Their consequences are destructive and lasting. False information puts lives at risk. The flood of credible-looking but inauthentic content also breeds public distrust of all information. The damage is especially severe in countries with repressive media environments, where credible reporting is already scarce.
The whole online world is now grappling with how AI content is generated and consumed. It is incumbent upon Meta and the wider tech industry to act faster, and ideally together.
In its recommendations, the Board was clear on a key principle: the fact that content is AI-generated, or even misleading, is not in itself sufficient reason to restrict free expression. The challenge lies in weighing this fundamental right against the real and likely harms such expression can cause. That balancing act is sometimes more art than science. Still, Meta and the wider tech industry need coherent guidelines to navigate it, and to help users distinguish AI-generated content, deceptive or not. They also need to restrict or remove content that may lead to offline harm.
In its decision, the Board made seven specific recommendations. Among them: provide details at scale about the origin of media, drawing on established content provenance standards; invest in stronger detection tools and better labeling methods; create a distinct set of rules so users can reliably recognize AI-generated content; and amend current policies to ensure timely, effective responses to deceptive AI output.
Meta is required to respond publicly to Board recommendations within 60 days. It did so on 8 May.
First, the Board had hoped Meta would draw all of these recommendations together by establishing a separate Community Standard dedicated to AI-generated content, distinct from its existing Misinformation Community Standard. The company declined. This is a missed opportunity for structural clarity.
The company reports that another recommendation is already incorporated into its work: amending the Misinformation Community Standard so that Meta does not rely solely on signals from external partners to flag misinformation that risks imminent harm or violence during crises. Meta says its existing crisis processes and protocols account for this. For example, it draws on outside expertise, supported by in-house experts, ahead of critical events to create “Pre-Reviewed Harmful Claims” that strengthen enforcement of its Misinformation and Harm Policy at scale. This could be a step forward, but a partial one: it still lacks the agility needed for evolving global conflicts. A faster, broader response is clearly needed, one that is not limited to crises Meta can predict in advance or that depends on alerts from external partners, especially when a growing number of those partners say they are discouraged by the company’s lack of response.
Meta says three of the Board’s recommendations are partially implemented and that it is exploring stronger content credentials, better detection tools, provenance information, and more robust watermarks. These are welcome moves. Yet the impact of deceptive AI on information integrity has become so pervasive so fast that many people now doubt even genuine factual reporting, the so-called liar’s dividend, simply because they have been exposed to so much falsehood. This erodes public trust, a vital common good, especially in moments of polarization and conflict. Partial implementation of these recommendations is not good enough.
Meta says it is assessing the feasibility of two further recommendations – to make the use of “High Risk” and “High Risk AI” labels more frequent and consistent; and to publish a clear explanation of penalties for failure to self-disclose digitally created or altered content. While this leaves the door open to implementation, it raises concerns since many earlier Board recommendations have remained under feasibility assessment for a long time — in a few cases, more than two years. Greater clarity on timelines for addressing or implementing our recommendations would help drive transparency and trust in the process. Experts in the field who also submitted public comments to this case, such as WITNESS, agree that the scale of the problem demands a commitment to act.
Since the Board issued its first decision in 2021, Meta has implemented 75% of its recommendations in full or in part. These recommendations have strengthened users' right to free expression, added more transparency and consistency to Meta’s rules and helped protect users from harm. Many of the recommendations that remain open, under assessment or that have been declined could deepen those gains.
Coordinated, industry-wide self-regulation can preserve the delicate balance between free expression and public trust while reducing harm wherever possible. Companies have many tools at their disposal, from limiting the reach of likely harmful or misleading content all the way to removal. The Board has proposed several measures in this decision to enable such tools. Meta and its peers should take all of these interdependent measures seriously and act swiftly to address the underlying problems. The stakes are too high for half measures.
