Public Comments Portal

AI-Generated Video of Hungarian Politician

Deadline: 23:59 PST, May 7, 2026

Accepted languages: English and Hungarian

April 23, 2026 Case selected
April 23, 2026 Public comments opened
Upcoming Decision published
Upcoming Meta implements decision

Case description

The Board is announcing a case involving an apparent AI-generated video of a prominent Hungarian politician, which was reported by hundreds of users for violating Meta’s policies. The company ruled that the content did not violate its Misinformation policy and did not require an AI label. Through this case, the Board will assess content governance challenges posed by AI tools, especially in electoral contexts, and how AI-generated content can impact electoral integrity.


On November 4, 2025, the administrator of a Facebook page focused on political issues in Hungary posted an eight-second video depicting the Hungarian politician Péter Magyar. The video appears to be AI-generated, given its inauthentic facial expressions and speaking style. It shows Magyar expressing exaggerated frustration with robocalling – the prevalent practice in Hungary of using phone calls as a campaign tool. A caption in Hungarian references an incident from 2024 in which Magyar walked out of a television interview and comments that the video shows why he was “raging” in that interview. The post was viewed over 100,000 times and received over 3,000 reactions, more than 1,400 of which were “laughing” reactions.


Magyar leads the Tisza party, which won 138 seats in the 199-seat parliament in elections held on April 12, 2026. Magyar is expected to be sworn in as prime minister in the coming weeks. Before the elections, the European Parliament expressed concern about the “increasing use of unlabeled AI-generated political content in Hungary … notably the posting of deepfake videos.”


In total, 209 users reported the content between November 4 and November 24, 2025, for a variety of potential violations, including a generic violation (i.e., unspecified), fraud and scams, and hateful conduct. On November 24, one of those user reports was sent for review. Based on an automated decision, Meta determined the content did not violate its Community Standards and left it on Facebook. The post was also not reviewed or rated by third-party fact-checkers at the time. Fact-checkers may either identify content on their own initiative or select from a queue of Meta referrals of potential misinformation.


The user who made this report appealed to the Board against Meta’s decision to leave the content on the platform. Meta confirmed to the Board that, in its view, the post did not violate the Misinformation Community Standard as it “did not appear to be related to interference with the functioning of political processes.” Moreover, it “would not have merited an informative AI label under the Misinformation policy” had it been flagged to Meta’s policy teams around the time of its posting because the video was posted “well in advance of a critical event, such as the April 2026 elections.” The company can apply labels telling users that content has been created with AI or is manipulated media. Meta added that the video also “seems intended for comedic effect,” thereby making it unlikely that it “creates a particularly high risk of materially deceiving the public on a matter of public importance.”


The Board selected this case to address new content governance challenges posed by AI tools that generate media, especially in electoral contexts. The case provides an opportunity to evaluate Meta’s human and automated moderation of such AI-generated content. It will also allow the Board to investigate how AI-generated videos that impersonate politicians can be used to influence voters and potentially distort the integrity of electoral processes.


The case falls within the Board’s Elections and Civic Space and Automated Enforcement of Policies and Curation of Content strategic priorities.


The Board would appreciate public comments that address:

  • The role that AI-generated content played in the recent Hungarian elections, or other electoral contexts, including in the media and public discourse.
  • Research on the nature and impact of AI-generated mis- and disinformation campaigns on social media platforms, especially in electoral contexts, and the incentives and motivations for creating and sharing such output.
  • Platforms’ responses to these campaigns and such content, especially during elections, and their risk environments.
  • The role of online “political influencers,” particularly coordinated networks of such content creators, in shaping public opinion around electoral issues.
  • The relationship between satire and misinformation in social media content, particularly around political speech, and the challenges and trade-offs involved in moderating such posts.


In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to this case.