Case Description
The Oversight Board has accepted Meta’s request for a policy advisory opinion on its approach to expanding its community notes program outside of the United States.
Meta has requested the Board’s guidance on the factors it should consider when deciding whether to omit any country from its community notes expansion, given that local context may affect how the program operates. Meta has also asked the Board how to weigh those factors against one another in a way that can be applied at scale.
In its request, Meta said that the community notes program is in an “early stage of product development” and it possesses “limited data from the US beta rollout.” Because of these considerations, the company’s “primary interest lies in establishing fundamental guiding principles” for its rollout worldwide.
On January 7, 2025, Meta announced that it was ending its third-party fact-checking program in the United States and transitioning to community notes. At the time, Meta indicated that it would refine community notes before making it available to users outside the United States. Community notes allows users to add labels with additional context to potentially misleading content (in contrast to the third-party fact-checking program, which relies on partner organizations to label misleading information).
Meta’s request to the Board is here.
In its request, Meta describes how community notes work. Meta users apply to contribute to the program. Should they meet Meta’s eligibility criteria, they are “gradually and randomly” admitted from the waitlist and may then write and rate notes. At present, contributors can compose and submit notes to “add more context” to public, organic content on Facebook, Instagram, and Threads originating in the United States. They have access to a dedicated feed of posts that users have flagged as having the potential to benefit from a note. Contributors must include a link supporting the context shared in the note. They also have the option to rate notes written by other contributors as “helpful” or “not helpful” and explain their response by selecting a reason from a list of options.
Meta disclosed that it built its community notes system using the open-source algorithm from the community notes program of social media platform X. Meta described the algorithm as a “consensus algorithm that uses separate measures of ‘helpfulness’ and ‘consensus’ to calculate an overall ‘helpful consensus’ score.”
In its request, Meta states that the algorithm calculates this score by identifying agreement that a note is helpful among a sufficient number of contributors who, judging by their past ratings, usually disagree with each other. According to Meta, if the combined “helpful consensus” score on a note exceeds a “certain threshold” and the note does not violate Meta’s Community Standards, the note will be published. The note appears as a banner at the bottom of the underlying post, which users can click to read the full note and supporting link. Meta has said this approach “helps ensure that notes reflect a range of perspectives and reduces the risk of bias.”
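The scoring idea described above, agreement among contributors who usually disagree, can be sketched in simplified form. The following is an illustrative toy, not Meta’s or X’s actual algorithm (which Meta describes only at a high level); the camp labels, the threshold value, and the min-across-camps scoring rule are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: Meta has not published its exact scoring formula.
# This toy version captures the stated idea that a note surfaces only when
# raters who usually disagree with each other both find it helpful.
# The "camp" labels stand in for clusters inferred from past rating history.

def helpful_consensus(ratings, camp):
    """Return a toy 'helpful consensus' score in [0, 1].

    ratings: dict mapping rater id -> True ('helpful') / False ('not helpful')
    camp:    dict mapping rater id -> cluster label ('A', 'B', ...)
    """
    by_camp = {}
    for rater, is_helpful in ratings.items():
        by_camp.setdefault(camp[rater], []).append(is_helpful)
    if len(by_camp) < 2:
        return 0.0  # no cross-perspective agreement to measure
    # Take the *lowest* helpfulness rate across camps, so that no single
    # camp can push a note over the threshold on its own.
    return min(sum(votes) / len(votes) for votes in by_camp.values())

THRESHOLD = 0.6  # hypothetical publication cutoff

camp = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
bridging = {"u1": True, "u2": True, "u3": True, "u4": True}    # both camps agree
one_sided = {"u1": True, "u2": True, "u3": False, "u4": False}  # only camp A

published = helpful_consensus(bridging, camp) >= THRESHOLD   # True
rejected = helpful_consensus(one_sided, camp) >= THRESHOLD   # False
```

A one-sided note, however popular within a single camp, scores zero under this rule; only cross-camp agreement clears the threshold, which is the property Meta attributes to its “helpful consensus” approach.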
The request describes steps Meta is currently taking to retain and support volunteer community notes contributors, as well as to prevent coordinated manipulation of submission and rating of notes.
Meta states that its approach to enforcing its Misinformation and Harm Community Standard remains unchanged. Under this policy, the company removes misinformation where it is likely to directly contribute to the “risk of imminent physical harm” and “interference with the functioning of political processes.” Meta continues to use trusted third parties to help identify content violating that community standard.
Meta’s questions to the Board:
Meta presented the following list of factors it might consider when deciding which countries to omit from community notes. It emphasized that the list is not exhaustive and not “intended to constrain the Board from considering other factors which may be relevant”:
- Low levels of freedom of expression
- The absence of freedom of the press
- Government restrictions on the internet
- Low levels of digital literacy
- The ability, currently and in the past, to achieve the disagreement required for consensus [in the community notes algorithm]
The Board will consider these factors, among others.
The Board requests public comments that address:
- The risks and opportunities of crowd-sourced and community notes-style approaches to content moderation, particularly when it comes to potentially misleading content.
- How well consensus or bridging-based algorithms, which systems like community notes employ to identify and promote content that appeals across divided audiences, suit and adapt to different political contexts and information environments.
- Meta’s human rights responsibilities regarding the expansion and deprecation of products and programs, particularly those addressing misleading information.
- Challenges and best practices in risk assessment, monitoring, and mitigation for the global rollout of social media products, particularly in contexts of polarization, conflict or limited human rights protections.
- Research into the efficacy of responses to misleading information beyond content removal, such as fact-checking, labelling, reduced distribution, increased friction, and user-generated context. Additionally, research on avoiding bias in such responses.
- Studies that employ quantitative and/or qualitative research methods for identifying and measuring country-level factors that might impact the functioning of social media products across different contexts.
Prior decisions
The Board has engaged with Meta’s Misinformation policy, fact-checking and labelling of content in several cases, including:
- Alleged Audio Call to Rig Elections in Iraqi Kurdistan, June 2025 (see Meta’s response here)
- Posts Supporting UK Riots, April 2025 (see Meta’s response here)
- Australian Electoral Commission Voting Rules, May 2024 (see Meta’s response here)
- Altered Video of President Biden, February 2024 (see Meta’s response here)
- Removal of COVID-19 Misinformation policy advisory opinion, April 2023 (see Meta’s response here)
- COVID Lockdowns in Brazil, August 2021 (see Meta’s response here)