Taiwan Case Shows Need for More Action on Job Scams

The Oversight Board calls on Meta to do more to stop fraudulent online labor recruitment. In addition to removing fraudulent recruitment content that leads to offline harm from its platforms, Meta should introduce an informative notice when users engage with content that may violate its policies but that Meta’s automated systems lack the confidence to remove. This would give users additional protection from scam content that spreads across platforms.

In analyzing a case on the removal of content from a Taiwanese police department warning about job scams, the Board has overturned Meta's original decision to take down the content.

About the Case

In October 2024, a Taiwanese police department reshared a post on its Facebook page. The post contains an image of animated pigs and a bird in a police uniform holding a sign. Overlay text in Chinese describes signs of job scams and warns job seekers. The caption includes a similar list of job scam keywords, advice on how to avoid being scammed and information about an anti-scam hotline.

In July 2025, Meta’s automated systems identified the content as potentially violating the Human Exploitation Community Standard, then removed it. An administrator of the police department’s Facebook page appealed to Meta. A human reviewer upheld the original decision. The administrator then appealed to the Board, stating that the post aimed to prevent fraud and was part of a governmental initiative to educate the public and raise awareness on safe employment practices.

When the Board brought the case to Meta’s attention, Meta’s experts reviewed the post under the Human Exploitation and Fraud, Scams, and Deceptive Practices policies and concluded it was shared to raise awareness and educate. The company restored the post.

Online labor scams run by transnational crime syndicates, which trick people into being trafficked or steal their money, are a significant problem on social media. Social media posts are reportedly the fastest-growing source of scams in Taiwan, with most online scam losses stemming from Facebook ads. The Board found that many posts showing signs of job scams ask users to follow up on messaging platforms outside Facebook.

Key Findings

In addition to removing fraudulent recruitment content that leads to offline harm from its platforms, Meta should explore ways to improve its technology to better distinguish non-violating anti-scam content.

There may also be a range of content that shows some signals of fraudulent recruitment but has more tenuous links to harm. To protect expression while still guarding against the potential for serious offline harm, Meta should explore less intrusive measures that target these specific patterns.

For example, Meta’s Messenger chats employ advanced scam detection that allows users to send recent chat messages for AI scam review when “a new contact sends a potentially scammy message.” If a potential scam is detected, users receive a warning pop-up that outlines information on common scams and suggests actions including blocking or reporting the suspicious account.

To disrupt the spread of fraudulent labor recruitment across platforms and to provide additional protection to users, Meta should introduce a similar informative notice for its platform users. This notice would not apply to posts that violate Meta’s Human Exploitation or Fraud, Scams, and Deceptive Practices policies, which should still be removed. However, there is a significant gray area in enforcing these policies, given evolving evasion efforts in a highly dynamic space.

The Board finds that while it may have been challenging for Meta’s classifier to assess this post, it is clearly anti-scam content. It does not violate either the Human Exploitation or Fraud, Scams, and Deceptive Practices policy. The Board finds that removing the content from Facebook was not consistent with Meta’s human rights responsibilities.

The Oversight Board’s Decision

The Board overturns Meta's original decision to take down the content.

The Board also recommends that Meta:

  • Introduce an informative notice to disrupt the spread of fraudulent labor recruitment across platforms. The notice would be applied when users engage with (react to, comment on, share or click on an external link in) content that Meta’s technology flags as showing signals of job fraud and recruitment into labor exploitation but leaves on the platform because confidence for removal is low or medium.

Further Information

To read public comments for this case, click here.
