Public comments portal

Taiwan Job Scam Warning

October 23, 2025: Case selected
November 6, 2025: Public comments closed
January 29, 2026: Decision published
Next: Meta implements the decision

Comments


Name
Amberly Jeffries
Organization
MartinJeffries
Country
United States
Language
English

Public Comment on 2026-003-FB-UA: Taiwan Job Scam Warning

Amberly Martin, JD (inactive), former VP & General Counsel / Chief Privacy Officer
Current Location: Horseshoe Bend, Arkansas USA

1. Purpose of Comment

This submission supports the Board’s investigation into Meta’s enforcement of its Fraud, Scams and Deceptive Practices and Human Exploitation standards in the context of job-related content.
I offer a direct comparison from the United States showing that the same moderation logic that wrongly penalized Taiwan’s police anti-scam campaign is also silencing legitimate, factual job information intended to help citizens.

2. My Case in Brief

In October 2025, my Facebook post listing verified remote-contractor roles from Mercor (https://work.mercor.com) triggered removal under the "job fraud and scams" provision and a one-year restriction on core account features, including voice and video.
The post:
  • Contained clear job titles and realistic pay ranges;
  • Linked directly to a verified U.S. platform;
  • Requested no fees or personal information; and
  • Made no guarantees of income or employment.

My intent—as a civic volunteer in a remote Arkansas community with chronic unemployment—was to educate neighbors about real, lawful remote work.
Automated detection labeled it “job fraud,” and human review affirmed the algorithm’s finding without contextual analysis.

3. Why This Mirrors the Taiwan Case

Both cases reveal a single structural flaw: Meta’s automated systems misclassify protective or educational job content as exploitative.

Element | Taiwan Case | My U.S. Case
Content type | Government anti-scam PSA | Verified job listing notice
Purpose | Public awareness & prevention | Public access & education
Trigger term(s) | “Job,” “recruitment,” “scam” | “Remote job,” “contract,” “AI training”
Enforcement result | Removed as Human Exploitation | Removed as Job Fraud
Outcome | Reversed after Board attention | Upheld internally; pending appeal
Common flaw | Algorithmic context blindness + rubber-stamp human review | Same

Thus, what began as a regional moderation issue in Asia is, in fact, a global systemic over-enforcement problem that chills legitimate employment discourse and economic participation.

4. Socio-Economic Parallels

Both Taiwan and rural U.S. communities face labor-market vulnerability: low wages, limited mobility, and digital dependence for work opportunities.
Over-zealous filtering of job content removes a key self-help channel for citizens who lack other employment pipelines.
When legitimate posts are erased, scammers simply migrate to closed groups and encrypted apps, leaving the public less protected.

5. Policy-Level Observation

Meta’s Fraud/Scam standard is written broadly but administered mechanically.
Its current classifier bundles together:
  • job-fraud risk phrases (e.g., “work from home,” “hiring now”);
  • pyramid-scheme markers (e.g., “guaranteed income”); and
  • financial solicitation terms.

Without contextual logic or a verified-source whitelist, the model cannot distinguish prevention, education, or legitimate access from exploitation.
That design flaw now spans continents.

6. Real-World Harm

a. Disproportionate restriction. Losing Messenger voice/video for a year in a low-signal region has tangible safety implications.
b. Economic exclusion. Residents who relied on my post for job leads lost trust in sharing employment info publicly.
c. Psychological chilling effect. Professionals who follow Meta’s rules experience reputational damage when flagged as “scammers.”
d. Cross-regional inequality. Urban users with verified-business pages are spared; rural or foreign users face automatic suspicion.

7. Evidence of Legitimate Source

Mercor is a Delaware-registered technology company providing AI-training contracts.
At the time of enforcement, the identical job listings appeared on its verified corporate site and on LinkedIn.
This satisfies any reasonable due-diligence threshold for legitimacy—proof that Meta’s detection was purely keyword-based.

8. Broader Policy Question for the Board

How can Meta protect users from job scams without criminalizing job access itself?

The Taiwan case centers on government speech; mine centers on citizen speech. Together they define the outer boundary of lawful, good-faith “employment communication.”
A Board ruling that distinguishes educational and verified employment information from deceptive solicitation would establish a clear, global precedent.

9. Recommended Findings

Acknowledge global over-enforcement. Meta’s models over-flag employment-related content across languages and jurisdictions.

Recognize public-interest value. Economic-access posts and anti-scam PSAs advance user safety, not exploitation.

Apply proportionality. Severe account restrictions for a single misclassification violate Meta’s stated “least intrusive means” principle.

10. Recommended Policy Reforms

A. Context-based Enforcement

Require human reviewers to check five objective indicators before confirming “job fraud”:
(1) presence of fee request; (2) guarantee of employment or return; (3) vague employer identity; (4) off-platform solicitation; (5) verified platform link.

If indicators (4) and (5) point to a legitimate source (no off-platform solicitation, and a link to a verified platform), enforcement must default to allow.
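To illustrate the intended logic, a minimal sketch of this default-allow rule follows. The indicator names, the sample registry of verified platforms, and the decision function are hypothetical illustrations of the recommendation, not a description of Meta’s actual classifiers or internal tooling.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical registry of recognized employment platforms (see Recommendation B below).
    VERIFIED_PLATFORMS = {"work.mercor.com", "www.upwork.com", "www.indeed.com"}

    @dataclass
    class JobPostSignals:
        """The five objective indicators a reviewer would record before confirming 'job fraud'."""
        requests_fee: bool               # (1) presence of a fee request
        guarantees_income: bool          # (2) guarantee of employment or return
        vague_employer: bool             # (3) vague employer identity
        off_platform_solicitation: bool  # (4) solicitation to move off-platform
        link_domain: Optional[str]       # (5) domain of any linked job platform, if present

    def enforcement_decision(signals: JobPostSignals) -> str:
        """Apply the proposed rule: a verified source with no off-platform solicitation defaults to allow."""
        links_verified_platform = signals.link_domain in VERIFIED_PLATFORMS
        if not signals.off_platform_solicitation and links_verified_platform:
            return "allow"  # indicators (4) and (5) establish a legitimate source
        # Otherwise, escalate to contextual human review only if fraud indicators are present.
        fraud_indicators = [signals.requests_fee, signals.guarantees_income, signals.vague_employer]
        return "escalate to human review" if any(fraud_indicators) else "allow"

Under this sketch, a post linking to work.mercor.com that requests no fees and makes no income guarantees would be allowed rather than removed.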

B. Verified-Platform Whitelist
Maintain a dynamic registry of recognized employment platforms (e.g., Mercor, Upwork, Indeed) to suppress false positives.

C. Public-Interest Tag
Allow pages or individuals to label posts as educational/awareness about employment or scams, triggering lighter review.

D. Transparency and Metrics
Publish quarterly regional accuracy rates for Fraud/Job Scam enforcement, akin to transparency reports for CSAM and Hate Speech.

E. Remedial Proportionality
Messenger communication bans should never attach to single-post misclassifications; require an independent human proportionality check.

11. Intersection with the Board’s Strategic Priorities

This comment supports two Board priorities:

  • Automated Enforcement of Policies and Curation of Content – highlighting how the same automation logic affects multiple geographies.
  • Government Use of Meta’s Platforms – showing that misclassification now extends from official anti-fraud messaging (Taiwan police) to ordinary civic speech (U.S. citizen advocate).

A holistic remedy therefore benefits both public institutions and individuals.

12. Comparative Legal and Ethical Perspective

Under international human-rights standards—Article 19 ICCPR and UN Guiding Principles on Business and Human Rights—companies must ensure restrictions on expression are lawful, necessary, and proportionate.
Removing lawful job information fails all three tests.
Meta’s AI models act as de facto gatekeepers of economic speech without transparency or appeal.
The Oversight Board’s ruling can realign enforcement with human-rights norms by requiring Meta to incorporate contextual proportionality into its automation loop.

13. Broader Economic Implications

Digital platforms increasingly mediate global labor flows.
If users fear posting job opportunities, the result is informational asymmetry favoring scammers who operate privately.
By contrast, verified public sharing—like my post and the Taiwan police PSA—creates visibility and collective defense.
Encouraging such transparency strengthens, rather than weakens, fraud prevention.

14. Proposed Research/Follow-Up

I encourage the Board to request that Meta disclose:

  • false-positive rates of its job-fraud classifier by region and language;
  • appeal success rates for employment-related content; and
  • any internal thresholds for “high-risk” economic terminology.

This data will help determine whether Meta’s models systematically discriminate against small jurisdictions, non-corporate speakers, or rural users.

15. Concluding Statement

The Taiwan police sought to protect citizens from job scams. I sought to connect my neighbors with real work.
Both actions were punished by the same flawed enforcement logic.
That pattern—across languages, governments, and ordinary people—demonstrates a systemic failure of contextual moderation.
I urge the Oversight Board to:
  • Affirm that awareness of, and access to, lawful employment constitutes protected expression;
  • Direct Meta to implement contextual, proportional review standards for the Fraud, Scams and Deceptive Practices policy; and
  • Extend those reforms globally.

If Meta can differentiate satire from hate speech, it can certainly differentiate help from harm in employment communication.

Respectfully submitted,
Amberly Martin, JD (inactive)
Horseshoe Bend, Arkansas USA
amberly.jeffries@gmail.com
Case Ref: FB-ZH4TWLVM

Name
Leo Chen
Organization
資鋒法律事務所
Country
Taiwan
Language
Mandarin

Scam advertisements are rampant on Facebook. The platform’s review mechanisms are often ineffective, and its appeals process is a mere formality.
This does not comply with the Santa Clara Principles or the Manila Principles.

Case description

In October 2024, a police department in Taiwan reshared a post on its Facebook page. The reshared post, which is in Chinese, contains an image of animated pigs and a bird in a police uniform holding a sign. Overlay text on the image describes the signs of job scams and warns job seekers. The image caption includes a similar list of job scam keywords and advice on how to prevent being scammed. The caption ends with information about an anti-scam hotline.   

In July 2025, Meta’s automated systems identified the content as potentially violating the Human Exploitation Community Standard, then removed it. This policy prohibits content that recruits or facilitates labor exploitation. Meta also has an anti-scam policy, the Fraud, Scams and Deceptive Practices Community Standard, which aims to protect users and businesses from being deceived out of money, property or personal information, including “job fraud and scams.” Both policies allow content that raises awareness or condemns scams.  

An administrator of the police department’s Facebook page appealed to Meta, and a human reviewer upheld the original decision to remove the post. A page administrator then appealed to the Board, stating that the post aimed to prevent fraud and was part of an official governmental initiative to raise awareness on safe employment practices.   

When the Board brought the case to Meta’s attention, Meta’s subject matter experts reviewed the post, and concluded that it was shared to raise awareness and educate users on common scam tactics and labor exploitation. As a result, Meta reversed its original decision and restored the post. 

The Board selected this case to assess Meta’s moderation practices in enforcing its Human Exploitation and Fraud, Scams and Deceptive Practices policies, particularly in the context of online job scams. This case falls within the Board’s Automated Enforcement of Policies and Curation of Content and Government’s Use of Meta’s Platforms, two of the Board’s seven strategic priorities. 

The Board would appreciate public comments that address: 

  • The socioeconomic impacts of online job scams in Taiwan and the broader Asia Pacific region.  
  • Best practices for addressing online scam enforcement circumvention efforts.  
  • Effectiveness of Meta’s enforcement practices for its rules against online job scams in Taiwan and other regions, including any potential implications of Meta’s overenforcement or underenforcement of these policies. 
  • Insights on campaigns against job scams on Meta platforms, as well as government anti-scam efforts in Taiwan.  

 

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days.  

As such, the Board welcomes public comments proposing recommendations that are relevant to this case. 

Public Comments 

If you or your organization feel you can contribute valuable perspectives that can help with reaching a decision on the case announced today, you can submit your contributions using the button below. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 Pacific Standard Time (PST) on Thursday 6 November. 

 What’s Next 

Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their decision, we will post it on the Decisions page.