Why Don’t All Tech Companies Have Independent Oversight?
Published on 4 December 2025
Lessons From Meta’s Watchdog
By Evelyn Aswad, Paolo Carozza, Pamela San Martin and Helle Thorning-Schmidt, Oversight Board Co-Chairs
If “move fast and break things” was Facebook’s motto in 2013, “slow down and fix this mess” was the message in 2018. From privacy scandals to allegations of involvement in human rights abuses, electoral interference and impeding free speech, Facebook was in crisis. It was not trusted to protect users’ rights, yet it found itself single-handedly determining what content should stay on its platforms and what should be removed, critically affecting the free speech rights of its billions of users, 95% of whom today live outside the United States.
Amid that chaos, Meta created the Oversight Board. No longer would the toughest content decisions be made solely in a meeting room in Menlo Park, but also increasingly by a group of independent experts representing every part of the world and empowered to put the human rights of users first. Our mandate: protect free expression for all users – no matter where or who they were, or how inconvenient their views. We would do this through principled and consistent judgments grounded in international human rights law principles. Facebook could not fire us for decisions it did not like. The company would uphold our case decisions and had to publicly respond to our recommendations.
Five years later, the whole tech sector is moving fast, now supercharged by AI. So, what lessons can be drawn from our novel experiment in independent oversight?
Consequential improvements
In an increasingly polarized world, the Oversight Board has proved that it is possible to bring more voices and expertise to the most formidable content challenges. Meta is more accountable, more transparent and has better systems and processes as a result of the Board’s work, though there is still far to go. Meta has honored its promise to uphold our case decisions on content and publicly responded to our recommendations, opening itself up to real public scrutiny. This benefits not only its billions of users, but its network of customers – organizations large and small – that use Meta as a vital place to advertise.
The cases we considered, decisions we made and the recommendations we issued have spurred consequential improvements.
In our first year of operations, we found that regular moderation processes are often inadequate during times of crisis. Our recommendations pushed Meta to create an industry-leading crisis protocol that responds more effectively to emergencies, whether a conflict, social unrest, a contested election or a natural disaster. This remains an area of focus for the Board, as we recently recommended that Meta activate the crisis protocol more swiftly or modify it in high-risk situations, such as the riots that broke out in the UK in 2024 or the conflict in Syria that resulted in the overthrow of the Assad regime.
We also looked at perceived bias in the moderation of political views. We found that such bias can arise from several factors, including out-of-date or blunt policies, underinvestment in language capabilities and cultural understanding, and repeated enforcement mistakes that create patterns of censorship because systems are learning and implementing the wrong lessons over time. The Board recommended that Meta regularly provide the public with the data the company uses to evaluate the accuracy of its enforcement actions. Such disclosure would allow for analysis of whether the errors were isolated or a larger-scale problem.
Clearer, fairer, more transparent
Importantly, our case outcomes increased users’ ability to raise awareness of critical health issues such as eating disorders and Covid-19. These included cases on breast cancer where posts containing nudity were removed because algorithms could not understand the awareness-raising context around them. We pushed Meta to create new techniques to identify awareness-raising content, preserving access to potentially life-saving information.
We made clarity and transparency of Meta’s complicated rules a priority. Because of our work, users are now notified about which rule they have violated if their content is taken down, allowing them to rewrite their post or to appeal if they think their content was removed by mistake.
We also found that Meta’s strike system, which gives users a strike on their account if they violate the rules, has one of the biggest impacts on users’ speech. Under this system, multiple strikes can lead to account restrictions, limiting a person’s ability to post anything at all. To safeguard users’ free expression, we urged Meta to consider alternatives to strikes for less severe violations. As of early 2025, Meta can now send an “eligible violation notice” to users who commit their first minor violation. The notice includes details about the policy the person breached, along with the option of either appealing the decision or completing an educational exercise to avoid a strike being applied to their account.
And, last but certainly not least, we have fought to make the platform fairer and more consistent. We examined Meta’s cross-check system and found that it favored some high-profile users. For example, when a post from a user on cross-check lists was identified as violating, that content remained up for days while an additional review was carried out. It is during this period that violating content is at its most viral and therefore most harmful. The Board made more than 32 recommendations to Meta regarding its cross-check program, including increasing transparency around how the program works and removing violating content while the additional review is conducted.
Continued challenges
As we reflect on what started as a platform governance experiment, we credit Meta for its work to mature as a company and create sophisticated procedures to curate massive amounts of content every day. The company has not always gotten it right, and it has not always liked it when we said so and pressed for change. But as new issues, new regulations and new products emerge, we know there is much more work to do to keep Meta focused on its human rights commitments and to protect users’ free expression. If European Union regulators talk about systemic risk, we will dig into that. As platforms struggle to address youth issues, we will be there. As AI ushers in a new era for most technologies, we will be assessing the impact on users’ rights.
All of this makes us wonder how other companies are making decisions about the content on their platforms. Who is deciding what gets taken down and what stays up? Who will assess the impact of these decisions on users’ rights? And, in the era of AI, who will push for transparency in how content is created, evaluated and promoted? We urge all platforms to open themselves up to public scrutiny and some form of independent oversight.
As the sector now engages in another extraordinary period of growth and competition, the real test for how much tech companies have learned from past harms to users’ rights will be in their commitment to protect them moving forward. Is it possible that this time they will move fast and responsibly? Maybe. Hopefully. But they cannot – and should not – do it alone.