Board Member Spotlight: Julie Owono

In the first of a Q&A series featuring insights and experiences from individual Oversight Board Members, we put the spotlight on Julie Owono, the executive director of Internet Sans Frontières and one of the founding members of the Board. Julie is also a researcher affiliated with the Berkman Klein Center at Harvard University and the University of California, Berkeley's Human Rights Center.

What was it that moved you to join the Oversight Board?

Several reasons. First, I had spent years at Internet Sans Frontières documenting what happens when platforms fail their users, especially those in countries and markets where content moderation errors can break lives: hate speech escalating into offline violence, journalists silenced, activists deplatformed with no recourse. The Board struck me as a structural innovation, an attempt to build independent accountability into a system that had none. I wanted to be part of proving that the model could work, and I wanted to make sure it worked for users whose contexts are too often invisible in Silicon Valley policy rooms.

The second was more personal. I believe that how we govern speech online is one of the defining constitutional questions of our era. The platforms have become, in practice, a kind of transnational legal order. I didn't want to simply critique that from the outside. I wanted to be inside, doing the work.
Why do you think content moderation is an important issue for users?

Because every user of a major platform is living under a system of rules they didn't choose, can't fully see, and have very little power to challenge. Content moderation determines what information reaches you, what voices get amplified or suppressed, and whether you have any meaningful recourse when a decision affects you.

The costs of getting content moderation wrong are not evenly distributed. Moderation systems tend to fail people in communities that are already marginalized: speakers of lower-resource languages, users in conflict zones, and journalists and activists facing coordinated harassment. Done well, content moderation can strengthen democratic participation; done badly, it reinforces exclusion.

What is one thing you wish people knew about the Oversight Board?

That our decisions are binding on Meta, and that Meta has complied. I think many people assume we're an advisory body that the company can simply set aside. That's not the case. When the Board issues a decision reversing a content removal, Meta must implement it. When we issue policy recommendations, the company must publicly respond. That accountability mechanism has produced changes.

That doesn't mean the Board is perfect, or that our mandate is unlimited. But the notion that we are simply a PR exercise fundamentally misreads how the institution actually functions.

How do you think Meta's users have been impacted by the Board's work?

In the most direct sense, the Board has given individual users a real path to external appeal. The cases we've taken up have resulted in reversals of Meta's original decisions, content reinstatements, and concrete policy changes. But the larger impact is harder to see and perhaps more important: the Board's existence has changed the experience of using Meta's platforms in ways that go beyond individual cases. Users today interact with systems that have been scrutinized, challenged, and in many instances reformed because of our work. Policies have been revised. Enforcement gaps have been identified and addressed. The company operates with the knowledge that its decisions can be independently reviewed.