Board Member Spotlight: Endy Bayuni

In the latest edition of our series featuring insights and experiences from individual Oversight Board Members, we put the spotlight on Endy Bayuni. Endy is Senior Editor and Advisor to the Editorial Board at the Jakarta Post and a writer on national politics, international relations, political Islam and the media landscape.

What moved you to join the Oversight Board?

For me, joining the Oversight Board is a calling. It’s the same spirit that has driven my work as a journalist for more than 40 years. Journalism, for me, is a profession that serves the interests of the people and the community. We not only keep the public critically informed about events that impact it, but through our work, we push the lines of freedom, human rights, democracy and justice, to make the world a better place for everyone.

In Indonesia, where I come from, and much of the Global South, these are issues that people still struggle with. And they are also important in the Global North. It is the same spirit of serving the community, and now the wider global community, that drives my work at the Board. Social media has become indispensable to the lives of people around the world, and now even more so with the growth of artificial intelligence. 

In providing independent oversight of Meta’s content moderation policies and practices, the Board aims to make the company’s platforms safe places for people to conduct their activities. We are not under any illusion that they are already safe, which is precisely the challenge, and one that we at the Board intend to address.

Why do you think content moderation is an important issue for users?

Social media companies have content moderation policies and practices not only to protect users, but also to protect non-users who are impacted by what appears on platforms. An effective content moderation policy seeks to protect not only free expression, but also the safety, privacy and dignity of people. Independent oversight gives it accountability.

With the explosion of social media globally, we see how the platforms have increasingly been used for bad purposes. We are seeing the proliferation of misinformation and disinformation, hate speech, bullying and harassment, pornography and various other content that can make platforms unsafe, if not dangerous, including for children.

Although platforms have content moderation policies, they are still widely criticized for not taking down dangerous content fast enough and for taking down legitimate content. The challenge is one of scale. Although platforms have machines and human reviewers to identify and remove harmful content, the relatively few posts that escape their detection can still amount to hundreds of thousands, if not millions, of pieces of content each day.

Platforms are still figuring out the best way to moderate content that protects users’ voice and other rights. The Board, through our work in hearing cases, has helped improve Meta’s content moderation policies and practices.

What is one thing you wish people knew about the Oversight Board?

That the Oversight Board exists to serve users’ interests, to protect their speech and other rights, and to help them with mitigation and even some form of remediation.

Content moderation policies are too long and too complex for ordinary users to read and comprehend. Users only read them when something happens to their content or their accounts. We can’t expect them to be familiar with the community standards or guidelines that platforms publish. But it would be good for them to know that there is an institution like the Oversight Board to which they can turn if their content on Meta platforms is removed, or if they encounter content they feel is dangerous.

How do you think Meta’s users have been impacted by the Board’s work?

First, obviously, the decisions we make on whether particular content should be taken down or left up set precedents for Meta to follow when dealing with future content that raises similar issues. These decisions, which are binding on Meta, come with strong analysis and rationales after careful and principled deliberation, always referencing international human rights law.

Second, through the cases we have heard, the Board has come up with more than 300 policy recommendations for Meta to better protect the interests of users and non-users. The majority of these recommendations have been implemented. They have significantly strengthened Meta’s content moderation policies, enforcement and transparency.

Given the challenges in content moderation, one of which is the question of scale, these improvements may seem a drop in the bucket in Meta’s overall content moderation. But many of the Board’s recommendations are impactful. They include, for example, the creation of a crisis policy protocol, the policy requiring labelling of content created with artificial intelligence, and the requirement that Meta inform users when their content has been taken down because of a government request.

Meta still has progress to make in its content moderation policies, even more so now that artificial intelligence is used on its platforms. The Board will continue to provide independent oversight to help make sure that Meta finds the right path.
