Meta has announced that it is replacing its third-party fact-checking program with a user-driven “Community Notes” system, similar to the one employed by X (formerly Twitter).
This change was detailed by Meta CEO Mark Zuckerberg and Chief Global Affairs Officer Joel Kaplan, and signals a significant shift in content moderation strategy across Facebook, Instagram, and Threads. Here’s what you need to know:
End of Third-Party Fact-Checking: Meta is discontinuing its US program with independent third-party fact-checkers, citing what it characterizes as political bias among fact-checkers and over-enforcement that swept up too much legitimate content.
Community Notes Implementation: Starting in the US, Meta will introduce a system in which users contribute notes that add context or corrections to posts, akin to X’s Community Notes. The goal is to let the community address potentially misleading content: a note becomes visible to everyone only after it is rated helpful by contributors with a range of perspectives (a simplified sketch of how such a rule could work follows this list).
Policy Simplification: Alongside this, Meta aims to simplify its content policies, lifting restrictions on contentious topics such as immigration and gender. Automated enforcement will concentrate on illegal and high-severity violations, such as terrorism, child sexual exploitation, and scams, rather than scanning broadly across all content.
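To make the “diverse perspectives” requirement concrete, here is a minimal, hypothetical sketch of one way such an approval rule could work. The cluster labels, thresholds, and function names are all assumptions for illustration; X’s published Community Notes ranker actually scores notes with matrix factorization over rating history, and Meta has not detailed its own implementation.

```python
# Toy "diverse perspectives" approval rule -- illustrative only,
# not Meta's or X's actual algorithm. It checks that a note is
# rated helpful by raters from more than one viewpoint cluster
# before the note is shown to everyone.

from collections import defaultdict

# Hypothetical thresholds, chosen only for this example.
MIN_RATINGS = 5          # minimum ratings before a note is judged
MIN_HELPFUL_SHARE = 0.7  # overall share of "helpful" ratings required
MIN_CLUSTERS = 2         # helpful ratings must span this many clusters


def note_is_visible(ratings):
    """Decide whether a note should be shown to all users.

    `ratings` is a list of (rater_cluster, is_helpful) pairs, where
    rater_cluster is a label from some upstream grouping of raters
    by past rating behavior (the "perspective" signal).
    """
    if len(ratings) < MIN_RATINGS:
        return False  # not enough signal yet

    helpful_by_cluster = defaultdict(int)
    helpful_total = 0
    for cluster, is_helpful in ratings:
        if is_helpful:
            helpful_by_cluster[cluster] += 1
            helpful_total += 1

    helpful_share = helpful_total / len(ratings)
    diverse = len(helpful_by_cluster) >= MIN_CLUSTERS
    return helpful_share >= MIN_HELPFUL_SHARE and diverse


# Helpful ratings from two different clusters pass the bar...
print(note_is_visible([("A", True), ("A", True), ("B", True),
                       ("B", True), ("A", False)]))   # True
# ...but one-sided support does not, even when unanimous.
print(note_is_visible([("A", True)] * 5))             # False
```

The key design idea this toy captures is that raw popularity is not enough: a note endorsed unanimously by one side fails, while cross-cluster agreement succeeds, which is what distinguishes this approach from a simple upvote count.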
This move follows the 2024 U.S. presidential election, which Zuckerberg described as a “cultural tipping point” towards prioritizing free speech. The decision also carries a political dimension: some view it as aligning with conservative critiques of content moderation as censorship.
The change has elicited mixed reactions: some applaud it as a return to free expression, while others, including misinformation researchers, fear it will let false claims spread more widely.
The rollout of Community Notes in the U.S. will begin over the next couple of months, with plans to refine the system throughout the year.
This shift is part of Meta’s broader strategy to reduce the complexity of its content moderation systems, which the company argues have produced too many mistakes and too much censorship. Critics worry that false or misleading content will circulate with less oversight, while supporters see the change as enhancing user autonomy and expression.