Meta overhauls content moderation to reduce censorship
Meta, the company that owns Facebook, Instagram, and Threads, is implementing significant changes to its content moderation policies to lessen censorship, reports a Kazinform News Agency correspondent.
CEO Mark Zuckerberg outlined five key updates that will significantly impact how content is reviewed and managed on these platforms.
Meta will replace its third-party fact-checkers with user-generated “community notes”, a system similar to what Elon Musk implemented at X (formerly Twitter). This shift will apply to all Meta platforms.
“Fact checkers have been too politically biased and have destroyed more trust than they’ve created,” Zuckerberg said. “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far.”
Content guidelines will be streamlined, making them easier to understand and follow; as a result, less content will be flagged as violating the rules. Zuckerberg highlighted that topics like immigration and gender will no longer be censored. Instead of proactively scanning 100% of posts for violations, Meta will rely on user reports to flag content for review, while automated systems focus only on high-severity violations such as terrorism, child exploitation, and fraud.
“We’ve reached a point where it’s just too many mistakes and too much censorship,” Zuckerberg said, acknowledging that the previous system mistakenly removed too much legitimate content.
Since 2020, Meta has limited recommendations for political content in user feeds. This policy will be reversed, allowing users to see more politics-related content, even from accounts they do not follow. This change aims to promote civic engagement and open discourse.

Additionally, Meta will move its Trust and Safety teams from California to Texas and other U.S. locations. Zuckerberg explained that this relocation is intended to build trust and address concerns about bias within the moderation teams.
“I think that will help us build trust to do this work in places where there is less concern about the bias of our teams,” he said.
These changes mark a stark departure from Meta’s previous stance. In 2016, the company launched an independent fact-checking program to combat misinformation after claims that foreign actors used its platforms to spread disinformation.
Over the years, Meta built safety teams, automated moderation systems, and even created the Oversight Board to manage complex moderation decisions. However, conservatives have long criticized these measures as biased and overly restrictive.
“Anything I put on there about our president is generally only on for a few minutes and then suddenly they’re fact-checking me,” a Trump supporter said during a CNN interview in 2020, reflecting widespread dissatisfaction among right-wing users.
Joel Kaplan, Meta’s Chief Global Affairs Officer, acknowledged the limitations of third-party fact-checking. “Well-intentioned at the outset, but there’s just been too much political bias in what they choose to fact-check and how,” he said.
These changes align with broader political and strategic considerations. Meta’s announcement comes as the U.S. government considers banning TikTok and amid reports of strained relations between Elon Musk and the current administration.
Additionally, Meta recently appointed Trump ally and UFC CEO Dana White to its board and pledged $1 million to Trump’s inaugural fund. These moves suggest an ideological shift within Meta’s leadership and a possible attempt to align more closely with the incoming administration.
Zuckerberg acknowledged the tradeoffs involved in these changes. “The reality is this is a tradeoff,” he said. “It means that we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
By shifting its moderation policies, Meta aims to promote free expression, address past criticisms, and adapt to evolving political and social dynamics.
Earlier, it was reported that Meta is seeking advice from nuclear power developers to help meet its AI goals.