PARIS: The body created by Facebook to review content moderation decisions warned Thursday that user-generated fact-checks could harm people living under repression or conflict if they are introduced worldwide.

Facebook parent Meta announced last year that it would end its use of external fact-checkers in the US.

That scheme had employed third parties, including AFP, to debunk misinformation.

Instead, Meta said it would ask ordinary users to verify controversial claims under a system known as “community notes,” aping methods used on X and other social networks.

If rolled out worldwide, that scheme “could... pose significant human rights risks and contribute to tangible harms,” Meta’s Oversight Board said in an advisory issued Thursday.