Meta's chief executive Mark Zuckerberg announced Tuesday the tech giant was ending its third-party fact-checking program in the United States and turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.
The decision comes after years of criticism from supporters of President-elect Donald Trump, among others, that conservative voices were being censored or stifled under the guise of fighting misinformation, a claim professional fact-checkers vehemently reject.
The announcement, which included plans for scaling back content moderation and "restoring free expression" on its platforms, came with an acknowledgement from Zuckerberg that the policy shift meant "we're going to catch less bad stuff."
"Abandoning formal fact-checking for crowdsourcing tools like Community Notes has failed platforms in the past," Nora Benavidez, senior counsel at the nonprofit watchdog Free Press, told AFP.
"Twitter tried it and can't withstand the volume of misinformation and other violent, violative content," she added.
After his 2022 purchase of Twitter, rebranded as X, Elon Musk gutted trust and safety teams and introduced Community Notes, a crowd-sourced moderation tool that the platform has promoted as the way for users to add context to posts.
Researchers say the lowering of the guardrails on X, and the reinstatement of once-banned accounts of known peddlers of misinformation, have turned the platform into a haven for misinformation.
- 'Mistaken beliefs' -
Studies have shown Community Notes can work to dispel some falsehoods such as vaccine misinformation, but researchers caution that it works best for topics where there is broad consensus.
"Although research supports the idea that crowdsourcing fact-checking can be effective when done correctly, it is important to understand that this is intended to supplement fact-checking from professionals -- not to replace it," said Gordon Pennycook, from Cornell University.
"In an information ecosystem where misinformation is having a large influence, crowdsourced fact-checking will simply reflect the mistaken beliefs of the majority," he added.
Meta's new approach ignores research that shows "Community Notes users are very much motivated by partisan motives and tend to over-target their political opponents," added Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech.
By comparison, a study published last September in the journal Nature Human Behaviour showed that warning labels from professional fact-checkers reduced belief in -- and the sharing of -- misinformation even among those "most distrusting of fact-checkers."
Ending the fact-checking program opens the floodgates for harmful misinformation, researchers say.
"By axing his factcheckers, Zuckerberg has ripped out yet another of his companies' safety measures on platforms such as Facebook and Instagram," said Rosa Curling, co-executive director of UK-based legal activist firm Foxglove, which has backed a lawsuit against Meta in Kenya.
"If he's all-in on the Musk playbook, the next step will be slashing yet more of his content moderator numbers," including those that take down violent content and hate speech.
- 'Unsafe' -
As part of the overhaul, Meta has said it will relocate its trust and safety teams from liberal California to the more conservative state of Texas.
AFP currently works in 26 languages with Facebook's fact-checking programme, including in the United States and the European Union.
Meta has also announced major updates to its moderation policies, in a move that advocacy groups said lowers the bar against hate speech and harassment of minorities.
The latest version of Meta's community guidelines said its platforms allow users to accuse people of "mental illness or abnormality" based on their gender or sexual orientation.
Abandoning industry-standard hate speech policies makes Meta's platforms "unsafe places," said Sarah Kate Ellis, president of the advocacy group GLAAD.
Without these policies, "Meta is giving the green light for people to target LGBTQ people, women, immigrants, and other marginalized groups with violence, vitriol, and dehumanizing narratives," Ellis added.
Australia frets over Meta halt to US fact-checking
Sydney (AFP) Jan 9, 2025 -
Australia is deeply concerned by Meta's decision to scrap US fact-check operations on its Facebook and Instagram platforms, a senior minister said Thursday.
The government -- which has been at the forefront of efforts to rein in social media giants -- was worried about a surge of false information spreading online, Treasurer Jim Chalmers said.
"Misinformation and disinformation is very dangerous, and we've seen it really kind of explode in the last few years," Chalmers told national broadcaster ABC.
"And it's a very damaging development, damaging for our democracy. It can be damaging for people's mental health to get the wrong information on social media, and so of course we are concerned about that."
Meta chief executive Mark Zuckerberg announced Tuesday the group would "get rid of fact-checkers" and replace them with community-based posts, starting in the United States.
Chalmers said the decision was "very concerning".
The government had invested in trusted Australian news providers such as the ABC and national newswire AAP to ensure people had reliable sources for information, he said.
Disinformation and misinformation had become "a bigger and bigger part of our media, particularly our social media", the treasurer said.
- Social media restrictions -
Australia has frequently irked social media giants, notably Elon Musk's X, with its efforts to restrict the distribution of false information or content it deems dangerous.
Late last year, the country passed laws to ban under-16s from signing up for social media platforms. Offenders face fines of up to Aus$50 million (US$32.5 million) for "systemic breaches".
But in November, a lack of support in parliament forced the government to ditch plans to fine social media companies if they fail to stem the spread of misinformation.
Prime Minister Anthony Albanese said Wednesday he stood by the ban on children's access to social media because of the impact it had on their mental health.
Asked about Meta's fact-checking retreat, Albanese told reporters: "I say to social media they have a social responsibility and they should fulfil it."
Australian group Digital Rights Watch said Meta had made a "terrible decision", accusing it of acting in clear deference to incoming US president Donald Trump.
Facebook pays to use fact checks from around 80 organisations globally on the platform, as well as on WhatsApp and Instagram.
Australian fact-checking operation AAP FactCheck said its contract with Meta in Australia, New Zealand, and the Pacific was not impacted by the group's US decision.
"Independent fact-checkers are a vital safeguard against the spread of harmful misinformation and disinformation that threatens to undermine free democratic debate in Australia and aims to manipulate public opinion," said AAP chief executive Lisa Davies.