Investigation finds that Facebook removed fact-check labels from some climate change denial posts

While Facebook continues its efforts to remove COVID-19 misinformation, various reports over the past few weeks have suggested that it does not do nearly as much to stop the spread of other types of misinformation on its network, raising questions about the role Facebook plays in amplifying such content, and what its motives are for taking a more ‘hands off’ approach in certain contexts.

The latest report comes from Popular Information, which today published an investigation into a specific climate change denial article which, despite Facebook’s fact-checking partners rating it ‘partly false’, and labeling it as such under Facebook’s policy, eventually had that label removed, apparently after intervention by Facebook management.

According to Popular Information:

‘The article, written by Michael Shellenberger and published on The Daily Wire, uses 12 ‘facts’ to argue that concerns about climate change are overblown. […] But then, without explanation, the fact check was removed. If a Facebook user tries to share the article today, there is no warning and no link to the fact check. Shellenberger’s piece, on The Daily Wire and elsewhere, has now been shared more than 65,000 times on Facebook.’

In its investigation, the Popular Information team found that top Facebook executives – including Nick Clegg, its VP of Global Affairs and Communications, Campbell Brown, its VP of Global News Partnerships, and Joel Kaplan, its VP of Global Public Policy – were specifically consulted on the fact-checking ruling, which ultimately led to the removal of the label, a tool that Facebook’s own research has shown to be effective in limiting the spread of misinformation online.

The report also points out that Facebook ‘was asked by the office of Congressman Mike Johnson (R-LA), a powerful member of the Republican leadership, to reverse the fact-checking’.

It’s not entirely clear exactly what happened in this case, but it appears that Facebook, at the request of a political representative, may have chosen to remove a fact-check label from a climate denial post, despite it being marked as false by its fact-checking partners.

This is the second major incident for Facebook over the past two months on this exact topic. Last month, reports emerged that Facebook had allowed climate change denial content to remain on its platform after its staff ruled that such discussion was not eligible for fact-checking because it qualified as ‘opinion’.

Indeed, various climate change denial posts, groups and pages remain active and see significant engagement on the social network – which, given that social media platforms now surpass print newspapers as a news source for Americans, seems to be a major cause for concern.

Climate change groups on Facebook

Of course, there is still debate about the severity of climate change and its consequences, hence the ‘opinion’ loophole. But when Facebook’s own fact-checking partners raise concerns, it would seem to be time for Facebook to act on their rulings.

Again, Facebook is working hard to stamp out COVID-19 misinformation, so why wouldn’t it treat climate change falsehoods, which are equally contradicted by science, the same way? And beyond that, why not fact-check ads from politicians, the even bigger elephant in the room?

There are various theories as to why Facebook may not want to push as hard on certain issues, including the fact that Facebook, of course, benefits from such discussion.

As Bill McKibben recently remarked in The New Yorker:

“Why is it so hard to get Facebook to do something about the hatred and fraud that fill its pages, even when it’s clear they’re helping to destroy democracy? And why did the company recently decide to exempt a climate-denial post from the fact-checking process? The answer is clear: Facebook’s core business is to get as many people as possible to spend as many hours as possible on its site, so that it can sell their attention to advertisers.”

Many share the same view – that Facebook ultimately benefits from such engagement, with divisive, argumentative content like this prompting emotional response. Emotional response is key to viral sharing, so in many ways it’s actually in Facebook’s interest to let such content live on its platform.

This could also be one of the reasons why Facebook has been so keen to promote the use of groups over the past few years. When people share such content in their public feeds, it invites scrutiny, but the same content shared in private groups gives Facebook all the benefits of the engagement, without the accompanying criticism.

Facebook’s algorithm is, indeed, built around inciting engagement, regardless of what that engagement is. Facebook’s system is therefore also designed to amplify content that sparks debate and prompts users to comment – and as such, it’s clearly in Facebook’s interest to let such debate, at some level, take place and be amplified on its platforms.

You could also argue that the same process has changed the way such issues are reported more broadly. Because Facebook’s system rewards debate, publishers are incentivized to run more partisan, biased headlines in order to gain optimal reach on the network. This, arguably, could be one of the biggest factors in amplifying division in modern society – the rise of online sharing algorithms, which dictate amplification based on comments and shares, has changed the incentives for online publishers, pushing their readers to certain sides of a given debate through increasingly partisan reporting.

Either way, Facebook benefits from division. So, what can be done? What should regulators and officials do to limit the impact of platform algorithms and remove bias – if indeed anything can be done to reduce it online?

The question is becoming increasingly pressing as Facebook continues to grow (it now serves some 3 billion users), and more users come to rely on its apps to stay informed. The recent closure of smaller, regional publications, due to the COVID-19 lockdowns, will only exacerbate this, and if Facebook is motivated, for whatever reason, to remove measures such as fact-checking, that seems to be a reality we will have to contend with.

But such decisions carry significant weight – if Facebook can simply pick and choose when it applies labels such as fact checks, it arguably should not be allowed to wield such influence.

This is, increasingly, a question that regulators need to pay attention to, but if you want to understand why we feel more divided, and why anti-science movements are gaining more traction than ever, this may be where you need to start.
