To combat fake news, correct after reading

The battle to stop fake news and misinformation online is not going to end anytime soon, but a new finding from MIT scholars could help alleviate the problem.

In an experiment, the researchers found that fact-checking labels attached to online news headlines actually work better when they follow a false headline than when they precede or accompany it.

“We found that whether a false claim was corrected before people read it, while they were reading it, or after they read it influenced the effectiveness of the correction,” says David Rand, an MIT professor and co-author of a new paper outlining the study’s results.

Specifically, the researchers found that showing “true” and “false” labels immediately after participants read the headlines reduced their subsequent misclassification of those headlines by 25.3 percent. By contrast, misclassification fell by 8.6 percent when labels appeared along with the headlines, and by 5.7 percent when the correct label appeared beforehand.

“Timing matters when delivering fact-checks,” says Nadia M. Brashier, a cognitive neuroscientist and postdoctoral fellow at Harvard University and lead author of the paper.

The paper, “Timing matters when correcting fake news,” appeared this week in Proceedings of the National Academy of Sciences. The authors are Brashier; Rand; Gordon Pennycook, an assistant professor of behavioral science at the University of Regina’s Hill/Levene Schools of Business; and Adam Berinsky, the Mitsui Professor of Political Science at MIT and director of the MIT Political Experiments Research Lab.

To conduct the study, the scholars ran experiments with a total of 2,683 people, who looked at 18 true news headlines from major media sources and 18 false headlines that had been debunked by the fact-checking website snopes.com. Treatment groups of participants saw “true” and “false” labels before, during, or after reading the 36 headlines; a control group saw no labels. All participants rated the headlines for accuracy. One week later, everyone viewed the same headlines again, this time without any fact-check information, and rated them for accuracy once more.

The findings confounded the researchers’ expectations.

“Going into the project, I had anticipated it would work best to give the correction beforehand, so that people already knew to disbelieve the false claim when they came into contact with it,” Rand says. “To my surprise, we actually found the opposite. Debunking the claim after people were exposed to it was the most effective.”

But why might this approach, which the researchers call “debunking” as opposed to “prebunking,” achieve the best results?

The scholars write that the results are consistent with a “simultaneous storage hypothesis” of cognition, which suggests that people can hold both false information and corrections in their minds at the same time. It may not be possible to get people to ignore false headlines, but people are willing to update their beliefs about them.

“Allowing people to form their own impressions of news headlines, then providing ‘true’ or ‘false’ labels afterward, can act as feedback,” Brashier says. “And other research shows that feedback makes correct information ‘stick.’” It is important to note that the results might have been different if participants had not explicitly assessed the accuracy of the headlines as they read them, for instance, if they had simply been scrolling through their news feeds.

Overall, Berinsky suggests, the research could help inform the tools that social media platforms and other content providers use as they seek better ways to label and limit the flow of misinformation online.

“There is no single magic bullet that can cure the problem of misinformation,” says Berinsky, who has long studied political rumors and misinformation. “Studying basic questions in a systematic way is a critical step toward a portfolio of effective solutions. Like David, I was somewhat surprised by our findings, but this finding is an important step forward in helping us combat misinformation.”

The study was supported, in part, by the National Science Foundation, the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Reset Project of Luminate, the Social Sciences and Humanities Research Council of Canada, and Google.
