Fear of being scooped drives scientists to shoddy methods

Science

In the race for a COVID-19 vaccine, second place still offers honor – unlike in some scientific fields.

KORA_SUN/SHUTTERSTOCK, adapted by C. SMITH/SCIENCE

By Cathleen O’Grady

Leonid Tiokhin, a metascientist at Eindhoven University of Technology, learned early on to fear being scooped. He recalls emails from his undergraduate adviser emphasizing the importance of being the first to publish: “We need to be in a hurry; we need to rush.”

A new analysis by Tiokhin and his colleagues shows just how risky that competition is for science. Rewarding researchers who publish first pushes them to cut corners, their model shows. And although some proposed reforms could help, the model suggests that others may inadvertently make the problem worse.

Tiokhin’s team is not the first to argue that competition poses risks to science, says Paul Smaldino, a cognitive scientist at the University of California (UC), Merced, who was not involved in the research. But the model is the first to examine in detail exactly how those risks play out, he says. “I think it’s very powerful.”

In the digital model, Tiokhin and his colleagues built a toy world of 120 scientist “bots” competing for rewards. Each bot toiled through the simulation, collecting data on a series of research questions. The bots were programmed with different strategies: some collected larger, more meaningful datasets than others, and some tended to abandon a research question once someone else had published on it, while others stubbornly persisted. As the bots made discoveries and published them, they accumulated rewards – and those with the most rewards passed their methods on to the next generation of researchers more often.

Tiokhin and his colleagues tracked which tactics proved successful over 500 generations of scientists across a range of simulations. When a simulated world offered much greater rewards for publishing first, the population tended to rush its research and collect less data, producing a literature filled with shaky results, they report today in Nature Human Behaviour. When the gap in rewards was smaller, the scientists shifted toward larger sample sizes and a slower publication rate.
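The dynamic described above can be sketched in a small evolutionary simulation. The code below is a hypothetical toy reconstruction, not the authors' published model: here each bot's heritable "method" is simply its sample size, the smallest sample in each race publishes first and claims a first-mover bonus, and every published result also earns a reward proportional to its evidential quality (crudely, its sample size). All numbers are made up for illustration.

```python
import random

random.seed(42)

POP = 120          # number of scientist bots, as in the article
GENERATIONS = 500  # generations of cultural selection, as in the article
ROUNDS = 10        # research questions each bot competes on per generation

def evolve(first_bonus, quality_weight=0.02):
    """Return the mean evolved sample size under a given first-mover bonus."""
    sizes = [random.randint(2, 100) for _ in range(POP)]  # heritable trait
    for _ in range(GENERATIONS):
        scores = [0.0] * POP
        for _ in range(ROUNDS):
            order = random.sample(range(POP), POP)  # shuffle into races of 5
            for g in range(0, POP, 5):
                group = order[g:g + 5]
                # The smallest sample finishes, and publishes, first.
                winner = min(group, key=lambda i: sizes[i] + random.random())
                scores[winner] += first_bonus
                for i in group:
                    # Every publication also earns a quality-based reward.
                    scores[i] += quality_weight * sizes[i]
        # Successful bots' methods are imitated, with small mutations.
        sizes = [min(200, max(2, s + random.randint(-2, 2)))
                 for s in random.choices(sizes, weights=scores, k=POP)]
    return sum(sizes) / POP

rushed = evolve(first_bonus=10)    # publishing first pays handsomely
careful = evolve(first_bonus=0.5)  # the reward gap is small
print(f"mean sample size: {rushed:.1f} (big bonus) vs {careful:.1f} (small bonus)")
```

Under these toy assumptions, a large first-mover bonus drives the population toward small, quick studies, while a small reward gap lets the quality term dominate and sample sizes grow – the same qualitative pattern the paper reports.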

The simulations also let Tiokhin and colleagues test the effects of reforms meant to improve the quality of scientific research. For example, the PLOS journals, as well as the journal eLife, offer “scoop protection” that gives researchers a chance to publish their work even if they come in second. There is not yet evidence that these policies work in the real world, but the model suggests they should: larger rewards for scooped, second-place research led the bots to settle on bigger datasets as their winning tactic.

But the results also held a surprise. Rewarding scientists for publishing negative findings – a commonly proposed reform – lowered the quality of research, because the bots learned they could run studies with tiny sample sizes, find nothing interesting, and still be rewarded. Proponents of publishing negative findings often emphasize the danger of publishing only positive results, which fuels publication bias and hides the negative results needed for a complete picture of reality. But Tiokhin says the modeling suggests that rewarding researchers for publishing negative results, without also attending to research quality, will encourage scientists to “do the craziest studies they can.”

In the simulations, making it harder for the scientist bots to run cheap, low-effort studies fixed that problem. Tiokhin says this points to the value of real-world reforms such as registered reports – study plans that are peer reviewed before data collection – which force researchers to put more effort into the start of their projects and discourage cherry-picking data.
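The interplay between the two reforms – rewarding negative results and raising study costs – can be illustrated with a back-of-the-envelope payoff calculation. This is purely illustrative arithmetic with invented numbers, not the paper's model: a bot with a fixed time budget picks a sample size, each study takes a startup overhead plus the sample size, bigger samples detect true effects more often, and a negative result pays half as much as a positive one.

```python
def best_sample_size(startup_cost, r_pos=1.0, r_neg=0.5,
                     time_budget=1000.0, k=20):
    """Sample size n that maximizes one bot's total reward.

    Toy assumptions: each study takes (startup_cost + n) time units, so the
    bot completes time_budget / (startup_cost + n) studies.  A study yields
    a positive result with probability n / (n + k), a negative one
    otherwise; negatives pay r_neg, positives pay r_pos.
    """
    def total_reward(n):
        studies = time_budget / (startup_cost + n)
        p_positive = n / (n + k)
        return studies * (p_positive * r_pos + (1 - p_positive) * r_neg)
    return max(range(2, 201), key=total_reward)

# When studies are cheap to start, churning out many tiny, mostly negative
# studies pays best; a high startup cost makes larger samples optimal.
print(best_sample_size(startup_cost=1),    # -> 2
      best_sample_size(startup_cost=100))  # -> 20
```

With a negligible startup cost the optimal strategy is the smallest allowed sample, since the flat reward for negative results makes volume beat quality; raising the cost per study shifts the optimum to a substantially larger sample, mirroring the fix the simulations found.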

Science is meant to pursue truth and to self-correct, but the model helps explain why science sometimes moves in the wrong direction, says Cailin O’Connor, a philosopher of science at UC Irvine who was not involved in the work. The simulations – with bots collecting data points and testing for statistical significance – reflect fields such as psychology, animal research, and medicine more than others, she says. But the patterns should be similar across disciplines: “It’s not based on a few small details of the model.”

Scientific disciplines vary just as the simulated worlds did – in how much they reward publishing first, how likely negative results are to be published, and how hard it is to get a project off the ground. Tiokhin now hopes metaresearchers will use the model to study how these patterns play out among flesh-and-blood scientists.
