
Evolutionary forces are causing a boom in bad science

Brain Scanner is Simon Oxenham's weekly column that sifts the pseudoscience from the neuroscience

By Simon Oxenham

8 July 2016

Under pressure: two scientists in lab coats inspect a graph of results (Image: Monty Rakusen/Getty)

Call it a crisis. Researchers are finding it harder to replicate each other’s findings, while the rate of retractions of published studies is rapidly rising. But why is this happening?

It’s difficult to determine to what extent the retraction trend is caused by more studies reporting false findings, and how much is down to the fact that false findings are now more likely to be identified. Some speculate that the internet has made it far easier for scientists to scrutinise each other’s work, and that plagiarism can now be detected automatically.

However, new research supports the idea that, in fact, we are encouraging poor scientific practices by accident.

Paul Smaldino and Richard McElreath at the University of California, Davis, used a computational model grounded in evolutionary theory to analyse the problem of bad science. They found that “the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures”. In short, it’s natural selection for shoddy science.

Hypothetical researchers

They came to this conclusion by modelling a system of competitive science researchers. In their model, the researchers all had high integrity and never cheated, but the way they did their research could vary and evolve according to their ability to get hired and keep a job.

Smaldino and McElreath built three assumptions into their model. First, each researcher had some ability to identify true scientific patterns. Second, researchers who were better at spotting such patterns were also more likely to report false positives – associations that weren’t really there – unless they made a deliberate effort to avoid them. Third, making that effort improved research methods but reduced productivity, because rigorous research takes longer to carry out.

The hypothetical scientists received pay-offs – representing real-world prestige and funding – for publishing studies, and these pay-offs were higher for novel findings than for replications of other people’s work.
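A minimal sketch can make these dynamics concrete. The Python below is an illustration only, assuming made-up parameter values, a simplified pay-off of one unit per positive result and a copy-the-winners selection rule – it is not Smaldino and McElreath’s published model. It encodes the three assumptions above: effort buys fewer false positives at the cost of fewer studies, and the best-paid labs’ methods are copied into the next generation.

```python
import random

# Illustrative parameters -- assumptions for this sketch, not the paper's values
BASE_RATE = 0.1      # fraction of tested hypotheses that are actually true
POWER = 0.8          # assumption 1: chance of detecting a real effect
POP_SIZE = 100       # number of competing researchers
GENERATIONS = 200
MUTATION_SD = 0.05   # noise when a successful lab's methods are copied

class Researcher:
    """One lab, characterised by the rigour ('effort') it invests, from 0 to 1."""

    def __init__(self, effort):
        self.effort = min(max(effort, 0.01), 1.0)
        self.payoff = 0.0

    def run_studies(self):
        # Assumption 3: rigour costs productivity, so careful labs run fewer studies.
        n_studies = max(1, round(10 * (1.0 - 0.5 * self.effort)))
        # Assumption 2: skimping on rigour inflates the false-positive rate.
        false_pos = 0.5 * (1.0 - self.effort)
        for _ in range(n_studies):
            truth = random.random() < BASE_RATE
            positive = random.random() < (POWER if truth else false_pos)
            if positive:
                self.payoff += 1.0  # only positive, 'novel' results earn prestige here

def evolve():
    population = [Researcher(random.random()) for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS + 1):
        for lab in population:
            lab.payoff = 0.0
            lab.run_studies()
        # Selection: the best-paid half keep their jobs, and their methods are
        # copied, with noise, into new labs -- no one ever cheats on purpose.
        population.sort(key=lambda lab: lab.payoff, reverse=True)
        survivors = population[: POP_SIZE // 2]
        offspring = [Researcher(random.gauss(s.effort, MUTATION_SD)) for s in survivors]
        population = survivors + offspring
        if gen % 50 == 0:
            mean_effort = sum(lab.effort for lab in population) / len(population)
            print(f"generation {gen:3d}: mean effort = {mean_effort:.2f}")

if __name__ == "__main__":
    evolve()
```

In runs of this sketch, mean effort drifts downwards: labs that skimp on rigour complete more studies, report more positives, earn more pay-off and get copied.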

Arguably, these conditions all reflect the state of scientific research today. Most scientists do not set out to cheat, but they need to be productive to sustain a career – a pressure that can lead them to overlook false positives and that pushes them towards new, surprising findings rather than testing the work of others.

25 times more innovative

Smaldino and McElreath found that their model pushed researchers to do less rigorous science and to publish more false positives. They suggest this shows that bad science can be explained as the product of selective pressures acting on scientists.
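The selection gradient is easy to see with the illustrative numbers from the sketch above (again, assumptions, not the paper’s values): the expected pay-off falls steeply as rigour rises, so rewarding output alone systematically favours the least careful labs.

```python
def expected_payoff(effort, base_rate=0.1, power=0.8):
    """Expected prestige per generation in the toy model sketched earlier."""
    n_studies = 10 * (1.0 - 0.5 * effort)  # rigour costs productivity
    p_positive = base_rate * power + (1 - base_rate) * 0.5 * (1 - effort)
    return n_studies * p_positive

print(expected_payoff(0.1))  # sloppy lab:   ~4.61 publishable positives
print(expected_payoff(0.9))  # rigorous lab: ~0.69 publishable positives
```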

In recent decades, there has been a massive increase in the number of studies scientists are expected to publish in order to compete for research funding and jobs at universities and institutes. Newly hired biologists, for example, now have twice as many published papers as their counterparts did just 10 years ago.

The situation is made worse by the fact that far more people earn doctorates each year than there are academic positions for them to fill. This intensifies the competition, and could explain why, between 1974 and 2014, the use of the words “innovative”, “groundbreaking” and “novel” in research abstracts in a journal database increased by at least 2500 per cent.

Smaldino and McElreath argue that “it is unlikely that individual scientists have really become 25 times more innovative in the past 40 years”. They conclude that such language reflects a response to increasing pressures for new, particularly noteworthy science.

Biased against debunkers

Ultimately, the problem may come down to Goodhart’s law, borrowed from economics: “when a measure becomes a target, it ceases to be a good measure”. In this case, measuring scientists’ success by the volume of their papers and the prestige of the journals they are published in increases incentives for bad research practices.

Unfortunately, researchers who take the time to debunk incorrect findings are not rewarded as highly. One analysis of widely cited ecology studies that have been called into question found that the original scientific findings were cited by other scientists 17 times more often than their rebuttals. To add insult to injury, when these rebuttals were cited, they were often misinterpreted as supporting the original findings.

According to Smaldino and McElreath’s evolutionary perspective on the problem, the way to stop the rising tide of bad science is to change the incentives for success. One way to do this would be to take the reliability of a researcher’s work into account when deciding whether to hire or fund them, rather than relying on misleading metrics like journal prestige or the sheer number of a person’s publications.

Read more: Why so much science research is flawed – and what to do about it
