Shaky evidence from animal experiments may be giving a false picture of the effectiveness of potential new drug treatments, a study has found.
Biased results and misleading conclusions may be undermining clinical trials and dooming them to failure, it is claimed.
A team of US experts assessed 160 studies pooling scientific data on new drugs for neurological disorders from more than 4,000 animal experiments.
Such “meta-analyses”, which draw together evidence from many different studies, can provide a greater level of statistical power. They are often relied upon when deciding whether or not to make the transition from laboratory tests to clinical trials.
But the new research revealed a high level of bias in favour of “positive” results, with nearly twice as many experiments as expected appearing to produce statistically significant results.
It was likely this had led to failed clinical trials that should never have been conducted, said the scientists, led by John Ioannidis, from Stanford University in California.
“We saw that it was very common for these interventions to have published evidence that they would work,” said Ioannidis. “It was extremely common to have results that suggest they would be effective in humans.
“Under the current conditions, only a tiny proportion of interventions that have published some promising results in animals have been shown to be at all effective in humans.”
The bias was not the result of fraud but was thought to stem from two main sources, said the researchers. One was that scientists conducting animal studies often chose methods of data analysis that provided “better” results. The other arose from the fact that high-profile scientific journals tend to prefer studies with positive, rather than negative, findings.
The studies looked at involved research on new drugs for a wide range of diseases including Parkinson’s, Alzheimer’s and MS.
Similar bias was likely to contaminate medical research on other kinds of diseases, said the scientists, whose findings appear in the online journal PLoS Biology.
The researchers used statistical methods to calculate the number of experiments included in the meta-analyses that would be expected to yield positive results. This was compared with the actual number of published experiments with positive results.
In total, 919 “positive” studies were expected, but a striking 1,719 were published. This implied that either many negative findings were not published, or that results were interpreted too optimistically.
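The comparison the researchers describe can be sketched in a few lines of code. This is a simplified illustration only, not the paper's actual method: the experiment total below is taken loosely from the article's “more than 4,000 animal experiments”, and the normal approximation to the binomial is an assumption made for the sketch.

```python
import math

# Illustrative figures from the article: 919 positive results expected,
# 1,719 actually published, out of roughly 4,000 animal experiments.
# The total is an assumption for this sketch.
total_experiments = 4000
expected_positive = 919
observed_positive = 1719

# Expected proportion of positive results.
p_expected = expected_positive / total_experiments

# Normal approximation to the binomial: how many standard deviations
# does the observed count of positives sit above the expected count?
sd = math.sqrt(total_experiments * p_expected * (1 - p_expected))
z = (observed_positive - expected_positive) / sd

print(f"excess positives: {observed_positive - expected_positive}, z = {z:.1f}")
```

An excess of 800 positive studies works out to roughly 30 standard deviations above expectation under these assumed figures, which is the kind of gap that points to unpublished negative findings or over-optimistic interpretation rather than chance.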