Researchers estimate that 88% of trial spending is wasted.
Dodgy research design and bad statistical methodology mean that most randomised trials are a waste of time, money and effort, and of no or dubious scientific value, say Stefania Pirosca, Frances Shiely, Mike Clarke and Shaun Treweek, in a new paper published in the journal Trials in early June.
Their paper examined 1,659 randomised trials, involving 400,000 participants, that took place between May 2020 and April 2021 in 84 countries, as well as 193 multinational trials.
The majority of trials (62%) showed a high risk of bias. More than half of trial participants were in these high-risk-of-bias trials. Trials where the risk of bias was unclear accounted for 30% of those reviewed, while trials with a low risk of bias – those that can be trusted – accounted for just 8% of the total.
Bad trials – ones where we have little confidence in the results – are not just common, they represent the majority of trials in all countries and across most clinical areas. For instance, all trials looking at drugs and alcohol exhibited a high risk of bias. The most reliable field was anaesthesia, with 60% of trials exhibiting a low risk of bias.
The research team drew trial data from 96 reviews from 49 of the 53 clinical Cochrane review groups. Cochrane is an international organisation that helps to gather and propagate the results of medical research to better guide medical decision-making. This is done by experts compiling and evaluating research trials and results in “standardised, high-quality systematic reviews”.
Bad science was common everywhere. “No patient or member of the public should be in a bad trial and ethical committees, like funders, have a duty to stop this happening,” the paper’s authors write.
The least reliable science, in countries that conducted 20 or more randomised clinical trials, was done in Spain and Germany, with 86% and 83% of their trials, respectively, exhibiting a high risk of bias.
This amounts to a massive waste of money and effort.
Statisticians and research-method experts have been sounding the alarm on biased research for years, at least since Doug Altman’s 1994 paper in the British Medical Journal, “The scandal of poor medical research”.
Doctors want to know if they can rely on a particular treatment to produce a desired outcome, and need research that confers a degree of confidence. One way to do that – the most popular – is randomised controlled trials.
But if there is a high risk that the results were biased by errors in how a trial was conducted and how its results were obtained, those results should not be relied on. Pirosca and colleagues did not examine which type (or domain) of bias affected each study, arguing that a high risk in any one domain is sufficient to undermine a trial’s results.
In short, for Pirosca and colleagues, health research in randomised trials is bad when there is an identifiable risk of bias in the way that the results were obtained.
The large number of high risk of bias trials appears to be due to “a lack of input from methodologists and statisticians at the trial planning stage combined with insufficient knowledge of research methods among the trial teams”. By analogy, they say, you would not think it appropriate for a statistician to conduct surgery simply because the trial happens to be in a surgical domain – so why expect clinicians without methodological training to design and analyse trials?
Ivermectin, once touted as a Covid-19 treatment, illustrates the problem. Once the methodology and statistics were looked at closely, many of the supporting papers were deemed unscientific – for instance, patients were excluded from analysis for no good reason. And once these trials were excluded from the review, the drug’s promise as a Covid treatment vanished.
Medical research watchdog Retraction Watch currently lists 12 papers purporting to investigate ivermectin that have since been withdrawn or flagged with expressions of concern. According to its records, 235 Covid papers have been withdrawn to date.
But the crisis is not insurmountable. Pirosca and colleagues say that relatively simple fixes would dramatically reduce the amount of untrustworthy health research – by ensuring that methodological principles that underlie RCTs are not compromised.
Pirosca and colleagues propose that no medical RCT should be funded or given ethical approval unless the team conducting the trial can show that it includes a member with methodological and statistical expertise. Every RCT should, in its design, use risk-of-bias tools to make sure that results are not compromised.
The expertise that could restore the worth to medical research is in short supply.
More methodologists and statisticians are needed, and money should be invested in training people with this expertise, in applied methodology research, and in supporting infrastructure. The authors call for 10% of a funder’s budget to be spent this way.
This might seem like a lot of money, but, argue Pirosca and co, it would be a fraction of the cost of the wasted research in the year under review – estimated to be billions of rands.
The task is urgent: “Randomised trials have the potential to improve health and wellbeing, change lives for the better and support economies through healthier populations … Society will only see the potential benefits of randomised trials if these studies are good, and, at the moment, most are not.”