Would Data Sharing End A Crisis Of Confidence In Science? A Third Of Clinical Trial Conclusions May Be Off
When it comes to cops, many people would say that internal affairs cases, in which police officers investigate one of their own, should never be handled by officers from the same department; an outsider is needed to see and judge most clearly. Now researchers at the Stanford University School of Medicine are suggesting that a similar principle applies to the interpretation of randomized clinical trial data. As many as one-third of previously published randomized clinical trials, considered the gold standard for testing medications before consumer use, could be re-analyzed in ways that change the conclusions about how many or which types of patients need to be treated, the Stanford scientists report in a new study. In other words, researchers who are given access to other scientists' data will not always agree with the original results.
"Without this access, and possibly incentives to perform this work, there is increasing lack of trust in whether the results of published, randomized trials are credible and can be taken at face value,” said Dr. John Ioannidis, professor of medicine and director of the Stanford Prevention Research Center. To elaborate his point, Ioannidis recalls the recent “hot debates” about whether oseltamivir actually works. Oseltamivir is an antiviral medication marketed under the trade name Tamiflu. Although it is licensed to treat influenza A and influenza B, some analyses and trials conducted after the drug was approved have suggested that its benefits do not outweigh the risks of side effects.
For Ioannidis, oseltamivir represents only "the tip of the iceberg" in a crisis of scientific confidence, so he and his colleagues set out to study the issue. They began with MEDLINE, a bibliographic database maintained by the National Library of Medicine that contains more than 25 million citations from roughly 5,600 journals worldwide. They searched it for articles written in English that described re-analyses of the raw data used in previously published studies. After screening nearly 3,000 articles, the team found that only 37 met the strict criteria of their study.
The researchers discovered that 13 of the re-analyses (35 percent of the total) came to conclusions that differed from those of the original trial with regard to who could benefit from the tested drug. Some re-analyses also identified errors in the original trial publication, such as the inclusion of patients who should have been excluded from the study.
"Making the raw data of trials available for re-analyses is essential not only for re-evaluating whether the original claims were correct, but also for using these data to perform additional analyses of interest and combined analyses," Ioannidis said. "I am very much in favor of data sharing, and believe there should be incentives for independent researchers to conduct these kinds of re-analyses. They can be extremely insightful."
Source: Ioannidis J, Thorlund K, Mills E, et al. Reanalyses of Randomized Clinical Trial Data. JAMA. 2014.