A couple of recent articles in the lay press expose the fallacy that publication in a scientific journal means “it must be true.” Thanks to Aunt Sue for noticing the first, published in the Atlantic by David Freedman and titled “Lies, Damned Lies, and Medical Science”; the second, by Jonah Lehrer in the New Yorker, is titled “The Truth Wears Off.” Both articles rely heavily on the work of John Ioannidis, an epidemiologist at Stanford.
Highlights (lowlights?) discussed in these articles:
- Of the 49 most cited clinical research studies, replication was attempted for only 34, and in 41% of those attempts the results contradicted the original findings or showed a much smaller effect than first reported.
- One third of all studies are never cited, let alone repeated.
- Of 432 genetic studies examined, most had serious flaws, and only 1 held up when others attempted to replicate it.
The public is slowly starting to catch on. Spectacular failures such as hormone replacement therapy, COX-2 inhibitors, and vitamin E have demonstrated that early results suggesting benefits outweigh burdens can collapse under closer scrutiny. But while those studies were tainted by the corrosive influence of a profit motive, the problem is not limited to pharmaceutical-sponsored trials. It’s pervasive throughout science. Some explanations:
- Regression to the mean. Phenomena such as symptoms have natural variation; in other words, they wax and wane over time. Patients are most likely to come to their doctor’s attention, and therefore to be enrolled in trials, when they are most symptomatic. Even if nothing were done, those patients would tend to feel better over time simply because of that natural variation. So when they improve during the trial, voila! It must be the drug! An improvement due to natural regression toward the average level is mistakenly attributed to the treatment: professors are promoted; drug companies make billions.
- Publication bias. Journals tend to favor publishing positive studies. In a recent paper in the Archives of Internal Medicine, Emerson and colleagues randomized 210 reviewers to receive either an article with a positive finding or an otherwise identical article showing no difference. 97% recommended publication of the positive version, compared with only 80% for the no-difference version.
- Selective reporting. Scientists find a way to confirm what they want to find. Of 47 studies of acupuncture conducted in China, Taiwan, and Japan, 100% found acupuncture to be effective. In contrast, only 56% of 94 studies of acupuncture conducted in the US, Sweden, and the UK were positive.
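The regression-to-the-mean trap described above is easy to see in a quick simulation. This is a hypothetical sketch, not from either article: the symptom scores, noise levels, and enrollment threshold are all made-up numbers chosen only to illustrate the effect.

```python
import random

random.seed(0)

# Hypothetical model: each patient's symptom score fluctuates randomly
# around a stable personal baseline. We "enroll" patients on a day their
# score happens to be high, then measure again later with NO treatment.
n_patients = 10_000
enrolled_before, enrolled_after = [], []

for _ in range(n_patients):
    baseline = random.gauss(50, 5)           # underlying severity (made up)
    day1 = baseline + random.gauss(0, 10)    # symptoms wax and wane
    day2 = baseline + random.gauss(0, 10)    # a later day, untreated
    if day1 > 60:                            # only the most symptomatic "see a doctor"
        enrolled_before.append(day1)
        enrolled_after.append(day2)

mean_before = sum(enrolled_before) / len(enrolled_before)
mean_after = sum(enrolled_after) / len(enrolled_after)
print(f"Mean score at enrollment:      {mean_before:.1f}")
print(f"Mean score later, no treatment: {mean_after:.1f}")
```

The second number comes out substantially lower than the first even though no one received any treatment: selecting patients at their worst guarantees that, on average, they look better at the next measurement. Any drug given in between would get the credit.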
So what are we to believe? Are most published studies lies? There is ample reason to be skeptical. We need more support from journals and academics for replication studies. And we shouldn’t believe that just because something is published in a journal it is “the truth.”
We should be skeptical of the following findings until we see repeated high quality evidence:
- If a little vitamin D is good, a lot must be better
- Opioids for non-cancer pain cause more suffering than benefit (see Pallimed post)
- New treatments for delaying and reversing Alzheimer’s disease
- Palliative care prolongs life
The closing words of the New Yorker article describe the conundrum:
Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
by: Alex Smith