July 31, 2015

Correcting the myths about missing drug trials

Not all drug trials get published. This is a problem because doctors, journalists, and others look to published data for the fullest picture of whether and how a drug works, and in whom.

The international AllTrials campaign launched a US branch this week. We need better reporting of clinical trials: although legislation supposedly mandates that trial results be published, many still aren't. A 2012 study found that only 22% of trials complied with the law, and according to Alltrials.net, the FDA has never fined anyone for violating it.

The last time I wrote about publication bias in trials (just a quick pointer to this excellent Salon piece by Rob Waters), some readers just didn't get it. Here are a few of the comments that show why we need to communicate better about why publication bias is a problem:

"There's really no use for the data if it shows that [a drug] doesn't work." (here)

"People generally aren't interested in failure. Failure isn't progress." (here)

"So drug co's don't waste their time publishing tests of products that don't work? #efficient #Scary #journalism #WellDuh" (here)

These readers assume the missing trials are unimportant ones. If a handful of trials show that a drug works, and only those get published, don't we have the information we need?

No, we don't. Say there's an antidepressant, and half the trials show that it works and half show that it doesn't. (Why the difference? Maybe they were done on different patient populations, or maybe some were done with better methodology than others.) If only the positive half gets published, the drug looks far more effective than the full evidence says it is.

Trials aren't just for the FDA to read at approval time.

In addition to answering the question "Does this drug work?" published data also helps answer questions like "Does this drug work better than these older ones?" and "How do the benefits stack up against harms?" (data that theNNT presents very clearly, by the way—mostly based on Cochrane reviews.)

If a large fraction of the trials for an antidepressant are missing, we could end up with a skewed view of how well it works. That's exactly what happened with antidepressants as a class, according to this study led by Erick Turner. Trials published in journals painted a much rosier picture of the drugs' effectiveness than the data submitted to the FDA. And we don't know if the FDA had complete data, either; AllTrials suggests that regulators often don't.
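The skew described above is easy to demonstrate with a toy simulation. The numbers here are invented for illustration, not taken from the Turner study: assume a drug with a modest true effect, run many noisy trials, and compare the average result across all trials with the average across only the "positive" trials that reach publication.

```python
# Toy simulation (hypothetical numbers) of how publication bias
# inflates a drug's apparent effect size.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # assumed modest true benefit (standardized units)
NOISE_SD = 0.5      # assumed trial-to-trial sampling noise
N_TRIALS = 1000

# Each "trial" yields a noisy estimate of the true effect.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_TRIALS)]

# The published literature: only trials with a clearly positive result.
published = [e for e in estimates if e > 0]

all_mean = statistics.mean(estimates)        # close to the true effect
published_mean = statistics.mean(published)  # noticeably inflated

print(f"true effect:        {TRUE_EFFECT}")
print(f"mean of all trials: {all_mean:.2f}")
print(f"mean of published:  {published_mean:.2f}")
```

With these assumed parameters, the published-only average comes out roughly twice the true effect, even though no individual trial was fabricated; the distortion comes entirely from which trials are visible.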

Evidence-based medicine is only as good as the evidence it's based on. Hiding data skews the understanding that doctors and researchers rely on; it isn't harmless.

Provided by PLOS Blogs
