Does publication bias make antidepressants seem more effective at treating anxiety than they really are?
In scientific literature, studies with "good" results are more likely to be published than studies with results that are unclear or negative. A study with a new, exciting finding (a positive result) is likely to see the light of day, even if the finding is not in line with the authors' hypothesis. But a study that doesn't have a new finding (a negative result), or has an unclear finding, is far less likely to be published.
The fact that positive results are more likely to be published is called publication bias, and unfortunately it's quite common. Science is about acquiring knowledge; when the scientific literature is distorted by publication bias, that "knowledge" becomes less trustworthy.
In medicine this can have serious consequences. Patients may be prescribed therapies that are based on biased results, and physicians and policymakers might not know that the information they are basing these decisions on has flaws.
For example, we know that the scientific literature has overestimated the efficacy of antidepressants to treat depression. These drugs are also used to treat anxiety disorders. We wanted to find out if the efficacy of antidepressants for these conditions had also been overestimated because of publication bias.
What causes publication bias?
Before we look at our study on antidepressants and anxiety, let's take a look at why positive results are more likely to be published than results that are negative or unclear.
Researchers may suffer from "cognitive bias," which means they are more likely to interpret findings to be consistent with their own hypotheses. In short, people may find what they expect (or want) to find.
Or what if you find something, but it just doesn't seem that significant? Cognitive bias may lead researchers to look at other, sometimes less significant, outcomes from their research when they don't find what they were expecting to. Or they may analyze their data with a new statistical technique or combine different outcomes into a new endpoint that lets them arrive at a result that is consistent with their hypothesis.
Researchers may decide not to publish negative results. This happens for a lot of reasons, but money tends to be a big one. Finding "nothing" is simply less likely to lead to funding for follow-up research than finding "something." That isn't the only financial reason negative results might not be published. Prescription and non-prescription therapies are big money around the world, and negative results do not help to sell drugs.
And sometimes it's out of the researchers' hands. Trials with negative results are less likely to be accepted for publication at major journals. A journal sends submitted manuscripts out for peer review. Experts in the field review the work and provide feedback as to whether it should be accepted for publication or not.
Peer review allows outside reviewers to express their own opinions and ideas regarding the research. Ultimately, the decision to accept or reject the manuscript may depend on these reviews.
In this setting, positive results tend to be reviewed more favorably since they often correspond better to the peer reviewers' opinions. Positive findings also attract more widespread attention than negative results, which can increase the visibility and reputation of the journal.
Publication bias and antidepressants for anxiety
We wanted to find out whether publication bias was also present in trials on the efficacy of second-generation antidepressants to treat anxiety disorders. These drugs, selective serotonin reuptake inhibitors (SSRIs) and serotonin and norepinephrine reuptake inhibitors (SNRIs), are the primary pharmacological treatment choice for anxiety. You might have seen commercials for these drugs – Cymbalta and Effexor, for example, are SNRIs, and Zoloft and Prozac are SSRIs.
We looked at trials examining SSRIs and SNRIs in the treatment of generalized anxiety disorder, panic disorder, social anxiety disorder, post-traumatic stress disorder and obsessive-compulsive disorder.
We compared the trials submitted to the Food and Drug Administration (FDA) by pharmaceutical companies to the resulting publications in scientific journals. We wanted to see how many of the trials were published in articles and if those published articles portrayed the results of the trials accurately.
Of the 57 trials that were registered with the FDA, 41 had positive results and 16 did not. In our study, published in JAMA Psychiatry, we found that of the 45 journal articles that reported on these trials, 43 were positive. That means that 96% of journal articles were positive as opposed to 72% of the FDA reviews.
On further examination, we found that trials with "not-positive" results (results that were negative or unclear) were less likely to be published than trials with positive results. Of the 41 positive trials registered with the FDA, 40 were published in journal articles. But of the 16 trials with not-positive results, just nine were published.
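As a quick sanity check, the proportions above follow directly from the trial counts. A minimal sketch (the variable names are ours; the numbers come from the study described in the text):

```python
# Trial counts from the FDA registry and the published literature,
# as reported in the study above.
fda_trials = 57
fda_positive = 41
fda_not_positive = 16

journal_articles = 45
journal_positive = 43

# Share of positive results in the FDA record vs. the journal literature
fda_positive_rate = fda_positive / fda_trials              # about 0.72
journal_positive_rate = journal_positive / journal_articles  # about 0.96

print(f"FDA reviews positive:     {fda_positive_rate:.0%}")
print(f"Journal articles positive: {journal_positive_rate:.0%}")

# Publication rates by result type (counts from the text)
print(f"Positive trials published:     40 of {fda_positive}")
print(f"Not-positive trials published:  9 of {fda_not_positive}")
```

The gap between 72% and 96% is the visible footprint of publication bias: the published record contains nearly all the positive trials but fewer than two-thirds of the not-positive ones.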
Turning negative results into positive results
Of the nine published trials with not-positive results, three were published with positive results in the accompanying journal article. This is called outcome reporting bias. These studies usually reported the results for a secondary, less important outcome or used creative statistical approaches to make the primary outcome seem positive, when the true result was negative or unclear.
In another three cases, the not-positive trial results were reported accurately, but the authors concluded that the results were really positive – this is called spin. Just three of the nine not-positive trials were published without any bias.
Overestimation of treatment effects
When we compared the results between the FDA-reviewed trials and the published literature, the literature overestimated the effects of the drugs by 15 percent, a difference that was not statistically significant.
In this case, we found that publication bias contributed more to an overabundance of positive results than to the estimates of how well these drugs actually worked.
This overabundance of positive trials creates a skewed representation of the efficacy of these drugs for anxiety disorders. And this in turn may create unrealistic expectations about how well these drugs work among prescribers and patients.
Openness about research results
Publication bias is not limited to psychiatry – it's a problem across medicine. Because so much medical research is publicly funded, researchers and the public should have easy access to reliable results from the studies on which current medical therapies are based. Unfortunately, the current system relies heavily on voluntary reporting, which facilitates publication bias.
Now that the scientific community is paying more attention to publication bias and the problems it creates, various initiatives have begun, including trial registration, journals that explicitly welcome studies with negative results, and open access initiatives in which data becomes publicly available regardless of the outcome of the study.
These initiatives, combined with increasing awareness among researchers, study participants and government agencies, will hopefully aid in increasing the accuracy and completeness of reporting of clinical study results.
This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).