What kind of research can we trust?

Conflicting recommendations about flu drugs have made it difficult for doctors to decide whether to prescribe them. Andrew Wales/Flickr, CC BY-SA

Research involving pharmaceutical company input is notoriously compromised. While not all industry ties lead to biased research, and not all biases are a consequence of industry ties, many studies show industry influence can make drugs look safer and more effective than they really are. So where can doctors, and indeed the public, turn for reliable information?

One favoured option is research known as systematic reviews, which sift through the available evidence, evaluate its quality and synthesise conclusions and recommendations for clinical practice. Systematic reviews are considered to be the highest level of medical evidence because they summarise large volumes of evidence and follow strict processes to avoid bias.

Systematic reviews form the basis of evidence-based medicine, but there's now growing doubt about whether these reviews are as untouched by industry influence as many of us expect them to be.

A particular case

Consider the case of a class of drugs known as neuraminidase inhibitors, which has been causing controversy over the last few years. These drugs are said to minimise the impact of the flu; you'll know them by their commercial names, Tamiflu and Relenza.

Tens of millions of prescriptions for these drugs have been dispensed and governments worldwide have stockpiled them in preparation for a flu pandemic, at a cost of billions of dollars. But there are conflicting views about both their safety and their efficacy – and they're fuelled by conflicting systematic reviews.

One systematic review published this year, for instance, encouraged early use of the drugs in any patient who looks appreciably unwell. Another cautioned about their safety and questioned whether they should be used in practice at all.

In an article published today in the Annals of Internal Medicine, we tried to make sense of how such discrepancies arise despite the strict processes that underpin systematic reviews.

Given what we already know about industry influence on research, we suspected the differences might be associated with reviewers' financial ties to companies that make the drugs. To test our hypothesis, we examined 26 systematic reviews published about neuraminidase inhibitors.

Sleight of hand?

We found reviewers with financial ties to companies were more likely to present evidence in favourable ways and recommend use of the drugs. In the reviews written by researchers with such ties, 88% of the conclusions were favourable. In the absence of financial links, just 17% were positive.

In other words, reviewers with financial ties to drug manufacturers overwhelmingly decided the drugs were safe and effective while those without ties were considerably more reserved about their value.

So how did the systematic reviews arrive at such different conclusions?

While we were unable to examine the differences statistically, one part of the review process stood out as the point where biases could be more easily introduced: generalising from results to recommendations.

For some systematic reviews, the recommendations made in the discussion sections didn't match the evidence in the results. That suggests reviewers may have generalised in ways that aligned with predetermined views rather than what the evidence showed.

What can be done?

Ours is not the only study that has identified this type of problem. Last year, researchers identified the same association in systematic reviews of sweetened beverages and weight gain.

While this may make it tempting to ignore all evidence reported by researchers who receive industry funding, we don't think that's the answer. There's much to be gained from collaborations with industry. What we need are better strategies for managing conflicts of interest.

Being able to detect the kind of polarisation we found in the conclusions of systematic reviews is one step towards managing the effects of conflicts of interest. One way to mitigate these effects may be to ask independent researchers to interpret results and formulate recommendations.

As with other drugs, conflicting recommendations about neuraminidase inhibitors have made it difficult for doctors to decide whether to prescribe them. The most authoritative reviews now show these drugs have small benefits and some risks. These reviews have led to suggestions that stockpiling them may have been unjustified.

To be able to make informed decisions together, doctors and patients need research that's trustworthy. If systematic reviews are to remain the pinnacle of evidence-based medicine, then the processes underpinning them need to be continually reassessed to ensure they meet the highest of standards.

Provided by The Conversation

This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).

