Five key things to know about data on adverse effects

The potential harms of interventions are tricky to get a handle on. Our feelings about them are, too. It wouldn't be easy – even if we didn't have to deal with people trying to exaggerate, or minimize, the risks of treatments. Here are the top 5 issues I keep in mind – and a bit on what lies behind them.

Let's start with the often mis-used terminology of "side effects".

1. An adverse event is not necessarily an adverse effect

A side effect isn't necessarily a bad thing. An unexpected side effect of taking aspirin, for example, turned out to be a lower risk of stroke. "Adverse" specifies that it's unwanted or harmful.

But is it an effect? People have gotten pretty good at knowing that correlation is not causation. Yet, the term "adverse effect" is still used far too loosely. Adverse event is usually more accurate.

When the causal relationship is very strong, especially with drugs or environmental exposures (like allergies), adverse reaction can be the right term. However, there are legal obligations to tell people about events that have been associated with medicines and vaccines, for example. That doesn't mean they are all definitely well-proven adverse effects.

It's not at all easy to be sure whether or not something is causing harm if the relationship isn't already well-studied. Some of what you're looking for here: is there a logical explanation for how A could result in B? Is there a dose-response relationship (the higher the dose, the greater the harm)? Did the harm stop when treatment stopped? (And return if treatment resumed?)

You often have to dig into the details of a study to be sure whether the data are about adverse events or adverse effects. You usually can't rely on the abstract alone. And if you want to make your results sound interesting, calling adverse events "adverse effects" is a surefire winner!

It may have been reported that 10% of people suffered from a particular adverse effect – say, nausea. But look closely, and 8% of the control group might have had nausea, too. So really, perhaps only 2% had nausea you could attribute to the intervention – and even that might be a coincidence.
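To make that subtraction concrete, here's a minimal sketch of the arithmetic: the 10% and 8% come from the example above, while the group sizes of 1,000 per arm are made up purely for illustration.

```python
# Illustrative only: rates from the example above, group sizes invented.
treatment_events, treatment_n = 100, 1000   # 10% reported nausea on the intervention
control_events, control_n = 80, 1000        # 8% reported nausea in the control group

risk_treatment = treatment_events / treatment_n
risk_control = control_events / control_n

# Absolute risk difference: the nausea you might attribute to the intervention.
risk_difference = risk_treatment - risk_control      # about 0.02, i.e. 2%
number_needed_to_harm = 1 / risk_difference          # roughly 50 people per extra case

print(f"Risk difference: {risk_difference:.1%}")
print(f"Number needed to harm: {number_needed_to_harm:.0f}")
```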

2. You can't really prove something is "safe" – but that doesn't mean it's risky

It's easy to either play on our fears about risks – or our willingness to be in a state of denial about them. And the numbers when it comes to adverse effects can be small – especially for serious ones.

This cartoon is a silly example to show how you can choose data to frame a risk one way or the other. If we just look at the recent track record of space diving, both people who did it survived (100% survival). But that doesn't mean you can call it "safe". (Overall, 1 out of 3 have died.)
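As an aside, here's a rough sketch – not part of the original cartoon or article – of why "both survived" can't establish safety. With only two people, an exact binomial confidence interval for the survival rate is enormous.

```python
# With every one of n people surviving, the exact (Clopper-Pearson) 95%
# confidence interval for the survival rate has a lower bound of (0.025)**(1/n).
n = 2                              # the two recent space divers in the cartoon
lower_bound = 0.025 ** (1 / n)     # about 0.158

print(f"Observed survival: 100% ({n} of {n})")
print(f"95% CI lower bound for the true survival rate: {lower_bound:.0%}")
# A record of 2 out of 2 is statistically compatible with a true survival rate
# as low as roughly 16% - which is why tiny samples can't prove "safe".
```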

Safety is always a relative term. But the claim that something is "safe", or hasn't been proven to be "safe" (and is therefore risky), is right up there for misuse with the word "cure".

It's hard to prove a negative – that something won't happen – decisively. That gives people who want to plant seeds of doubt about safety in others' minds lots of room to play foul.

It's easy to forget, too, that if there is a potential benefit to something, losing out on the benefits by not doing it is a harm. And that can be a more definite risk than a theoretical chance of something extremely unlikely.

On the other hand, it's easy to gloss over the potential for harm, too. John Ioannidis and colleagues have singled out as a particular problem the claim that "there was no difference" in adverse events between groups, when all that was shown was that there was no statistically significant difference.
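Here's a hypothetical sketch of that problem – the counts below are invented, not taken from any trial – showing how "no statistically significant difference" in a rare harm can still leave room for a meaningful increase in risk.

```python
import math

# Invented numbers: a rare, serious adverse event seen in 2 of 1,000 people
# on treatment and 1 of 1,000 on control.
t_events, t_n = 2, 1000
c_events, c_n = 1, 1000

p_t, p_c = t_events / t_n, c_events / c_n
diff = p_t - p_c                                   # 0.001: one extra case per 1,000

# Approximate (Wald) 95% confidence interval for the risk difference.
se = math.sqrt(p_t * (1 - p_t) / t_n + p_c * (1 - p_c) / c_n)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"Risk difference: {diff:.3%} (95% CI {lower:.3%} to {upper:.3%})")
# The interval spans zero, so the difference is "not statistically significant" -
# but it also stretches to several extra cases per 1,000 people.
# "No significant difference" is not the same as "no difference".
```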

Unfortunately, we're systematically more ignorant about some kinds of harm than others – the benefits and adverse effects of treatments on women, children, and older people, for example.

3. Who reported what to whom matters – a lot

Lisa in this cartoon isn't feeling particularly lucky. This adverse effect is dramatic and clear.

But if she had known for sure she was in a control group, Lisa might have been pretty disappointed. And that could affect how she rated all sorts of outcomes – particularly more subjective ones. The same goes for the researchers assessing her.

There is far less uncertainty about outcomes in a trial when participants don't know which group they're in, nor do the people treating them (double-blind) – and especially when the people assessing the outcomes don't know either (blinded outcome assessment).

The more subjective the outcome measure, the more it matters who is judging or measuring it.

More objective measurement is particularly important for serious adverse events that aren't common. Here's an example of how that can play out: a drug that's injected into the eyeball. Injecting anything into the eye can cause infection. The infection could be classified as uveitis (which is less severe) or endophthalmitis (which can cause blindness).

In a critical trial at the time, the researchers chose an objective measure to settle the difference: if antibiotics were prescribed, the case was rated as endophthalmitis. Doing that meant the rate of the serious adverse effect was just over 1%. That automatically attracted the designation "common adverse effect" from the European equivalent of the FDA.

However, one of the cases had been called uveitis by one of the investigators. Keep that more subjective classification, and the serious adverse effect could have been called "uncommon".
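To see how a single reclassified case can flip that label, here's a sketch with invented numbers – the trial's actual denominator isn't given here – assuming the usual regulatory convention that "common" means a rate of at least 1 in 100.

```python
# Invented counts for illustration: suppose 3 suspected cases among 290 treated eyes.
# Assumes the conventional threshold: "common" = rate of at least 1 in 100 (1%).
cases, patients = 3, 290

rate_all_counted = cases / patients              # ~1.03%
rate_one_reclassified = (cases - 1) / patients   # ~0.69%

def label(rate):
    return "common" if rate >= 0.01 else "uncommon"

print(f"All 3 counted as endophthalmitis: {rate_all_counted:.2%} -> {label(rate_all_counted)}")
print(f"One case relabelled as uveitis:   {rate_one_reclassified:.2%} -> {label(rate_one_reclassified)}")
```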

Standardizing how outcomes are measured is a critically important strategy. Researchers are collaborating on it so that the results of clinical research can be combined to give a better picture.

4. The more uncommon or long-term the adverse event, the weaker the evidence about it is likely to be

The good news here is that so many scare stories – like the supposed vaccine-autism link – are based on such weak evidence that there should be no "scare" in the first place.

The bad news is that to get the benefits of treatments, especially new ones, there is an inevitable degree of uncertainty about adverse effects. We don't want to do research on incredibly vast numbers of people, or wait decades before ever having access to any new treatment.

So we accept shortcuts – like using biomarkers as measures. How big studies need to be is often determined by the benefits we're looking for and the more common adverse events we're concerned about.

Most of the time, it will work out. But sometimes it won't. With medicines, for example, for about a quarter of all drugs, some serious adverse effects only come to light after they've been approved. Yet many people don't realize that regulatory authorities are still unsure about serious and long-term effects when they approve drugs.

People probably know even less, though, about the systems for gathering information after that – and how reliable it is. That leads to people getting needlessly scared by fishing expeditions in databases of adverse events, for example, or by a rush of reports of adverse events that get publicity. Yet mostly, the problem is under-reporting by patients to adverse event reporting systems. (Only around 10% of people in the UK and Australia might even be aware their system exists, for example.)

5. Access to data on adverse events is a bit of a game of hide and seek

Drugs, and other treatments, don't actually have to be better than other options to cross the finish line. And there is a race – to win people over to interventions.

Whether it's for commercial reasons, or because people just believe they can help others, researchers can fail to look properly for the downside – or sweep it under the carpet when they do find it.

"The more you search, the more you find", is the title of the paper by Ioannidis and colleagues that I referred to earlier. It relates to the way adverse event data is collected in clinical trials. If you just ask people to report adverse events, you might hear about far fewer events than if you give people a list of possible events and ask if they occurred.

But even when researchers know about harms, it doesn't automatically follow that the rest of us will hear. Clinical trials can under-report in journal articles – and then systematic reviewers can compound the problem by under-reporting the harms from the journal articles.

People have a right – and need – to know about the risk of adverse events if they're to make informed decisions. While some worry that being told about adverse events might itself do harm, the evidence on that is pretty reassuring so far.

Sometimes, though, people just don't believe what they do could cause harm. Just see how often exercise research doesn't even look for data on injuries, for example, or psychologists don't discuss whether or not therapists did harm. Or how often people believe that because something is herbal or based on vitamins, it can't possibly do harm.

In the end, the main thing to remember is just that anything that is powerful enough to do good can also do harm.

Provided by PLOS

This story is republished courtesy of PLOS Blogs: blogs.plos.org.
