COVID vaccines: How to make sense of reports on their effectiveness
As a middling but competent clinician-scientist, I feel desperately sorry for the general public trying to make head or tail of some of the scientific discussions on the pandemic right now.
Science is being done, disseminated, argued about—sometimes peer-reviewed if we are lucky—and then immediately rewritten days later. Even with some experience, it's hard to keep up. Data and reports come thick and fast, with little time to assess what they really mean.
One particularly fast-moving topic at the moment is vaccine efficacy. The emerging data from vaccination programs looks great and seems to strongly back up the findings of clinical trials. However, in keeping with these frenzied times, on closer inspection what's being presented is actually more complex.
Assessing real-world effectiveness
A "leaked" paper from Israel, since widely reported on, has suggested that the Pfizer/BioNTech vaccine is highly effective at preventing disease—and possibly transmission—in the real world. An additional unreviewed study has looked at the effectiveness of the Pfizer/BioNTech vaccine in health workers in Israel and is similarly positive. Almost immediately, a peer-reviewed paper looking at a wider Israeli population then emerged. It too suggests the vaccine is very effective.
UK data on both the Oxford/AstraZeneca and Pfizer/BioNTech vaccines was also recently revealed. All of these real-world studies point to vaccine effectiveness of around 80-90% or higher, though with some variation. So far, so straightforward. But from here on, things get a bit more complicated.
This work has already been extensively and publicly critiqued. These studies are "observational", meaning we are just watching what happens rather than guiding or directing it. This is what a real-world study usually is, but it comes with some problems.
Let's take some of the UK data, Scotland's specifically. The headlines focused on the vaccines cutting hospitalisations: Oxford/AstraZeneca by 94% and Pfizer/BioNTech by 85%. Social media immediately began comparing the two, missing arguably the biggest challenge that comes with observational studies: confounding variables.
These are additional factors that can influence results, beyond the thing actually being studied (in these cases, the vaccines). They are why trials randomise participants where possible: randomisation minimises the risk that factors we aren't assessing influence the result.
In the Scottish data, of the various possible confounding factors, two stand out. First is the different times at which the vaccines were rolled out. This is important because the amount of circulating virus significantly changed during the second wave, which muddies comparisons between the two vaccine types. The second factor is who was targeted for vaccination, as there were differences in who got what. As well as potentially accounting for differences in the two vaccines' perceived effects, these and other confounding factors may also explain differences between how effective the vaccines were in trials versus the real world.
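How a confounder like this can distort a comparison is easy to show with a small simulation. In this sketch, two hypothetical vaccines are equally effective, both cutting risk by 90%, but one is given to a group with a higher baseline risk of hospitalisation. All numbers here are made up purely for illustration:

```python
import random

random.seed(0)

def simulate(n, baseline_risk, efficacy):
    """Count bad outcomes among n vaccinated people with a given baseline risk."""
    risk = baseline_risk * (1 - efficacy)
    return sum(random.random() < risk for _ in range(n))

# Hypothetical scenario: both vaccines reduce risk by the same 90%,
# but vaccine A went mostly to an older, higher-risk group.
cases_a = simulate(10_000, baseline_risk=0.10, efficacy=0.9)  # older group
cases_b = simulate(10_000, baseline_risk=0.02, efficacy=0.9)  # younger group

# A naive comparison of raw case counts makes vaccine A look worse,
# even though both vaccines are equally effective here by construction.
print(cases_a, cases_b)
```

The raw counts differ several-fold despite identical efficacy, which is exactly why headline comparisons between the Scottish figures are so hazardous.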
And speaking of the trials, hospitalisation wasn't their focus. You need to look at very large numbers of people to prove or disprove differences in hospitalisation with COVID-19. Instead, the trials looked at whether vaccines prevented symptomatic disease, so they measured effectiveness in a related but different way.
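For context, efficacy figures like those above are typically calculated as one minus the risk ratio: the attack rate among vaccinated people divided by the attack rate among unvaccinated people. A minimal sketch, using entirely hypothetical numbers rather than data from any real trial:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_ctrl, n_ctrl):
    """Efficacy = 1 - (attack rate in vaccinated / attack rate in controls)."""
    risk_vax = cases_vax / n_vax
    risk_ctrl = cases_ctrl / n_ctrl
    return 1 - risk_vax / risk_ctrl

# Hypothetical trial: 10 symptomatic cases among 20,000 vaccinated people,
# 100 among 20,000 unvaccinated controls.
print(f"{vaccine_efficacy(10, 20_000, 100, 20_000):.0%}")  # prints 90%
```

The same formula applies whether the outcome counted is symptomatic disease or hospitalisation, which is why two studies can both be right yet report different numbers: they are counting different things.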
What about mutations?
Complicating things further are debates about how well the vaccines work against emerging variants, such as B.1.351, the variant first reported in South Africa. As with everything else in the pandemic, the data on this is still evolving. The first thing we in the scientific community need to acknowledge is what we don't know.
We think that some of these variants are likely escaping some of the effects of the vaccines, though the size of these effects and the differences between specific vaccines are still unclear. The variants we are worried about now might not be the same ones we are worrying about in six months.
We don't yet know for sure what effect the variants will have on vaccine efficacy, partly because there are several different outcomes to look at: transmission, cases, more severe cases, hospitalisations and deaths. We will continue to collect real-world data, and as the picture becomes clear, it will shape policy. Updating the existing vaccines will almost certainly be a rolling process, and the great news is that this work is already underway.
In the meantime, I think scientists and commentators need to keep an eye on the bigger picture. All of the licensed vaccines have excellent safety profiles. So far, they are dramatically reducing serious infections and significantly reducing milder cases, and data on reduced transmission will probably soon follow.
Debates about which vaccine is best are moot for the moment: we have no good head-to-head data, and most importantly, we are in a situation where supplies of licensed vaccines are limited and need to get into people quickly. Debates around optimum strategies often don't acknowledge this reality. And as time advances, it's likely we will all end up being re-vaccinated with whatever emerge as the best options.
So what should you make of all this? Well, if you are offered a vaccine, any vaccine, take it enthusiastically. You should feel confident that it is safe, and reassured by both the positive trial data and the experience of countries already rolling vaccines out in high numbers, even if we don't yet have highly precise measures of their real-world effectiveness.
But acknowledge too that because of the variants, you're likely to end up getting a booster down the road. That booster may be a different vaccine entirely, but be reassured that work is already underway to make sure it is based on the best evidence available.