Getting to the Root of the UK's Problematic Covid-19 Response
There is a Twitter account with the name 'Covid one year ago', which has been providing reminders of what happened on the same day in 2020 as the pandemic evolved in the UK. It is possible to imagine a parallel world in which this account offers vindication of a series of prudent decisions that quashed the progress of the virus, mitigated its spread and kept excess mortality minimal. Unfortunately, this is not the case. Instead, the account provides reminders, which can only be described as cringeworthy, of numerous choices made by health institutions, scientific advisors and the government which now seem nothing short of risible. Perhaps the most laughable of these came on March 16th 2020, when model Caprice went head-to-head with GP Dr Sarah Jarvis on the Jeremy Vine show. If you recall, this was the point at which things were really starting to heat up in the UK, roughly two weeks after the first wave had peaked in many Asian countries. The televised exchange began with Caprice putting forward the simple suggestion that the UK should learn from Taiwan and Singapore - two nations that managed to control the pandemic - by mandating mask-wearing and closing the borders. Masks "make no difference at all", Jarvis retorted, with a level of certainty I can only dream of having on any issue so important.
This was, of course, the consensus in the UK and many other countries at the time, but it was one which many found troublesome. If masks were so useless, why were we providing healthcare workers with them? Surely basic physics would tell you that putting something in front of your mouth and nose would help to limit the transmission of an airborne disease? While I think it is certainly true that part of this anti-mask messaging was an effort to preserve supplies for the aforementioned healthcare workers (although we should have just focused on making more masks), there was also a genuine sense that mask-wearing lacked sufficient evidence. This assessment is a textbook misapplication of decision theory - a broad discipline concerned with how to make the optimal choice under uncertainty.
The uncertainty in this case stems from whether or not masks 'work'; the decision is whether or not to mandate mask-wearing in early March 2020; and the outcome, to be reductive, is how many people in the UK die from Covid-19 together with the impact on UK GDP. We also have to define our loss function - how painful each possible outcome is. This is obviously controversial, but let's just assume we find both deaths and recessions painful and wish to minimise both.
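To make the structure concrete, here is a minimal sketch of such a loss function in Python. Every number is hypothetical, including the weight that converts GDP loss into 'death-equivalents' - choosing that weight is precisely the controversial step just described.

```python
# A toy loss function combining the two painful outcomes named above:
# deaths and lost GDP. The conversion weight is purely illustrative.

def loss(deaths: float, gdp_loss_pct: float,
         deaths_per_gdp_point: float = 1000.0) -> float:
    """Total pain of an outcome, measured in death-equivalents."""
    return deaths + deaths_per_gdp_point * gdp_loss_pct

# Two hypothetical outcomes: a mandate costs a little more GDP but saves lives.
no_mandate = loss(deaths=100_000, gdp_loss_pct=5.0)   # 105,000
mandate = loss(deaths=40_000, gdp_loss_pct=6.0)       # 46,000
assert mandate < no_mandate  # under these made-up numbers, mandating wins
```

Note that changing the conversion weight changes the recommendation, which is exactly why the loss function has to be made explicit rather than left implicit.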
Broadly speaking, there are two ways to proceed, each relating to a particular interpretation of statistics and econometrics: a frequentist approach or a Bayesian approach. In the former, we begin with a null hypothesis - masks do not work - and only change that belief if we have sufficiently strong evidence. In an ideal world this evidence would come from something like a randomised controlled trial but, in this case, is more likely to come from an observational setting, for practical and ethical reasons. The big downside is that if we have strong evidence that masks are useful, but not strong enough to reject the null hypothesis, a dogmatic frequentist cannot use any of it, having failed to reject the null. When taking the decision, the frequentist therefore recommends not wearing masks. This seems to be at least a partially accurate description of the UK's decision-making process, as insufficient evidence on mask efficacy was cited as the reason not to bother with them. The trouble with this approach is that it is very difficult to gather high-quality evidence on mask usage; even now, the best we have is weak evidence from observational settings plus a priori reasons for thinking masks are helpful. A stylised analogy: we have no gold-standard evidence that parachutes prevent death when jumping out of a plane, but I imagine most people would be quite keen on having one.
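The dogmatic frequentist rule described above can be caricatured in a few lines of Python. The p-values and the 5% threshold are hypothetical; the point is only that evidence below the threshold is treated identically to no evidence at all.

```python
# A caricature of the dogmatic frequentist decision rule: act on the
# evidence only if it clears the conventional 5% significance threshold,
# otherwise behave exactly as if the null ("masks do nothing") were true.

def frequentist_recommend(p_value: float, alpha: float = 0.05) -> str:
    if p_value < alpha:
        return "mandate masks"   # null rejected, evidence finally counts
    return "no mandate"          # evidence discarded entirely

# Suggestive but sub-threshold evidence gets the same treatment as none:
assert frequentist_recommend(p_value=0.07) == "no mandate"
assert frequentist_recommend(p_value=0.50) == "no mandate"
```

The discontinuity at the threshold is the problem: an observational study yielding p = 0.07 carries real information about masks, but this rule throws it away.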
The Bayesian approach is much better suited to decisions like this, where obtaining gold-standard evidence is not feasible. A Bayesian first forms a prior belief about how effective masks are, which could come from multiple sources. For example, we have the physics I previously referred to, as well as evidence on mask usage in the context of other respiratory diseases; combined, these imply that a prior belief that masks are effective is reasonable. We then use whatever evidence and data we have to update that prior, forming what is called the posterior distribution of our estimate of mask efficacy. Crucially, we then integrate the loss function over this posterior. All this means is that we consider how costly each decision is at each point of our estimated distribution. This gets to the crux of the issue: if masks are useless and we make people wear them, that is relatively costless, whereas if they are useful and we don't make people wear them, that is incredibly costly, as many lives that could have been saved would be lost. Therefore, even faced with weak evidence, it would practically be a no-brainer for a Bayesian to recommend universal mask-wearing. Eventually, the UK came to its senses and did this, but not nearly soon enough.
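This calculation can be sketched end-to-end with made-up numbers. The sketch below assumes a Beta-distributed belief about mask efficacy (a hypothetical choice standing in for the physics and prior evidence discussed above) and approximates 'integrating the loss over the posterior' by Monte Carlo sampling; the baseline death toll and mandate cost are likewise hypothetical.

```python
import random

random.seed(0)

BASELINE_DEATHS = 100_000   # hypothetical UK deaths with no mandate
MANDATE_COST = 1_000        # hypothetical inconvenience cost, in death-equivalents

def expected_loss(decision: str, samples: int = 100_000) -> float:
    """Approximate the integral of the loss over our belief distribution.

    Efficacy (fraction of deaths prevented) is drawn from Beta(4, 6), a
    hypothetical, mildly favourable belief; updating it with data would
    simply shift the parameters, not the shape of the calculation.
    """
    total = 0.0
    for _ in range(samples):
        efficacy = random.betavariate(4, 6)
        if decision == "mandate":
            total += BASELINE_DEATHS * (1 - efficacy) + MANDATE_COST
        else:
            total += BASELINE_DEATHS
    return total / samples

# The asymmetry does the work: a useless mandate costs little, while
# skipping a useful one costs many lives. The mandate wins comfortably.
assert expected_loss("mandate") < expected_loss("no mandate")
```

The key design point is the asymmetry of the loss function: even if most of the belief distribution sat near zero efficacy, the small cost of a wasted mandate is traded against the enormous cost of forgoing a beneficial one.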
We have also seen similar mistakes made with vaccines. The most egregious came with the decision of many European countries to pause the use of the AstraZeneca vaccine after a very small number of recipients developed blood clots. I think politics were involved here, as it seems far too coincidental that the AZ vaccine has been more or less the only one subjected to this degree of negative PR, right through from its development to its rollout. That said, the pausing decision again exemplifies the same kind of dogmatic frequentism. The weak evidence that the AZ vaccine caused blood clots was seemingly enough for the European countries to reject their null hypothesis that it is harmless, and from a dogmatic frequentist point of view this justifies the pause. A Bayesian would have formed a prior belief about the safety of the vaccine, using previous results from numerous trials which showed no concerns, and then updated that belief with the data showing a minuscule number of blood clots. Integrating the loss function over the posterior would then reveal that the expected costs of a pause - fewer people protected against Covid, damage to already weak European confidence in the vaccine, and so on - would dwarf the expected benefits. Many pointed to the suspension as an application of the precautionary principle, which dictates that under uncertainty and in the face of potential harm, caution and pausing are warranted. This is a sensible approach for many medicines, as the terrible example of thalidomide illustrates. However, it is entirely unclear to me how this principle works in a pandemic, where suspending a vaccine allows cases to spread further and so creates greater risk and uncertainty in and of itself.
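The same comparison can be made as a back-of-the-envelope calculation. All of the numbers below are purely hypothetical - they are not the actual clot or Covid risk figures - but they show how even generous assumptions in favour of pausing leave its expected cost dwarfing its expected benefit.

```python
# A back-of-the-envelope version of the pause argument, with purely
# hypothetical numbers standing in for posterior beliefs about risk.

P_CLOT = 1e-6             # hypothetical serious-clot probability per dose
COVID_DEATH_RISK = 1e-3   # hypothetical Covid death risk per unprotected person
DOSES_DELAYED = 1_000_000 # hypothetical doses delayed by a pause

expected_clots_avoided = P_CLOT * DOSES_DELAYED          # benefit of pausing
expected_extra_deaths = COVID_DEATH_RISK * DOSES_DELAYED  # cost of pausing

# Under these assumptions, the pause costs hundreds of times more than it saves:
assert expected_extra_deaths > 100 * expected_clots_avoided
```

This leaves aside the harder-to-quantify cost mentioned above - the lasting damage to public confidence in the vaccine - which only widens the gap.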
A pandemic was always going to be challenging for a democratic government to deal with, and it is naive to think there was a wealth of easy solutions to an incredibly challenging global crisis. That said, many of the big decisions taken were not only poor but reflected a flawed underlying framework. Decision theory offers policymakers a coherent structure for making better choices in the next crisis - choices that could save countless lives.