Dr John Lee
Where is the vigorous debate about our response to Covid?
After a career as a scientist and clinical academic, I have been struck by how often experts (we!) have very complicated and exceedingly well-reasoned ways of getting things quite wrong. That’s why I have always thought it best for the recommendations of experts to have ‘advisory’ status only. An expert’s role is to examine the minutiae of a small subject area – with a view to gaining or advancing understanding. It is the job of our politicians and civil servants to develop appropriate policies.
Experts can be guilty of being monomaniacs, interested only in the thing they are studying. That’s understandable, of course, because many of these things are hard to comprehend. And having put so much effort into their work, it’s also not unexpected, and very human, that most experts put a lot of weight on their conclusions and are convinced of their importance.
That’s exactly why, when scientists call for their findings to be implemented by government, we need politicians and civil servants to moderate their enthusiasm, examine contrary views, express appropriate scepticism – and, in short, judiciously weigh all the other factors that bear on any given set of conclusions. The Covid-19 crisis took the world by surprise, and the world (Sweden excepted) has reacted in roughly the same way: with lockdowns. In the rush, the usual checks and balances have not been applied.
Certainty in science is a variable feast depending on what you’re looking at. In the physical sciences you can often be pretty sure of the numbers. Stresses in girders, for example, can be accurately calculated. But in the biological sciences, things are a lot messier. Living organisms have endless layers of mind-boggling complexity and this makes getting clear-cut answers difficult. You have to make many assumptions before you start your investigation, and then it’s very difficult or impossible to predict and control all the factors that could inadvertently influence the results. That is why so much of biological and (as part of it) medical science can be viewed as an ongoing debate rather than a trading of clear, unequivocal results.
Of course, groups of scientists who publish particular results may well believe and state that their approach and findings are definitely the right ones. But no matter how good your experimental technique, no matter how sophisticated your mathematical model, there will be others with different ideas that generate different outcomes and interpretations. Different ideas should be welcomed and tested. This is how science advances. Without a robust debate and constant interrogation of evidence, there’s a greater chance of big mistakes being made.
It has never been more important to understand this point than now. Mathematics in particular is such a closed book to so many that there is a tendency to regard its practitioners with excessive deference, to believe that the predictions they make – based as they are on impenetrable symbols and incontrovertible logic – must be true. But this is just a mistake.
Any mathematical model, however detailed, is only as good as its assumptions and its input data. It simply doesn’t matter how many equations there are, or how impressive the reasoning: if the assumptions are wrong, or the data is faulty, the outputs of the model will differ from reality, perhaps wildly.
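To see how far the outputs can swing on the inputs, consider a back-of-envelope calculation. This is a sketch of my own, not any published model, and every number in it is an illustrative assumption:

```python
# A minimal sketch of how sensitive projections are to assumptions.
# All figures here are illustrative assumptions, not measured values.

def projected_deaths(population, attack_rate, fatality_rate):
    """Deaths if a given fraction of the population is infected."""
    return population * attack_rate * fatality_rate

POP = 66_000_000  # rough UK population

# Two sets of inputs, each individually plausible-looking:
low = projected_deaths(POP, attack_rate=0.30, fatality_rate=0.001)
high = projected_deaths(POP, attack_rate=0.80, fatality_rate=0.009)

print(f"low:  {low:,.0f} deaths")   # ~19,800
print(f"high: {high:,.0f} deaths")  # ~475,200
```

The equations in between can be as elaborate as you like; a ninefold uncertainty in the fatality rate propagates straight through to the answer.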
Now consider the politics. At the moment, one particular approach to modelling the Covid-19 epidemic – that of Imperial College, London – is holding court in the UK. The actions we are now taking are based on its modelling results. Barely a day goes by without a politician saying that they will be 'led by the science'. But what we are seeing with Covid-19 is not 'science' in action. Science involves matching theories with evidence, and testing a theory with attempts to falsify it, so that it can be refined to better match reality. A theory from a group of scientists is just that: a theory. Believing the opinion of that group without a critical verification process is just that: belief.
The modelling results may be close to the truth, or they could be very far from it. The idea of science is that you can test the data and the assumptions, and find out.
We know for sure that the input data in the run-up to lockdown was extremely poor. For example, it’s highly likely that a large majority of Covid-19 cases have not even been detected – and most of those that were identified were in hospitals, and therefore the most severe cases. Because of this, the WHO initially suggested a case fatality rate (CFR) of 3.4 per cent, which would have been genuinely awful. But as new evidence comes in, the predictions of the models change accordingly. A paper from Imperial on 10 February suggested a CFR of 0.9 per cent; a more recent one, on 30 March, suggested 0.66 per cent (both based on Chinese figures, the reliability of which many doubt). Recent data from a German town, where testing found an actual infection rate of about 15 per cent, suggest a CFR of 0.37 per cent.
Infection rates in the UK are unknown, but the more of us who have been infected, the less lethal Covid-19 is for a given number of deaths. For example, if only five per cent of us have had the disease, this implies a CFR of 0.3 per cent on current death figures (8,973), but 0.68 per cent if there are 20,000 deaths. But if 15 per cent of us have been infected, the CFRs fall to 0.1 per cent or 0.23 per cent respectively. If 30 per cent of us have had the disease, the numbers would be 0.05 per cent and 0.12 per cent. And as I have previously argued, the way we are recording causes of death in terms of respiratory infection is different in this epidemic from any previous one, meaning that our observation and recording of Covid-19 deaths is more comprehensive and therefore potentially more alarming.
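The sums behind these figures are trivial to reproduce. Here is a sketch, assuming a population base of about 59 million – the base is my assumption, chosen because the rounded percentages above are broadly consistent with it, and small differences come down to rounding:

```python
# Reproducing the back-of-envelope CFRs above. The population base is
# an assumption; the rounded percentages given are consistent with a
# base of roughly 59 million. Small discrepancies are rounding.

POPULATION = 59_000_000

def cfr(deaths, infected_fraction):
    """Implied fatality rate if `infected_fraction` of the population
    has actually had the disease."""
    return deaths / (POPULATION * infected_fraction)

for deaths in (8_973, 20_000):
    for frac in (0.05, 0.15, 0.30):
        print(f"{deaths:>6} deaths, {frac:.0%} infected -> CFR {cfr(deaths, frac):.2%}")
```

The lethality estimate is simply deaths divided by true infections – which is exactly why the unknown infection rate matters so much more than the headline death count.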
Death figures, or indeed any other figures, mean nothing on their own, without context. For example, we keep being shown how busy intensive care units are. But we know that ICUs are often very busy. Exactly how does what is happening now compare, in terms of numbers and case mix? Have the thresholds for admission to ICU changed because of what doctors thought they knew about this disease from initial but inaccurate figures? What about treatment protocols? What about the differences in organisation and capacity in different cities and countries? The different experiences with the same disease in Italy, Spain, the UK, Germany, Iceland, New York, and globally show that these are not irrelevant questions. What we think we are dealing with alters what we pay attention to and how we respond to it.
So much for the data. What about the assumptions of the models? These are many and complex, including, among other things, ideas about virulence, infection rates and population susceptibility, all of which are supported only weakly if at all by directly measured evidence. But to give an example from left field (which is exactly the sort of thing that destroys predictions): what do the models say about transmission between humans and animals? Apparently a tiger in a zoo has caught Covid-19 (what this implies about the two metre rule I don’t know). But could our cats therefore be susceptible to the disease and could they spread it between us? If true, would that make a difference to the validity of the model? Of course it could. Did the model predict or discuss this? Of course it didn’t.
More surprisingly perhaps, the Imperial College paper published on 30 March states that 'Our methods assume that changes in the reproductive number – a measure of transmission – are an immediate response to these interventions being implemented rather than broader gradual changes in behaviour' (my emphasis). That is to say: in this study, if virus transmission slows, it is 'assumed' that this is due to the lockdown, and not (for example) that it would have slowed down anyway. But surely this is a key point, one that is absolutely vital to understanding our whole situation? I may be missing something, but if you are presenting a paper trying to ascertain whether the lockdown works, isn’t it a bit of a push to start with the assumption that lockdown works?
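To make the worry concrete, here is a toy calculation of my own – emphatically not the Imperial model – in which two hypothetical transmission histories both show cases falling after the intervention date: an overnight drop in R on lockdown day, and a gradual behavioural decline that began beforehand. A model that assumes transmission only changes when interventions are implemented will credit the lockdown in both cases:

```python
# A toy illustration, not the Imperial model. Two hypothetical R(t)
# trajectories: a step change on "lockdown day" versus a gradual
# behavioural decline that starts before it. All numbers invented.

def simulate(r_of_t, generations=15, initial_cases=100):
    """Simple branching process: each generation multiplies by R(t)."""
    cases = [initial_cases]
    for t in range(generations):
        cases.append(cases[-1] * r_of_t(t))
    return cases

LOCKDOWN = 8  # generation at which the intervention happens

step = lambda t: 3.0 if t < LOCKDOWN else 0.7    # all change at lockdown
gradual = lambda t: max(0.7, 3.0 - 0.35 * t)     # steady shift, pre-dating it

for label, r in (("step", step), ("gradual", gradual)):
    curve = simulate(r)
    print(f"{label:>7}:", [f"{c:,.0f}" for c in curve[::3]])
```

In both runs the case numbers fall after the intervention; in only one of them is the lockdown responsible. Distinguishing the two is precisely the question that the quoted assumption answers in advance.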
And what about the fact that the Imperial model seems to mirror the measured numbers of cases so precisely – when we know that these figures are all over the place, depend on test availability, and are measured completely differently in different countries? In the UK, it is not even necessary to have a positive Covid-19 test for the disease to be implicated as a cause of death on the death certificate.
The issues I raise above may or may not be on the money. But the point is that for this type of science to work properly, it needs to be constantly challenged. If it is going to help us understand the world better, it absolutely requires wide discussion of different approaches and interpretations. It absolutely requires critical evaluation of both assumptions and data, and ongoing argument as to how well the evidence matches the theory. Only with this robust process can we learn from the evidence, improve our understanding of where we are – and where we are likely to go.
Until the last few weeks, claims of an extraordinary cure for a disease would have required extraordinary evidence. It would have been paramount to ensure that the side-effects were minor and tolerable compared with other treatments.
Yet we have now endured three weeks of the most severe disruption our society has seen outside of wartime, with hardly any assessment of the side-effects on public health, let alone the economy. We are placing a huge amount of weight on modelling predictions built on scant evidence and untested assumptions – and in the certain knowledge that exactly this approach, the early modelling of pandemics, has been wildly wrong in its predictions before. There has been nowhere near enough discussion of the strengths and weaknesses of the model being used, or about whether the direct and indirect harms caused by our response to Covid-19 may outweigh the harm caused by the virus itself.
It's not hard to understand why politicians felt obliged to act, given the distressing pictures of Covid-19 patients being beamed around the world, combined with what we were being told about worst-case scenarios. But the fact that so many governments have jumped together, taking extraordinary actions based on modelling and prediction, is not a testament to the validity of those models. It is instead evidence of what can happen when the emergence of a new virus interacts with science and politics in the multimedia age – and of how difficult it can be, in times of panic, to stop and think.
It is time for us to return, critically and calmly, to a rounded and robust scientific debate that generates a range of views about the severity and significance of this virus. And for our politicians to weigh these differing views extremely carefully against the clear and manifest harms of lockdown. It is for ministers, not scientists, to decide whether, in the light of changing evidence and understanding, our response to the virus is proportionate - and how to take us forward.