James Ball
Can we trust Neil Ferguson’s computer code?
Newspapers aren’t the place to debate expert advice on a crisis. Advisors advise, ministers decide. We should keep politics out of science.
These three cries – and numerous variations upon them – have become common refrains as the UK’s increasingly fractious debate on the lockdown, the science behind it, and the best way to lift its various restrictions rolls on.
At first, they sound completely reasonable and unarguable: people are stepping up to the plate to help the government make life-or-death decisions in a time of crisis. That’s an admirable thing to do. What’s more, they’re doing it with years of expertise in their field behind them. Of course we should leave them to their work, and let them help guide our course.
The reality, of course, is messier.
Perhaps the most contentious of the government’s high-profile scientific advisors is professor Neil Ferguson of Imperial College, who heads up that university’s epidemiological modelling team, and whose model was credited as influential in sparking the lockdown.
Ferguson was the subject of surely unwelcome press attention this week when his lockdown liaisons with a married lover were splashed across the newspaper front pages. It was clear at that point that he would have to step down from his role on the government’s scientific advisory committee – but only for the hypocrisy of failing to follow rules he was influential in shaping.
It is nonsense to imply, as some have, that Ferguson’s participation in a little (apparently ethical) non-monogamy affects his work as a scientist in any way. It does not. Hypocrisy was his sin, nothing more, and it provides no reason for SAGE or for ministers not to continue consulting his modelling, or even informally consulting Ferguson himself if they so choose.
But if we are to say that Ferguson and the Imperial model’s work should be judged on its own merits, that does mean that we – all of us – should be allowed to do just that. We cannot sit back and allow it to stand because he’s the expert. Other people with relevant expertise should be able to see the team’s workings, to ask awkward questions, and to loudly disagree.
For a long time, this was all but impossible. In a fairly unusual break from best practice, Ferguson did not release the code on which his model runs (and has run in various forms for several years), saying it was largely undocumented and would make little sense to outsiders.
This is poor practice for multiple reasons, not least that replicating another’s work is a core principle of science, and essential for checking workings. It is also well known among programmers and scientists alike that most code eventually contains errors and idiosyncrasies, for which we must remain constantly vigilant.
Far, far simpler models than Ferguson’s have ended up containing huge errors that have drastically altered their conclusions.
A pseudonymous post on Lockdown Sceptics has done just that, publishing a preliminary analysis of a version of Ferguson’s code that has been cleaned up by Microsoft and others.
It raises a series of concerns that the published version of the model introduces randomness where it shouldn’t. Such models are intended to include some randomness – the idea is that they are run many, many times and an average taken, given that the path of a virus’s spread is itself subject to chance.
But factors like the type of computer on which the code runs should not affect the result – and when a model’s developers can’t rule out systematic errors (as they don’t seem to know what’s behind the discrepancies at all), that should worry us. No model should be above questioning.
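To see the distinction at work, here is a minimal sketch in Python – a toy model with invented names and made-up parameters, bearing no relation to the Imperial code itself – of the difference between the randomness a model is supposed to have and the kind it must not:

```python
import random

def simulate_outbreak(seed, days=60, p_transmit=0.15, initial=10):
    # Toy branching-process outbreak model: an illustration of the
    # principle only, NOT the Imperial College model or anything like it.
    rng = random.Random(seed)  # seeded RNG: same seed, same run, any machine
    infected, total = initial, initial
    for _ in range(days):
        # Each currently infected person passes the virus on by chance.
        new_cases = sum(1 for _ in range(infected) if rng.random() < p_transmit)
        infected = new_cases
        total += new_cases
    return total

# The legitimate randomness: many runs with different seeds, averaged.
runs = [simulate_outbreak(seed) for seed in range(1000)]
print("mean total cases:", sum(runs) / len(runs))

# The illegitimate kind: the same seed must reproduce the same result
# every time, on any hardware. If it doesn't, that is a bug, not chance.
assert simulate_outbreak(42) == simulate_outbreak(42)
```

The point of the final check is the one the Lockdown Sceptics post presses on: variation across seeds is the model working as designed; variation across machines, with everything else held fixed, is a defect.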
We should, though, pause well short of that article’s conclusion, which suggests that all papers based on the code should be retracted and ‘all academic epidemiology be defunded’ – a leap that risks putting one and one together and making 11,000.
Ferguson’s model has not led the UK down a drastically different path from that of many other countries – indeed, it recommended lockdown relatively late compared with the models used elsewhere. It likely contains errors, but it is hardly a huge outlier from the international consensus. Those looking for anything to show lockdown was an error should find another straw to grasp at.
We should nonetheless welcome the efforts to test and even to tear down the Imperial model. This is what the scientific process is – a spirited and often fractious public debate, a battleground of ideas. It is rarely as high-minded and public-spirited as those who place it on a pedestal would hope.
Peer reviewers savage a paper because it contradicts their own research, or because they’ve guessed who the author is and can’t stand them. Institutions battle for fame and for funding. People hold grudges. Personality, like politics, doesn’t stop at the water’s edge – good work comes out of dubious motivations.
Science also doesn’t stop at the journal or at peer review. Andrew Wakefield’s disastrous MMR study on autism may have been boosted by supporters in the media, but it was published in a peer-reviewed journal. The drug thalidomide passed all the scientific and medical checks deemed appropriate at the time. Continued scrutiny might not be nice, but it can save lives.
We should be grateful to anyone stepping up to try to help tackle coronavirus. But that shouldn’t stop us for a second from holding their feet to the fire.