
What can we learn from previous pandemics and from the response to COVID-19 so far?

Jeremy A. Lauer, Professor of Management Science, Strathclyde Business School; formerly Economist with the World Health Organization

In spite of being the most foreshadowed global catastrophe in recent history, the COVID-19 pandemic has managed to catch all of us by surprise. Comparisons with the attack on Pearl Harbor on 7 December 1941, and with the destruction of the World Trade Center by terrorists on 11 September 2001, are instructive. Following those attacks, and after careful investigation of a broad range of signals and human intelligence, it was concluded that they had been foreseeable, but that the pertinent signs had not been recognized and that a sufficiently clear pattern had therefore not emerged in time to allow for mitigating action. The present case differs in important respects, but, as we shall see, it is also the same. History repeats itself in a different form.

Far from being an emergent phenomenon requiring the special exercise of intelligence capacities in order to be perceived, the current pandemic has been announced–plainly, repeatedly, literally and in detail–by a multitude of scholars and thought leaders for decades. Entire academic journals are dedicated to the study of emerging infectious diseases and zoonoses. In the first two decades of this century, the world has experienced no fewer than 5 pandemics:

  1. SARS 2002–03 (coronavirus, enveloped RNA virus)
  2. H1N1 2009 (influenza A virus, enveloped RNA virus)
  3. MERS 2012 (coronavirus, enveloped RNA virus)
  4. Ebola 2014, West Africa (Ebola virus, enveloped RNA virus)
  5. Ebola 2018, Kivu, DRC (Ebola virus, enveloped RNA virus).

Excluding the present pandemic, no fewer than 2 of these were caused by coronaviruses, and 4 involved bats as a reservoir animal.

Moreover, almost every government in the world has (at least in name) a pandemic preparedness plan, and this is especially true of the developed western countries that experienced outbreaks of H1N1 influenza (swine flu) in 2009. Furthermore, governmental, non-governmental and inter-governmental organisations have made substantial investments in understanding pandemics and in outbreak surveillance, including the BSL-4 laboratory at the Wuhan Institute of Virology, completed in 2015 specifically for the study of the risks of viral pandemics.

Presciently, the Wuhan lab was first mooted in 2003, and its focus on coronaviruses crystallized in the wake of the 2002–03 SARS outbreak [1]. In 2018, it was estimated [2] that between 1 and 7 million people in rural Yunnan province may be directly infected (i.e. without an intermediate host) with bat coronaviruses each year [3]. We have known for some time that natural selection, at the molecular level, is akin to a giant computer ceaselessly searching for novel solutions to the problem of replication (reproduction), and we have likewise known that globalizing patterns of human activity are relentlessly increasing its CPU speed. How, then, were we surprised?

The last pandemic age began approximately 10,000 years ago, at the dawn of the agricultural revolution, with the commencement of settled human habitation and the increase it entailed in human-to-human contacts, along with the concomitant domestication of animals and the corresponding increase in human-to-animal contacts. Human health, along with broad measures of human welfare, suffered directly as a result for thousands of years [4]; with a few exceptions, however, the role of pandemic zoonoses in this history has not been a primary focus [5]. Yet all our most-feared pathogens were originally zoonoses: measles, smallpox, yellow fever, cholera, rubella, mumps, rabies, tuberculosis and plague all arose from diseases carried by animals, from increased patterns of contact with domestic animals, and from humans’ increasingly sharing habitats with a number of mammalian orders rich in both species and viruses [6].

Many members of Mammalia share important biological features (such as the ACE2 receptor) with humans, and the class contains at least 7 orders rich in both species and viruses: bats, primates, rodents, carnivores, lagomorphs (i.e. hares, rabbits and pikas), and two orders of ungulates (i.e. mammals with even- and odd-toed hooves). Five of these 7 orders are directly associated with the agricultural revolution and the domestication of animals: rodents, carnivores, lagomorphs and the two orders of ungulates. There are two exceptions to this rule: i) primates (the order to which Homo sapiens belongs, and from which HIV arose in the last century as a result of the bushmeat trade in chimpanzee meat combined with the increasing movement of humans between remote regions and urban zones) and ii) bats.

Apart from primates, bats are the only species- and virus-rich mammalian order whose member species are not amenable to domestication. Unlike primates, however, bats are well adapted to cohabitation with humans in shared ecosystems. For these two reasons, bats are likely to be key players in the next pandemic age. It is becoming increasingly clear that the threshold of this age has already been crossed: in addition to coronaviruses, bats harbour rhabdoviruses, paramyxoviruses, bunyaviruses, togaviruses, flaviviruses and others. More than 10,000 years after the skyrocketing growth in human populations and in human-to-animal and human-to-human contacts that followed the agricultural revolution, it has been estimated at global scale that about 30% of zoonoses are still “missing” [6]. At the current pace, we seem to be finding these missing pathogens at a rate of one every 3 to 4 years, so the present pandemic age promises to be shorter than the last, should we survive it.

Harbinger of this age, HIV is estimated to have crossed over to humans in the 1940s or 1950s. Throughout the twentieth century, repeated global influenza pandemics (with birds and pigs as the reservoir species) were its prelude. The three coronavirus pandemics of the current century comprise only the first scene of the first act of an unfolding evolutionary drama. It is worth noting in passing that there is no official definition of “pandemic”. In international law, under the International Health Regulations (IHR), the only formal designation is that of a Public Health Emergency of International Concern (PHEIC). Such an emergency was declared by the World Health Organization (WHO) on 31 January of this year, and (seemingly in desperation at its failure to mobilize sufficient global action) WHO began to use the term “pandemic” on 11 March, by which time 114 countries had reported cases of COVID-19 and any doubt concerning the term’s applicability was no longer possible.

According to genomic analysis [7], the first case of COVID-19 is estimated to have emerged some time between 6 October and 11 December of last year. By 31 December of last year, international surveillance networks had already been alerted to the detection of clusters of atypical pneumonia (with cases numbering 44 on 3 January). By 7 January, the virus had been isolated. By 12 January, when reported cases worldwide were still below 300 and no international spread had yet been reported, Chinese authorities publicly shared the genetic sequence of SARS-CoV-2. This is both remarkably fast and unacceptably slow.

Compare the present case with that of HIV. Although the virus originated as early as the 1940s, scientists did not begin using the term Acquired Immune Deficiency Syndrome (AIDS) until 1982, did not discover the virus until 1983, and did not identify the host species until 1999. What took more than half a century for HIV was accomplished in 2 months for SARS-CoV-2. In spite of the blame-slinging, this is remarkably fast; the timeline proves, moreover, that the lessons of the past 100 years, since a pandemic of H1N1 influenza killed upwards of 25 million people worldwide, have not been entirely lost. Yet 2 months was never long enough to stop a virus capable of doubling its infections approximately every 3 days (compared with every 3 years for HIV). Two months was therefore far too slow to stop COVID-19 from instigating the largest durable disruption of human activity since the last world war, a disruption whose ultimate magnitude is still unknown, though it promises to be one of those epoch-making events that, like an ice age or a meteor strike, will be legible in the future fossil record.
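
The arithmetic behind that comparison is worth making explicit. Taking the 3-day and 3-year doubling times above as given (they are approximations), unmitigated exponential growth follows

$$N(t) = N_0 \cdot 2^{t/T_d},$$

where $T_d$ is the doubling time. Over the same 2 months (60 days), a virus with $T_d = 3$ days multiplies its infections by $2^{60/3} = 2^{20} \approx 10^6$, whereas a virus with $T_d = 3$ years grows by only $2^{60/1095} \approx 1.04$, i.e. by about 4%. A response timescale adequate for HIV was therefore hopeless against SARS-CoV-2.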

The first lesson to emerge is that human institutions are not engineered to respond in pandemic time. What looks remarkably fast (i.e. 2 months from detection to the declaration of a PHEIC, quicker than most bureaucracies require to approve a routine procurement contract) turned out to be too slow in pandemic time by a factor of 2. Had we known at the beginning of January what we knew by the end of that month, the pandemic might have taken a very different course. This conjecture is supported by the experience of countries such as New Zealand, which, reporting its first case on 28 February, benefited from an additional month in comparison with the UK and continental Europe.
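
It is easy to underestimate what “too slow by a factor of 2” means, because delay compounds. The following minimal sketch makes the point, assuming the roughly 3-day doubling time cited above and a purely hypothetical seed of 100 cases (unmitigated growth, with no saturation or intervention effects, so the absolute numbers are illustrations rather than forecasts):

```python
# Minimal sketch: the compounding cost of delay under a 3-day doubling time.
# Assumptions (illustrative only): 100 seed cases, unmitigated exponential
# growth, no saturation and no intervention effects.

DOUBLING_TIME_DAYS = 3   # approximate doubling time cited for COVID-19
SEED_CASES = 100         # hypothetical starting point

def cases_after(days: float) -> float:
    """Infections after `days` of unmitigated exponential growth."""
    return SEED_CASES * 2 ** (days / DOUBLING_TIME_DAYS)

acting_early = cases_after(30)  # acting after one month
acting_late = cases_after(60)   # acting after two months

print(f"Epidemic size if action comes at day 30: {acting_early:,.0f}")
print(f"Epidemic size if action comes at day 60: {acting_late:,.0f}")
print(f"Cost of the lost month: a factor of {acting_late / acting_early:,.0f}")
```

Halving the response time does not halve the problem: a month’s head start shrinks the epidemic one confronts roughly a thousand-fold (2^10 = 1024), which is the sense in which New Zealand’s extra month mattered.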

Yet squeezing extra speed out of the system will require more than an empowered WHO, i.e. more than the authority to impose sanctions on countries that are slow (by what standard?) to implement their obligations under the IHR. My personal view is that, though there were avoidable delays, China moved about as fast as it could reasonably have been expected to, and that it moved demonstrably faster than nearly any other country has done once the magnitude of the problem became apparent. It scarcely needs saying, moreover, that once the world was seized of the magnitude of the threat, we squandered precious time. That needed additional month will not, in my view at least, be won purely by better international cooperation, or even through the actions of more responsible or altruistic political leaders. Gaining the necessary time at the onset of the next pandemic will require significant changes in human institutions.

For example, pandemic policymaking, particularly in the UK, has been far too focused on a limited set of salient (available) epidemiological variables. We have taken our bearings from those areas of scientific knowledge where the unknowns we face are relatively better known. While understandable, this has resulted in the minimization of other inputs that are equally if not more important, such as understanding what kinds of policy publics can and will support (where the science is perhaps less advanced), or the likely timing of game-changing technologies (more unknown still). Curiously, despite the importance of these areas, we have mainly been content to employ a set of ad hoc assumptions bolted on at either the front or the back end of epidemiological models.

This is a fair description of how the Ferguson et al. report [8] influenced UK policymaking in and around the week of 16 March. As has been widely reported, one of the assumptions underlying the analysis was that social distancing measures would be impossibly hard to sustain. Another was that it would be necessary to maintain such restrictions for at least 18 months, until a vaccine could be produced. Neither the ex ante assumption nor the ex post one was motivated by anything more than a cursory appeal to anecdote. Yet this did not prevent the resulting stitched-together Frankenpolicy from being presented to policymakers.
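
To see how such an assumption gets bolted on in practice, consider a deliberately toy compartmental model; this is a sketch for illustration, not the model of Ferguson et al., and all parameter values in it are hypothetical. A behavioural unknown, compliance with distancing, enters the model as a single front-end scalar, yet it drives the headline result:

```python
# Toy SIR model (hypothetical parameters, illustrative only) showing how a
# behavioural unknown, compliance with distancing, is often "bolted on" as
# a single front-end scalar rather than modelled in its own right.

def peak_prevalence(beta: float, gamma: float, compliance: float,
                    days: int = 365, i0: float = 1e-4) -> float:
    """Discrete-time SIR on a normalized population; returns peak prevalence."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(days):
        effective_beta = beta * (1.0 - compliance)  # the ad hoc assumption
        new_infections = effective_beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

# The epidemiological parameters stay fixed; only the guessed behavioural
# scalar varies, yet it changes the headline projection several-fold.
for compliance in (0.0, 0.3, 0.6):
    peak = peak_prevalence(beta=0.4, gamma=0.1, compliance=compliance)
    print(f"assumed compliance {compliance:.0%}: peak prevalence {peak:.1%}")
```

Whether compliance is assumed to be 30% or 60% changes the projected peak several-fold; when that figure rests on a cursory appeal to anecdote, so does any policy built on the model.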

The disjunction at the level of knowledge is mirrored in the structure of our scientific advisory process, in which (in the UK) the findings and recommendations of at least three separate expert committees (Nervtag, SPI-M and SPI-B) have to be integrated, or “joined up”, at the level of SAGE, which then makes recommendations to COBR. In hindsight it is safe to say that SAGE, which has to deal with every kind of emergency from floods to nerve agents, is not well tuned for the formulation of pandemic policy. Indeed, joining up disparate pieces of information seems to have been an ad hoc and informal process, with the result that SAGE, and government policymakers, have at times been badly out of sync with the views of the scientists on SPI-B, in terms both of implementing and of relaxing the lockdown.

In the failure to integrate disparate pieces of available information about the coming pandemic, COVID-19 was able to “hide in plain sight”, as the Pearl Harbor and 9/11 attackers did. In the failure to formulate a strategic vision for mitigating action, moreover, our response followed the paradigm of “planning to fail” [9]:

In the Vietnam War, like its successors, US decision makers struggled less than they should have when conditions permitted good choices, and then struggled more than could matter when conditions left them with only bad choices … policymakers are myopic. They craft policies around visible elements as if they were the whole picture. Because overarching policy goals are distant and open to interpretation, policymakers ground their decisions in the immediate world of short-term objectives, salient tasks and tactics, policy constraints and fixed time schedules. In consequence, they exaggerate the benefits of preferred policies, neglect their accompanying costs and requirements, and ignore beneficial alternatives.

An ad hoc approach to incorporating less salient, non-epidemiological evidence into policymaking has been instrumental in shaping the chaotic experience of policy implementation in the UK, one marked by false starts, U-turns, the defection–or ejection–of prominent advisers, and the progressive narrowing of the margin for manoeuvre. The lost time between 13 and 23 March, when the UK finally went into lockdown, cost tens of thousands of lives and is likely to cost many thousands more over the coming year. The failure of policymakers to represent, or indeed even adequately to conceptualize, the complex value chain of pandemic policymaking, and thereby to distinguish between the importance of a given body of knowledge for policymaking and the state of development of the science in that area, has been a costly error with predictably dismal results.

The first of these results is that it has been impossible to communicate sensibly, either to the public or to policymakers, about scientific uncertainty. It is impossible in the first place because we have not adequately recognized or made explicit the limits of our own knowledge, in particular the way slightly better-known unknowns depend critically on a set of more-unknown unknowns (such as the relative difficulty of obtaining compliance with social distancing). Knowledge is nothing if not contextual, yet the assumptions underpinning claims such as “the public will never accept that” or “this is not consistent with our values” are easily overlooked. Another example is the idea, likewise unexamined, that “the public needs clarity”, with the result that policy is dumbed down to the lowest common denominator.

While this has the presumed advantage, at least in the calculations of policymakers, that policy will thereby “successfully land in the cheap seats”, i.e. in the ears of those seated farthest from the podium, whose minds can apparently be reached only by the scientific equivalent of shouting, it unfortunately results in policymakers regularly making claims that the facts then prove self-evidently wrong. Politicians like to say that their policy is evidence-based; scientists evidently appreciate the position of influence that their role in policymaking affords. Sadly, scientists and politicians have made what amounts to a forced marriage, one which, at least now that the initial nuptial excitement has worn off, can be seen to have worked out badly for both parties.

The second major fault line to emerge is that any pre-existing consensus around a broad set of social goals starts to evaporate. Publics begin to identify trade-offs between the explicit goals of policymaking (such as between biological and economic health). When the credibility of policymakers is sufficiently undermined, these same publics will pursue their own interests and cease to be willing to make sacrifices for collective social goals. Individuals develop fatigue with policymakers’ lack of credibility much more rapidly than with the difficult choices imposed by maintaining social cohesion around a set of shared objectives.

Nevertheless, trade-offs between policy goals can be real, and the incidence of their benefits and disbenefits is not equally shared. When policymakers are incapable of credibly articulating a unifying strategic vision, individuals can hardly be blamed for making private calculations. The cynical may suppose that the aim was always precisely this: to fracture social consensus sufficiently to create a state of disorder in which the raw exercise of power would be relatively unconstrained. Be that as it may, bad pandemic policymaking inevitably undermines the social contract, and it can eventually destroy even the possibility of cooperation.

This points to the third and last fault line that I wish to mention. In pandemic policy, in contrast with, say, fiscal and monetary policy, government is not the implementer. The true policymakers are the distributed individuals and communities who each decide every day, through a multitude of actions–as banal as visiting family members, cancelling football practice or making trips to the store–either to express loyalty to policy or to defect from it. Though formal elections are the usual means by which individuals express their consent to be governed, their adherence (or not) to social-distancing rules amounts to a continual de facto election running throughout the duration of the pandemic.

Adherence to policy has important costs and benefits, yet there has been little attempt to engage in policy formulation those who are directly affected by the outcomes. This is not the case everywhere: in the Netherlands, for example, an empirical study of the public’s views on the trade-offs between the health impacts and the other effects of lockdown has been conducted online with members of the public [10]. I think it will be very hard, especially over the medium and long term, to formulate effective pandemic policy without involving those who are directly affected.

Comparatively little thought has been given to how to empower individuals and communities to make better choices in the diverse circumstances they face, rather than having them follow a minimal set of absolutist pronouncements. Although the failure to implement a successful test, track and trace (TTT) system is both inexplicable and inexcusable, I believe that the failure to engage distributed publics more effectively and systematically in policymaking has been the overriding strategic failure of pandemic policy so far. This is one area where we would like to make some impact at Strathclyde, if we can.

Author: Jeremy A. Lauer, Professor of Management Science, Strathclyde Business School, https://www.strath.ac.uk/staff/lauerjeremydr/

The Fraser of Allander Institute (FAI) is a leading economy research institute based in the Department of Economics at the University of Strathclyde, Glasgow.