An excerpt from a longer manuscript currently in preparation
Mathematical models have occupied an unusually prominent place in media, political and public discussion over the past several weeks. Taking just the example of the United Kingdom, the early models of SARS-CoV-2 spread in the population that supported the “natural” building of herd immunity had to be corrected once the scale of the likely death toll came to be understood. New models were then applied to support policies of restricted population movement.
The development of new models, as well as the refinement of existing ones as new data are collected, is part of normal science. Under regular conditions, however, models are first validated within the scientific community, and their assumptions tested, before they are integrated into the canon. These discussions are now taking place in public – on social and mainstream media. To both the public and policy communities it appears as if a competition over whose model is best is playing out. Projections drawn from provisional modelling rapidly feed into policies, but they also affect public trust, the understanding of decisions, and personal choices, with enormous implications for human lives and livelihoods.
Here we utilize insights from philosophy and social studies of science to discuss data and models; scientific uncertainty and its relation to facts and scientific consensus; scientists’ responsibility; and how these insights impact on policy and decision-making in times of acute crisis. Our aim is not to criticize but rather to contribute to the understanding of the problem and offer some practical recommendations.
Part of the scientific process involves agreement on what constitutes a scientific fact, that is, on which features of reality need to be observed and measured. Together these features make up a certain segment of reality that we can describe as a system: a collection of entities that interact with each other and with their contextual conditions through a limited number of relationships.
To understand a system we are interested in, and to predict the system’s future states, we can construct a model. The model requires comprehensive and reliable input (data and parameters of the system) in order to have a chance of generating reliable output. However, even with good input we do not always get highly reliable output, as models of the same segment of reality can be constructed in various ways. In epidemiological models some of this variability is reflected in the distinction between best- and worst-case scenarios, as the sketch below illustrates.
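To make this concrete, consider a minimal sketch of a compartmental SIR model in Python. This is our own illustration, not any model actually used to advise policy, and every parameter value in it is an invented assumption. Varying a single uncertain input, the transmission rate, is enough to separate a “best-case” from a “worst-case” trajectory even though the model structure is identical.

```python
# Minimal SIR (Susceptible-Infected-Recovered) sketch.
# All parameter values are illustrative assumptions, not estimates
# for SARS-CoV-2 or any real population.

def sir(beta, gamma=0.1, n=1_000_000, i0=100, days=180):
    """Integrate a discrete-time SIR model; return daily infected counts."""
    s, i, r = n - i0, i0, 0
    infected = []
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return infected

# The same model structure, under two assumed transmission rates,
# yields very different peaks -- a crude "best vs. worst case".
for label, beta in [("best case", 0.15), ("worst case", 0.30)]:
    peak = max(sir(beta))
    print(f"{label}: beta={beta}, peak infected = {peak:,.0f}")
```

Everything the sketch leaves out – behavioural change, spatial structure, testing coverage – is precisely the kind of abstraction discussed next.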
But more basic uncertainties hide behind this. They include natural variation in how receptive different climates are to the virus, variations and constraints in people’s reactions to containment measures, elementary unknowns about the prevalence of the virus in the population and the reliability of various testing kits, abstractions in the construction of composite indicators of the stringency of measures, under- or over-reporting by health care units due to external factors, and so on. This of course does not mean that models are not useful. But they do require an understanding of the gross abstraction from reality upon which every model must build.
In a pandemic, decision makers typically want numbers. Some numbers can be given with reasonable confidence (e.g. the number of infections given a certain number of tests, provided the tests are reliable and reliably administered, and considering the extent to which the whole population is or can be tested).
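As a hedged illustration of what “provided the tests are reliable” means quantitatively, the sketch below applies the standard Rogan–Gladen correction for test sensitivity and specificity. The correction itself is textbook epidemiology, not something this manuscript prescribes, and all the figures are invented for the example.

```python
# Illustrative only: how test sensitivity and specificity change
# the inference from "positive tests" to "infections".
# All numbers below are invented for the example.

def true_prevalence(observed_positive_rate, sensitivity, specificity):
    """Rogan-Gladen correction: estimated true prevalence given the
    observed share of positive tests and the test's accuracy."""
    return (observed_positive_rate + specificity - 1) / (sensitivity + specificity - 1)

observed = 0.08  # assume 8% of administered tests come back positive
est = true_prevalence(observed, sensitivity=0.85, specificity=0.98)
print(f"Estimated true prevalence among those tested: {est:.1%}")
# With this imperfect test, 8% positives corresponds to roughly 7.2%
# true prevalence -- and that still says nothing about the untested
# part of the population.
```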
Yet numbers (and especially graphical representations) are powerful rhetorical tools, as the wide use of graphs to communicate public health messages and to support or criticize certain approaches shows. Numbers can also be used to drive home an argument and convince decision makers even if they do not necessarily grasp how the number was generated in the first place. The temptation is therefore great to extend the use of numbers into areas where uncertainties are overwhelming.
Of course, communicating uncertainty at a time when rapid decisions are needed is not without challenges. While a frank admission of uncertainty preserves scientific integrity by not passing off mere guesswork as scientific information, it simultaneously hands politics a clear necessity to prepare for the worst imaginable case, lest the political price be too high. So how should scientists behave when producing and communicating science for use in policy and political decision-making in times of crisis? Below we discuss how responsibility and transdisciplinarity come into play.
For much of the twentieth century the division of moral labour between “science” and “society” entailed scientists focusing on their research, while lawyers, politicians and regulators looked after the ethical, social and political side. This division of labour began to change towards the late twentieth century (e.g. with the “responsible research and innovation” framework), with an increasing demand upon scientists to take greater ex-ante responsibility for their ideas and actions by thinking upstream through the potential impacts and unintended consequences of their science.
This social responsibility of scientists must extend to times of crisis. For a scientific expert, a gross error in their estimates may, at worst, lead to a loss of reputation and of trust in their judgement. Political decision-makers, however, are accountable to society. Even in a pandemic, they cannot base their decisions on scientific information alone. All measures have large economic as well as psychological impacts, and side-effects in areas not usually explicitly addressed in scientific recommendations.
In short, the complexity of the factors entering a good decision should not be underestimated. At the same time, there is an argument to be made for unity among scientific voices. A display of deep disagreement among experts is prone to undermine support for the measures taken by the government; the confrontation and critical assessment of alternative views and models that are routine in normal science do not translate well into the public arena.
Responsible science differentiates between the different fora it addresses and presents its contributions accordingly. The most effective strategy for arriving at robust input into governmental responses to a pandemic is for scientists to have their ideas vetted in transdisciplinary fora before they are handed over to politics and the public. Between science and policy there also needs to be competent, formally established brokerage.
The much-criticized “herd immunity” model, with its implicit utilitarian and even eugenicist ethical framework, is a good example of how mathematical models without the input of other disciplines are not only impoverished but possibly dangerous.
But other, seemingly more benign tools can be equally problematic. One such tool is the QALY (quality-adjusted life year), attractive to modellers because it quantifies effects and thus provides the basis for explicit trade-offs. Yet QALYs sit firmly within utilitarian ethics. They do not include deontological principles such as respect for life and human autonomy. Deeper issues, such as the question of what kind of society we want to live in – is a society where the lives of some are risked to protect the lives of others acceptable? – must be considered.
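The arithmetic that makes QALY trade-offs explicit is simple, which is part of its rhetorical power. The sketch below is a generic illustration of that arithmetic; the interventions, durations and quality weights are all invented for the example.

```python
# Illustrative QALY arithmetic (all durations and weights invented).
# A QALY multiplies life-years gained by a quality weight in [0, 1],
# which is what makes trade-offs between interventions explicit.

interventions = {
    "treatment A": (10, 0.70),  # (life-years gained, quality weight)
    "treatment B": (6, 0.95),
}
for name, (years, weight) in interventions.items():
    print(f"{name}: {years} years x {weight} = {years * weight:.1f} QALYs")
# Treatment A "wins" on this metric (7.0 vs 5.7 QALYs) -- but the
# calculation encodes utilitarian assumptions and is silent on
# deontological principles such as respect for life and autonomy.
```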
Another example where a transdisciplinary approach is needed concerns assumptions about which types of measures may be palatable in different societies. Casual grouping of East Asian countries together ignores differences not only in the approaches that they take, but also in political and social conditions as well as demographic characteristics. For New Zealand in particular, assumptions of individualism (understood to be inherent to “Western democracies”) directly negate the bicultural foundations of the society and the strong collectivist tradition among both Māori and Pasifika peoples, alongside other ethnic and cultural groups. Here the input of social and political scientists, not to mention indigenous groups, would be of utmost significance.