An abridged version of this text was presented by Sir Peter in his address at the New Zealand Association of Scientists’ Annual Conference, “Speaking Out: Going public on difficult issues”, in Wellington, New Zealand, on 10 April 2015.
We live in a world that is sometimes called a post-trust society. With today’s nearly boundless access to news and information about science, many of the claims and counterclaims can be confusing. Yet at the same time, there has never been a more vocal public call (and need) for an active role for scientific expertise in the business of developing societal consensus, governing and law making. Science has truly become a necessary tool of democracy.
So how do we reconcile these tensions and ensure an appropriate place for science and scientists in societal decision-making and public policy making, where science must have a critical role? How do we ensure the scientific community can identify and address those issues that have the potential to undermine trust?
In this essay, I argue that it is best done by playing it straight, being consistent, and recognising and labelling our own limits and biases as scientists. Above all, it is by recognising that trust – of the public, of policy makers, of politicians and of professional peers – in science must be earned and actively maintained. This is not always easy in a competitive and sceptical professional environment which privileges independence and peer recognition. And yet, now more than ever before, we must be cognisant of our public roles as scientists. The social contract for science has changed and continues to change dramatically, and we scientists should be actively involved in its reshaping since, by any measure, it includes a considerable public role for scientists.
It is instructive here to consider what the public role of science has been until now. For much of history beyond the classical period, the answer is a simple one: little or none. Or so it was, at least up until the modern inter-war period, and even then it was rather limited until perhaps the late 1980s. Before then the scientist with a media profile was, too often, looked upon with suspicion by his or her colleagues.
In the 17th and 18th centuries, the professional scientist was non-existent. Scientists were amateurs and tinkerers, often of the leisure class or having a patron who would fund their idiosyncratic ‘hobby.’ To be sure, universities existed, but far from being the hotbeds of scientific exploration and the advancement of knowledge that we like to see them as today, they were in fact conservative bastions of tradition and faith, and generally saw scholarship as received wisdom.
It wasn’t until the late 18th and early 19th centuries that we see the advent of the professional scientist – ‘scientist’ itself being a term coined by William Whewell in 1833. This was the beginning of the period that, over the next fifty years, saw immense global activity in truly developing and refining many of the norms and operational standards of science as a self-regulating professional activity. It is on the basis of these enduring norms and standards that society could begin building its trust in science.
The public funding of science as we know it today began about 100 years ago. The 1918 Haldane report in the UK was central to establishing the crucial way in which science has evolved free of political interference. However, it also had the effect of reinforcing the separation between science and society, entrenching a science-centric view among scientists about how decisions on science policy should be made. The work of Robert Merton[1], a pioneer of Science Studies, was influential in this period in cementing the vision of an autonomous culture of science, standing apart from the rest of society, while also instructing it. In today’s context, this patronising vision is considered to be an extinct conceptual model. But that doesn’t stop it from being a tempting characterisation that scientists can easily revert to.
It wasn’t until the mid to late 20th century that we see the beginnings of a real relationship between science and society. War-time and post-war science ushered in the concept of ‘Big Science’, which was about big public infrastructure, technical grand challenges (putting a man on the moon), and eventually about speeding up and globalising scientific production, with the birth of the Internet. Let’s not forget that the World Wide Web itself originated in Big Science, at CERN.
Which brings us to where we are today: sometimes called the age of ‘post-normal science’ – a term first used by Funtowicz and Ravetz[2] in the early 1990s – today’s science is characterised by new and unprecedented operational and methodological realities which embrace uncertainties, contingencies, interdisciplinary approaches and the co-production of knowledge. There is an increasing acknowledgement that science is a societal endeavour, which can mean working more with knowledge end-users to make our science useful; democratising the research agenda; and listening carefully to public discourse about technology and social license, among other things. But in doing all this, it must also mean protecting and upholding the standards and practices that make science trustworthy in the first place.
So what can we learn from this woefully abridged history lesson? Cliché though it may be, by understanding our journey, we can better see where we are and understand that the place of science in society is dynamic and continues to evolve. Indeed we are all actively involved in shaping it as much as it is shaping us as scientists. It is easy to forget that science systems have not been fixed but have themselves changed dramatically in the last 50 years. As I have described elsewhere, public science systems are now undergoing a period of particular instability and change driven by both extrinsic and intrinsic factors. Scientists will both have to play a role in designing these changes and accept that change is inevitable.
The brief appeal to history also serves to remind us that any scientist practising today, depending on their age, either has lived through an unprecedented transition in the way that science interacts with society (including through public policy and industry), or has been professionally ‘born into’ this structure and therefore may take this relationship for granted. The older group may be indignant about the new societal demands foisted upon science, and the younger group may simply not see it for the revolution that it is. Either way, as scientists we are not prone to voluntarily unpacking this relationship and truly understanding its imperatives, its opportunities and – indeed – its challenges.
Two of those challenges are the evolving, highly contextual and potentially conflicting perceptions of the role of the public expert on the one hand and the public’s attitude toward science on the other. At times there is public scepticism towards new and controversial technologies, while at other times there is high public support for more science-informed decision-making in the public sphere. Sometimes these can be in overt conflict. In both cases, the new public role of science means that it is now a critical public and political resource across the spectrum. And that is what makes trusting the scientist complicated.
I often use the definition of science that social anthropologist Jonathan Marks[3] formulated in his book Why I am not a scientist. Marks and others suggest that science provides the only processes by which we can gather relatively reliable information about our world. It is important to note that this definition sees science as a set of processes, not facts, and accepts that science cannot provide all the answers. Protecting and promulgating the integrity of these processes is the key feature that legitimates the expertise of the scientist.
And yet even “expertise” is not immune to critical questioning. How do we know when we have attained it in sufficient measure so as to advise others? How can we ascertain its boundaries when we speak about our work to non-scientists? Jürgen Habermas[4] was the first to problematise the concept of expertise. Curiously, much of the sociology that followed Habermas’ work served to denounce the role of public experts, calling it out as elitist and counterproductive to democracy. This is perhaps in keeping with the tendencies of 1960s sociology, but this research did have the effect of ushering in a wider academic and public interest in the role of the expert. The influential empirical studies that followed throughout the 1980s began to view the legitimacy of expert advice as a combination of authority, built on access to specialised knowledge, and – importantly – trust.
In building and maintaining trust, two concepts that I have already mentioned from studies of the sociology of science stand out and, I think, really shape the way we should think about our jobs as scientists today. These are: 1) the rise of post-normal science and 2) knowledge co-production.
Post-normal science increasingly deals with complex and interdependent systems and feedbacks, with uncertainties, and with a probabilistic rather than mechanistic approach. It is also characterised by research in areas of high public interest and urgency. Social issues, climate change and biodiversity are all examples of post-normal science. Virtually every issue of public and political contention in which science is involved fits this definition. Indeed, as science engages with ever more complex issues as a result of both analytical and computational progress, it becomes increasingly post-normal.
So it can be complicated to communicate legitimate expertise in the context of post-normal science. It is too easy for uncertainties to be exploited or for information to be cherry-picked to support biases. And as the science of communication and decision-making becomes clearer, we understand that inherent biases are not easily overcome by simply presenting science-based knowledge, which adds a further challenge.
The second concept, knowledge co-production, originated with scholars of Science and Technology Studies (STS), notably Sheila Jasanoff[5]. It recognises that the institutions of science and those of society are actively shaping one another. We live in a world increasingly defined by the products of science. But science and technology are themselves influenced by social, cultural and political institutions – and so it goes in a continual iterative cycle. The better we understand this process, the more able we are to be deliberate about co-production in the establishment and maintenance of trust in science.
This means making a space for the public voice in the scientific enterprise. Most commonly, this is done through elected public representatives involved in setting the research agenda through establishing science policy. At a more grassroots level, we see deliberative public dialogues taking place around controversial questions with science at their heart: either as a source of concern or a possible solution.
Central to all of this work is to clearly delineate the role of science and the role of public values. In her book Science, Policy, and the Value-Free Ideal, Heather Douglas[6] points out that science can never be value-free because there are some critical points where values enter into the production of knowledge: what to ask; how to ask it; the ethics surrounding it; and the judgment needed to assess whether there is a sufficiency of evidence on which to take action. None of these steps is the domain of the scientist alone. But scientists must protect the collection and analysis of data and the robust formal processes of science, as much as possible, from our own and anyone else’s values. Even in the context of today’s post-normal science, the enduring standards of the scientific process and the checks and balances of rigorous peer review are what give science its legitimacy and secure its privileged place among epistemologies. They are the foundation of public trust in science.
Human values inevitably will and must enter into the question of the application of science. This is fitting because science is a public tool and a common good. Indeed the issue of ‘social license’ for scientific and technological innovation is enormous, yet arguably the scientific community has been insufficiently attentive to it in general. Greater public access to science-based knowledge has allowed for a far broader societal conversation, but the quality of that conversation is highly variable depending on how it is conducted. Too often, science can be co-opted for a values-laden debate rather than informing the public discussion on contentious topics. Such tactics render impossible any meaningful engagement about the uses and limits of technologies and the inevitable trade-offs that must be considered.
The core theme of the 2015 NZAS annual meeting (and the impetus for this essay) is fundamentally one of the communication of science and the rights and responsibilities of scientists. Here the very nuanced boundary between the appropriately (if only relatively) value-free content of science and the appropriately values-rich use of science creates real challenges for both scientists and the publics they ultimately serve. And this creates major issues in thinking about trust in science and scientists.
Trust is fragile. Sloppy, fraudulent or – to use Ben Goldacre’s term – ‘bad science’, and the misapplication of science without regard for social license, increasingly gets media attention, giving rise to justifiable concerns to which not enough attention is paid. This, combined with societal responses to the pace of technological change and the online ease of access to pseudo-science, leads to a sizable percentage of the population developing unease or frank distrust of science.
The recent public survey commissioned by MBIE (echoed in similar surveys internationally) shows that, while the majority of respondents consider science to be important, nearly half think the science they hear about is too complicated or, worse, too contradictory to understand. In my role I have to address some of these issues: no matter what the science says about fluoride or vaccines or reproductive technologies, for instance, these will always be abhorrent to some in the population for a variety of philosophical and other reasons.
To be sure, society has a right to override science in restricting the use of any technology, but society is best served when rhetoric and hyperbole do not drown out either the measured and evolving scientific discussion or its ability to properly inform public debate.
Yet the nature of the modern media and issues-based advocacy is such that the very issues that should be discussed dispassionately often are not. This is perhaps most exemplified in the shifting meaning and use of precaution and the precautionary principle, from one of adaptive management of risks based on our growing knowledge base, to one of total inaction. French STS scholar Michel Callon[7] and his colleagues first illuminated this shift some fifteen years ago, reminding us that the precautionary principle is a framework for measured action, not abstention, while science continues to reduce uncertainties. But its strategic use by all sides of ideological debates has entrenched polarised positions in which science can be co-opted in ways that both undermine public confidence in science and limit its future value to society.
In finding a way through this, it is helpful to recall the four types of interaction between scientist and society that Roger Pielke[8] developed as a heuristic in his book The Honest Broker. I will focus on the two outward-facing constructs – the Issue Advocate and the Honest Broker.
The Issue Advocate is the scientist who collects and presents data with a view to serving a cause. While it should be incumbent on such a person to apply standard scientific practice and to reflect the scientific consensus, conscious or unconscious filtering can often occur such that the scientific argument is shaped to direct a particular course of action. Yet the Issue Advocate is a role that many scientists can and should play in the public arena, and it is indeed an important role in elevating issues in the public mind. The difficulty arises when the distinction is lost between presenting the scientific consensus and actively advocating. This can compromise the integrity of the science and undermine its arguably privileged status as an input into policy.
But of course scientists are also citizens with absolute rights as citizens to be active and engaged actors in issues about which they feel strongly. The challenge is to manage the tension that may arise between our private and public faces.
The Honest Broker tries to identify and overcome biases to present what is known, what is not known, what the scientific consensus is, what the implications for policy and action are, and the trade-offs of various options. This is the role that science advisors – whether committees or individuals – are expected to take.
These distinctions are not new. What is new, perhaps, is how important they have become to the wider public discourse. One measure of their public salience is the recent cover story in the popular American periodical National Geographic dealing with public mistrust of science. In this piece Washington Post science writer Joel Achenbach[9] highlights the thoughts of noted science communicators, which are worth quoting:
“Some environmental activists want scientists to emerge from their ivory towers and get more involved in policy battles. Any scientist going that route needs to do so carefully. That line between science communication and advocacy is very hard to step back from. In the debate over climate change the central allegation of the skeptics is that the science saying it’s real and a serious threat is politically tinged, driven by environmental activism and not hard data. That is not true and it slanders honest scientists. But it becomes more likely to be seen as plausible if scientists go beyond their professional expertise and begin advocating specific policies.”
So with this in mind let us reflect on the various types of scientist-societal interaction within the NZ science community.
What is important is to recognise the multiple roles and their relative positioning within the public science system. I am not going to comment on the growing profession of science communication but I think many of the same principles I have addressed apply there too.
For those such as myself and the growing number of departmental science advisors, our roles and obligations as Honest Brokers are clear whether we operate alone, within committees, or by establishing working groups to address particular questions more deliberatively. Indeed, this is a model that I’ve adopted recently with the Royal Society of New Zealand. I will not comment on those roles further as they are well described in documentation stemming from the Auckland international conference on science advice to governments, available here and here. However, I will note here that the primary role of my Office is to improve the use of evidence in policy formation and the use of science to benefit NZ’s interests rather than to advance the interests of the science community per se – this is a distinction that is not always fully appreciated.
Then there are the many scientists employed within government agencies – primarily in ministries such as DOC, EPA and MPI – who deal with policy and regulatory issues, and often also conduct research. By virtue of their employment, such individuals are bound by the rules of State Services that generally require the consent of management to communicate their research outside of their workplace. To be sure, some agencies such as DOC encourage their staff to be part of the scientific community and openly promote their scientific conservation efforts. But fundamentally, no civil service anywhere gives employees the right to be free agents outside their employment on the very matters they are employed to deal with. This is normative in the operation of a public service.
University academics, for their part, are in quite a different position. They operate under the principle of academic freedom that is enshrined in New Zealand law within the role of the university. However this principle has its own parameters. The notion of academic freedom is the pursuit of a line of scientific enquiry unfettered except by the provisions of research ethics and scientific integrity. Yet the function of peer review ensures that even academic freedom has (indeed must have) its own bounds; it is a privilege extended on the basis of individual expertise that is not shaped by a particular bias or perspective. Society treasures that academic freedom and, in general, university staff have neither abused it nor ignored it. As a result, the excellent profile of many academics in the media is well deserved.
The only caveat I would make is the growing evidence that university press offices tend to over-hype research success stories, which can undermine confidence in the science and the scientist. One scientist cannot cure diabetes five times in their career.
Academics are also increasingly being engaged by governments in advisory processes and clearly this is highly desirable for a well-functioning democracy. But it is important that any academics speaking out to government or the public delineate the limits of their expertise. When I am asked to advise on specific issues, I identify the subject matter experts and serve as a conduit for translating relevant information to the government. But importantly, there needs to be consistency between what individual academics say in public and when part of an advisory process. Where there is inconsistency, distrust rapidly arises.
Perhaps the biggest emergent issue for the academic, however, is that of real or perceived conflicts of interest. These exist everywhere but are particularly apparent in small countries. Most can be easily handled in a transparent manner. But the conflicts that create the most difficulty are those that arise because of sources of funding – whether from the private sector or from civil society organisations. Transparency is critical, but there is no doubt that the increasing drive worldwide to engage the private sector with universities is creating tensions. Arguably, less attention has been paid to scientists who are supported by issues-based NGOs, but this too gives rise to conflicts, and increasingly journals are expecting all such interests to be declared. This whole area is complex, and the growing dependence of academics on co-funding means that these issues provide an easy target for criticism which is sometimes justified and sometimes not. Parts of the academic community remain suspicious about the quality of science produced in partnership with the private sector. This is a mistaken generalisation: such research consistently tends to be among the most highly cited in peer-reviewed journals. It will require ongoing discussion within the community of scholars and with civil society to get beyond kneejerk reactions and find realistic solutions.
New Zealand is unusual in that half of its publicly funded scientists operate in government-owned research institutes outside the university sector. Excluding defence-related activities, equivalent arrangements in many other countries, with some exceptions within Europe, are far more modest. Here again, history is instructive: NZ science largely occurred within the Department of Scientific and Industrial Research (DSIR), which emerged out of the war effort, until our universities started to promote research degrees in the 1960s and 70s. DSIR and then the CRIs, along with research teams in some government departments such as the then Ministry of Agriculture, were designed to prioritise the research effort in areas where the universities were not seen to have a primary role. But what was distinctive was that, in 1992, CRIs were set up as state-owned companies with the multiple missions of conducting research that government needed, undertaking public good research, supporting and assisting private sector research, and making a return on investment. This range of roles can create ongoing tensions and angst.
Many university-based researchers have partnerships with the private sector supported by contracts and agreements. Academics are careful not to compromise these contracts, which may establish some parameters for public communication. But university technology transfer offices (TTOs) are generally very good at limiting clauses that restrict communication, and beyond this, universities leave decisions over public communication to individual academics.
So it is understandable that some CRI employees would prefer to operate under the same rules as academic staff, but this is not the case legally. They are employees of a Crown-owned company, and with that status come roles different from those enshrined in the Universities Act. In effect this means less autonomy over public communication. Whether these restrictions are always necessary to the extent that some perceive them to exist is another matter.
Given that so many of our scientists are employed in CRIs and in key areas of public interest, this is an issue that really merits reflection. But in so doing, let us also remember that the practices governing the conduct of CRI scientists were adopted directly from predecessor agencies. The rules have not changed. What has changed, perhaps, is the nature of the relationship between science and society and the societal context in which CRIs operate.
Have CRI managers become too risk averse in managing the public role of CRIs? Should they encourage their staff to engage more in public communication, particularly in the role of Honest Broker? As I understand it, the issue seems to be an assumption that enhanced public communication could harm the ability to win commercial contracts or contracts from the Crown. Yet there is no fundamental reason why this should be the case, provided that sensitivities are appropriately handled and contextualised, as university TTOs have done.
Thus, while tensions would likely arise with management for a staff member acting as an Issue Advocate given the mix of functions that CRIs perform, there is real merit in a constructive dialogue with CRI management over enhancing the public sharing of expertise. I have started such a discussion with Science NZ on this very matter.
Crises and emergencies are a special case where the responsibilities and obligations of science to society become particularly acute. As UK Science Adviser Sir Mark Walport pointed out in his address to the global meeting on science advice to governments held in Auckland last August, during crises the scientists close to government (local or central) effectively become much more influential on decision makers, rather than simply acting as advisors. Scientists in these roles thus have particular responsibilities in how they package their advice.
And these issues spill over into the public domain for other scientists who engage in such situations. Indeed, it has been noted that non-specialist policy advisors are often influenced by their reading of the media, ideas from which will often flow into the totality of their policy advice. Thus, it is imperative to ensure that the scientific message carried through the media in times of crisis is appropriate to the situation, which of course is fast changing.
It is not surprising that events such as L’Aquila and Fukushima have exposed issues about scientific trust and advice, particularly in crises, and have led to a lot of soul searching by the science policy community. Many august bodies, including the Global Science Forum of the OECD, have been reviewing their guidelines relating to trust and integrity in crises as a result of such events. This conversation has expanded to a broader consideration of the responsibilities of scientists in public communication.
Perhaps the most important declaration is the Singapore Statement on Research Integrity, published in 2010. This statement was the result of the Second World Conference on Research Integrity, which involved more than 50 countries and included researchers, funders, representatives of universities and research institutes, and research publishers. Article 10, on Public Communication, is most relevant to the present discussion:
Researchers should limit professional comments to their recognized expertise when engaged in public discussions about the application and importance of research findings and clearly distinguish professional comments from opinions based on personal views.
Of course such a statement is open to multiple interpretations especially by a researcher who may well have unconscious or conscious biases that conflate what this declaration tries to separate, namely professional comments from personal views. And this is where better dialogue across the scientific community is essential.
Similar statements appear in many guidelines produced by academies. The most recent, perhaps, is the revised statement of the Science Council of Japan. Our own Royal Society of NZ has a code of ethics with very similar sentiments within it. However, one does wonder just how many scientists or trainees are aware of such codes, and whether the codes can account for – by way of example – the new methods of communication and public engagement now at our disposal.
The culture and structures of science thrive on sceptical and constructive debate, but it does us no good to confuse this with our obligations to the public. In the media, this confusion often manifests in the drive to find a contrarian view in order to present a false ‘balance’ in a putative debate, completely disregarding any overarching scientific consensus and emphasising instead the uncertainties. Here, the media are seeking controversy, and we need to work with them to end this practice.
In my address last year to the opening of the Congress of the International Council of Science in Auckland, I suggested that we needed to give much more attention to the issues of civics within science training. Science is embedded in society and all scientists – but particularly emerging scientists – need a greater awareness of the way science is engaged with society and the obligations it places on them.
Clearly the relationship between science and society is changing. It is in the context of this change that the National Science Challenges panel recommended to Government the Science in Society initiative that culminated in the release of A Nation of Curious Minds, which is designed to encourage the science community to reach out more, and in many different ways. Within this initiative was the recommendation that the RSNZ membership consider whether its code of ethics is up to date or might need revision for the ever-changing context I have just described. I believe that asking the science community to independently review its own code, which may or may not need any change, is a healthy thing. I remain astonished that this recommendation could be misinterpreted – it is clear that the intent of the strategy is to promote science-societal interactions.
I have argued that science ultimately depends on trust and integrity. There is increasing concern about this globally. As the enterprise has grown, so too have its controversies. Science communication is but one part of the science enterprise, but it too must have integrity if the reliability of science is to be protected. It is easy to blur boundaries – when does scientific debate over complex matters stop being a scientific debate and become a values debate? And scientists take different roles in such debates. They can be acting as Honest Brokers, or they can be acting as Issue Advocates. In both cases, do they make clear the limits of their expertise? Are they clear about whether they are simply transmitting rigorous and reliable information or whether they are conflicted by virtue of organisation, employment or deeply held belief?
As scientists, we may find ourselves providing knowledge and advice to actors clearly positioned on differing sides of an issue. But if we are to be Honest Brokers, we must be consistent. If we perceive that our evidence is something that an audience finds difficult to understand, or even to hear, then it is not the message that should change, it is the method of transmission that needs to be addressed.
There are no easy answers. As scientists, we want to be active and engaged members of society. But when we use our privileged position to speak to governments or to the public, we need to try to be Honest Brokers of knowledge. Alternatively, we must make our vested interests clear and understood if we choose to take on an advocacy role. For those among us who wish to advance a cause that extends beyond the limits of our expertise, this is our right as citizens. But as scientists, we have a responsibility to the public to position our comments appropriately.
Trust and integrity will remain core to our contribution to the advancement of knowledge and its application to create better environments, societies and a healthier economy and planet.
I thank Kristiann Allen for her help with this essay and Stephen Goldson for his critical comments.
[1] Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126.
[2] Funtowicz, S. O., & Ravetz, J. R. (1993). The emergence of post-normal science. In Science, Politics and Morality (pp. 85–123). Springer Netherlands.
[3] Marks, J. (2009). Why I Am Not a Scientist. Berkeley: University of California Press.
[4] Habermas, J. (1970). Technology and science as ‘ideology’. In Toward a Rational Society (pp. 81–122). Boston: Beacon Press.
[5] Jasanoff, S. (Ed.). (2004). States of Knowledge: The Co-production of Science and the Social Order. Routledge.
[6] Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press.
[7] Callon, M., Lascoumes, P., & Barthe, Y. (2001). Acting in an Uncertain World: An Essay on Technical Democracy (G. Burchell, Trans.).
[8] Pielke, R. A. (2007). The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press.
[9] Achenbach, J. (2015). The age of disbelief. National Geographic, March 2015, pp. 30–47.