The scientific endeavour is now a more public exercise than ever before, with high expectations on the part of an increasingly science-savvy (and often sceptical) public. This shift to a more public face of science has multiple and interacting impacts that, arguably, the science community is only just beginning to understand at a practical level. Far beyond Gibbons’ tacit “social contract”, science and society (in the most pluralistic sense of both terms) are now engaged much more explicitly in an ongoing dialogue and negotiation about: what research to fund; how to fund it; what funding and research trade-offs are made; what to do with the knowledge produced; and whether it is appropriately accessible to meet public and various end-user expectations.
As evidenced by the recent oversubscribed conference on Science Advice to Governments in Auckland, this increasingly public face of science brings with it an urgent need to create and implement tools and mechanisms for mobilising knowledge, so that science can inform public policy, address societal challenges and assist in discussions of the social licence for applying (or limiting) new technologies. This more public nature of science is made manifest in a variety of ways, such as the establishment of new public roles for scientists, ranging from communicators and expert commentators to formal governmental advisors.
But this welcome – and overdue – transformation is also exposing a set of professional challenges that the science community needs to consider. Superficially one might imagine that the rules of engagement between the science sector and the public are clear, but in reality the issues are nuanced and ever-evolving. Perspectives also differ somewhat across societies, reflecting their various cultural and societal histories.
Firstly, it is important to note that scientists in different roles carry different obligations and expectations. Roger Pielke has offered a high-level taxonomy of these roles, but a more detailed analysis may be needed to address some of the emergent issues. Scientists with formal public appointments, such as science advisors, are generally expected to be honest brokers of knowledge (to use Pielke’s terminology): that is, to transmit and interpret knowledge by helping to clarify what is known and not known about an issue in its context, and what the evidence can say about the implications of the range of options for responding to it. This must be done without becoming an advocate for any position beyond what can be inferred from analysis of the data according to the standards of scientific practice. Expert advisory groups and academies are generally expected to take on similar roles. But the latter sometimes also assume a more overt advocacy role, which is in accordance with the conventions of academic freedom but which can create confusion. Academies around the world are grappling with this challenge as they take a greater role in both policy and societal dialogues.
Whereas the public scientist appointed by a governmental agency to a formal role as panel chair or advisor should primarily be expected to act as an honest broker of knowledge, individual scientists can and should exercise their democratic rights to advocate for what they believe in, while informing that advocacy with their own expertise. But they may also use their status as scientists to give their advocacy greater weight. It is this latter case that presents the greatest challenges: to scientists, who may struggle to convince sceptics that their expertise is not biased by their convictions, and to publics, who may question whether to trust a seemingly over-zealous scientist. Even more complex situations arise when scientists are employed by an interest group. In each of these cases the boundary between expertise and filtered advice can easily become blurred.
To be sure, individual scientists play a critical role in transmitting their knowledge to the public in a number of ways – be it by publicising their research, giving public lectures, writing books or commenting on complex issues of public concern. This increased visibility is essential and plays an important part in changing the relationship between science and society. But it brings with it a number of communication and knowledge-translation challenges, which emerge in part because much of this work is undertaken informally, seizing opportunities as they arise, and most often in the absence of professional tools or guidelines. Publicly funded scientists should be applauded and encouraged to engage publicly as much as possible – and the need to help them do so in a clear, constructive and consistent way is becoming increasingly evident, particularly where individual scientists act in a variety of public contexts, for example as panellists, commentators or expert advisors.
Inherent in this discussion is the absolute need to maintain the integrity of, and public confidence in, the body scientific. Indeed, how the public discourse is conducted must influence how the public (and the body politic) perceives the scientific endeavour – both in general and in specific cases (climate science being perhaps the most obvious example). Arguably, democracy is hurt when science becomes a proxy for debates that should legitimately occur in other spheres. Taking advantage of scientific complexity to justify inaction on climate change is but one example: the real discussion needed there is not about the science but about risk management and intergenerational public investment. Similarly, is the debate over genetically modified food really a scientific debate about safety and risk, or is it primarily an ideological debate about corporate influences on the food supply and over philosophical definitions of what is natural? It may well be both, but let us be explicit when shifting the frame of discussion from one to the other.
Science’s key role in society depends in no small part on its claim to a privileged place in knowledge production and hence in the processes of public reason. When that claim is undermined by scientific misconduct, hubris or bias, or when advocacy extends beyond the reasonable boundaries of inference that the data allow, the explanatory power of science is jeopardised[1]. Career pressures on scientists to publish and to patent, and the rise of scientific celebrity, have introduced new incentives and opportunities that can be as destructive as they are constructive to the public’s relationship with, and trust in, the science sector.
As scientists, we must be willing to acknowledge the limits of science and to avoid hyperbole and exaggeration in reporting its findings. As Heather Douglas pointed out in her book Science, Policy, and the Value-Free Ideal, there is nearly always an inferential gap between what the data show and what is concluded. Over-confidence and unexamined bias in the scientific decisions we make pose a serious threat not only to robust knowledge production, but also to any hope of societal consensus on the application of new knowledge.
Because the public funding of science is also rightly seen as an investment from which societal and economic returns are expected, these issues are compounded by many governments now expecting academic researchers to engage more with industry. Here there is potential for both real and perceived conflicts of interest. But even with the most robust policies to manage these risks, complex questions will remain, such as whether the presence of a declared conflict automatically discredits expertise, or whether ties to industry are any different from ties to a particular advocacy group. Indeed, many scientific publishers and granting bodies now require specific declaration of engagement both with for-profit entities and with those in the charitable sector. Yet, arguably, the scientist-advocate often enjoys more public trust than the merchant-scientist or the scientist-entrepreneur. In all cases, it is the transparency of data, the reproducibility of results, the quality of peer review and the integrity of the scientist that provide the fundamental assurances of trustworthy science in the public sphere.
In the end, these issues are not black and white, and there may be important differences across cultures and borders in what the public expects of the scientists it funds. There will always be ambiguity in the boundaries drawn around the role of the public scientist, perhaps even more so as most advanced economies move toward greater integration of publicly funded science with private-sector interests in support of economic goals and expectations. But ultimately society needs to have confidence in its science sector, and the integrity of the science is the first – but by no means only – step towards ensuring the integrity of its purpose. When scientists can communicate their work accessibly and responsibly, there is a greater opportunity for the public and decision-makers to consider it meaningfully, and for it to have impact.
This paper suggests that the scope of issues facing today’s science culture is expanding, and that these issues are becoming more complex and more apparent; yet their potential impact has been poorly considered by scientific academies worldwide. One exception is the recent revision of the code of conduct for scientists by the Science Council of Japan, which gives a high profile to the relationship between science and society, including the importance of integrity and of trust among all stakeholders. The code emerged from post-Fukushima deliberations, when the Japanese scientific and policy communities reflected extensively on the issues exposed in the wake of the tsunami. In New Zealand, the Government will be referring these matters to its national academy for consideration as part of an initiative to promote the interaction between Science and Society.
[1] Further reflection on the impact of scientific misconduct is beyond the scope of this commentary, except to suggest that the changing business models of the publishing sector are contributing to this issue.