Most of us would hopefully accept that governments will make better decisions if they use well-developed evidence wisely. At the same time, however, evidence can be ignored, manipulated or even falsely constructed for particular ends. The ability of misleading information to become the basis of political advocacy, strategy and policy making is not new, but it has become much more apparent and is creating great concern. Nor is this a crisis of knowledge or expertise, as some would argue. Rather, what has changed is the nature, speed and pervasiveness of communication and the ease with which individuals can themselves generate and transmit information, whether it is true, altered or false.
There is a paradox here. On one hand, the speed and democratisation of information and communications have offered an unprecedented public platform for policy-relevant discussions that were once the sole domain of technocrats and decision-makers. On the other hand, the deliberative enquiry and verifiable facts that are assumed to be the basis of reasoned arguments underpinning our democratic processes can no longer be taken for granted when misinformation can gather momentum so easily. Moreover, where societal conditions are ripe, we have seen the populist tools of social media serving to call out apparent ‘elitism’. This accusation spares no sector, and science is no exception, but we should consider precisely why this is so.
To be sure, the social media age has magnified the difficulties of addressing misinformation, which is difficult to convincingly correct once posted. Even with retractions or well-researched counter-arguments, decision theory tells us that people will already have made up their minds, hearing what they want to hear because of inherent confirmation biases. Indeed, with our sources of news and information increasingly revolving around social media, many people only receive information that is preselected to match their biases.
With the enormous amount of accessible information (much of it confused and confusing), a popular temptation is to assume that simple access to information is sufficient, without regard for verification or skilled interpretation. However, the hubris and narrow focus that can sometimes accompany expertise can also undermine trust in, and respect for, those who might be best placed to help interpret complex and ambiguous knowledge and to assist in sorting reliable from unreliable information – indeed, in separating fact from fiction.
As I have discussed previously[2], the genie is out of the bottle – the digital world has a dynamic that appears beyond the reach of traditional instruments of democratic government or governance. The very design of liberal democracies may be changing as a result of the big shifts we are experiencing in the relationships between citizens, media, elected representatives, interest groups and experts.
Indeed, what some theorists once called our ‘post-trust’ society now appears to have taken a step further towards what has become a trendy term: ‘post-truth’. In this evolution, knowledge is considered irrelevant in the face of personal beliefs. Cliché and somewhat misleading though it is, the term is nonetheless a useful, if perhaps alarmist, shorthand for some of the changes at the science[3]-policy and science-society interfaces.
Science is caught up in this new and challenging dynamic because – as the primary method of obtaining relatively reliable knowledge of our world and ourselves – it is inevitably intertwined with the functioning of democracy itself. The problem is given added urgency by the fact that virtually every challenge a government faces has a scientific dimension. The issue is whether robust science is available to assist in policy-making and is used well, or is misused, manipulated or ignored.
So we return to my initial proposition that governments will make better decisions if they access and use knowledge wisely. The challenge to be addressed is: what processes and safeguards will be needed (and are available to us) in the 21st century, against the background of what now appears to be evolving as a post-truth/post-trust/post-expert dynamic?
The issue is complex. At a time when it seems that scientific input into public policy is more important than ever to address complex domestic and global issues, there is also an increasing acceptance that science-derived knowledge is but one input into policy, and that much science can provide only probabilities rather than certainties. Added to this is the current trend toward populist politics and the new (or perhaps not-so-new) tendency to equate ‘experts’ with ‘elites’ and then vilify both. Thus a solution that seeks simply to rebrand experts and rebuild trust in science will be insufficient if it assumes that applying more science will alone solve our problems.
It is worth exploring this point. Consider the popularity of the term ‘evidence-based policy’. This oft-invoked label has become a rallying cry for the use of evidence in public policy decisions. But if you consider what these words imply for the practice of policy making, you see that they are presumptuous and not necessarily helpful. A more realistic framing is policy informed by scientific evidence. This may seem a merely semantic shift, but in explaining it some important points can be made.
First, ‘evidence’ to non-scientists, including policy makers and politicians, is not only about the scientific process. Evidence is more often a catch-all for knowledge that might come from tradition, religious belief, local knowledge and personal observation. And personal observation and anecdote are by far the most compelling forms of ‘evidence’ to many non-scientists, including those embedded in the political process.
Yet the plural of anecdote is not data. Scientific evidence has a different basis: it is (hopefully) built on formal processes designed to take biases out of the collection and analysis of data, and to apply those data in an ever-sceptical way to produce evidence. That is not to say science is values-free[4], but the whole point of the scientific method is to try to identify, address and avoid biases in data collection and analysis. Bad science is defined simply by a failure to do so, which can happen at multiple points in the process – including the way the questions are framed in the first place. By contrast, the eventual use of scientific knowledge, that is the application of the evidence, is entirely values-laden, and we shall return to this point soon.
With scientific standards and protocols in place, many scientists assume that scientifically derived evidence should directly determine what the policy settings should be. This is plainly hubris, and it can be a significant barrier to the effective uptake of much science by policy practitioners and decision makers. In reality, both policy-making and the science to support it are far more complex than saying that the evidence shows intervention A reduces the risk of outcome B and therefore the government must invest in intervention A.
To parse this issue in some detail, first consider: what is the evidence that proposed intervention A works? Is it based on one trial, or simply on normative argument? Is it based in contexts that matter (that is, just because it worked somewhere, will it work here and with this population)? Is it scalable? What is the effect size? What is the evidence of effect compared to alternative strategies? What is the counterfactual? Those are all scientific questions.
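To make those questions concrete, here is a minimal illustrative sketch (in Python, using entirely hypothetical trial numbers rather than data from any real study) of the arithmetic behind ‘intervention A reduces the risk of outcome B’: the effect is only interpretable against a counterfactual control group, and its size matters.

```python
# Illustrative only: hypothetical trial numbers, not data from any real study.

def absolute_risk_reduction(events_treated, n_treated, events_control, n_control):
    """Outcome-rate difference between the counterfactual (control) and treated groups."""
    return events_control / n_control - events_treated / n_treated

def number_needed_to_treat(arr):
    """How many people must receive intervention A to prevent one outcome B."""
    return float("inf") if arr <= 0 else 1.0 / arr

# Hypothetical trial: 120 of 1000 controls experience outcome B,
# versus 90 of 1000 people who received intervention A.
arr = absolute_risk_reduction(events_treated=90, n_treated=1000,
                              events_control=120, n_control=1000)
print(f"Absolute risk reduction: {arr:.3f}")                          # 0.030
print(f"Number needed to treat:  {number_needed_to_treat(arr):.0f}")  # ~33
```

Note that a seemingly impressive 25% relative reduction here corresponds to a small absolute effect: roughly 33 people must receive the intervention to prevent one outcome. Whether that is worth the cost is precisely where the values-based questions below take over.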
And then there are the values-based ways to weigh up the evidence, which are no longer purely scientific questions: What is the cost? What are the risks associated with intervention A, and to whom (a matter I shall return to)? What are the trade-offs? Is B a priority in the minds of the public or the politician? Clearly, decisions about implementing a so-called ‘proven’ intervention are not the domain of scientists alone.
The ‘evidence-based policy’ language has echoes of Robert Merton’s mid-20th-century description of science as standing apart from the rest of society, yet informing it as though from a pulpit. Hopefully we are well past that; science does not make policy – it can only inform policy makers in their considerations.
In his recent and insightful book (despite its title, The Politics of Evidence-Based Policy Making)[5], Paul Cairney offers an important insight: policy-relevant science is very good at problem definition but not so good at finding policy-acceptable, scalable and meaningful solutions. Be it climate change or obesity, this is a real challenge for the scientific community – moving from tightly controlled interventions, experiments and modelling to real-world solutions that work in complex systems involves many confounding considerations, as alluded to above. In part this is because the kind of science that leads to problem definition (e.g. climate science) is very different from the kind of science needed to respond, which draws on disciplines from behavioural science and economics to many non-climate technologies, and thus engages many stakeholders, creating enormous political complexity. Ian Boyd has written well about this in a recent essay in Nature[6].
There is a growing interest in comparing evidence-based policy making to evidence-based medicine (EBM), but this is potentially misleading. EBM asks very specific questions, usually about interventions for precisely defined situations, often through meta-analysis. But at the heart of EBM are studies of various types – particularly cohort studies and randomised controlled trials – undertaken by essentially the very people who will interpret the data and judge their utility. Where such an approach is applied to public policy, it is done best in equally narrow micro-level interventions – as, for example, in the work of ‘What Works’ centres and Behavioural Insights Units. These have value, particularly at the level of service providers, but are less likely to have real impact on broader policy objectives. This issue is expanded upon in Justin Parkhurst’s new book The Politics of Evidence[7].
At the other extreme, we are seeing increased potential for the use of big data analytics in policy formation. There is no doubt that there is enormous potential here, and indeed the New Zealand policy system is at the cutting edge in this regard[8]. But as anyone with expertise in big data knows, there are many traps in establishing causality, and many challenges to be worked through in establishing this process. My office is charged with exploring these issues as this approach is further developed and applied to the government’s social investment in New Zealand. It is certainly not a question of promoting blind faith in data, even at a time when it seems most urgently needed!
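As an illustration of one such trap, consider the following minimal simulation (hypothetical variables, not any real policy dataset): a hidden confounder can generate a strong correlation between two measures that have no causal connection at all – exactly the kind of pattern a naive big-data analysis can mistake for a lever to pull.

```python
# A minimal illustrative simulation: a hidden confounder Z drives both X
# and Y, producing a strong X-Y correlation with no causal link between them.
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

z = [random.gauss(0, 1) for _ in range(100_000)]   # hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]        # X is caused by Z
y = [zi + random.gauss(0, 0.5) for zi in z]        # Y is also caused by Z

print(f"corr(X, Y) = {pearson(x, y):.2f}")  # ~0.80, yet X never influences Y
```

The correlation of roughly 0.8 arises entirely from the shared driver Z; intervening on X would do nothing to Y. Causal inference requires design and domain knowledge, not just volume of data.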
But even if we could simply wave the baton of science, there is then the issue of how and where to apply it in the policy process. Democracies by their very nature create messy policy-making processes. The policy cycle is generally not a logical, stepwise and hypothesis-driven journey based on robust evidence. Rather, it is a much messier process involving multiple actors and coalitions of actors, informal and formal, elected and non-elected[9]. While the detail may vary according to the constitutional arrangements of executive and legislative power, most policy making in a Westminster system occurs via the executive, with the legislature providing input when legislation is required (generally at the end of the process) or through the oversight roles of parliamentary select committees. But the meat of everyday policy-making (as opposed to law-making) is ultimately steered by the executive branch, and that is why there is increasing focus worldwide on the knowledge input available to policy makers in the government of the day. That is not to say that parliament as a whole does not need the same robust science-based knowledge, but this is of a different nature, timing and purpose – generally to support select committee processes or to provide parliamentarians with an information service. The latter may be done by parliamentary libraries and their research staff or by specific units such as the UK’s Parliamentary Office of Science and Technology (POST).
Policy makers, like all of us, have limited bandwidth, with many competing priorities to juggle. They cannot pay attention to every issue and so tend to lurch to problems driven by the exigencies of external influences such as public opinion, media attention or the impending ballot box. To be sure, there are policies that evolve slowly over time, with deliberation, and which come somewhat closer to the idealised policy cycle, but these tend to get submerged by the more urgent events that arise. The changed nature of the media cycle, and the impact of social media and the digital world more generally, are making that longer-term focus harder for the policy community, and this is a growing challenge that cabinet offices around the world are facing.
Because of this accelerated and complicated policy environment and the realities of complex science, robust and established knowledge for policy is almost always an illusory ideal. Instead, the available knowledge is generally incomplete and often ambiguous. Yet, for the policy maker, decisions are urgent regardless of the state of knowledge. Policy makers live with uncertainty and ambiguity in every decision they make. Here is a major difference from the science community, which often feels unable to go beyond its mantra of ‘more research is needed’ in such situations. Those of us engaged in knowledge brokerage must deal with this reality.
In a brokerage role however, scientists need to recognise that in a democracy, policy makers have the right to ignore the evidence – even if it is, in our view, unwise, counterproductive and sometimes dangerous to do so. The reasons for doing so involve the many values-based considerations that scientific knowledge might inform but cannot resolve. The very nature of democracy means that there are multiple trade-offs in play in every decision made at every level of government from local to global and different stakeholders have very different perspectives.
So is scientific advice of any value at all – if experts in a troubled ‘post-trust and post-truth’ era are marginalised as elites, and if they can’t fully solve our problems anyway? I would argue emphatically that scientific advice is now more important than ever before. However, to have effect, advisors and advisory structures need a particular sensitivity in how they provide advice and evidence, and this is not always obvious.
Part of the issue is that democracies work through an entanglement of formal and informal relationships and interactions between the political apparatus and policy actors. Scientific input must somehow be inserted appropriately into this entanglement, with due recognition of its limits, but with fidelity. However, the nature of those relationships and interactions has itself changed. This is due to several factors, which are briefly outlined below because they help to define the post-truth/trust/expert context that is our main concern in this essay. And these changes are broader than simply the societal, media and policy dimensions.
Changes in knowledge production
First, there is the enormous expansion of the scientific enterprise. Last year over 3 million scientific papers of variable quality were published in some 30,000 scientific journals – also of variable quality. The reasons for this expansion in scientific output over recent years are complex. In no small part they are driven by the expansion of the tertiary education sector worldwide and by the greater utilitarian expectations that governments place on the science enterprise – a matter I will expand on later in this essay. There is also the ‘bibliometric disease’ that has in effect replaced proper performance assessment in many universities and research institutions and has a dominating effect on the culture of science[10]. Much of the knowledge produced has minimal real-world impact beyond the authors’ CVs, and yet somehow knowledge users in the public, governments and businesses have to work through that morass to separate the reliable from the unreliable science. Issues around reproducibility, not to mention interpretation, abound.
Second, enormous computational and statistical advances across most scientific disciplines over the last 20 years now allow for non-linear and multi-dimensional data analysis, and this has changed the nature of the scientific questions that can be asked. As a result, today’s science is more about probabilities than certainties, and reductionist interpretations are being replaced by systems-based approaches. Scientific findings are increasingly placed in the context of social, environmental, ecological and human systems, where many unknowns remain. Yet this is the science that deals with policy-relevant questions, where decisions are often urgent and values are in dispute. This is the realm of what some call ‘post-normal science’[11], and it is exactly where knowledge brokerage between the knowledge production community and the policy community is most needed.
Third, this post-normal context invites mischief and misuse: cherry-picking from the inevitable variability in results, or exploiting remaining scientific uncertainties and injecting these into complex societal debates. We have seen this in the arguments over the safety of genetically modified foods, where the issues are philosophical and economic, not scientific. Similarly, climate change denial is largely not about science; it is about economics. But in these and many more examples, it has been easier to hide behind apparent scientific complexity or uncertainty than to debate values. Democracy is short-changed when science is misused in this way.
A fourth shift in the scientific enterprise is that it is now viewed in a much more utilitarian way. Many governments have invested more in science over recent years, and in turn they expect more back, both as policy makers and as representatives of taxpayers and citizens. Much of the argument has been about economic return, and of course science communities have used this argument extensively in their often-successful lobbying for increased funding over the last three decades. A partial consequence of this shift, although grounded in other cogent arguments as well, is the proposition, put forward especially by post-normal science scholars, that concepts of knowledge co-design and co-production must now become inherent and normative to ensuring that the science that is done is trustworthy and in the interests of society. I shall return to this point at the end of this essay.
Changes at the interface
For all the changes to the system and nature of knowledge production, society too is changing in dramatic ways, some of which I have already alluded to. The result is a rapidly changing relationship between science and society.
Different stakeholders in science have different expectations and wants. NGOs, advocacy groups, philanthropic foundations and of course the private sector are now also major investors in science and technology, alongside the public sector. Indeed, in some sectors they are the dominant funders, though the state still plays a major role. These changes to the funding of science create new tensions, and the real and perceived conflicts of interest must be addressed to maintain the integrity and credibility of science.
The range of responses within the scientific community that followed the publication of Dan Sarewitz’s recent paper in The New Atlantis[12] shows the many differences in opinion as to whether mission-led science fosters or inhibits intellectual enquiry. This point is not metaphysical – it has major implications for how the funding of research should be organised. And as we have seen, any shift in funding models inevitably leads to reaction within the academic community and the engaged public.
Thus, although there are tensions within the science system itself, it is their impact on the interface with the rest of society that demands our attention. This impact is increasing – in part because the urgency of the global condition is better understood, as encapsulated in the Sustainable Development Goals (SDGs).
Impact of the digital world
Arguably the biggest factor driving changes at this complex interface between science, the rest of society, policy and politics is the networked world we now inhabit. The internet is a double-edged sword: it has enabled a productive interface between science and society through democratisation of scientific information and access to expertise. However, through social media especially, it is also responsible for some of the most troubling trends at the interface. This tension is a reality and cannot be ignored as we explore how to ensure the role of robust knowledge in societal decision-making.
The manifest benefits of the internet for our networked culture and economy are obvious. But it has not been a free lunch. Issues of the digitalised society are increasingly emerging – from the loss of power, if not authority, of the nation state, to the loss of personal privacy and autonomy, to altered social structures and perhaps even altered brain development and human evolution. The interconnectivity of our world and the changed way individuals and institutions interact place a great deal of power and responsibility (and trust) in the digital platform providers, which increasingly appear to operate beyond any traditional jurisdictional control. I have written about these dimensions elsewhere and will not expand on them further here[13].
But I will focus on those elements at the science-policy-society intersection that are salient to the topic of this essay.
The rise of the internet and social media has been accompanied by a general decline in the traditionally largely objective role of the fourth estate[14], and particularly the conventional media. Increasingly, our news sources provide us with the type of information that we have demonstrated we wish to receive. Twitter has promoted the rise of ad hominem attack rather than healthy debate of ideas, not to mention a selective echo chamber that only serves to reinforce confirmation biases and prior-held views.
To be sure, the internet has greatly increased access to information, but that information is increasingly unmediated – reliable and unreliable information can receive equal weighting, or, through algorithmic manipulation, erroneous information can come to dominate. This context has created an unprecedented platform for interest groups, whose material can be very sophisticated yet absolutely false.
An engaged citizenry today requires new skills to filter information and weigh arguments. In the absence of reflective and critical appraisal, expertise can become derided. It is of course more likely to be so if it is perceived as arrogant and elite. Hubris is the enemy of effective knowledge communication.
Into this mix, the ability of interest groups and individuals with agendas to manipulate the flow of information has grown enormously. Undoing damage done to truth on the internet is hard, if not impossible, given the speed and pervasiveness of social media.
And we have seen in the past year how this kind of environment has impacted on multiple political processes. In a sense this is nothing new: interest groups, politicians and others have always tried to manipulate views. The tobacco and chemical industries manipulated information long before the advent of social media; the dairy industry was doing it in the 19th century regarding the alleged risks of margarine. The practice has simply become easier, faster, more overt and more pervasive.
Diverse views of knowledge, uncertainty and risk
Then, to any information source, manipulated or not, people apply their inherent confirmation biases, particularly where scientific information is uncertain (which is nearly always the case). For the knowledge broker, this means that it can be counterproductive simply to push more scientific information on the naive basis that a knowledge deficit underpins a person’s view and that addressing the deficit will change it. There is now a considerable body of evidence suggesting that pushing more knowledge does not change minds; it actually drives the wedge more deeply between those of divergent viewpoints[15]. Science literacy alone will not address such issues.
Beyond this issue of cognitive biases[16] is another critical consideration. At the heart of this matter is the notion of risk. Individuals and societies can hold different perspectives on it and on how to deal with uncertainties[17].
Risk has different meanings to different stakeholders. There is an actuarial understanding, but most people, including most scientists, have cognitive biases in how they perceive cost and benefit, and thus how they perceive risk. In turn, this means that the same data may be interpreted one way by one group and in exactly the opposite way by another. This tendency is reinforced by the echo chamber of the digital world, in which people now tend to interact only with those of similar beliefs rather than exposing themselves constructively to diverse views. Worse, the machine learning algorithms behind our online news feeds now prioritise stories and perspectives that reinforce what we already think, in the guise of objective news.
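The feedback loop being described can be sketched in a few lines. The toy model below (with assumed, stylised click behaviour – not any real platform’s algorithm) shows how a feed that simply ranks content by learned engagement converts a mild 60/40 user preference into a nearly one-sided information diet.

```python
# A stylised toy model of an engagement-ranked feed; the click behaviour
# is assumed, and this is not any real platform's algorithm.
import random

random.seed(1)

STANCES = ("against", "for")                 # two opposing viewpoints
CLICK_PROB = {"for": 0.6, "against": 0.4}    # the user's mild confirmation bias
scores = {s: 0.5 for s in STANCES}           # feed's learned engagement estimates
served = {s: 0 for s in STANCES}             # how often each stance is shown

for _ in range(10_000):
    # Serve the stance with the higher engagement estimate,
    # exploring at random 5% of the time.
    if random.random() < 0.05:
        stance = random.choice(STANCES)
    else:
        stance = max(scores, key=scores.get)
    served[stance] += 1
    clicked = random.random() < CLICK_PROB[stance]
    # Nudge the estimate toward the observed engagement.
    scores[stance] += 0.02 * ((1.0 if clicked else 0.0) - scores[stance])

print(served)  # e.g. {'against': ~300, 'for': ~9700}: a 60/40 preference
               # becomes a roughly 97/3 information diet.
```

The design choice doing the damage is innocuous on its face: serve what gets clicked. Nothing in the loop ever asks whether the information served is accurate or diverse.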
There is great diversity in risk perception and risk framing among individuals and between groups, as both Juma[18] and Jasanoff[19] have written about so well. This affects how they view complex, ambiguous and uncertain scientific information. A further challenge, which Lofstedt wrote about in his book on risk management in a post-trust society[20], is that advocates for a position can significantly advance their cause by undermining trust in regulatory science whenever the regulator takes a different position from that of the advocate.
So, in this context, how can we make progress on my basic assumption that scientifically developed knowledge can lead to better public policy? How can this be achieved given the formidable challenges of changing knowledge production, complex policy processes and shifting societal norms and expectations? In the final part of this essay, I would like to consider some promising strategies. There are several key elements to making progress.
Evidence and knowledge brokerage
The first step is to acknowledge the complexity of the task. The cultures of policy-making and of scientific knowledge production are distinct, the drivers are distinct, and the goals and roles are distinct. The interface between these cultures is so nuanced, and yet so important, that it is now unrealistic to leave it to ad hoc interactions between policy makers and the scientific community. The post-truth dynamic makes it even more so. Further, scientific hubris can get in the way of effective interaction, as can the impression that the interaction is but a disguised lobby for more science funding. Appropriately skilled brokers are needed at that interface – and, given its complexity, multiple forms and structures of brokerage are needed.
What is meant by brokerage? It is distinct from advocacy because the point of brokerage is to be a translator, an interpreter. Brokerage is about ensuring that relevant and appropriate questions are asked and answered. It is about summarising what the science can tell us, what we can be reasonably certain about, what the gaps in our knowledge are, and what the implications are of what we know (and do not know). It is about the caveats that might have to be placed on that synthesis. It is about the policy options that then emerge and the scientific implications of each. It can highlight the spill-over costs and benefits of each option, and it can indicate the societal values issues that are in play, but it cannot choose among them. The goal of brokerage is to enable more effective decision making, not to make the policy decisions.
There is no single unitary model of effective brokerage. Brokerage has multiple dimensions – it must be able to deal with instant input in an emergency on one hand and assist with fore-sighting and horizon scanning on the other. Authoritative yet accessible, it must be able to rise above the cacophony of competing claims and social media trends.
There is a growing consensus that, at a national level, two categories of brokerage are desirable. First, there need to be those within or associated with the executive loop who can have informal input at multiple points in the policy process. These individuals or structures can show where science can help, act as interpreters when needed, serve as a conduit to the science community or the policy community, and watch for where the integrity of scientific input needs to be protected. This is the role of science advisors or advisory mechanisms, whether within ministries or to the prime minister or president.
Second, scientific input of a more technical and deliberative nature needs to come from the broader academy – through expert committees, professional bodies or a national academy. The principles of academic integrity and independence must protect the deliberations of such groups, but they are inherently limited in that they are somewhat external to the policy process and often can interact at only one point in it, frequently after the policy framing is effectively in place. Nonetheless, this input is indispensable, particularly for long-term issues. Clearly this work is done more effectively if the relevant academic community understands the policy process and can find a match between the expectations of the policy community and their own.
Advisors who are internal and those who are external to the policy making machinery must have points of trusted interaction even if their roles and approaches are somewhat different. I will not go on further about what I think are promising structural arrangements for government science advisory ecosystems as these are extensively discussed elsewhere[21].
The goal must be to separate good science from bad, to distil the overwhelming amount of information, to interpret confusing claims without alienating the audience and to protect trust in the scientific system and its processes. In this, it is also, inevitably, about recasting the ‘expert’, not as an authoritarian ‘elite’, but as a reliable and authoritative voice with something valuable to contribute.
Scientist training for the 21st Century
As scientists trained within a largely 19th-century model of universities, most of us are ill-equipped to deal with 21st-century expectations of our work. And so the second step is for us as scientists to learn to be more proactive about issues of policy relevance and to use the tools of modern communication. Similarly, if academies are to take a greater role in policy development, they must meet the needs of the policy community in the structure, relevance and timeliness of their work. This can create tensions for the independence of academies, which must be addressed. It is independence of thought and advice that must be protected, irrespective of the actual constitutional arrangement of any academy and its funding.
If we follow this argument through, it means that all professional scientists[22] need a more critical understanding of the place (and history) of science in society. This can be helped by a solid grounding in some of the concepts of science, technology and society (STS) studies, and scientists need to understand the contexts in which their work is applied. This has implications for doctoral and perhaps postdoctoral training. There are great examples of emerging scientists who are deeply interested in this interface and who are seamlessly integrating it within their careers. In this, members of youth academies, especially the Global Young Academy (GYA), and structures such as the Science Policy Exchange in Montreal, are having considerable impact. As well, the work of Science Media Centres globally is a great resource to better equip scientists and the responsible elements of the media to dampen the flames of ‘post-truthism’. These efforts should be expanded.
Co-design and co-production of knowledge
A third step is pointed to in the work of scholars such as Ravetz, Funtowicz, Sarewitz and Jasanoff, who highlight the increasing importance of co-design and co-production of knowledge within societies[23]. These terms have evolving meanings, but they highlight the need for greater transparency within science and its processes. Such approaches will certainly be an important defence against rising mistrust in science. True engagement will be hard and challenging – what does it mean to include non-scientists in peer review and in science governance? How do we link better with other sources of knowledge, in particular indigenous and local knowledge? Changes are gradually emerging and becoming accepted in some elements of the science system in various countries. But over time this may mean more fundamental shifts in the structure of the scientific apparatus than is generally envisaged.
Increasing public ‘science capital’
A fourth step is to improve general science literacy and critical thinking, though it must be reiterated that this alone is not an effective defence against ‘post-truthism’. Among the most promising approaches is the expansion of citizen science. Done well, it is neither the exploitation of free data or labour, nor simply action-research. Instead, it occupies a space that can engage citizens on questions relevant to them, while also imparting a scientist’s scepticism and meticulousness of process. Limited evidence suggests that it is an understanding of the processes of science, rather than attempts to address a presumed knowledge deficit, that is the best defence against post-truthism. In New Zealand we have embarked on piloting a programme of participatory science with such a model.
The role of individual scientists
While I have focused above on the institutions of science and science advice, individual scientists have a major and critical role to play. Scientists can choose to be knowledge brokers and/or advocates – certainly they must be advocates for the use of objective knowledge, and often for their own field and findings. But advocating for their science to be applied is a complex responsibility, and how they do it has important implications. Done poorly, such communication can be confusing and can undermine confidence in science more generally[24].
Again, for all these reasons, structured brokerage is valuable – between science and policy, and between science and society. It is a complex ecosystem of advisors, academies, and academic and professional organisations.
Shared principles
The strategies outlined in this paper are diverse in their approach and their audiences, but they share a common goal: developing and maintaining trust in scientifically derived knowledge. Put simply, effective knowledge brokerage requires sustaining several dimensions of trust at once: that of the politician, the policy community, the scientific community, the wider community and the media. Maintaining this equilibrium is not easy, and this is what necessitates an ecosystem approach that can apply a set of shared principles for science advising[25].
The International Network for Government Science Advice (INGSA) has been working with the International Council for Science (ICSU), UNESCO and the World Science Forum (WSF) towards developing a universal set of operating principles and guidelines that might help in this regard[26]. Any underlying principles must relate to the integrity of the advice and the maintenance of trust, but must also acknowledge the contexts in which knowledge is applied. Done well, inclusively and across national boundaries, science advice can have a productive and constructive role to play in an increasingly complex, troubled and changing world.
Footnotes:
[1] This essay is an expanded version of a talk given at the University of Ottawa on January 17th 2017 at the invitation of the Canadian Science Policy Centre as the inauguration of the Science Policy lecture series, and hosted by the Institute of Science, Society and Policy, University of Ottawa. I thank Kristiann Allen and James Wilsdon for their critique and comments on earlier drafts.
[3] The word ‘science’ is used in the broadest possible way in this essay to include all the formal investigative disciplines including the social and engineering sciences.
[4] For further discussion see Heather Douglas: Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press, 2009.
[5] Cairney, Paul. The Politics of Evidence-Based Policy Making. Springer, 2016.
[6] I. Boyd: http://www.nature.com/news/take-the-long-view-1.21189
[7] Parkhurst, Justin. The Politics of Evidence. Taylor and Francis, 2016.
[8] http://www.stats.govt.nz/browse_for_stats/snapshots-of-nz/integrated-data-infrastructure/researchers-using-idi.aspx
[9] Paul Cairney’s book is an excellent primer on this and is a good source for scientists interested in understanding public policy making.
[10] Mark Ferguson has a provocative anecdote on assessing impact from papers in his recent essay.
[11] S Funtowicz & J Ravetz. Science for the post-normal age, Futures 1993; https://www.uu.nl/wetfilos/wetfil10/sprekers/Funtowicz_Ravetz_Futures_1993.pdf; and many subsequent essays by these authors.
[12] Sarewitz, Daniel. “Saving Science.” The New Atlantis 49 (2016): 4-40.
[13] Gluckman P: DES Commentary pdf
[14] So named because of the historical importance of an independent press to the development of the democratic process.
[15] An example is discussed in Gluckman P: /sir-peter-gluckman-blog-knowledge-values-and-worldviews-implications-for-science-communication/
[16] Gluckman P: PMCSA-Risk-paper-2-Nov-2016-.pdf
[17] Gluckman P: Discussion-of-Social-Licence.pdf
[18] Juma, C. Innovation and Its Enemies: Why People Resist New Technologies. Oxford University Press, 2016.
[19] Jasanoff, S. Science and public reason. Routledge, 2012.
[20] Lofstedt, Ragnar. Risk Management in Post-Trust Societies. Taylor and Francis, 2009.
[21] See www.ingsa.org for a number of reference documents and essays on this issue.
[22] The term ‘professional scientist’ raises a further point of discussion as it is not well defined. ‘Scientist’ does not represent a formal profession with a distinct training pathway and regulatory characteristics; rather, professional norms are assumed, and a certain level of convergence in general higher-education standards and methods is starting to take shape, though there is enormous variation across disciplines.
[23] For example: Jasanoff, S. The Ethics of Invention. W.W. Norton, New York, 2016.
[24] Gluckman P: trusting-the-scientist/
[25] Gluckman P: http://www.nature.com/news/policy-the-art-of-science-advice-to-government-1.14838
[26] See www.ingsa.org for more details. This work is led by James Wilsdon (University of Sheffield, UK) and Dan Sarewitz (Arizona State University). I particularly acknowledge the contribution of Marc Saner (University of Ottawa, Canada) and Heather Douglas (University of Waterloo, Canada) to the conceptual underpinnings of this work.