Before the holiday break, I posted a ‘summer reading’ commentary that explored some of the key changes that we are witnessing in the ways in which science is undertaken around the world. Many of these changes have the potential to affect, or are already affecting, New Zealand’s public science system in ways that reflect the particularities of being a small advanced economy that is geographically somewhat remote.
I am grateful for the comments I received, including a reminder that the use of the term ‘science’ can be misread as not including the important contribution of research not within the ‘hard’ physical or natural sciences. I would like to reiterate the comments I made in my report on the role of evidence in policy formation that “when I use the terms ‘science’ or ‘research,’ I am referring to formal processes that use standardised, systematic and internationally recognised methodologies to collect and analyse data and draw conclusions.” The word “science” can only ever be shorthand for what researchers take for granted – that their work is about asking the right questions, and applying recognised methods and interpretive frameworks that are appropriate for the problem under study and its context.
And context is indeed important. In this follow-on essay, I look more closely at a number of issues I have touched on previously, and some emerging ones. In particular, the issue of ‘impact’ is important to all research, but its often narrow definition can overlook important social and broader impacts that are the very reason the public is willing to support research.
****
Internationally, many academic and government scientists assume that the public science system is inherently quite stable. But what we recognise as its current shape largely evolved in the decades after WWII, and the system is now undergoing a new period of rapid and potentially dramatic change. A decade from now, the science system will likely differ markedly from the one to which we have become accustomed.
As I have written previously, such change is, in part, because of the changing nature of science. But it is also due to changing public knowledge and expectations of both researchers and governments. This has challenged long-held assumptions and is leading to considerable alterations in the relationship between science and different sectors of society. Further, not all the consequences can be foreseen. There is a pressing need not only for social science and policy scholars to monitor and evaluate these shifts in the public science enterprise, but also for the overall science community to be engaged in the evolution. Some of the science community’s most tightly held assumptions and self-definitions are in need of critical re-tooling.
Accountability for public research dollars means structural changes to the system: are we ready?
Across the globe, even in those countries that have invested heavily in frontier science, there has been quite a shift towards a more utilitarian view of the role of publicly funded science. While the non-linear relationship between discovery science, applied science and development is generally acknowledged at an intellectual level, many policy makers continue to find it difficult to understand the relative importance of undirected basic research as the necessary platform for science-based “innovation.” This is in spite of substantive evidence [1] that investment in discovery research is key to a strong innovation economy.
Internationally, governments’ continued commitment to invest in R&D comes with the expectation that the expenditure will have discernible impacts on economic, social and environmental development. While discovery science itself is not particularly under threat, there has been a tendency for an increasing proportion of governments’ investment to be aimed at applied and developmental research. Thus the research community needs to become more accustomed to demonstrating the impact and value of its work, even at an early stage.
Irrespective of the balance of discovery and applied research, the presumption that the majority of publicly funded science will continue to be funded through bottom-up, investigator-led proposals is being (or will be) challenged in many jurisdictions. In no small part, this is because today’s research activities demand bigger, more multi-disciplinary and more sustained teams. Additionally, countries both big and small want to see public research directed towards where it can have the biggest impact on identified challenges. Thus mission-led multi-disciplinary science, as is envisaged in the National Science Challenges, is increasingly becoming the norm, especially in countries with comparatively smaller research budgets.
But an increasing amount of public research is undertaken by multidisciplinary (often multi-centred and/or international) teams, and this has problematic implications for the traditional academic reward and recognition mechanisms. Where once single-authored (or single-lab) papers were the meat of an academic CV, now a researcher may be one of twenty or more authors, and assessing an individual’s contribution becomes more difficult [2]. Further, the move to more mission-led funding will have implications for how science careers develop and how novel questions (and indeed whole new disciplines) emerge and are shaped.
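To make the attribution problem concrete, consider how bibliometricians have tried to apportion credit across a long author list. The sketch below is my own illustration, not anything proposed in this essay; it contrasts a simple equal split with the ‘harmonic’ position-weighted scheme discussed in the bibliometrics literature:

```python
# Illustrative sketch: two ways of splitting credit on a multi-authored
# paper, drawn from the general bibliometrics literature (not from this
# essay). Both assume byline order reflects relative contribution.

def fractional_credit(n_authors: int) -> list[float]:
    """Equal split: each of n authors receives 1/n of the paper's credit."""
    return [1 / n_authors] * n_authors

def harmonic_credit(n_authors: int) -> list[float]:
    """Position-weighted split: the i-th author receives (1/i) / H(n),
    where H(n) is the n-th harmonic number."""
    h_n = sum(1 / k for k in range(1, n_authors + 1))
    return [(1 / i) / h_n for i in range(1, n_authors + 1)]

# On a 20-author paper, an equal split gives every author 5% of the
# credit, while harmonic weighting gives the first author about 28%
# and the twentieth about 1.4%.
print(fractional_credit(20)[0])           # 0.05
print(round(harmonic_credit(20)[0], 3))   # ~0.278
print(round(harmonic_credit(20)[-1], 4))  # ~0.0139
```

Neither scheme resolves the underlying problem, of course; they simply show how sensitive any individual assessment becomes to the accounting convention chosen.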
We need to re-define ‘impact’ for scientists and the system that supports them
‘Impact’ is a word that has a specific meaning for academic researchers, among whom it is most often narrowly defined as influence within their area of study. Within this narrow scope, measuring ‘impact’ has acquired an air of apparent objectivity with the advent of ‘impact factors’, the bibliometric accounting by journal publishers that is driven by the frequency of citations in the literature. Over-emphasis on very ‘high impact’ research outputs can create perverse outcomes that disadvantage some forms of science, and indeed reliance on this type of metric has a number of well-recognised flaws. Not all important science can be or will be published in such journals, yet some institutions and evaluation systems give inappropriate weight to publications in them.
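It is worth seeing just how narrow the underlying arithmetic is. The classic two-year journal impact factor is simply a citation ratio; the sketch below is a minimal illustration, and the journal figures in it are invented rather than drawn from any real citation database:

```python
# A minimal sketch of the classic two-year journal impact factor:
# citations received in year Y to items the journal published in
# years Y-1 and Y-2, divided by the number of citable items it
# published in those two years. All numbers below are hypothetical.

def impact_factor(citations_in_year: int, citable_items: int) -> float:
    """Two-year impact factor for a single journal."""
    if citable_items == 0:
        raise ValueError("no citable items published in the window")
    return citations_in_year / citable_items

# Hypothetical journal: 1,200 citations in 2013 to the 400 citable
# items it published across 2011 and 2012.
print(impact_factor(1200, 400))  # 3.0
```

A single journal-level average of this kind says nothing about the influence of any individual paper, let alone its social value, which is precisely why over-reliance on it distorts evaluation.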
The concept of impact is also problematic for a second, perhaps more significant reason. For the policy maker, the politician and the general public, the ‘impact’ of research has a very different meaning and relates to what it can do for national or societal benefit. For these stakeholders, impact is meant to reflect the contribution of the research to better social and environmental wellbeing, through public policy, jobs, and knowledge-based innovation within a healthy economy. However, what is not always obvious to external stakeholders is that individual research projects, even large ones, can only rarely be linked directly to a specific impact. Impact (or benefit) comes from integrating and applying new knowledge incrementally, and from multiple sources. Still, all scientists should be able to place their work within such a perspective for the non-scientist. They may not be able to draw a direct causal link between their work and a benefit to society, but at the very least, they should be able to account for their use of public funding by clearly explaining why their work matters – even at the most upstream end of the knowledge-to-action process.
I think there will be an increasing expectation that potential impact be explained a priori in the allocation of research funds. How such assessment should be done, how it interplays with traditional peer review and how it should be monitored are an increasing preoccupation of policy analysts in science ministries around the world. It is clear that whatever process is developed must involve non-scientists to test assumptions about real-world relevance and impact.
As many other countries’ public research funders are finding, peer review evaluates scientific merit, but different processes must be put in place to evaluate the societal merit of research as well. These are very different types of evaluation, each with its own set of challenges.
Evaluating science
With respect to peer review, I do not want to revisit my previous discussions at length, but concerns regarding the optimal processes for evaluating scientific merit – whether at funding or publication stages – are real. Classical peer review is especially problematic in small countries, where biases and unrecognised conflicts are almost inevitable and where there is a high multi-disciplinary content. An increasingly favoured option is to move exclusively to non-domestic reviewers of scientific excellence, and I suspect this will become the norm in many such countries. But even large countries are increasingly grappling with these issues, particularly now that impact is being assessed alongside scientific quality – should these assessments be conflated or separated? It is somewhat like the cliché about democracy: we know peer review is flawed, but it remains the best system we have. There is an increasing number of novel approaches to peer review, each with its merits and its challenges, and globally there is now a greater willingness to consider them. Particularly in a small country, we need to be very conscious of the issues that can confound a system and create biases or inhibit innovation.
At the publication stage, the quality of peer review has been thrown into question by the promotion of new online publication formats. We have seen the explosion of e-journals and different funding models for their production – as a result of the open access trend, the burden is shifting from readers paying subscriptions to writers paying to make their work openly accessible. This ‘writer-pays’ model has the unintended consequence of creating questionable new opportunities for authors to buy their way into some of the more unscrupulous journals that directly target scientists. There is now growing evidence that some of these journals do not let peer review get in the way of their cash flow. But the peer review problems with journals are not limited to the newer and lesser quality titles. Indeed even the major established journals are expanding their stable of publications rapidly with a more limited form of review.
Further, the explosion of papers is placing a heavy burden on reviewers, so the identification of quality research is in itself becoming more challenging. This will have impacts on how careers emerge, on the behaviour of scientists and on defining reliable information. As scientific publication through the established journal system (physical or electronic) changes, the question is how it will look in a decade’s time. How will we incorporate and quality-assure new business models, blogs, self-published books, and social media into our understanding of science? Is a well-prepared blog as much scholarship as a traditional review in a journal or a monograph? The research community is going to struggle with this. As a hallmark of ‘good science’, peer review must be protected, but in what new form(s) remains to be seen.
Societal impact and the fine balance of public-private sector partnerships
Where evaluation of societal impact is concerned, the measures are also complex. This is because societal impact is foremost a question of societal values and priorities. And when something is prioritised, there are invariably trade-offs. For instance, we want our science to help protect the health of ourselves and our planet, and also to help us prosper as a society through innovation. If innovation is a societal priority, then the promotion of public-private sector partnerships in research is essential. But the increasing collaboration between academe and industry has caused tensions and cultural divides in some institutions, with uninformed, generalised assertions that commercially focused or partnered research is somehow second-rate or, worse, inherently biased.
Indeed, we are still a long way from understanding how the public and private sectors should best engage with each other. Certainly, revelations over the years of unscrupulous industry publication practices have done little to inspire confidence in such partnerships (especially in the medical sciences). A recent issue of the British Medical Journal went so far as to suggest that papers authored by private sector researchers should not be accepted, at least where clinical trials are involved. Yet, carried to the extreme, the implications of this attitude will inevitably be profoundly negative. As it happens, by far the bulk of global research expenditure is sourced from or used within the private sector, and governments worldwide are encouraging the scientists and academics they fund to work with private sector partners. In fact, New Zealand may have considerably more experience in the good governance of such partnerships than the research heavyweights that have only now started to focus on this issue. But if parts of academe uncritically exclude such joint research, or somehow promote the view that such combinations inevitably result in second-class or conflicted research, then the modern research environment will become very fractured. The danger is that a two-tiered (or parallel) system becomes entrenched and any potential for innovative and creative cross-pollination of ideas between academe and industry is systemically discouraged.
Of course, pre-publication disclosure of potential, perceived and real conflicts of interest is definitely needed, and this is gathering steam in the biomedical literature, particularly following the campaign against the unethical publication practices of some pharmaceutical companies and the unscrupulous academics willing to front industry papers. But if the academic community is pejorative about private sector engagement per se, I think selective disclosure will unfortunately become the norm, and that does an injustice to everyone.
In addition, we are still a long way from creating an environment where negative results are easily published. Negative results, dead-ends and missteps are intrinsic to science, yet they are still viewed fearfully as career-limiting blunders in academic circles. They are extremely important lessons on the tortuous path to knowledge, yet within the current culture of science, researchers are reluctant or have difficulty in sharing these lessons, thus allowing mistakes and dead-end research to be repeated at great expense to society.
Many of these issues are particularly acute in the life sciences, and one may be tempted to draw a distinction between the practices of the medical industry and, say, the aeronautics industry in this regard. But both ultimately affect the human condition, and the distinction is somewhat artificial. In the end, we need a better, clearer and much more contemporary code of conduct for the science community in general: one that is just and pragmatic, reflecting the reality of where science is heading both within and outside our public institutions. Appropriately, we are quick to identify potential conflicts of interest at the public-private sector interface, but we do little about the multiple conflicts of interest that can exist within academia and government science itself.
Talking science
The relationship between the science community and the wider community has changed considerably. An isolated and somewhat patronising approach by the scientific community is being rapidly eroded by the growing understanding that there is a social contract between science and society, and that science must be well embedded within the communities it serves. But to deliver on that contract, scientists must be able to communicate not only with their peers but also more broadly with knowledge users and the general public. And that communication needs to be more pre-emptive – not only publicity for ‘breakthrough results’. It should not be self-serving, promoting either the researcher or their institution, but must be continually informative and helpful. Not all scientists can or will be great communicators, but that does not mean their science cannot be accessible to the public.
Indeed, such communication is increasingly seen as a core responsibility of the publicly funded scientist. But it is not without its challenges. Most scientists are passionate about their work, and many rightly feel strongly about the causes their research can inform. However, it is important that scientists do not confuse their role as providers of information to the broader public with their democratic right to be advocates. This can create some difficult tensions. If they wish to advocate a cause based on their expert knowledge of a topic, they must be careful not to stretch their claims to special knowledge beyond what is scientifically accepted and known. I have written about this previously and again refer those interested in this issue to the recent Science Council of Japan code of conduct, which addresses this issue in some depth. These issues are rising in importance; the guilty verdict for Italian scientists at L’Aquila, citing failure to communicate the risk of earthquakes properly, may be an extreme and controversial example, but it serves to highlight what is at stake in science communication.
Social license: the space where scientific merit, societal impact, research integrity and science communication meet
The concept of ‘social license’ for the use of technologies has emerged as a key matter that the life sciences and physical sciences communities cannot ignore. These communities have in general been slow to recognise that new technologies cannot be introduced without a parallel consideration of societal perspectives. And such perspectives are not limited to science; typically they are values-driven and may touch on the perceived sufficiency of evidence, the way in which the evidence is obtained, the governance of the research process, the perceived societal need for the technology, and who is seen to bear the risk or reap the benefit. Fortunately, sectors of the social sciences community have begun to fill the void with much-needed empirical research that can help to elucidate the public values and normative practices surrounding new technologies or new applications of technology.
This is not an easy conversation and requires a much closer integration between what are traditionally called the ‘hard’ sciences and the social sciences and humanities. Here there are real challenges: different epistemic cultures, different jargon and lexicon, different criteria of impact and quality. Indeed the shape of multidisciplinary research in the future will require academics to figure out how to cross these divides.
I began this essay with a response to readers who reminded me that we need to be clearer about the contributions of all research and not assume that the focus of effort should be strictly on the ‘hard sciences’. I could not agree more. Indeed, one of the most important contributions of today’s social sciences and humanities scholarship is to develop our critical understanding of our increasingly technologically driven world and how this structures our relationships, our decisions and the risks and trade-offs we are willing to accept.
A good example of the need for social theory and empirical research is the social license of ‘Big Data’ based research, perhaps the most rapidly growing area of technological development in science. To what extent is the public really engaged in what this means? [3] What privacy issues emerge as data sets ranging from social (administrative/government) sources to genetics are merged in pursuit of an ever more refined understanding of population and individual health, for instance? While we may see such research as a laudable goal, in the public mind the line between the research use of big social data sets and governmental security agencies’ use of big data is blurred. The former is likely to gain social license easily enough; the latter has been censured in the court of public opinion. Yet it is interesting to observe the apparent public lenience towards big data collection by the private sector. Far greater discourse and social science scholarship are needed to better understand our social relationship with big data if we are going to realise its powerful potential to answer some of our most pressing societal issues – from planning better transport corridors to making our health services more responsive to current and future needs.
But the research community also needs to think seriously about its own relationship with big data because it raises epistemological questions that challenge some important assumptions. For instance, whereas science that was not particularly hypothesis-driven was frowned upon a couple of decades ago, our ability to use big data in novel combinations and linkages now brings with it an argument that hypotheses are not needed to drive discovery – just good questions. However, I believe that tools (albeit more supple ones) for hypothesis development and testing will need to be incorporated into big data paradigms. Otherwise, over time, the particular characteristics that give science its epistemological privilege may be lost. This is not about imposing some perceived hierarchy on research methodologies. Rather, it is a simple question of ensuring that research – in any form – is well structured and its methods can be replicated.
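A deliberately simple example (my illustration, not the essay’s) shows why such tools matter. When a big data set is scanned for associations with no prior hypothesis, a conventional significance threshold will flag spurious ‘findings’ by chance alone; disciplined testing, here represented by a basic multiple-comparison correction, is what keeps discovery honest:

```python
# Illustrative sketch of why hypothesis discipline still matters in
# big data work: scanning many variables without correction
# manufactures 'discoveries' by chance alone.
import random

random.seed(1)
n_variables = 1000   # hypothetical data set of unrelated variables
alpha = 0.05

# For variables with no true effect, p-values are uniform on [0, 1),
# so roughly 5% will fall below alpha purely by chance.
p_values = [random.random() for _ in range(n_variables)]

naive_hits = sum(p < alpha for p in p_values)
bonferroni_hits = sum(p < alpha / n_variables for p in p_values)

print(naive_hits)       # around 50 spurious 'findings' uncorrected
print(bonferroni_hits)  # typically 0 after a Bonferroni correction
```

The point is not this particular correction, which is the bluntest available, but that some structured account of what was tested, and how, has to travel with data-driven claims if they are to retain science’s epistemological privilege.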
Change is inevitable but the pace of change in science is now very rapid. It is affecting careers, it is affecting how research institutions operate, and it is affecting the whole way science is undertaken, communicated, and understood. The challenge will be to protect the best of what we now have while embracing change, the consequences of which are not always predictable.
[1] For a good discussion on this, see chapter 2 of Mariana Mazzucato, The Entrepreneurial State (Anthem Press, 2013).
[2] I recognise that this shift toward multi-authored papers will vary somewhat between the medical, natural and social sciences. In some social sciences for instance, monographs are still the norm. But many of these disciplines are increasingly contributing talent to multi-disciplinary research teams.
[3] There is a very relevant editorial in Nature, 16 January 2014, vol. 505, p. 261: “Power to the people”.