The word ‘impact’ has become the catch-all term for the expected benefit or ‘return’ from investing in scientific research. But what the word actually means can vary greatly depending on whom one asks and who does the asking. Indeed, as an increasingly used keyword within the science-policy community, ‘impact’ is used in very different ways by the scientific community on the one hand and the policy and political communities on the other. Inevitably, this can spawn confusion.
For instance, governments worldwide are taking an increasingly critical look at their investments in higher education and the research sector, and are seeking more evidence that those investments are producing a return in terms that they understand and appreciate. Their perspective, however, is not necessarily one shared by many academics and scientists, for whom ‘impact’ is often thought of internally as a statement of the quality of the work and/or its value to the scientific domain itself.
Whether from the policy makers’ or the knowledge producers’ perspective, however, our current preoccupation with the concept of ‘impact’ carries the dual dangers of embedding a specific meaning in one part of the system that is not necessarily shared across sectors, and of assuming that precise and objective measurements are possible. This is not to deny the importance of our concern for impact – quite the contrary. It is crucial, however, that we find ways to take into account the full range of meanings of ‘impact’, as each is significant in the national science system. Thus, in this essay, I argue that we should be seeking a more nuanced view of both the meaning of ‘impact’ and the ways we measure it.
The nature of the problem: At least two worldviews
The move to a more utilitarian view of higher education and publicly funded research is understandable in policy terms, though it is likely to be perceived as a threat by some within the public research community. Over the past two decades many countries have markedly increased their investment in R&D and higher education specifically because they have been persuaded that this drives economic growth. Indeed, the scientific community itself has played a major role in promoting this argument, knowing that it can help to secure funding.
However, one problem is that scientists know that the development of scientific understanding has never been linear and that there is rarely a short and easy path from research to return. Yet the rhetoric of ‘why we must invest in science’ continues to mask how complex the pathway is by which socio-economic impacts are realised and measured. Such rhetoric from individual scientists (and indeed from their academies as a whole) has often reinforced the naive and widely held belief in policy and political circles that there is a direct relationship between a particular output of early-stage research and long-term impacts for society.
In medical research this is frequently seen in claims (of which many of us have been guilty, complicit with the voracious media) that over-hype the implications of a piece of basic research. Perhaps the most obvious among such claims are those made about the Human Genome Project in its early days. Palpable returns to health care and industry are only now emerging, nearly two decades later, and then only with the help of many researchers who were never part of the original initiative. As any scholar knows, this is how science operates; yet at the time it was thought necessary to highlight the putative future ‘impacts’ of sequencing our genome in order to secure ongoing funding against competing claims.
The prevailing academic perspective: Getting the language right
Before turning to a government’s perspective on the impact of public research (as espoused and operationalised by its funding agencies), let us first look at it from the academic community’s perspective. In today’s competitive academic environment, where universities need handy performance measures for the recruitment and career advancement of faculty members, most academics and institutions rely on the journal ‘impact factor’ (based on citations, and thus presumably the uptake, of a journal’s publications) as the primary indicator of scientific currency and relevance. But the name ‘impact factor’ is entirely misleading, because it is completely removed from the kinds of socio-economic impacts for which governments invest in science. All that the ‘impact factor’ estimates is the immediate output of a research project, while also attempting to qualify the scientific value of a particular publication by virtue of where it is published. Yet the label and the concept have allowed the academic science community to believe that this is a (if not the) principal impact to consider – and even if it were, it is deeply problematic both conceptually and methodologically.
So deeply have we in the science community embedded our own understanding of ‘impact’ (with its influence mostly limited to other scientists) that we may be blind not only to policy-makers’ much broader understanding of the term, but also to the manifestly problematic method of measurement based on performance bibliometrics alone.
Today the journal impact factor has become firmly embedded in performance evaluation even though its intent and structure were never about individual performance. Yet how many academic performance assessment systems specifically reward investigators for publishing in high-impact journals while discounting very good papers, and sometimes ground-breaking contributions, published in other fit-for-purpose journals? It is plainly impossible to make objective what is, in the end, an inherently subjective judgment on the value of a scientific contribution. The question then becomes how much effort to put into the pseudo-objectivity of bibliometrics, and how much to devote to careful but inevitably subjective review of individual contributions to science. This has been the challenge of universities’ performance-based assessment and reward schemes in jurisdictions such as the UK and New Zealand.
But if such assessment is so difficult to do objectively, and if it ultimately tells us little about the kinds of ‘impacts’ that governments care about in any case, then should we even bother? Of course we should, because what we are really talking about here is excellence. In the end, the science system depends on our efforts to identify excellent scientists and to give them the support they need to generate and pursue their best ideas. After all, no one would claim that all academics and scientists have the same potential to advance knowledge, just as their teaching skills can vary enormously. That the task is flawed and complicated does not mean we can abandon our systemic focus on excellence in science. Rather, we must take care about how we do it, and recognise that what many scientists may consider their primary ‘impact’ (limited though it may be to the science community itself) must not be dismissed by policy makers, but instead re-cast as the key feature that feeds a healthy science system: scientific excellence.
Central to all of this, of course, is the recognition that academics and government officials often come from very different perspectives in considering ‘impact’. Complicating this further, both types of impact are difficult to define and to measure, whatever the rationale, and there is the danger of conflating or confounding one with the other. In the end we need both, but we must clearly distinguish them and approach them separately, including in the appreciation of their different timescales. Measuring ‘scientific impact’ is one indicator of the quality of science, whereas it is socio-economic impact that helps us to justify our use of taxpayers’ dollars.
The government-funder’s perspective
The first thing to point out is that a nation’s public science system generally comprises multiple funding agencies and multiple funding regimes. There would not be such a multiplicity of tools and agencies if ‘impact’ were a singular term. Why a medical research funding agency invests, and what the government wants from that funding, is very different from what an agency interested in promoting technology transfer might call for. Indeed, there can be rather different objectives operating even between national medical research agencies. Yet all must convince Treasury of the impact of their investments. The point is that ‘impact’ should not be defined generically but needs to take into account the multiple goals within a national science system.
Thus, one should think in terms of the range of impacts that the various parts of a holistic government funding system might want, and establish the mix of bespoke funding tools accordingly. It is the issue of balance within such a system that challenges every country, and this is particularly difficult to achieve within small science systems like ours. If ‘impact’ becomes fixed in mindsets in a narrow sense – whether in an academic or an end-user framework – it can lead to imbalance within a science system. Obviously it is much easier to assess the impact of late-stage developmental R&D that focuses on a particular industrial output, or of late-stage translational research that can help refine public policies or medical practices, than it is to assess the impact of early-stage discovery research. Yet a science system must have both, and there is a danger that the ease of measuring the former leads to an inappropriate discounting of the latter.
So what impacts should we expect in a balanced system?
Any formal framework aimed at categorising types of research impact is bound to be arbitrary, but it is important that we have one because, without it, different impacts can become conflated, inappropriately narrowed or ignored. Arguably, most research has multiple types of impact; indeed, these spillover effects are a strong part of the intervention logic that leads governments to invest in research. Spillover and indirect benefits can be understood by policy makers if the science community makes the effort. One need only observe the success of astronomers and particle physicists in developing Big Science projects: their arguments are in part about advancing fundamental knowledge, but their real selling points in the eyes of many funders have been the new technologies that might emerge from such efforts – and they have a remarkable record of delivering them.
Below is a list that I use as a framework to help structure the way I think about the range of impacts that a government should expect from its investment in research. It is a rough and personal guide, but currently, through the Small Advanced Economies Initiative,[1] we are beginning to develop a more standardised taxonomy so that we can explore some of these issues more formally. In this list, the first six types of impact are fairly well developed in the international literature on research evaluation. However, there are at least three additional types of impact that have been largely ignored in conventional frameworks, but that are of importance to science communities, governments and ultimately to citizens.
Impacts primarily on the science sector:
Arguably, some might consider the first two of these ‘impacts’ to be science ‘outputs’, and indeed many impact assessment frameworks treat them as such. But they are important and real impacts of public science funding, and keeping them in perspective has a major influence on how science evolves for national benefit.
Broader science-based impacts on the socio-economic environment through public policy and economic growth
Impacts that enhance a country’s international standing
The need for innovative metrics and approaches
Just as with the publication impact factor, there is no agreed single method to measure (or assign value to) the range of impacts likely to be most relevant to funders (and governments). Given the inevitably varying timeframes and the various direct and indirect benefits that can be expected, attempts to develop singular quantitative measures of impact can only be proxies for what clearly has a large qualitative component. Mixed-methods approaches, in which qualitative and quantitative tools are combined, are clearly needed to support policy making in the science system.
A key consideration in designing the assessment of impact is: what is the primary purpose of the assessment? This, again, will vary according to audience. For instance, a government agency might want a summative assessment in order to estimate what it “got for its money” in terms that are meaningful to it and to its citizens. But governments can also use impact assessment more specifically as a retrospective calculus that forms the basis for how universities are funded (e.g. the Performance-Based Research Fund (PBRF) in New Zealand and the Research Excellence Framework (REF) in the UK).
I would argue that a particularly valuable and more strategic use of impact evaluation is to shape researcher behaviour in a more proactive manner. At a recent forum on impact in Melbourne sponsored by the journal Nature, I suggested that the value of impact assessment would be very different if ‘impact’ were systematically considered a priori rather than post hoc. In a sense, the fairly recently instituted ‘relevance sections’ of grant applications are intended to do this, but they are notoriously lacking in detail, and their reach and utility beyond the actual grant assessment process and into knowledge-user sectors have been essentially nil. They are generally barely a footnote in the grant-holder’s mindset, except perhaps at the funding renewal stage, by which point it is often too late to shape the research in a way that might foster more relevance and broader societal impact.
A move to proactive mixed-methods approaches that consider the multiple types of impact from the outset may permit much more granularity in considering impact, in a way that is of value to the funder and, importantly, assists in shaping researcher behaviour. Suppose, for instance, that researchers are asked at the outset to state specifically the expected implications of their work under (say) a taxonomy similar to that proposed above, and to consider how such implications (from research outputs and a marked advancement of knowledge through to translational impacts on society and the environment) can be properly assessed, both qualitatively and quantitatively. The onus is then placed on the researcher from the beginning to think about how their work will have impact and how this can be validated at appropriate intervals. This approach has the potential to change the mindset of the scientist without creating detailed and unrealistic ‘milestones’, and without inappropriately tying the research to unrealistic claims about end-user potential, while still highlighting expected implications to end-users when it is appropriate to do so.
A more flexible method of assessment would also allow for the reporting of the unexpected impacts that often emerge from research. Such processes seem appropriate for large grants and for grant renewals, and can be scaled to the level of centres and institutions. This approach also avoids the problem of inappropriate metrics being forced on the system or on the individual grant holder. It is of considerable interest to watch the progress of Science Foundation Ireland, which has moved considerably in this direction.
In my view, a formal a priori consideration of impact is likely to attract much more buy-in, and to be fairer and more useful to all parties, than more arbitrary, agency-driven post-hoc assessment processes. In the end it gives joint ownership to both the agency and the researcher. It’s worth a closer look.
A final word: The place of scientific quality
In this essay I have argued for a more balanced and nuanced consideration of ‘impact’ and of approaches to measuring it, and against focusing narrowly on either scientific or utilitarian understandings of the word. I have tried to elucidate the enduring and fundamental disconnect between the science and policy communities over the meaning of impact and the approaches to measuring it. I have suggested that this disconnect is exacerbated by muddy terminology and concepts like the ‘impact factor’, which has become nothing short of an obsession within some academic circles. And I have argued that there is more utility in thinking about the range of impacts one should expect from a public science system than in thinking either too narrowly or too broadly.
However, despite my plea for balance, we cannot ignore that what many scientists have considered as ‘scientific impact’ is really a proxy for quality and excellence in science, and that ensuring scientific excellence is the first and most important principle in the science system.
I have written previously about the need to reconsider how research funding is assessed.[2] The assessment of scientific merit (excellence) is quite distinct from that of impact (relevance) and must be the first filter. Ultimately, however, the assessment of excellence and the assessment of societal impact have to be integrated; we are still learning how that should be done, and indeed it will vary across systems and organisations.
There is a general risk that the fundamental importance of excellence will be downplayed amid the global trend toward a more utilitarian view of public science funding. It is tempting to assume that if research is deemed relevant, then scientific quality matters less or can be taken for granted. But whether we are talking about early-stage discovery research or late-stage development, second-rate research can only lead to second-rate impacts. Many in the private sector understand this very well. All funded research, whether at the discovery stage or in late-stage development, must be of the highest standard to ultimately be of value.
[1] The Small Advanced Economies Initiative (SAEI) brings together officials from Singapore, Israel, Finland, Denmark, Ireland and New Zealand to explore policy issues in science, technology, innovation and micro-economics that are particular to small advanced economies. My Office serves as the Secretariat for the STI aspects of the Initiative.
[2] http://www.pmcsa.org.nz/wp-content/uploads/Which-science-to-fund-time-to-review-peer-review.pdf