The changing culture of science

by Sir Peter Gluckman

Over the last few weeks I have met with a number of science advisors, officials, officers from several national academies, and journal editors. A common theme in many of these discussions – one also echoed in my conversations within the science community – was the deeper implications of the significant and ongoing change in our scientific culture. This essay is a reflection on some of these changes.

But first, a warning to readers: the thoughts captured here are somewhat unstructured. They are observations of a system that is itself multi-layered, complex and intricately interconnected, such that challenges in one component (e.g. overburdened peer reviewers or fragmented funding structures) will have multiple impacts elsewhere in the system. For this reason, the discussion that follows is simply a bookmarking of issues, each of which will require much deeper analysis. It is my hope that by publishing these initial thoughts, we can at least start that conversation.

Evolution and change in the scientific endeavour are inevitable, but several factors seem to be interacting to generate a number of concerns which, if not addressed, may over time seriously undermine the quality of science, its impact and, ultimately, the public’s trust. Given the close interplay between the public research sector and universities, it is not surprising that some of the concerns outlined here derive from changes in higher education that are already making their impact felt.

Within a small nation such as New Zealand, some of these issues are likely to be amplified and become particularly acute. While the answers are not necessarily obvious – and of necessity need to be global – we cannot bury our heads in the sand. The science, policy and academic communities need to seek integrated solutions.

In the discussion that follows, I will attempt to parse and elucidate some of these challenges within the context of the New Zealand national science system. These are issues that, to my mind, require critical consideration and urgent attention if we are to optimise our public research investment. These concerns are not marginal topics that are the sole purview of the science policy community; rather, they are starting to become a matter for general commentary, as evidenced by the recent Economist report “How Science Goes Wrong” (October 19, 2013).

Publicly funded science – an unbalanced system

Most critical attention has focused on the multiple challenges surrounding the publication of research results, such as publication bias, reproducibility, open access and the proper disclosure of potential conflicts of interest. But underlying all of these matters are deeper policy issues: the size and purpose of the public science system, the relationship between public and private sector research, the measurement of its impact, and the nature and purpose of tertiary education.

The global public science system has expanded dramatically over recent decades. In no small part this has been due to the massive expansion in the size of universities, accompanied in some countries by an increase in the number of tertiary education institutions that have adopted the research university paradigm where once their mandate had been primarily teaching. Rating and rewarding university staff is generally done on the basis of their research productivity, which is measured by the number of grants and publications. In New Zealand the growth of the university sector over recent decades (in terms of academics employed) has significantly exceeded real growth in the investment in public research. As elsewhere, this fuelled a growing mismatch between the demand from academic institutions for contestable research funding and the supply of public research funds.

At the same time, in many countries, policy makers have developed a new understanding of the importance of science to economic growth, social well-being and environmental protection. In particular, the role of science as a tool of economic growth first captured the attention of policy makers in the 1980s, and this led in many countries to progressively rising levels of investment in public sector R&D not previously seen in peacetime budgets. It is now becoming apparent that countries that made such an investment two or three decades ago are reaping the rewards, whereas developed countries that did not have generally fared less well, at least in terms of productivity growth.

We New Zealanders like to think of ourselves as an export-intensive nation. Yet in reality, compared to other smaller advanced economies, we are a rather low exporter, and our export volumes (as a percentage of GDP) have not grown nearly as fast as those of other small advanced countries in recent decades. In general, the more rapid economic growth of those nations appears linked to growing a higher-value, science-based export sector including both products and services. New Zealand has come to this realisation of the role of science and innovation in export growth more slowly, and with a more incremental approach, than have other countries. In part, this may reflect the lack of large research-intensive companies (both domestic and multinational) within our overall science landscape.

The birth of the ‘Impact Agenda’ for publicly funded research

This more utilitarian attitude of policy makers towards science has certainly allowed the science enterprise in many countries to grow. At the same time, however, it brings with it a challenge: how to demonstrate the impact of the science that the taxpayer is funding. Most academics are accustomed to viewing their productivity almost entirely in terms of publications (because that is what university administrators and peers use to evaluate them). Yet for policy makers, these measures of output are of limited value – what they need are measures of impact, and this turns out to be a major challenge that many jurisdictions are grappling with.

‘Impact’ can be the effect of research on public policy, on public and societal health, on the environment or on the economy, as well as less quantifiable and more intangible impacts such as those affecting public opinion and, more broadly, national reputation and culture. The problem is that science is generally not linear. The nature of research is that it builds on the published results that preceded it, not in a direct genealogy, but in a growing constellation of ideas and experimentation that produces incremental shifts backwards, forwards and often sideways. Looking backward from any particular innovation (social or economic), it is unlikely that the chain of innovation can be traced directly to one particular publicly funded research project. Equally, looking forward from a research project, much of the anticipated impact will be indirect and will occur over the long term. But the complexity of measuring impact does not remove the need to improve our methods. Granting bodies are increasingly seeking better ways of assessing potential impact, both at the commencement of funding and in review processes. The larger and longer the grant award, the sharper this requirement becomes.

The way impact is assessed will inevitably be somewhat subjective – for the policy maker, a good narrative may be more important than some artificial metric (much more about metrics below). My Office is currently working with counterparts in other small advanced nations to explore more effective ways of looking at impact – in no small part because it is important that the science community understands more clearly, from the outset, the potential of the research it undertakes.

The funding architecture:  setting the tone for better collaboration

This more utilitarian focus of the global science enterprise over recent decades – which has escalated since the Global Financial Crisis – has prompted the policy community to focus more on the kinds of funding tools best suited to supporting public science. But it can also create constraints, particularly as longer-term research, including discovery research, can become compromised. In small countries the trade-offs that are necessary within publicly funded research are easier to see, because such countries are likely to have smaller absolute public budgets for science and fewer charitable endowments. Smaller countries cannot do everything and must make strategic choices in structuring their science systems.

One trend across smaller countries has been for a greater percentage of public funds to be allocated to specific priority areas and to mission-led research collaborations where critical intellectual mass can be achieved. Where this has been done well, it has been achieved without compromising the importance of ‘discovery science’, whether within a research cluster or within the wider academic community. Parenthetically, I am increasingly uncomfortable with the term ‘basic science’ – one sector’s basic science may well be another’s ‘applied science’, reflecting the increasingly trans-sectoral application of new knowledge.

New Zealand’s Centres of Research Excellence (CoREs) and the groupings emerging from the National Science Challenges reflect this trend toward larger, virtual and multidisciplinary clusters, where the focus on impact is more explicit. However, to be truly successful, such collaborative arrangements require scientists, who have often worked in relative isolation, to give up some autonomy and coalesce around a common set of goals. This is not easy; different individuals clearly have different views of their own work and its potential impact. Most academic researchers, by virtue of their training and position, are focused on a relatively defined trajectory of enquiry. But becoming a member of a larger research collective means adapting one’s own research and potentially one’s career game plan.

A second trend in small countries is for the system to expect the science community to rely more heavily on public-private partnerships.  Other countries, with larger and more established public science budgets, are sometimes surprised at the extent to which smaller countries have engaged with the private sector as an integral part of public science funding.  The policy logic for encouraging such partnerships is clear. Not only do these partnerships leverage limited public resources, but research results are more likely to be turned into drivers of economic growth and social development if the private sector has a stake from the outset.

Yet these funding relationships are not without real tensions and concerns. Conflicts of interest, real or perceived, are generally considered the biggest. The spectre of perceived conflict of interest can place scientists in very difficult positions, not only with respect to the scientific community but also with editors and the public. When scientists are investigating complex topics with high public interest and values content, it is often easier for advocates and others to attack the individual rather than the science, or to use the issue of perceived conflict to undermine confidence in the science. On the other hand, there have been examples of egregious behaviour by scientists engaged with the private sector. Conflicts of interest (real and perceived) are best handled by transparency and good process, but as the public and private sectors inevitably become more intertwined we need to get better at identifying and managing the interface rather than ignoring the problem. There will always be good and bad science, and we cannot automatically assume that the presence of a public-private sector partnership creates bad science.

Another tension lies in achieving the right balance between opportunities for curiosity-driven ‘discovery science’ and practical research with more obvious application to industry partners. For small countries, these decisions are more acute and their impact is amplified.

The willingness to encourage public-private research interactions may have put smaller advanced nations like New Zealand on a different trajectory from that of the traditional research superpowers, and in some ways we have had to confront these issues earlier and more directly. Issues of governance and the creation of funding tools to enable industry partnerships have been a considerable focus in countries like New Zealand, Denmark, Israel and Singapore, but larger countries now increasingly have to acquire similar knowledge as leveraging public funds becomes more important.

The tertiary education sector – in the midst of an identity crisis?

The shift to a more utilitarian perspective on the value of public science interplays with a concurrent shift in thinking about the role of tertiary education. As discussed above, many countries have invested in a considerable expansion of university-style tertiary education, accepting that there is a relationship between greater educational achievement and economic performance. But globally, less thinking has been done on the appropriate balance between research-based tertiary education and other forms of post-secondary training. How much tertiary education activity needs to be undertaken within a research-intensive environment? If the answer is ‘a lot’, then the research enterprise needs to grow. If the answer is ‘not much’, then governments may need to be more explicit about how they distinguish the goals of various types of tertiary institutions and how they incentivise and fund the research system.

As the university sector puts increased emphasis on research to support economic growth or having direct impact in other ways, the mandate of non-university public research organisations may change to ensure complementary roles and a harmonized approach to private sector collaboration. Some countries (for example Denmark and Finland) have addressed this by embarking on a restructuring of their tertiary education and research sectors.   In addition, some countries, such as New Zealand and Denmark, are creating new forms of organisations (e.g. Callaghan Innovation) to support the research needs of the private sector.

As academic researchers get closer to the more utilitarian and applied research sector, several other tensions emerge, and the academic CV becomes less meaningful. For instance: how to assess an academic who is heavily invested in applied research and development in areas that may not produce traditional publications; how to assess the social science academic heavily involved in policy formation; and how to encourage a greater rapprochement between ‘applied’ and ‘discovery’ endeavours in academic research without jeopardising the value and intent of either?

These questions are not trivial. There remains a pervasive attitude within parts of academe that is somewhat disparaging toward those who engage directly with knowledge users, particularly within the private sector. This attitude can have an impact on academics’ choice of research questions, their career paths and, ultimately, their promotion and grant success. Too often, applied research in the university sector is still seen by some as second-rate, yet there is no evidence that this is true. In fact, if bibliometrics are to be taken as the standard measure, our analysis of international citation rates for research done in collaboration with industry is beginning to suggest the opposite.

But even when the quality of output is beyond reproach, issues of potential conflict of interest are often raised on philosophical grounds as a reason to disparage such activity. Potential or perceived conflicts must simply be faced if public-private sector partnerships and utilitarian research are to be effective and encouraged, and they can generally be managed as long as there is full transparency. But real transparency will only be possible if automatically pejorative attitudes to such research are addressed – otherwise the unfortunate tendency not to disclose will remain. The medical sciences have established some mechanisms for better ensuring transparency, such as formal declarations of interest in journal submissions and the use of clinical trial registers.

What is more complex is the culture of academe that remains suspicious of the utilitarian turn, even while recognizing that taxpayers can rightly expect applied outputs from the research they fund. The evolving model sees universities as key tools of economic development, where once they had been considered primarily generators of new knowledge and social commentary, and the evidence-informed conscience of a society. However universities evolve, it is critical that their core role as places of intellectual freedom and knowledge generation is not impeded.

Academic assessment and peer review – where key issues collide

While the challenges of public-private research partnerships have been much debated, far less effort has gone into recognizing and managing the conflicts of interest – conscious or otherwise – that many academics face within academe itself. This is a particular issue in small countries, where the level of competition for funds tends to be more intense and more personal, and where local knowledge can create unacknowledged and unconscious biases.

For instance, in New Zealand we persist in conducting funding peer review locally, when the experience of some comparable countries suggests that all components of the grant peer review process need to be protected from potential conflicts. Science Foundation Ireland, for example, requires that all scientific assessments be conducted by international reviewers. Importantly, this includes the triaging (elimination) step of two-stage grant processes, where potential local biases would be much more likely to operate.

Indeed, as I have previously written, in my judgment the peer review system is under threat, with increased application churn (fewer people to review more and smaller grants) and greater risks of conflict of interest. As I argued previously, and still contend, the peer review system – whether for grants, academic assessment or publications – must evolve. New approaches will be needed to protect the integrity of science, both nationally and internationally. If we do not address this, we may undermine the right of science to claim a privileged role in knowledge production.

The unintended consequences of our current measurement tools

Though the traditional academic CV may now be less meaningful in practice as the science system evolves, it still anchors core attitudes and behaviours across the whole scientific and academic system.  The drive for the ‘right kind’ of citation can impede scientific collaborations even within the same department.

This quest for the appropriate citations is driven in no small part by the “elephant in the room” of academic assessment: the reliance on impact factor and related forms of bibliometrics. The limitations of these methods are well known, and indeed the journal impact factor was not developed for the purpose to which it is now applied. While this is intellectually understood in general, and has been the subject of a recent international declaration by scientists, administrators and editors, the reality is that academic institutions and funding agencies implicitly or explicitly use it as their principal tool in the assessment of individual performance. Journal rankings have come to dominate every field of scholarship, and this has all sorts of unintended consequences. For example, our social scientists may find it to their career advantage to work on global issues that will assist publication in a higher-ranked journal rather than focus on a policy problem that is relevant to New Zealand.
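It is worth recalling just how narrow this metric is. Using notation introduced here purely for illustration, the conventional two-year journal impact factor for year $Y$ is simply a journal-level mean:

$$\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}$$

where $C_{Y}(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_{y}$ is the number of citable items the journal published in year $y$. Because citation distributions are heavily skewed, this mean is typically dominated by a handful of highly cited papers; it says little about any individual article, and still less about any individual author – which is precisely why its use in assessing individual performance is so problematic.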

The hyper-awareness of journal impact factors is now actively incentivised in many academic environments. This, together with the rush to undertake profile-raising priority research, may be creating an enabling climate for unacceptable scientific misconduct or just plain bad science. Another factor also emerges in the rush for priority: as John Ioannidis has pointed out, we tend to see too many small and underpowered studies, with consequent false negatives and positives. We have become more aware and open about these issues, but we continue to struggle to find the best way to address them.

In addition, the massive expansion of the scientific literature (more than 2 million papers are now published each year, roughly one every 16 seconds) means that traditional filters to identify quality have become much more difficult to apply. For all the problems it creates, the impact factor remains the most common way of assessing journal quality, but even the highest-impact journals publish much research that cannot be replicated, or that is never cited. With the rapid expansion of the global science community, these journals also have very low acceptance rates, which further drives the ‘impact factor chase’.
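Taking the two-million figure at face value, the rate is easy to verify:

$$\frac{2 \times 10^{6}\ \text{papers/year}}{3.15 \times 10^{7}\ \text{seconds/year}} \approx 0.063\ \text{papers/second} \approx \text{one paper every } 16\ \text{seconds}$$

No reader, reviewer or editor can keep pace with a literature growing at that rate by traditional means alone.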

A recent issue of Nature discussed many of these problems. The solution is not obvious, because we are trying to use a dubious metric to assess something that is probably best assessed qualitatively rather than quantitatively. On the other hand, with the plethora of new journals, we need some way of labelling those that are credible to the scientific community. Further, as peer recognition is a major driver of good science, there is indeed a need for standard processes to assess one’s contribution to the body of scientific literature, but the solution must be a concerted international one, commensurate with the global nature of science.

While the issue of scientific review and assessment is an international one, there are implications that resonate much closer to home. A researcher’s publication output builds reputational capital, attracts trainees and funding, and brings profile to the university. These factors, in turn, are reflected in one of the New Zealand government’s principal funding tools for universities, the Performance-Based Research Fund (PBRF). While the PBRF has clearly had a number of very positive effects in increasing individual and institutional academic research performance, it has also had some unintended consequences, which need to be considered and which have been seen in similar funding systems elsewhere.

For instance, funding schemes like the PBRF tend to result in de facto ranking processes, often promoted by the institutions themselves. These can drive a strong institutional focus that can, especially in a small country, limit collaboration between researchers from different universities. It is the paradox of small-country science that we seek, on the one hand, to reward excellence and high performance – which requires a degree of competition – and, on the other, to encourage the necessary ‘strength in numbers’ collaboration. In practice, these can be quite opposing goals, and this creates real difficulties for policy makers.

Interestingly, though, if New Zealand universities could see beyond their institutional focus, I suspect that they could in fact be better protected by networked collaboration, given the changing academic landscape arising in part from the widespread use of international ranking systems by potential students, faculty and industry, and from the rapid emergence of massive open online courses (MOOCs) offered by “marquee” institutions.

The Impact of the Internet – social media, the ‘open’ movement and networked science

Our current system of public science evolved largely in the 1960s-1980s and was based on relatively few journals, a clear but informal pecking order of journal importance, and a much smaller scientific community – one far more isolated from the wider community and the private sector. Pressures to publish or perish were not quite the same, and the grant success rate was much higher than it is today.

In recent decades, however, the virtual world has had profound effects on all aspects of the scientific endeavour. The effects are felt not only in the dissemination of research and traditional measures of impact, but also in the ways that ideas can be generated and shared; how data are collected and used; and even how research is funded. All of these changes pose significant potential opportunities, especially for the science sector of a small and geographically isolated country such as ours. But there are also new challenges.

For instance, what will published research citations mean when Twitter, the blogosphere and unrecorded downloads to the public and industry become the major way that knowledge – regardless of whether or not it is scientifically sound – is diffused?

Emerging methods of analysis based on publication metadata (altmetrics) can now take into account not just paper citations but also references in databases, online views, downloads and social media mentions. However, these metrics are not without criticism: social media mentions can be bought, and self-downloads cannot be monitored. Are we just replacing one pseudo-objective measure with another? And what of assessing the quality of the published (or self-published) research?

Indeed, the rise of the internet has affected science publishing at a fundamental level, most notably through the growth of the Open Access movement.  With publications being made available online, harnessing the technology to make them freely available has been the next logical step.  In general, this has been welcomed in the major knowledge producing countries, but its broader implications are not well discussed or understood.

Certainly, Open Access publication satisfies arguments that publicly funded research should be available for free. But the burden of the costs associated with publication has now moved from the reader onto the researcher. In turn this has created a business model that has led to a plethora of journals of debatable process and quality. Many of these will simply charge for the publication of virtually anything that is submitted, and they are not always easy to distinguish from properly peer-reviewed journals (as recently demonstrated by the Bohannon experiment). In an effort to curb this destructive practice, some disciplines have experimented with post-publication online review. Finding novel solutions is important because the burden on reviewers is real, the pressure on authors is greater than ever, and the system needs a serious rethink.

At the other end of the ‘open’ science movement is the question of access to data. Open access to data is not the same as open access to publications, and the two issues should not be conflated. The issues associated with data access are far more complex than those related to publication access, yet because both are often considered similar public goods, it is assumed that they require similar approaches. Just because the internet makes information accessible, it cannot be assumed that the consequences of that access are similar.

In the case of research data, scientists obviously want recognition and control over data they have painstakingly collected. But they also rely on access to data generated by others in order to advance their own work and, in turn, the global knowledge base.  The rules change when access to data is not based on collegial consent but is assumed to be a free public good; provisions need to be in place to protect existing or potential intellectual property, as well as good data stewardship practices where the personal information of human research participants is concerned.

The issues become much more complex when the private sector has partially or wholly funded the data collection and claims property rights to the data. With governments increasingly encouraging greater public-private research collaboration as a central pillar of the public research endeavour, we need to attend to the emergent tension between calls for open access to data and the private sector’s property rights. In most cases these tensions can be addressed through early patenting and disclosure. But the arguable right of industry partners to withhold access to full datasets – beyond what is relevant to immediate publication – on the basis of their potential for subsequent discovery can cause conflict and put the collaborating academics in a difficult position. Clearly this is an area where we have yet to fully work through the competing viewpoints.

Open access to data has received most attention in relation to clinical trials, where the issues have been given considerable prominence by authors such as Ben Goldacre and by Richard Horton in the Lancet. Indeed, there are compelling arguments not just for registering clinical trials, but also for ensuring access by regulators, researchers and perhaps the public to the resulting datasets (including, and perhaps especially, negative data and information about trial termination).

This issue of access to negative results is complex and inextricably linked to how scientific performance is incentivised and rewarded in both the public and private sectors. It comes down to the fact that researchers are less likely to put effort into publishing negative results, and journals are less likely to accept them. The net effect is publication bias and, arguably, a skewed assessment of the true state of knowledge on a given topic.

Many in the science community, as well as informed commentators like Ben Goldacre, are pushing these concerns onto the public stage, where they belong. There is a need for greater public awareness and debate among all stakeholders – including those with legitimate concerns about balancing the protection of the public good with that of commercially sensitive information.

Quite apart from the ethical and commercial questions associated with data access, there are also unprecedented opportunities in the era of networked research or ‘big science’. The internet has enabled the creation of enormous databases and the collaboration of huge consortia of researchers across multiple jurisdictions. It is not uncommon for papers in molecular biology or clinical science to have hundreds of authors. Indeed, one only has to think of the army of scientists across the globe – including in New Zealand – involved in the work at the Large Hadron Collider (I recently visited CERN and was impressed to see how much the contributions of scientists in New Zealand, particularly at the University of Canterbury, were valued). Such collaboration is to be welcomed, yet it also creates a new reality in which it is more difficult to assess the particular scientific contribution of an individual based on the traditional academic CV, or to find a recognition system that applies across multiple disciplines.

But then again, in some corners of the internet, there is now a growing movement where the traditional academic CV may not even matter, and where it is the idea – even that of a relative outsider – that counts. Witness the advent of both ‘big donor’ prize money and crowd-sourced funding for innovative new ideas and feats of science and engineering (e.g. Xprize, the Launch Challenge, and Microryza, among other burgeoning “Kickstarter for science” models). Though still in their infancy and not widely used, these prize and crowd-sourced models are game changers for the traditional research sector. They are lauded by many for their potential to democratise the scientific endeavour and to reward higher-risk research and innovation. However, it is not clear how issues of research ethics and scientific integrity are built into such new systems, and this requires some critical thought. Nonetheless, such open models of research funding, made possible by the internet, do give pause to consider what is wrong with our current system and just what is possible.

Final comments

Scientists are humans, and as creative souls much of our reward system is intrinsic and comes from recognition by peers. But as the enterprise has grown in both size and importance to society, the pressures on scientists and the science system have changed. This changed culture of science has had very positive effects, such as the greater focus on societal and economic impacts, but it is also creating real and apparently growing challenges in how research is undertaken, utilised and trusted. Important components of the science system will have to adapt over coming years, but how exactly is not at all clear. Science advisors, policy makers, national academies, funders, the academic community, and R&D-driven industries need to engage together on these matters – they are not for any one component of the system alone. Many need global, not just local, attention.

In this essay, I have explored some of the concerns I have had as both an active participant in and an external advisor on our national science system.  I do not wish to suggest that my collection of observations is in any way the robust analysis that these issues deserve.  Rather, I have tried to convey the issues that have emerged from the diverse consultations, conversations and observations that I have undertaken within the science system over recent months and which merit further reflection.

I welcome comments and input on these issues. Please email to info@pmcsa.org.nz with “the culture of science” in the subject line.
