18 October 2022
Sir Peter Gluckman
President, International Science Council
The International Science Council is the world’s primary NGO for promoting the global voice of science and integrating the natural and social sciences. It comprises most of the world’s scientific academies and international disciplinary bodies, including the various natural science unions and social science associations. It sponsors many international research programmes, scientific committees, and affiliated bodies. It has deep and increasingly effective relationships with many UN agencies. It sees its roles as being, first, the interface between the science system and the multilateral policy system and, second, reflecting on the evolution of science systems. At times it has played, and will likely in future play, an important role in track 2 diplomacy, which will be increasingly needed in a fractured multipolar world. It is an organisation undergoing a rapid transition as science is increasingly challenged in a world of misinformation and as the existential risks to the global commons have become increasingly urgent to address. Trust in science is essential. Both you as publishers and we as scientists must work together to enhance it, and we need to be honest about the changes that may be needed.
In talking about the two major components of public science – your industry, and the industry of doing public science – and their interrelationship, I am going to be rather frank and self-critical.
But before we start, we need to be clear about what defines science if we are to have useful dialogue as that must frame how we think and act.
The modern understanding of what science is has evolved a long way since the scientific revolution and the Enlightenment, and beyond the somewhat narrow Popperian view of falsifiability. Philosophers of science now define science by those characteristics that make it a distinct form of knowledge: it is systematically organised, subject to the scrutiny of peers, and rationally explicable. Notably, no other explanations, such as tradition, belief or the supernatural, are permitted. Because science is defined in this way, it is not a fixed knowledge system, but one that is self-correcting and evolving. For example, it would be difficult to argue that most medicine of the 18th century was science. Indeed, evidence-based medical practice and systematic explanations of pathology and pathophysiology only emerged in the 20th century.
This working definition places the open record and the open scrutiny of peers at the centre of what defines science. In this context, the publication of the record of science is critical – hence the importance of scientific publications and of your industry. But there are big buts, to which I will return.
But first let me reflect on the conduct of science.
Over the 77 years since Vannevar Bush published his seminal report The Endless Frontier, countries have invested increasing amounts of taxpayers’ funds in science to promote economic, technological, military, health and, to a lesser extent, environmental and social outcomes.
In my early career in the 1970s, the industry of academia was small and the incentives on both sides of these two industries were rather straightforward. The volume of papers was small, and they were indexed in the Current Contents booklets distributed weekly. The number of journals in any field one might publish in was perhaps ten or fewer, and the journals of record were well understood without the need for metrics.
Garfield, the publisher of Current Contents, was of course the one to start us down the road of applying metrics to scientific publications, and so the bibliometric disease emerged.
But as tertiary education grew, and in the 1980s with the emergence of spinout companies, first in the life sciences and then later in the physical and digital sciences, the science enterprise rapidly expanded. More countries and a more diverse range of colleagues engaged more fully in the global enterprise of science.
The bibliometric disease may have started small, but it soon became a pandemic, and arguably a rather virulent one. Universities started using metrics somewhat naively for performance management, promotion assessment, and tenure decisions. Institutions encouraged the use of metrics and pseudo-metrics in rankings for their reputational enhancement. In some places, staff were financially rewarded for high-impact papers.
Then governments saw this proxy data as a way to ‘performance manage’ their higher education institutions, and so a very mixed range of incentives emerged. And where there are incentives, opportunities – not always benign – inevitably emerge.
Impact factors, citation rates, H-indices and then altmetrics became the language of the interface between the two interlocked industries of academia and publishing. Increasingly, individual scientists sought more proxy points by deciding where to publish and how much to publish; their careers depended on how their employers and their funders saw it. A feedforward loop was formed in which H-indices and impact factors determined not just peer esteem but funding and employment. And as the assessment of performance and quality is inherently subjective, the use of these apparently objective measures became a way out for everyone. But most of the community knew the inherent subjectivity of these measures.
The publishing industry fed off this opportunity. Your ability to exploit it grew, with unpaid and generally unrecognised editors and unpaid peer reviewers, while the number of authors paying for the privilege of publishing grew and grew.
Open science is critical: it has opened up access to those who previously could not get it. But in many systems it has shifted some of the costs from institution to author, meaning, from the author’s perspective, less money for their research.
And in this interacting set of feedforward loops, the business of making money from naive authors grew. We saw the emergence of predatory journals, which masquerade as legitimate outlets and deceive so many, especially in the global south. We saw the emergence of many journals, many published by you, that might be a place of record but where the readership is minimal and peer review is not necessarily substantial.
New models of publishing are emerging – for example, preprints with or without peer review – an area the ISC is presently working on, to emphasise the absolute need for peer review. But regarding preprints, the question must be asked: is their rise driven by competition for primacy and attention, or does it genuinely reflect the urgency of information release? Depending on how it develops, the preprint enterprise could be good or bad for science.
Last year about 3 million scientific articles were published in about 35,000 journals. Most of those papers will not be cited or used, and their primary value is, if we are honest, to their authors. Some of that high volume is very valid – for example, for an early-stage researcher, or for research of primarily local relevance. But the volume of papers of low quality and impact being published, even in high-income countries, suggests some deeper issues, in part related to incentives – often institutional. And despite DORA and like-minded declarations, the very institutions that sign up to them continue, at least implicitly, with the same behaviours.
At the same time, science is evolving – with the emergence of big data science and of transdisciplinarity involving a broader range of interdisciplinary interfaces. These too need outlets to make science an open record. It is critical that the data sets of science are not put behind paywalls, as some in your industry are suggesting. Sadly, neither the funding systems nor the publication systems serve these new areas well.
My own research is at the transdisciplinary interface. For me, major outputs include reports and interactions with communities and policy makers. The current publishing model does not serve that actionable knowledge sector well, and I suspect that in many areas of science there will be a shift away from the journal article as the only place of record, as we are seeing now with data repositories.
The definition of science I started with places scrutiny by peers at its centre. That is critical – but while the institutions of peer review developed by both our industries over the last 70 years have served us well, they are becoming overwhelmed. We need to think of new models and approaches.
Dame Bridget Ogilvie, former director of the Wellcome Trust, once spoke to a simple aphorism when receiving an honorary doctorate from my university: ‘bad science is a waste of money’.
Thus we need to reflect on whether this complex interplay between our two industries is potentially damaging science and reducing its value to society. I hope not, but we must ask the question, as the world needs science more than ever. We must work harder to ensure greater trust in, not more cynicism about, the role of science.
Science is critical to every aspect of the human and planetary condition – be it dealing with climate change, social cohesion, conflict, mental health or environmental degradation. And if we look back to the principles that define science, then we need a robust system of ensuring knowledge is open and subject to scrutiny. That is why the ISC, at its last general assembly, established eight principles that must define how science and science publishing co-evolve. I want to close by reiterating them.
First, there must be universal, prompt open access to the record of science, both for authors and readers, with no barriers to participation, in particular those based on ability to pay, institutional privilege, language or geography.
Second, scientific publications should carry open licences that permit reuse and text and data mining. Too much of the record of science is inaccessible for reuse and the application of modern methods of knowledge discovery because of restrictive licences and paywalls.
Third, rigorous, timely and ongoing peer review must continue to be central to creating and maintaining the public record of science. Conventional peer review is foundering. We need to rethink its processes thoroughly.
Fourth, the data and observations on which a published truth claim is based should be concurrently accessible to scrutiny and supported by necessary metadata. Data are in principle as important an output of science as text articles. They should be concurrently accessible under FAIR (Findable–Accessible–Interoperable–Reusable) principles and with securely managed routes to access where general access needs to be restricted because of considerations of safety, security, ethics, or privacy.
Fifth, with the demise of the physical ‘library of record’, it is vital to develop digital means of ensuring sustainable, enduring access to the global record of science and the means of identifying and accessing its content.
Sixth, we need to recognise the growing diversity in science: given the diversity of needs, whether of discipline or geography, no one size or language fits all.
Seventh, as we all recognise, the model of the paper journal is still dominant. It will need to be replaced by more efficient and flexible forms that exploit the developing capacities of digital technologies.
And finally, most complex yet most critical, we need to find a path by which the governance of the processes of disseminating scientific knowledge becomes more accountable to the scientific community. The scientific record is our key tool; yet the incentives in play, as I have discussed, have created deep issues in both industries and their interaction. Science must be a global public good. Systems can only change if we look at the underlying incentives, and there are many stakeholders.
We may not agree on everything as scientists and publishers, but we must work through our different perspectives and see how we can better enshrine these principles. This is key to science progressing as a trusted means of better understanding the world around us and within us. Only with trust can science truly make its promised contributions to the existential challenges of the future. The solutions are not easy, but constructive dialogue is needed.