
Open and autonomous. The basis for trust in science

While it is still too early to assess the exact proportions and repercussions of the COVID-19 pandemic, it seems undeniable to me that it has a historic dimension. There will be many respects in which future historians will divide epochs into a “before” and an “after Corona”.

One of those respects in which COVID-19 is a game changer is the role of science in society. Never before has the global reaction to a devastating catastrophe been so crucially dependent on the speed and success of scientific research. Politicians in many countries let it be known that scientific results were essential for their decision making. Science-related news was consistently in the headlines for much of the year 2020, at times even outshining reports on the acrimonious fight for the US presidency.

At the Council for Sciences and Humanities (Wissenschaftsrat), we immediately began to ask ourselves what this new level of attention, and the pressure it exerts on a number of scientific fields, mean for science as a system.

At first sight, this might seem an almost frivolous question to ask. Is this not a Kennedy moment, a moment best captured in the famous quote from John F. Kennedy’s inaugural address:

“Ask not what your country can do for you, ask what you can do for your country.”

And indeed, in asking what the pandemic means for science, at the Council we are not asking whether it is good or bad for science; we are asking what it tells us about science.

As has been noted by many people before, the health crisis and its social and economic repercussions have had such an enormous impact on virtually every part of our social and economic systems that we can consider it a kind of natural experiment. The year 2020 laid bare strengths and weaknesses of our societies. And it also laid bare strengths and weaknesses of our system of science, which became visible as if viewed under a magnifying glass.

We should not miss the opportunity to learn from that. Not all of us are involved in the immediate response to the crisis. Those of us who are not should pay close attention. We should prepare for a speedy recovery, and we should start thinking about how we can improve the system. For many of the weaknesses and defects that became visible during the pandemic are not easily mended, but will take years of hard work to repair, reconstruct, and redesign. As an advisory body that assists the federal government and state governments in Germany with reports and recommendations on the organisation and funding of research and teaching, the Wissenschaftsrat took it as its duty to identify the most important issues that have to be addressed in the near future.

One of the lessons learnt from the pandemic is that complex and globally connected societies are vulnerable to new kinds of crises. And while there are reports on emergency exercises that simulated pandemics closely resembling COVID-19 years before the actual crisis set in1, not all kinds of crises can be anticipated in any detail. Consequently, “resilience” is the word of the day, meaning the ability of societies to react speedily to unanticipated challenges, to recover from crises, and to re-invent themselves in light of the experience gained.

With respect to the system of science, this raises two separate, but intertwined questions: How can the system of science itself be made more resilient, enabling it to play the role it has to play in times of crises? And how can science contribute to the resilience of societies? These questions are connected, as only a resilient system of science can help society at large to become more resilient. And they are a good starting point for thinking about the future role of science in society.

Today, I would like to focus on one important aspect of this future role – the “transmission belt”, as it were, that links the inner workings of science with the capacities that our societies can draw upon in order to master crises like these. That transmission belt, as the title of our conference implies, is trust. For what the pandemic has demonstrated is that the extent to which science can contribute to resilient societies is crucially dependent on the extent to which politicians and the public are willing to take both the results and the uncertainties and non-results of science into account.

It depends on that willingness because the very existence of scientific institutions and of science as a profession represents what one might describe as a division of epistemic labour that pervades our societies2. Science is the social sub-system which specializes in formulating novel knowledge claims, putting those claims to the test, and inferring the implications of those claims.

As science becomes more and more specialized, it takes years, even decades to learn how to participate actively in that process. As a consequence, there is an asymmetry between scientists, if and as long as they are talking about their field of specialization, and non-specialists, including scientists from other fields of research.

Accepting the state of the art, as defined by a community of specialized researchers, therefore means, for anyone who is not a member of that community, forming a belief whose reasons one can neither fully comprehend nor competently criticize. In other words, accepting the word of an expert as more reliable than the opinions of a layperson depends on our trust in the expert.

Among the few encouraging experiences from 2020 was that in times like these, trust in science is still widespread. Physicians and scientists have always been among the most respected and most trusted professions. This was a good basis to build upon, and was important for the immediate reaction to the pandemic.

Surveys from the Wissenschaftsbarometer showed that in Germany, trust in science has greatly increased over the spring and early summer of 2020. By April, the proportion of people who fully trust or rather trust in science had received a boost of more than 20 percentage points compared to the 2019 baseline3. International surveys suggest that similar effects could be observed all over the world, although it has to be said that there have been huge and remarkable differences across countries and regions4.

In some countries, low levels of trust in society at large and a tendency towards polarisation along political divides have not spared science. Consequently, the measures taken on the basis of scientific recommendations have been intensely disputed, leading to lower levels of compliance. Without claiming causality here, the lack of trust even seems to be correlated with higher levels of infections and deaths.

As we see light on the horizon, with the first people being vaccinated in several countries in recent weeks, attention is now shifting to questions of logistics, but also to the question of whether enough people will be willing to be vaccinated in order to reach herd immunity.

Trust in science is not only crucial for the success of non-pharmaceutical interventions, which depend on many people changing their habits. It is also crucial for a successful vaccination campaign. Given this, it is some cause for alarm that the level of trust in science is about to revert to the mean5. How we will run the last miles of this marathon is still very much an open question, and as last year, we might again see a huge amount of variance across countries.

Thus, the pandemic has demonstrated to us that trust in science is an important societal resource. Without trust, there is no consensus on the situation and its challenges, as any detailed description of our complex societies and their environment is dependent on scientific methods of measurement and data aggregation. Without trust, there is no consensus on the need for action, as any assessment of possible scenarios depends on scientific simulations. And without trust, there is no consensus on the effectiveness of measures nor on the extent and likelihood of unwanted side-effects. In the end, in a society devoid of trust in science, the scientific endeavour is little more than a wheel spinning in the void.

For this reason, the pandemic gives us reason to ask what is the basis of trust in science, and how we can foster and strengthen it. Surveys suggest that for many people, the basis for trust in science unsurprisingly is their belief that scientists are highly competent and abide by the rules of scientific research.

However, it is interesting to see that among those people who have lower levels of trust in science, the primary reason for mistrust is neither scepticism concerning the epistemic reliability of the scientific method nor a fear that scientists are incompetent.

Instead, the most common reason for mistrusting science is the belief that scientists are not acting independently, but are susceptible to illegitimate influences. In particular, a common reason for concern is that scientists are perceived to depend on research funding from private companies and, as a consequence, are thought to be motivated to serve interests of those companies6. In some countries or whenever specific issues are involved, the concerns might be different, being more focused on the question of whether or not scientists are politically independent.

The lesson, however, is the same: The primary motivation for many of those who do not trust science seems to be that scientists, too, have very earthly motives; and that they might be just as susceptible to motivated reasoning as anybody else. To put it succinctly, what is most commonly undermining trust in science is not scepticism concerning Science writ large. On the contrary, most sceptics compare the actual behaviour of scientists to their ideal of what science should be like, and find it wanting. In other words, the primary reason for mistrust in science is the belief that scientists are not scientific enough.

Now this is a conference on academic publishing, so you will probably ask what the question of trust has to do with the system of scientific publications. Quite a lot, in my opinion.

First of all, science is a system of collective knowledge production. Only through the exchange among researchers about hypotheses, methods and results can this collective endeavour exist at all. Publications serve this purpose, and to that extent, the system of publication is a critical infrastructure for science. We could not trust in science if such a critical infrastructure were in bad shape.

This does not mean that the system of publication is of interest only to scientists, however. Publications are also the primary medium through which science has an impact on society. Of course, journalists, politicians, entrepreneurs, administrators or lay people do not usually turn to scientific publications if they want to know what “science says” about some issues. They seek out specialists and ask for their assessment. But as soon as important decisions depend on such assessments, the published record serves as a benchmark for the state of play in a field of research.

In this sense, scientific publications represent our stock of knowledge as a society. The bedrock of trust in science is that this published record of science must be reliable. This is why quality assurance is and remains central to academic publishing. In addition, reliability also requires provisions for long-term archiving, unequivocal versioning and referencing.

However, it is of course not as simple as that – scientific knowledge is not just some intermediate product that can be stock-piled for future use. As we all know, and as the general public learns on the occasion of public controversies and major revisions, scientific knowledge changes over time. Sometimes, as the developing knowledge on SARS-CoV-2 and COVID-19 has shown over the course of the last 12 months, it changes very quickly. And it does so because it is being challenged time and again by other researchers. Methodical scepticism is one of the prime virtues of a scientist. The question therefore is, how can we ask politicians and the public to trust in science while we regard even the best of science as fallible?

The answer is probably obvious to you: it is transparency. The preliminary status of knowledge, the amount of uncertainty and the status of controversies have to be made public and be explained time and again. There can never be enough in the way of explanation. Research on science communication shows that it does not help to try to “protect” the public from complexities and confusing inconsistencies. Not everyone wants to go into all the details; but nobody wants to live with the feeling that crucial information is hidden.

From that point of view, it is important to regard the transformation of scientific publishing towards Open Access as a building block for a more general Open Science strategy. Trust depends not only on the accessibility of final publications; it also depends on the transparency of the processes that lead to these publications. The status of manuscripts, preprints or presentations that can be found in repositories or on some personal webpage needs to be made transparent. In doing so, we can show how publication, criticism, and revision are part of a process that leads to an ever improving and growing knowledge base. And in doing so, we can demonstrate why such a shifting knowledge base is the best that we can achieve despite remaining open to revision indefinitely.

Of course, this is a lot easier if members of the public are scientifically literate. Science education is an indispensable basis for trust in science, and for science to perform its role in society. Also, trust in science is a challenge for science communication and science journalism, both of which are susceptible to their specific kinds of defects. Science communication activities are in danger of becoming public relations campaigns if they try to hide uncertainties and controversies from view. Conversely, science journalism is in danger of exaggerating the importance of dissenting positions in order to stir up a controversy that makes for a good story. Scientists and publishers are equally responsible for working together with communicators and journalists in order to make sure that we avoid these pitfalls.

So much, I think, is uncontroversial. An effective system of quality assurance, transparency about uncertainties and controversies, and good science communication are important as a basis for trust. If politicians and the public have a good understanding of how science works, they will hopefully feel more comfortable with controversies in science and will not easily fall prey to exaggerated claims. However, transparency about how science works does not suffice.

As we repeatedly learn from surveys and psychological research, trust depends not only on confidence in the workings of the system. Yes, a good system, as Bernard de Mandeville famously said, might be intended to convert “private vices” into “public benefits”. But even the best system can be gamed if someone is determined to work against it. People know this. And it is therefore hardly surprising that trust depends not only on an understanding of the system and how it works to produce the best possible results. It also depends on the motives that the people working within the system are perceived to be driven by.

This is not a big revelation. After all, trust is first and foremost an interpersonal relation. We trust in people if we are confident that they will not deliberately act against our interests; and we do not trust people if we know that their interests are at odds with ours, and believe that they are willing to act upon them. Rules and other safeguards can make us feel secure; but trust is precisely the expectation that the other will do no harm even in the absence of such rules and safeguards.

For this reason, the question of trust casts new light on the autonomy of science. We are perhaps too accustomed to thinking of the autonomy of science primarily in terms of protecting the system of science. And it is indeed important to make sure that the institutions of science are strong enough to follow their own rules.

The question of trust in science, however, reminds us that autonomy is not some abstract quality of the system, but depends on how the people that make up the system act. And for the overwhelming majorities in our societies who cannot directly observe the behaviour of scientists, the best way to form expectations about that behaviour is to guess which motives drive them.

If we take this to heart, it follows that we have to address problems with the system of scientific publication not only because they degrade the quality of scientific publications. We should also ask ourselves what they reveal about the motives of the people involved. Let me just briefly discuss two examples.

From the perspective of trust, the real danger of predatory publishing is not that it might spread low-quality research. There might be remedies for that, and if these are not perfect, one might still point to the tiny proportion of publications that appear in dubious journals, or to the fact that these are almost never cited by respectable researchers.

But what remains problematic about predatory publishing even when the number of cases is low are its symbolic qualities. The phenomenon of predatory publishing reveals that publishing has become an end in itself because indicator-based systems of resource allocation and promotion reward mere quantities of publications. Profit-maximizing behaviour on the part of irresponsible publishers takes advantage of the indicator-maximizing behaviour on the part of researchers who live under the impression that the system of science is ruled by the “publish or perish” imperative. It is quite understandable that this worries politicians and the public. The simple fact that scientists feel pressured to publish at all costs, and that the system offers opportunities to do so, suffices to cast doubt on the whole system.

The question of trust also casts light on the issue of plagiarism. Instances of plagiarism are among the most prominent scandals involving science and scientific publications. Here in Germany, the careers of several politicians have been dented or terminated due to accusations of plagiarism.

The question why plagiarism draws so much public attention deserves some scrutiny. After all, unlike the fabrication and falsification of data, plagiarism in itself does not degrade our stock of knowledge if the source publications conform to the standards of good scientific research. Internally, we consider plagiarism a violation of the standards of good scientific practice because it undermines the system of reward allocation on which the reputations of scientists are built. Viewed from the outside, however, plagiarism raises another problem: It reveals something about the motives of people who are part of the system.

Cases of plagiarism show that authors of scientific publications can be driven by strong external motives such as earning a degree, being promoted, increasing their salaries, or just improving their reputation; and that these external motives are stronger than the mechanisms in place to enforce good scientific practice. It is therefore important to take measures which prevent plagiarism, and sanction those who are involved. To be clear, I do not condone the spirit of witch-hunting that is sometimes associated with plagiarism-detection. However, it would justly undermine trust in the system of scientific publishing if it were perceived as tolerating such practices and, by implication, possibly different kinds of violations of good scientific practice as well.

As the cases of predatory publishing as well as plagiarism show, trust in science is jeopardized if science is painted as an arena where publicity seeking sensationalism, profit maximizing business-interests, or the struggle for individual advancement are the dominant motivations.

If we look closely at what we learned in the course of 2020, I think there is another small lesson to be drawn from the surveys: For we have not only seen increasing trust in science. If we go back to the reasons for mistrusting science, we have also seen that fewer people mistrust science on account of scientists’ perceived dependence on funders7. And while there are no explicit data on that, I think the main reason for this change is that the pandemic has amply demonstrated to the general public how strongly devoted scientists generally are to promoting the common good.

In thinking about the future of academic publishing, it is therefore not only important to think about how we can make publishing work better. We also need to pay attention to what the mechanisms of publishing reveal about ourselves, lest we spoil the positive image which science has gained over the course of the pandemic.

With this experience in mind, the transformation to open access is an opportunity to work together to create an ecosystem which maximizes the use and quality of scientific publications. In this respect, the advantages of a system where publications are immediately being made openly accessible seem undeniable.

But the transformation to open access is also an opportunity for a public discourse on the purpose of academic publishing. We should welcome it as an opportunity to demonstrate that science is about creating new knowledge and making it available to society as a public good. And we should strive for a pluralistic ecosystem of academic publishing that is not primarily driven by indicator-optimizing behaviour and profit maximization, which would undermine this public-good character of scientific knowledge.

It is in this spirit that the German Council for Sciences and Humanities has established a working group on the transformation of scientific publishing. We expect that working group to draft recommendations on how to support this transformation, which are to be published by October 2021 as our contribution to an important discussion.

I am confident that we will find many allies among the participants of this conference when it comes to shaping this transformation for the good of science and society. We are looking forward to cooperating with you, and now I wish you lively debates and a fruitful conference.

Notes

1 As recently as October 2019, the Johns Hopkins Center for Health Security together with the World Economic Forum and the Bill & Melinda Gates Foundation hosted a high-level simulation exercise for pandemic preparedness and response dubbed “Event 201” in New York, simulating a pandemic scenario caused by a highly transmissible coronavirus disease. Cf. https://www.centerforhealthsecurity.org/event201/.

2 Philip Kitcher: Science in a Democratic Society. Prometheus Books, New York 2011, p. 20 ff.

4 Pew Research Center: “Science and Scientists Held in High Esteem Across Global Publics”, September 2020. https://www.pewresearch.org/science/wp-content/uploads/sites/16/2020/09/PS_2020.09.29_global-science_REPORT.pdf.

5 Among the German populace, the proportion of people who declared that they would “rather” or “fully” trust in science and research declined from an all-time high of 73% in April 2020 to 60% by November that year, cf. Markus Weißkopf & Ricarda Ziegler (eds.): Wissenschaftsbarometer 2020, November 2020. https://www.wissenschaft-im-dialog.de/fileadmin/user_upload/Projekte/Wissenschaftsbarometer/Dokumente_20/WiD-Wissenschaftsbarometer_2020_Broschuere_final.pdf.

6 Weißkopf & Ziegler, loc. cit.