The COVID-19 pandemic has confronted society with a range of issues, dilemmas and challenges. One topic that has attracted considerable attention has been trust in science. Whilst a majority of people have shown great faith in scientific work and have applauded the arrival of a vaccine realized through scientific endeavor, a significant minority has challenged the opinions of scientists and the reliability of their research findings. This minority argues that scientists and their science are flawed: biased, unsound, and captured by commercial and other interests. It has resisted the introduction of governmental measures based on scientific data and, in doing so, has challenged the legitimacy of government.
The research that we publish in this journal has not stirred this level of societal debate. But, at the same time, the question of trust in academic work is playing an increasing role in our field. The erosion of trust in social science relates more to a series of high-profile cases of academic fraud, often driven by the desire of ambitious individuals to perform well in an academic world that is increasingly focused on measurable metrics, such as the H-index (for some interesting analyses see: Budd, 2013; Butler et al., 2017). In some countries, there are even direct financial incentives connected to the publication of articles in highly ranked journals, and this in turn may encourage some scholars into bad scientific practices.
In view of the need to maintain trust in science, a variety of measures have been proposed and are being implemented. More emphasis is being placed on ‘research integrity’, and some journals demand that research has been reviewed by an ethical board. There is a call for more ‘research transparency’, which translates into an obligation to make original datasets openly available so that others can check the reliability of the research processes and findings presented in an article. There is also an emphasis on providing transparency about the funding of research and whether those funding research may have shaped research outcomes. Increasingly, journals are putting mechanisms in place to check whether co-authors have been actively involved in the generation of a manuscript and what their role has been.
The range of formal measures being introduced by journals is understandable, but these measures bring with them certain risks. The biggest risk is that the very measures intended to generate enhanced trust in academic work will perversely undermine this trust. The dynamic around trust has been analyzed comprehensively by Michael Power in his book exploring the ‘Audit Society’ (1997). Here, he argues that an increased emphasis on bureaucratic mechanisms to create trust can backfire, since these mechanisms are based on a starting point of mistrust. For the academic world, this could mean that the increased emphasis on openness and transparency will actually result in a climate where there is little room to discuss how science really works and how researchers deal with the difficulties that they encounter in their work. The formal reporting of scientific outcomes will become increasingly ‘decoupled’, as Power calls it, from actual practice.
So how do we as Editors of Information Polity address issues associated with trust in science? We are well aware that there is an increased emphasis on bureaucratic procedures, but our position is that these should only be introduced to help authors construct their own foundations for trust in their work. Authors may indeed make datasets available, and for certain research papers this will be helpful to the reader. At the same time, we do not believe that this type of transparency should be compulsory, as it may not work with all scientific approaches (see Jacobs et al. (2019) for an overview of the deliberations on qualitative research transparency within the political science community). The move towards the publication of accompanying datasets can be very beneficial to the research community, but we need to realize that this type of transparency does not fit other kinds of research, such as more qualitative or theoretical work.
As Editors-in-Chief of Information Polity, we do not reject procedures that will contribute to trust in science, but we would like to emphasize that these procedures are instruments for building relations of trust and should only be introduced in ways that are respectful of the diverse nature of the scientific process. These relations of trust are crucial within and beyond scientific communities, and the instruments should only be used if they help build such relations. This is why we encourage authors to build a clear case for trust in their work, and we facilitate this by enabling authors to make, where appropriate, additional material and data available. In this way we aim to build a strong research community based on trust in academic work. We encourage everyone to assume their own responsibility, and we are very open to any suggestion that will help to further strengthen trust in the academic work that is published in this journal.
E-mail: [email protected]
E-mail: [email protected]
Budd, J. (2013). The Stapel case: An object lesson in research integrity and its lapses. Synesis: A Journal of Science, Technology, Ethics, and Policy, 4(1), G47-G53.
Butler, N., Delaney, H. & Spoelstra, S. (2017). The gray zone: Questionable research practices in the business school. Academy of Management Learning & Education, 16(1), 94-109.
Jacobs, A.M., Büthe, T., Arjona, A.M., Arriola, L.R., Bellin, E., Bennett, A., Björkman, L., et al. (2019). Transparency in Qualitative Research: An Overview of Key Findings and Implications of the Deliberations. SSRN Electronic Journal, July. doi: 10.2139/ssrn.3430025.
Power, M. (1997). The audit society: Rituals of verification. Oxford: Oxford University Press.