Where does the buck stop? Research ethics and publishing

Abstract

Issues around research ethics and the reproducibility of research are affecting the credibility of science. First, we consider what is to be understood by research ethics, misconduct, and reproducibility. We then examine examples of fraud, beautification, and failed reproducibility, before addressing the causes and possible resolutions, culminating in the question of whether the scholarly literature is still self-correcting.

We need more convergence in our thinking on research ethics and publishing. At the moment our dealings with ethical issues, misconduct, and reproducibility differ too much from discipline to discipline, institution to institution, and publisher to publisher. We should be working towards a global policy agreement. Otherwise the credibility of research, particularly in the eyes of the public, the ultimate funder of science, will diminish and may be lost.

1. Research ethics, misconduct, and reproducibility

What do we mean by research ethics? I am looking at ethical standards and behaviour across research conduct, reporting, and evaluation. This means:

  • 1. Ethical conduct of research, including in particular the conduct of experiments and the attribution of credit;

  • 2. Ethical reporting, implying that there is no falsification, fabrication or plagiarism;

  • 3. Ethical assessment, implying that reviewers, editors, and publishers follow ethical standards.

We do have definitions, particularly of misconduct. For example, the US Office of Research Integrity states:1

“Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.

  • a) Fabrication is making up data or results and recording or reporting them.

  • b) Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.

  • c) Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.

  • d) Research misconduct does not include honest error or differences of opinion.”

Yet, definitions do not necessarily translate into shared policies and procedures for dealing with breaches and misconduct. The need for such policies and procedures has been highlighted by a related issue, reproducibility, which has hit the headlines.

We have observed a trend among authors towards seeking the most efficient path to verifying and supporting a hypothesis, in pursuit of a fast, ‘storified’ publication that maximizes impact. This practice shows that ethical standards are in flux: ‘verification’ and ‘storification’ are somewhat at odds with the open testing of hypotheses, particularly their rejection, and with unbiased reporting.

2. Experimentation and reporting: Fraud or beautification?

The race for impact highlights the important role of the publisher across research conduct, reporting, and evaluation.

Consider the following example. At EMBO, we received a paper based on an experiment conducted on the lab students of the principal investigator. The paper raised ethical issues around the selection and control of research subjects. Moreover, the experiment involved induced pain, and the reporting revealed that on a scale of 0 to 10, the pain level reached 8. This raises further ethical issues around harm. Then there is the difference between ethics and the law: in the country where the research was conducted, it was legal; in other countries it would not have been. All of this leaves the journal editor in the somewhat uncomfortable position of having to make a judgement call. Should the paper be rejected on ethical grounds?

Consider another example: the practice of ‘photoshopping’ images. Authors selectively remove or drop in information, typically in an effort to reduce noise or highlight a result. At EMBO, we find image problems in more than 20% of otherwise acceptable papers, and we have to investigate each case. Are we dealing with fraud or with sloppy ‘beautification’? Again, it is a judgement call. There are no established policies or practices, e.g. banning the use of the eraser tool in photo applications.
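Part of that investigation can be automated as a first pass. The sketch below is illustrative only, not EMBO's actual screening workflow: it flags byte-identical duplicate tiles within a single figure, one crude signal of clone-stamp or eraser-style edits. The file name, tile size, and background threshold are assumptions made for the example; it relies on the Pillow and NumPy libraries.

```python
import numpy as np
from PIL import Image

TILE = 16  # tile edge length in pixels (assumed; tune to the journal's figure sizes)

def find_duplicate_tiles(path):
    """Return pairs of (row, col) tile origins whose pixels are byte-identical."""
    img = np.asarray(Image.open(path).convert("L"))  # load figure as grayscale
    seen = {}    # tile bytes -> first location encountered
    pairs = []
    for y in range(0, img.shape[0] - TILE + 1, TILE):
        for x in range(0, img.shape[1] - TILE + 1, TILE):
            tile = img[y:y + TILE, x:x + TILE]
            if tile.std() < 1.0:       # skip near-uniform background tiles
                continue
            key = tile.tobytes()       # exact-match fingerprint of the tile
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

if __name__ == "__main__":
    # Hypothetical file name, standing in for any submitted figure panel.
    matches = find_duplicate_tiles("figure1_panel_b.png")
    if matches:
        print(f"{len(matches)} duplicated regions - route to an editor for review")
```

A hit from such a screen is a reason to ask the authors for the source data, not evidence of misconduct in itself; the judgement call remains with the editor.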

When you see problems, it is hard not to jump to negative conclusions. Yet, the examples show the complex nature of ethical decision making, which makes the convergence of thought and the establishment of standards all the more urgent.

3. Reproducibility: Shaken, but not stirred

Studies suggest that only 30–60% of ‘milestone’ research papers across various disciplines are reproducible. This seems shocking, because ‘milestone’ papers communicate important research advances.

The Reproducibility Initiative2 is systematically investigating the issue by testing whether papers in the social sciences and cancer research can be reproduced. On average it costs $26,000 to reproduce a paper, so this is a one-off public initiative that will not scale ($1.3M for 50 papers). Nevertheless, the results are important and highly visible. Moreover, reporting in the mainstream media means that politics and society will take note.

Yet, again, the issue is complex. A finding of apparent non-reproducibility does not in itself signal that the research is irreproducible. At this stage, it only indicates that our protocols for recording experiments and methods are not good enough to make reproduction efficient. In another well-publicized example [1], two labs had to keep going for many months to find the error that prevented the reproduction of the research. Of course, this is too long and too expensive. Hence, we need more convergence of thought on reproducibility and improved standards for reporting and capturing methods and procedures.

4. Causes and complexity in publishing and ethics

Among the causes contributing to the problems with ethics and reproducibility we may note the following interlocking trends:

  • The rapidly rising number of researchers and papers;

  • The relative decline in research funding;

  • The dominance of research assessment and impact measurement;

  • The trend towards rushed research for fast publication;

  • A lack of training, detection, and consequences for research misconduct;

  • The ambiguity of many cases: mistake, beautification or fraud?

This list highlights both the importance of the publishing process and the responsibility of the editor or publisher.

Further still, a look at the stakeholder landscape reinforces the notion that the publishing process and the publishers have a key role. At the level of the laboratory and co-authors, detection and reporting of ethical issues or misconduct seem to be infrequent. Institutions typically act only when notified, whereas funders and governments try not to get involved at all. Moreover, national laws and regulations differ. Hence, the detection and discussion of ethical issues, misconduct, and reproducibility falls very much to publishers, journals, and their editors, including the senior academics serving as editors.

This is not a happy situation. Journals and editors are the last checkpoint before a result enters the scientific record. Hence we need to ask whether our policies and standards should include structures that enable us to detect pertinent ethical issues and misconduct much earlier.

5. Is the literature still self-correcting?

All of the above leaves one not too optimistic about the self-correcting nature of the literature. How many papers are not reproducible? How much fraud goes unnoticed? How many ethical issues are ignored?

However, in the absence of a better system we must look to the publication process and the publishers. At EMBO, we encourage the correction, versioning, and retraction of papers. We seek more proactively to publish refutations and negative data. Also, we enforce policies on reagent sharing. All of this is designed to reduce spin, hype, and selective reporting.

References

[1] W.C. Hines et al., Sorting out the FACS: A devil in the details, Cell Reports 6(5) (2014), 779–781. doi:10.1016/j.celrep.2014.02.021.