
The value of scholarly communication

1.Introduction

In this article I will address the value of scholarly communication. I will touch on aspects that include the value provided by researchers, publishers, funders and other stakeholders within the science, technology and medical (STM) industry's information ecosystem. When referring to publishers throughout, I mean publishers of any kind and size: for-profit and not-for-profit publishers, university presses, as well as the publishing activities of medical and learned societies. I also deliberately use the words information and knowledge ecosystem: my best effort to acknowledge that the many stakeholders form an intricate and inseparable network that supports the discoverability, exchange and accumulation of information supporting research.

The aim of this paper is to describe the value of research communication broadly, and in doing so I distinguish between: (a) value that is clearly there, and typically recognized; (b) value that is delivered, yet taken for granted or not so well recognized; and (c) areas of missing value, that is, value that should be there yet is generally not there, or not provided sufficiently at scale by anyone within the information system. I will argue that some of this missing value is hindering progress in research efficiency. Finally, at a different level, I identify the need for "new" value, defined by dialogue, mutual understanding and planning with stakeholders for a better future of research communication.

Research, that is, the work conducted by researchers, deserves the highest quality of support. Research is a driving force behind technological advancement, solving societal issues, increasing economic value and, ultimately, health and well-being. From our own employee surveys at Elsevier, and from meetings with many hundreds of colleagues, it is rare to find people within our organization who do not indicate that they are proud to support the world of research and the global dissemination of research information.

Nevertheless, conducting research, and even more so the production of information around research, is under budget pressure. The world's expenditure on R&D at business enterprises, governments, higher education and private non-profit organizations is estimated at ca. $1,850,000 million (sources: OECD, UNESCO), and the sum of information budgets is around 2% of that overall budget, a share that has been declining somewhat annually for more than a decade.
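As a rough back-of-the-envelope illustration of what those two figures imply, here is a minimal sketch; the numbers are the approximate totals quoted above, not exact data:

```python
# Back-of-the-envelope arithmetic using the approximate figures quoted in the text
# (OECD/UNESCO estimate of global R&D expenditure and the ~2% information share).
global_rd_spend_musd = 1_850_000   # world R&D expenditure, in millions of USD (~$1.85 trillion)
information_share = 0.02           # information budgets as a share of the overall R&D budget

information_budget_musd = global_rd_spend_musd * information_share
print(f"Implied global information budget: ~${information_budget_musd:,.0f} million per year "
      f"(~${information_budget_musd / 1_000:.0f} billion)")
```

In other words, the information layer of research represents on the order of tens of billions of dollars per year, a small and slowly shrinking slice of the overall R&D budget.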

The internet was invented and developed mainly by researchers. Ironically, the world of research itself was affected by the technology relatively late. Incumbent publishers have had to wake up to new players who picked up on the existing and emerging ambitions and needs of researchers across many different segments of the workflow. Pre-print servers, open access, collaborative peer review tools, social research networks such as Mendeley and ResearchGate, and more, have not come about through the work of traditional publishers.

2.Value in transition

Much of the current public focus, and debate, concerning changes in the research ecosystem is on access models, in particular open access. Open access has brought, and continues to bring, unique characteristics of disruption that find no easy parallel among innovations brought to consumers. Stakeholders across the STM industry share the goals of open access, as does Elsevier. If one peels off the layers of social media traffic, the friction around open access typically does not concern the goal of open access, but rather its end state and the transitional route to achieve that end state. I believe that the transition to open access has turned out to have more challenges than anyone hoped for. For something that is arguably fair and desirable for many, open access is a complex innovation.

Every innovation challenges the traditional way of doing something. Open access challenges the subscription model, i.e. the model in which readers pay for immediate reading while authorship comes for free. With open access the author pays for publishing, so that others can read immediately and for free. The OA model therefore pays for a different value in the system, and this causes regional cost and budget impacts, which I will explain. An author who pays for OA publishing still has the need to read the articles of others. In that sense OA is different from the innovations underpinning, say, Uber. Unlike OA, if you get into an Uber taxi, it fully replaces your need to step into a traditional taxi to get to the railway station (one can only sit in one taxi at a time). Unlike OA, with Uber you do not pay for a different value: you still pay to arrive at the railway station. Unlike OA, if you get into an Uber taxi in London it does not change the taxi economics in Brussels. Unlike OA, the introduction of Uber does not mean that frequent Uber users in Berlin create cheaper Uber rides in, say, Copenhagen. That, in a nutshell, is the complexity of the OA transition.

The economic and technical characteristics of journal subscription and open access publishing are simply different: neither is morally better or worse (though I respect anyone who refers to OA as a moral imperative). Arguably, Netflix is no poorer a service to consumers because it is a subscription service behind a paywall. Under the subscription model for research articles, the cost per read falls as usage increases. In open access, such scaling does not exist: the costs scale up more or less linearly with the number of published OA articles. In addition, under the subscription model (or the Uber economics), lower downloads in region A do not trigger cost differences in region B. Subscription (or Uber) does not have cost disadvantages for early movers, or cost advantages for late movers.
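To make that scaling difference concrete, here is a minimal toy model; the annual fee, APC and volume figures are hypothetical assumptions chosen purely for illustration, not real prices or market data:

```python
# Illustrative toy model of the scaling difference described above.
# All numbers are hypothetical assumptions chosen for illustration only;
# they are not Elsevier prices or real market data.

def subscription_cost_per_read(annual_fee: float, reads: int) -> float:
    """Under a subscription, the fee is fixed, so the cost per read falls as usage rises."""
    return annual_fee / reads

def oa_publishing_cost(apc: float, articles_published: int) -> float:
    """Under OA, total cost scales roughly linearly with the number of articles published."""
    return apc * articles_published

# Subscription: the same fixed fee, spread over growing usage.
for reads in (10_000, 50_000, 250_000):
    print(f"subscription, {reads:>7,} reads -> ${subscription_cost_per_read(100_000, reads):.2f} per read")

# OA: cost grows in step with output, independent of how much is read.
for articles in (100, 500, 2_500):
    print(f"OA, {articles:>5,} articles published -> ${oa_publishing_cost(2_000, articles):,.0f} total")
```

In the first model the marginal read is essentially free once the fee is paid; in the second, every additional published article carries its own cost, wherever in the world it is funded.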

In summary, the OA publishing model, under which 20% of articles were published globally in 2018, differs from the subscription model in (a) its cross-regional economic impact, (b) its payment for a fundamentally different value, and (c) its scaling economies. For all its value and desirability, open access has a unique combination of innovation characteristics. I would argue that these characteristics, all of which can be overcome, are deep (and somewhat hidden) drivers of the conflict around the transition from subscription content to 100% OA, and that they are not sufficiently on the table in the public dialogue.

Which brings me to the point of clarifying "transition" and "end state". Some issues clearly belong to the former, others to the latter. If all research institutes and funders in the world endorsed OA at roughly the same pace, the OA transition would have fewer challenges, yet challenges would still exist. In the absence of such coordinated pacing, some have suggested that the OA transition needs some sort of global levelling of the cost of research information. Even if all institutes and funders in the world supported OA at the same pace, the result would still be that some organisations pay (in relative terms) more than they do now, in particular highly productive research institutes. At the other end of the spectrum, organisations with high R&D investments but low research output would pay increasingly less for information as OA moves forward. Many global corporations with strong R&D profiles fit this bill and will be the information free riders of the future. All else being equal, governmental and academic research will pick up the information costs of corporations. This is simply an aspect of OA publishing. While the total cost of research information, irrespective of the model, may show a healthy reduction in unit pricing (the price per published article shows a long-term downward trend), the growth in the volume of information has been shown to drive an increase in overall information cost. With rising research output and a changing cost distribution, I would encourage an open dialogue on this inconvenient characteristic of OA publishing and on options for cost sharing and the re-allocation of the cost of research information.
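The re-allocation effect can be sketched with a deliberately stylized model: assume that subscription-like spend roughly tracks an organization's share of reading, while APC-based OA spend tracks its share of article output. The organizations, shares and total cost below are entirely hypothetical assumptions, used only to show how the same system-wide cost can land very differently on different payers:

```python
# Toy illustration of how a full shift to APC-based OA can re-allocate the same
# total information cost across organizations. The organizations, their reading
# and publishing shares, and the total cost are hypothetical assumptions.

TOTAL_SYSTEM_COST = 100.0  # arbitrary units; held constant to isolate the re-allocation effect

organizations = {
    # name: (share of global reading/usage, share of global article output)
    "research-intensive university": (0.30, 0.55),
    "corporate R&D lab":             (0.45, 0.10),
    "teaching-focused institute":    (0.25, 0.35),
}

print(f"{'organization':<32}{'subscription-like':>20}{'OA (APC-based)':>16}")
for name, (reading_share, output_share) in organizations.items():
    # Stylized assumption: subscription spend tracks reading, OA spend tracks article output.
    subscription_like = TOTAL_SYSTEM_COST * reading_share
    oa_based = TOTAL_SYSTEM_COST * output_share
    print(f"{name:<32}{subscription_like:>20.1f}{oa_based:>16.1f}")
```

In this sketch the research-intensive university's share of the bill rises while the reading-heavy corporate lab's share falls, which is the cost shift described in the paragraph above.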

3.Value recognized

The value brought by publishers to the research system that is typically recognized lies in the core functions of the publication system: to register (the intellectual property of the author, the format rights of the publisher, conflict of interest declarations, etc.); to review and filter information through desk rejection, integrity checking and peer review; and to take and communicate a final decision. In addition, recognized value is provided by publishers who enhance information through formatting, tagging and adding information, who organize information (through brand attraction, virtual special issues, and more), and who disseminate it globally (through any kind of promotion, sales, or free online access or distribution). Finally, publishers make their best attempts to digitally preserve documented research discoveries in public archives, via their own initiatives or via collective actions such as the CLOCKSS Archive.

All the activities listed above clearly add value to the system and will be recognized and acknowledged as such by most stakeholders. On this point, a recommended read is Kent Anderson's 2018 update on The Scholarly Kitchen: "102 Things Journal Publishers Do".

However, it is the hidden value that publishers provide which I will advocate for more vocally: value which I find underexposed and under-lit, value which stakeholders need to be made aware of. Please note that this is of course a personal list; in the view of others, some of these items may not be considered hidden value at all.

  1. Editorial Independence: the notion of editorial independence is a core value for Elsevier, and something I have reminded others of on many occasions. As an organization, we are reminded of this key value in cases of persistent dispute over rejection decisions: editors, provided they adhere to the agreed remit of the aims and scope of a journal and apply their guidance and best efforts to lead and display integrity, are ultimately answerable to no-one when it comes to decisions on individual research articles, letters, and sometimes ambiguous disputes. Editors are entitled to take an accept decision after two reject recommendations from referees, or the opposite, and the best editors do this with high confidence, coupled with transparency about their decision. Without being exhaustive: editors are entitled to lead a board walk-out, or to write against publisher policies in their own journals, when that is their opinion and not the opinion of others. Editors have accepted autonomy from the organizations they are members of, from the research institute or hospital that employs them, and from their research funders. Everything that surrounds safeguarding editorial independence is something the research ecosystem needs reminding of with a certain frequency.

  2. Political pressure: some of the pressures with an origin clearly outside the normal academic process are listed here:

Politically motivated pressure to change the naming of cities, countries, regions or maritime zones.

Pressure by NGOs or patient lobby organizations against decisions in heavily debated societal matters, e.g. genetic modification, behavior and biology, climate matters, and the diagnosis, prevention and treatment of diseases.

Pressure by governments to remove existing content from entire collections for their country.

Pressure on editors in the context of country-level negotiations (rest assured, editors can make up their own minds).

Pressure on publishers to remove corresponding authors from existing articles for clearly country-political reasons.

Some of these disputes have cost several publishing organizations lengthy consultations and diplomacy, and have led to PR repercussions or costly commercial impact. For that reason, I abstain here from specific references; however, I am open to discussing them further, beyond this article, with anyone who wishes to.

  3. Research Integrity: information about retractions, corrections and errata is typically public. Less public are the plagiarism, duplicate publications and other infringements that never make it to an article in press or a published article. From internal sources, we estimate that only 2–3% of infringements slip through peer review and editorial de-selection. The vast majority of research ethics breaches are prevented by the work of reviewers, editors and publishers. Governance of research integrity means protecting against potential reputational damage to research institutes and researchers. Most academics have little awareness of the lengthy investigations into matters such as (co-)author disputes, or of the long correspondence with authors regretting research misconduct committed many years before. Infringements of academic research integrity do not expire over time, and an infrastructure for pardon or appeal is largely absent. Editors, together with publishers, supported by organizations such as the Committee on Publication Ethics or the International Committee of Medical Journal Editors and others, have their hands full with these matters. While the numbers are small, this is a shadow side of research, and rather hidden work in the value chain.

  4. Impact of Peer Review on Research Conduct: the majority of retractions concern matters of infringed research conduct. The majority of peer review feedback concerns research methodology, research conduct, or data analysis. This implies that the value of peer review flows from referee and editor to the author and her or his manuscript, irrespective of whether the article is accepted. Value also flows from the submitting author to the reviewer, who benefits from sharpened reading skills, from receiving early research information, and from future network opportunities. Value of peer review also flows to the publishers, since the outcome is better manuscripts for the readers. These multiple peer review value streams have for a long time gone largely from west to east and from north to south, implying a generally net influx from developed countries to developing countries. In scale this is a large knowledge stream, since the volume of peer review reports roughly equals the annual volume of accepted articles. Together, the feedback in review reports forms an information stream that, with surgical precision, helps the democratization of knowledge at large scale. There is hidden, yet large, value in these streams of peer review reports. I would argue it won't be long until peer review reports become openly available, anonymized or not, for all to review, allowing us to map the influence of peer review feedback, including its geographical distribution.

4.Where value is lacking: predatory publishing

The publication volume of predatory publishers is significant, and typically escapes proper sizing by the various indexing organizations. Chen and Bjork [1] estimated a volume of 420,000 articles published by predatory publishers in 2014 alone; Marilyn H. Oermann et al. reviewed predatory publishers in the area of nursing [2]. In a sample of 358 articles from 140 suspected journals, 32.4% had fatal flaws, for instance a flawed research design or no information on human subjects; 50.9% of the articles were judged relevant or useful for nurses, and 69% of the results were based on data. Let me add that I have never met individual representatives of predatory publishers, so I cannot know their intentions to be "predatory". Perhaps they are just (very) sloppy, or ignorant. Also, while peer review at predatory publishers is absent or incredibly thin, the real opportunity costs are connected to the lack of feedback on research methodology, research conduct and data analysis. In contrast to the value that spreads from peer review and from reading articles published by sincere publishers, articles published by predatory publishers fail to share and redistribute value. Further, from direct experience, predatory publishers respond poorly in suspected cases of ethics infringement and show no appetite for industry collaboration, platform optimization, or article promotion. Finally, predatory publishers make a poor and unnecessary showcase of open access publishing, create fake information, destroy ecosystem resources and reduce trust in science. Their low APCs don't make up for anything. In an interview article on Elsevier Connect I elaborate further on the common issues and questions which the research community and the public often raise about "predatory" journals.

5.Comparisons with general mainstream news and social media

The notion that predatory publishers contribute to fake science prompts a brief comparison between research communication and general news and social media. News media suffer from large-scale copying of their content, the availability of fake news, fake images and, increasingly, fake videos. They suffer from troll farms and censorship, and experience pressure to monetize quality content. Artificial intelligence has entered the workplace of reporters, and investigative journalism is under pressure, except at a small group of successful newspapers and online magazines. Probably related, reporter wages in the US have fallen behind the growth of average wages. How is research communication any different? SciHub represents information piracy at full scale; predatory publishers spread sizeable volumes of unreliable information, parasitic to the system. Troll farms in social media have their counterparts in fake peer review firms that offer "scientific" review reports from hacked or fake e-mail accounts, yet their scale is small and is contained by the efforts of various stakeholders. State-controlled censorship is infrequent, yet exists in the form of requests to remove content from country collections and, rarely, of pressure on publishers to remove corresponding authors from existing articles for clearly country-political reasons (see the notes above). Some content monetization pressure exists, creating lower revenue per article. This pressure is technology-driven and also results from a combination of piracy ("everyone can read SciHub"), alternative content aggregation, and legitimate activities of Google and Facebook. The impacts are seen in both subscription content and open access, and smaller publishers may suffer from them more easily than those who can operate with economies of scale and network effects.

So, I would like to raise a call to the STM industry to prevent scholarly communication from falling prey to the same issues as mainstream news and social media. An ecosystem of reliable general news and social media is part of civil democracy, freedom, and protection against political polarization and poverty. Similarly, the stakes for efficient research communication include innovation, economic strength, the competitive positions of regions and nations, and health and well-being. These stakes are high enough to warrant the highest levels of robustness, sustainability and trust in the underlying information system.

STM publishers continue to stand up for editorial independence, deflect political pressure to change or remove author names or geographical names, and deflect pressure to retract politically unwelcome content. I do feel we should talk more about this side of our profession.

Researchers may need to become more aware of the value that editors and the peer review system add; in the same regard, researchers should also better understand the lack of value that predatory publishers provide, specifically a lack of transparency around research conduct and data analysis, and a lack of interest in bringing the right content to the right audience.

6.Where not enough value is delivered

Whatever one may think about the value provided by and delivered to stakeholders in the research ecosystem, one thing is very clear: the current research information system, i.e. all the tools researchers have at their disposal to execute their tasks, has many shortcomings, and this is where I too feel value in scholarly communication is lacking.

Let me elaborate on some of these shortcomings:

6.1.Bias of content and content discoverability

In general, published information lacks so-called negative results, lacks innovation in article types, and lacks information on author contributions. Too many articles are sliced and diced: the system has a surplus of low-quality submissions. The fact that over 50% of all article submissions to Elsevier journals are rejected prior to peer review (so-called "desk rejects") may already say something about the average quality. The very limited published information about research methodologies and protocols hampers reproducibility. The journal article as we know it is too static to meet the needs of all readers, and is possibly enshrined too much as the only acceptable unit of knowledge. Little innovation is undertaken to pilot information formats that would improve research outcomes, research speed, or research productivity.

6.2.Embedding and recognizing peer review

The system of evaluating research via (usually) anonymous peers is arguably the best system we have. Over the centuries it has made great contributions to knowledge accumulation and global knowledge transfer, and more broadly has helped science, and broader still society, to progress. However, the system of peer review also has shortcomings. Finding and inviting reviewers is often based too much on existing networks, and the geographic distribution of accepting reviewers does not replicate the origin of submissions. Peer review bias has been well documented; information about reviewers and peer review outcomes is too often not available to readers; the efforts of peer reviewers are not sufficiently recognized; and reviewers and authors are not sufficiently connected. Good attempts exist to address almost all of these shortcomings, yet they do not work at scale.

6.3.Functionality and connectivity of tools and platforms

Finally, the information system supporting research in its current form is fragmented, too closed, and getting outdated. The collection of article databases, platforms, pre-print servers and other content sharing systems all suffer from a relative lack of easy access across the content of the publishing houses, a lack of integrated workflow across tools, a lack of transparency concerning the functioning of, for example, content recommendations, and poor discoverability of, for instance, data in tables or charts. The system also gives the researcher too little influence to improve the individualization and customization of information. In addition, the system largely lacks an analytical layer that helps researchers with efficient and to-the-point knowledge transfer. For instance, machine-generated, customizable and individualized visualizations or insights are typically not available on publishers' primary reading platforms.

The primary article platforms typically lack basic tools that researchers today are likely to find very helpful: the ability to copy and paste without losing citation information, to export data together with the article reference, or to make annotations while reading, sharing or reviewing. Also, information that immediately points readers to the relevance of a certain article, or group of articles, is typically immature. Such information could be the availability of review reports, the credentials of the reviews and/or reviewers, or any crowd-driven feedback on the peer review system.

Information filtering for specific use cases can only be achieved after lengthy searching: how different indeed are the discoverability needs when a scientist needs to retrieve an overview of available research methods, versus a search for similar data? How different are the information needs for finding similar research outcomes, versus comparing all information about outcome modelling? In short, tools for use-case-relevant discoverability are largely absent, just one of the current imperfections of the research information system.

Going forward, I would think that only publishers (or others) with sufficient monetary resources, well-funded start-ups, or rich philanthropists can substantially contribute to a more open, transparent and research-friendly information system. It will take money to design, build, improve and maintain such an information system.

Not long ago we announced that we are working on a system that is more source-neutral through the inclusion of information from multiple content sources and multiple publishers. Such a platform would also be more interoperable and more transparent, and would put the researcher better in control; the visualization of that vision is shown in Fig. 1.

That picture visualizes an ambition that we will deliver in iterative steps, build together with stakeholders, and that can be founded on a backbone of existing information systems such as Mendeley, SSRN, Scopus, Reaxys or ScienceDirect. Rest assured, this is a north-star vision, and we would deliver it in incremental steps, involving the research communities and other stakeholders all along the way.

Fig. 1. The four principles of the information system supporting research [source: Elsevier Connect].

Irrespective of our own plans, we strongly believe that research is served by platforms that help researchers with professional tasks far beyond primary reading, and that such platforms need to include knowledge sources irrespective of which publisher "owns" the source of information. New information ecosystems would allow discoverability across both research articles and research data. They would need to cover tools for (multidisciplinary) collaboration, data analysis and data re-use, user-base-specific recommendation, integrity assessment, comparison of methods and protocols, etc. These are likely to be licensed platforms, possibly with freemium layers, and possibly in places operating partially separately from the monetization of trusted and relevant primary article content.

7.Missing dialogue

To close, I will make a final observation on the lack of cross-industry, cross-stakeholder conversation and brainstorming on both the current and the future state of our shared research information system, and with it all that comes with research communication. In my introduction I referred to this as part of the missing value. With some liberty, I will now rephrase this as missing dialogue. The current dialogue concerning the future of research communication is much influenced, if not dominated, by one single theme: open access. The fair principles behind open access, along with budget and commercial issues, supported by well-prepared communications, are the drivers behind the place of open access in the "public" dialogue. Add to that the complexities around open access itself, in particular in terms of its influence on global information economics, and the lack of a global solution for the resulting budget friction. In my view, the relative dominance of open access as the sole focus in discussions of the functioning of the research information system lacks justification.

I connect to that observation an open invitation to a different dialogue and to the start of joint thinking about future scenarios and the options ahead for the knowledge ecology. I would think that this dialogue is best conducted away from consortia negotiations and political or moral statements. It would be great if this dialogue could be led by researchers themselves, away from the known advocates and vested stakeholders. I believe this dialogue should be owned by researchers, and if any voice may dominate, I would gently push forward the voice of millennials and other young, digital-native research generations. How would researchers communicate if journals, publishers, subscriptions, impact factors and platforms did not exist? What would truly support research? How would they build the research information system from scratch?

Conflict of interest

Views are my own; however, I should declare a bias resulting from my role as Managing Director, Journals, at Elsevier.