
New ways of building, showcasing, and measuring scholarly reputation in the digital age


Journals and scholars build their reputation on citations, and an array of citation measures has been developed, including citation counts for the individual scholar. Yet over fifty activities that contribute to reputation building can be identified across five areas of scholarly activity: research, integration, application, teaching, and co-creation. In the past ten years, new platforms have emerged that foster and measure an array of these practices. This article systematically audits the new reputation measures and asks whether the new and old measures will collide, ushering in disruptive change.

When I wake up and power up my laptop, I frequently find an email or two from one of the new scholarly platforms emphasizing the impact of my research. The other day Academia, which boasts more than 30 million registered users,1 informed me that, based on users’ activity in the past 30 days, my research impact made me a ‘Top 4%’ researcher. Next was an email from ResearchGate,2 letting me know that my most recent paper had found more than 400 readers, and that my RG Score was now 37.13. These platforms made my day, because reputation rests on the judgement of your peers and is measured by how they interact, or not, as the case may be, with your ideas and data.
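Labels such as ‘Top 4%’ are, in essence, percentile ranks over some activity metric. The following sketch is purely hypothetical: the scoring formulas of Academia and ResearchGate are proprietary and not public, and the function name and sample data here are invented for illustration only.

```python
# Hypothetical illustration: how a 'Top x%' badge might be derived from a
# 30-day activity metric via a simple percentile rank. This is NOT the
# actual method used by Academia or ResearchGate (those are not public).
def top_percent(my_score, all_scores):
    """Rank of my_score among all_scores, expressed as a 'Top x%' figure."""
    above = sum(1 for s in all_scores if s > my_score)
    return 100.0 * (above + 1) / len(all_scores)

# Invented sample: 30-day activity scores for ten researchers.
scores = [5, 12, 40, 3, 27, 9, 60, 1, 18, 33]
print(f"Top {top_percent(40, scores):.0f}%")  # -> Top 20%
```

A score of 40 is the second highest of ten, hence ‘Top 20%’; on a platform with millions of users the same calculation yields the single-digit percentages quoted in such emails.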

Reputation is the key scholarly currency

The scholar seeks reputation above everything else, and measuring that reputation has been left largely to citations; more precisely, to citations in journals. This has led to the prominence of the Journal Impact Factor and the corresponding ranking of journals. With the advent of Internet technologies and platforms, citation metrics can now be provided openly, transparently, and with confidence for the individual article, author, or book, as well as for aggregate entities such as research groups, departments, or institutions. For appointments, tenure, and promotion, publication citation metrics are increasingly important, especially when packaged as an h-index.
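For readers unfamiliar with the metric: the h-index is the largest number h such that the author has at least h papers each cited at least h times. A minimal sketch of the standard computation:

```python
# Standard h-index: the largest h such that the author has at least
# h papers with at least h citations each.
def h_index(citations):
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 3 and 0 times give an h-index of 3
# (three papers each have at least three citations).
print(h_index([10, 8, 5, 3, 0]))  # -> 3
```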

When the tail wags the dog: Publication citations

A once bibliographic tool now defines scholarly reputation. How bizarre is that? Of course, this narrow view of reputation suits publishers only too well, as they are the main purveyors of scholarly reputation measured this way, but it marginalises other scholarly activities, thus skewing the whole of academia. Not surprisingly, other areas of the research system are increasingly scrutinized, e.g. the lack of accessible research data and the limited reproducibility of experiments. More pressing, and arguably more important to students and societies, is that poor teaching quality and mediocre student experiences hardly figure in reputation measurement. Given the possibilities of Internet technologies, it seems anachronistic to infer the reputation of the scholar, department, and institution solely or mainly from publication citations.

Open science as harbinger of change

Open science, open access to publications, sharing technologies for open data and lab notebooks, collaboration platforms for researchers and students, citizen science, and so on (a) give rise to new ways of researching, teaching, dissemination, and measurement; and (b) usher in new actors.

Therefore, we need a more inclusive definition of scholarly activity that emphasizes not just high-impact publications, but also covers other key activities of reputation building, such as:

  • Teaching;

  • Mentoring;

  • Peer-reviewing;

  • Collaboration;

  • Outreach.

A more inclusive definition also recognises new types of scholarly profile, e.g. the freelance scholar, the student researcher, and the industry collaborator. After all, thanks to the fat, open, and interactive scholarly information pipe, we are all scholars now. Any country’s economic future is tied up with exploiting this new resource.

We must also take account of, and give full recognition to, new scholarly formats, such as blogs, social media, online communities, data repositories, online courses, and open educational resources.

Emerging reputation mechanisms

Over the past five to six years we have seen the emergence of new services and tools that allow scholars to enhance their everyday work, while also building and maintaining their reputation, e.g. data sharing, collaboration platforms, networking services. In 2014, the Joint Research Centre of the European Commission, as a major proponent of Open Science, commissioned a six-month study on ‘Emergent reputation mechanisms for scholars’.3

The study proceeded in four steps:

  • 1. Audit of scholarly activities and production of new conceptual framework;

  • 2. Evaluation of existing reputation platforms and mechanisms;

  • 3. User studies, based on a Europe-wide questionnaire and interviews;

  • 4. Expert workshop with policy recommendations.

Auditing scholarly activities contributing to reputation

An initial list of more than fifty scholarly activities with reputation-conferring potential was produced, and we found that these could be attributed to five distinct areas, as identified by Boyer (1990)4 and Garnett and Ecclesfield (2011).5 Table 1 presents the key areas of scholarly activity, defines their key characteristic, gives the number of activities identified in each, and provides examples.

Table 1

Scholarly reputation building

Area of scholarly activity | Key characteristic | Activity count | Example
Research | Discovery and pursuit of knowledge | 24 | Grant application, lab experiment, publication, peer reviewing
Integration | Arraying of extant knowledge | 10 | Literature review, textbook, interdisciplinary project
Application | Problem-oriented application of knowledge and skill | 10 | Consultancy, industry research, pop science
Teaching | Conveying of knowledge to new generations | 9 | Lecture, MOOC, PhD supervision
Co-creation | Public scholarly research and interaction | 5 | Citizen science project, public teaching project

Auditing emerging reputation platforms

We audited twenty-five emerging platforms, launched between 2006 and 2014 (the exception being LinkedIn in 2002), with most launched in 2008–2010. Overall, the platforms covered about half of the scholarly activities, but with significant variation across the activity areas: Research (16/24); Integration (1/10); Application (2/10); Teaching (3/9); Co-creation (5/5).
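Tallying the per-area figures confirms the ‘about half’ claim:

```python
# Coverage figures from the platform audit: activities covered per area,
# out of the activity counts per area given in Table 1.
covered = {"Research": 16, "Integration": 1, "Application": 2,
           "Teaching": 3, "Co-creation": 5}
totals  = {"Research": 24, "Integration": 10, "Application": 10,
           "Teaching": 9,  "Co-creation": 5}

overall = sum(covered.values()) / sum(totals.values())
print(f"{sum(covered.values())}/{sum(totals.values())} = {overall:.0%}")  # -> 27/58 = 47%
```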

Only in the area of co-creation do platforms exist that systematically cover all activities, i.e. FoldIt and Socientize.6 Of course, research activities are many and diverse, but it is noteworthy that only Academia (9/24) and ResearchGate (14/24) capture a significant number, with ten other platforms covering only a few activities, often just one or two (e.g. Dryad, Kudos, Impact Story, Mendeley, Scitable).

In sum, the state of play is the following: (a) there has been innovation by building new platforms; (b) for co-creation activities integrated platforms exist; (c) proliferation has been most pronounced in the research field; (d) research platforms address a limited range of activities and often are one-trick ponies; (e) for integration, application, and teaching activities there is a real lack of platforms.

Perhaps the most puzzling and jarring fact is the gap between research and teaching. Where we have reputation measurement for every author and publication, individually and in the aggregate, we know next to nothing about the quality and impact of teaching. Clearly, it is not defensible to argue that research impact is an indicator of (or proxy for) teaching quality. Nor is it acceptable to claim that teaching and learning is personal and research not – this is clearly not the case.

Scholarly views on activities contributing most towards reputation

Unsurprisingly, tradition in combination with a renewed focus on research activities means that scholars focus their efforts on conducting research, publishing results, and interacting with their immediate peers. Table 2 provides a representative ranking of how scholars (based on a European sample) rank activities from most to least important. Noteworthy is the high level of agreement between scholars and their institutions; the only difference is that for institutions (employers) administration and management activities are significantly more important.

Table 2

Scholarly priorities

1. Conducting research
2. Publishing research results
3. Collaborating in research
4. Disseminating results at conferences
5. Peer reviewing
6. Participating in inter-/multidisciplinary projects
7. Community service in societies and journals
8. Literature reviews and textbooks
9. Conducting application-oriented research
10. Consultancy for industry and government
11. Popularisation of scholarship
12. Designing courses and programmes
13. Production of open educational resources
14. Conducting research with lay participants
15. Disseminating research via social networking
16. Administration and management
17. Disseminating research via blogging/tweeting

Benefits and downsides of reputation platforms

Our user studies, in conjunction with the platform evaluation, led to the identification of the following benefits to scholars, with an inherent dynamic and feedback cycle:

  • Build a dynamic digital identity you can control;

  • Build a reputation more quickly;

  • Calibrate and benchmark your reputation;

  • Provide greater opportunities for collaboration;

  • Get ongoing peer feedback;

  • Attract the attention of colleagues to your research and publications;

  • Make your research and impact more visible to a larger and wider audience;

  • Better understand who the most valuable contacts in your specialism are;

  • Gain more efficient access to research (no passwords or costs);

  • Be spotted by editorial teams, scientific authorities for jobs, and conference organisers.

Of particular interest is that open and dynamic platforms enable younger and early-stage scholars to build reputation earlier and in new ways.

On the other hand, the current state of play has some noteworthy downsides. They are due to a combination of the persistence of traditional attitudes, the limits of current innovation, and some inherent difficulties with the new technologies. Consider the following:

  • Innovation is skewed towards certain kinds of research activities (i.e. the top five as ranked by scholars) thus hindering an adaptation of the scholarly system towards changing societal priorities (e.g. better education, open science);

  • Palpable mistrust of social media and the crowd, with a fear of being ‘polluted’ by other and new actors;

  • Very little in the way of institutional support and buy-in for new platforms, with most things left to personal initiative;

  • A lack of transparency in reputation measurement, e.g. with methods and mechanics not available for public scrutiny;

  • The confusing multiplicity of ways of providing and measuring recognition, with no clear indication of the validity and reliability of many new measures.

What ‘is’ and what ‘will be’

The experts believe that the new platforms will spark changes, but only incremental and marginal ones: the scholarly system will not be easily disrupted and will evolve in a conservative fashion. Yet scholars in every subject and in every country are using the emerging reputational platforms, and even sceptics felt that these platforms are the future. Younger researchers in particular have a more encompassing view of reputation, coupled with the expectation that fast-track careers will result.

Publishers have benefitted from publication citation tools. Currently, it is unclear whether they will be able to maintain their ‘warehouse’ function (i.e. being the primary repository of knowledge), as they have been slow to become involved in building new reputation platforms. We are moving towards parallel universes: traditional versus new reputation platforms and measures. The new universe has grown exponentially in user numbers, and the interesting question is whether these universes will collide soon.

Another issue is whether the funders of the system – especially government, parents, and students – will remain content with a reputation system that prioritizes only a handful of research activities at the expense of everything else. If not, this might be a second harbinger of big and disruptive changes to come.


1 Alexa global rank 733 for the first three months of 2016 (retrieved 20 April 2016).

2 Alexa global rank 657 for the first three months of 2016 (retrieved 20 April 2016).

3 David Nicholas, Eti Herman and Hamid R. Jamali, Emerging reputation mechanisms for scholars, JRC Science and Policy Report No. 94955, European Commission EUR 27174 EN, 2015. doi:10.2791/891948.

4 E.L. Boyer, Scholarship Reconsidered: Priorities of the Professoriate, Jossey-Bass, San Francisco, 1990. A Special Report of the Carnegie Foundation for the Advancement of Teaching.

5 Fred Garnett and Nigel Ecclesfield, Towards a framework for co-creating Open Scholarship, Research in Learning Technology 19 (2011), 0199.