Did anyone read my article? Did it have any impact?

Elsevier Library Connect Research Impact Metrics Cards

Any author will ask questions like these, and academic authors are no exception. In one sense, we have better answers than were possible just 20 years ago. In those days a print book might sell thousands of copies a year, yet little evidence came back to the publisher that those books were actually read. In fact, one joke among publishers was that encyclopedias and bibles had one thing in common: they were more bought than read. A typical publisher would receive just a handful of comments from readers each year. As publishers, we knew the books had sold, but we didn’t know if they had ever been read. So if an author had asked us whether anyone read their book, we couldn’t say.

Nowadays, articles are published online, so in principle we can discover a lot more: we can track how often an article has been opened, where the reader went next, and so on. But how is the author to learn all this? Who should they ask? For academic articles there is the official citation ranking, but that takes years to calculate and is measured by journal rather than by article. It doesn’t answer the question “has anyone read the article I published last week?”, which is a reasonable thing for an author to ask.
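To see why such a ranking lags so far behind publication, take the best-known journal-level measure, the impact factor (assuming that is the ranking meant here): roughly, the citations a journal’s output from the previous two years receives this year, divided by the number of items it published in those two years. A minimal sketch of the arithmetic, with made-up figures:

```python
# Minimal sketch of the journal impact factor arithmetic (illustrative figures only).
# The impact factor for a given year averages the citations received that year by
# items the journal published in the previous two years -- so an article published
# last week cannot contribute for a year or more, and the score describes the
# journal as a whole, not any individual article.

def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
    """Citations received this year to the previous two years' items,
    divided by the number of those items."""
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 210 citations in 2016 to the 150 items it published
# in 2014 and 2015.
print(impact_factor(210, 150))  # 1.4
```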

There does not appear to be a single reliable and usable source of metrics for authors. Instead there is a proliferation of partial solutions, providing answers to some of the questions authors have, as well as answers to several questions authors had no doubt never thought of. These include social metrics tools (“altmetrics”) and other indicators that an article has been read and/or commented on. But many of these tools have to be used with caution, which often means the academic author has to gain expertise in interpreting metrics, rather than simply publishing and getting back to their research.

Well-meaning publishers and librarians have tried to resolve this problem by providing guidance for authors. One difficulty here is that the writers of such guides tend to be expert bibliometricians who, keen to demonstrate their competence, often provide content that is more complex than necessary for authors.

There are some helpful introductory sites: most publishers now provide blogs aimed at their authors, full of gentle insights into how to use metrics for articles. Wiley, for example, has a blog for authors at Wiley Exchanges, and Elsevier Connect provides some useful posts. But I haven’t found a single reliable, unbiased outline of the whole subject for a practising author. In an interesting initiative, Jenny Delasalle, a former librarian and trainer, recently wrote a blog post about her participation in a poster produced by Elsevier, aimed at authors and certainly easily accessible. The poster, with the catchy title “Librarian Quick Reference Cards for Research Impact Metrics”, outlines 14 separate metrics that can be used to measure the impact of an article; the cards are designed as teaching aids for training authors. It attempts to summarise some of the most widely used metrics, but on closer examination it’s not quite the simple, balanced overview it appears to be. Some of the metrics described can’t be summarised in one sentence; and if 14 separate metrics are required to measure the impact of one article, many authors will say that is 13 (or at least 10) too many.

Should we believe these 14 metrics are the ones to use? One way to simplify matters would be to clarify which metrics apply to whom: some apply to institutions, some to individuals. Individual authors could therefore restrict themselves to the metrics that apply to their own work as researchers, rather than to their institution.

To demonstrate its credibility, the poster includes the endorsement of some of these 14 metrics by the Snowball Metrics Group. This group comprises major research universities in the UK and, increasingly, around the world, but it turns out to have created a list of no fewer than 24 metrics for measuring impact; if you want to know about those, you have to plough through over a hundred pages of documentation.

Should we just take their word for it? The Snowball Metrics are defined on their website as “global standards for institutional benchmarking”, and the group is described as “a bottom-up initiative ... owned by research-intensive universities around the globe”. However, this “academic-industry collaboration” turns out to have only one industry partner, Elsevier, and at the foot of the Snowball Metrics Group website is the statement “copyright © 2012 Elsevier B.V. All rights reserved”. I’m sure that Elsevier is not doing anything wrong, but why should it own the copyright to the website of an international university metrics group? Can authors be confident that this body’s decisions on metrics are not unduly influenced by any single publisher? Other industry-wide initiatives in publishing, such as Editeur, are jointly funded by several publishers, have an independent governing body, and appear to have genuinely independent status.

Finally, some recommendations. If the research process is to be open and transparent, then its metrics need to be easily understood, which means having fewer and simpler ones. In the meantime, a useful site for authors to look at is TIDSR, short for Toolkit for the Impact of Digitised Scholarly Resources, a guide from the Oxford Internet Institute (an initiative partly funded by JISC). With the aid of this kind of toolkit, let’s hope authors will be able to answer their question “Did anyone read my article?” and trust the answer.