On Metrics
DOI: 10.1055/s-0033-1351655
Publication Date: 05 August 2013 (online)
Not many readers will be familiar with The Meters, I am afraid. This is a pity, because these gifted musicians from New Orleans formed what was probably the most influential funk band of the 1960s and 70s. Brilliantly produced by Allen Toussaint, the band's impact can be heard and felt (funk is a very physical music) in the records of much more famous people, all the way to the contemporary acid jazz icons Jamiroquai. Their name, however, remained their mission statement: The Meters are the funk musicians against whom all others must be measured. Today they would perhaps call themselves The Benchmarks.
Science is a lot about measuring. If you can measure something, you wield the power of numbers and become almost invulnerable in an argument. Doubtful decisions are more often than not based on sheer numbers. Scores abound that try to compress patients into categories to make them more measurable. On the other hand, there are scientists who claim that you cannot observe, let alone measure, something without changing it at the same time. Medicine itself can never be an exact science and will therefore inherently keep escaping measurability. This is something the ambitious young doctor with grant money and a brain brimming with ideas and idealism must be reminded of from time to time. He or she will be hard to rein in, however, because scientists themselves are a constant target of measurement.
Everybody knows about and is afraid of the Impact Factor (IF), for instance. This metric was originally devised to advise librarians about the selection of journals worth subscribing to, and is today calculated by Thomson Reuters. As they themselves stated in 2008, “Perhaps the most prominent misuse of the Journal Impact Factor is its misapplication to draw conclusions about the performance of an individual researcher.”[1] But even its original purpose as a valid metric for journal quality has been hotly debated over the years, as it became apparent that it depends on numerous factors – some quality-related, some definitely not. The debate is ongoing and found its most recent culmination in the San Francisco Declaration on Research Assessment (DORA),[2] which is, by its very nature, itself highly controversial.
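For readers who have never looked under the hood, the arithmetic behind the classic two-year IF is simple: the citations a journal receives in a given year to the items it published in the two preceding years, divided by the number of citable items it published in those two years. The sketch below uses invented numbers purely as an illustration, not data from any real journal.

```python
# Two-year Journal Impact Factor, illustrated with invented numbers.
citations_2013_to_2011_2012 = 450  # citations received in 2013 to items published 2011-2012
citable_items_2011_2012 = 300      # articles and reviews published in 2011-2012

impact_factor_2013 = citations_2013_to_2011_2012 / citable_items_2011_2012
print(f"2013 IF = {impact_factor_2013:.2f}")  # 2013 IF = 1.50
```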
Scientists would not be scientists if they gave up easily. If one metric instrument appears to be inappropriate, a substitute must be found. Meanwhile we have the 5-year IF, the Immediacy Index, the Cited Half-Life, the Eigenfactor, the Hirsch Index, the Article Influence, and many more – each with a specific target. Only the combination of several analyses allows meaningful conclusions about the quality of a journal or an article. But is such an apparent influence on citations really a suitable metric for quality (whatever that may be) at all? If one concentrates on an individual researcher or institution, matters become even more complex.
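Of the metrics listed above, the Hirsch Index (h-index) is the easiest to state: a researcher has index h if h of his or her papers have each been cited at least h times. A minimal sketch, using an invented citation list rather than any real researcher's record, might look like this:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented example: five papers with these citation counts.
print(h_index([25, 8, 5, 3, 0]))  # -> 3 (three papers cited at least 3 times each)
```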
With the analyzed world constantly changing around us, it was only a matter of time until totally different approaches were introduced. A fascinating one is the appropriately named “Altmetric” concept. In its own words, “Altmetric tracks what people are saying about papers online on behalf of institutions, publishers, authors, libraries and institutions.”[3] For a generation that spends a considerable part of its life bent over a smartphone or tablet computer, happily swiping an index finger across an illuminated touchscreen, distribution across the so-called social media is what creates impact. Today, if something is not available on Facebook, Twitter, LinkedIn, and the like, it does not seem to exist at all. These are citations too, albeit of a different kind. This is something the publishing world had better become accustomed to, and fast.
Bibliometric analysis is a fascinating field for an editor; it helps him to understand readers and to shape a journal according to their apparent needs. The Thoracic and Cardiovascular Surgeon, with its 60-year history, is currently undergoing a complete in-depth investigation to this effect. A first spin-off is a comparison trying to answer the question of whether readers actually read what the peer reviewers selected for them. We came up with astonishing results that will be presented at the International Congress on Peer Review and Biomedical Publication.[4] As soon as the data are officially published, I shall be happy to inform you about your reading habits in detail. Something to look forward to – or, as The Meters had it: “Here Comes the Meter Man” (again).
References
- 1 http://community.thomsonreuters.com/t5/Citation-Impact-Center/Preserving-the-Integrity-of-The-Journal-Impact-Factor-Guidelines/ba-p/1218, accessed July 1, 2013.
- 2 http://am.ascb.org/dora/, accessed July 1, 2013.
- 3 http://altmetric.com, accessed July 1, 2013.
- 4 http://www.peerreviewcongress.org/index.html, accessed July 1, 2013.