Impact Factors have hit the NN blogs this week, and my intended comments on these interesting posts unintentionally swelled to this… Some of the below is reworked from an article I wrote as part of a feature on publications in the British Ecological Society Bulletin. Can we state often enough and clearly enough how lazy it is to use journal Impact Factors to measure individual performance? First, in the super-high IF multidisciplinary journals (e.g. Nature, IF = 34.5; Science, IF = 29.7), different disciplines do not contribute equally. As a specific example, the ‘impact’ of evolutionary papers appearing in these journals is lower than the journal IF would suggest (although still pretty high for the discipline). So, Nature papers in my field (ecology) are piggy-backing on the IF generated by papers in more highly-cited fields. (Although of course, IFs do become self-fulfilling: I must cite a Nature paper in order to make my work sound important.)
In probably the best summary I’ve read of the use and misuse of bibliometrics, Brian Cameron puts it nicely:
Publication… in a high-impact factor journal does not mean that an article will be highly cited, influential, or high quality
Given that it’s as easy to find out the number of citations to an individual paper as it is to obtain a journal’s IF, it seems odd to judge a paper on the journal-level figure which it may possibly exceed (although it probably will not). As a (humbling?) exercise, should we maybe highlight which of our papers exceed (or do not) the citation pattern predicted by the IF of the journal in which they appear?
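That exercise is easy to sketch. The comparison below is deliberately crude: it treats a journal's IF as the expected citation count for a single paper, ignoring paper age and the IF's two-year window, and all the paper names and numbers are made up for illustration.

```python
def flag_against_if(papers):
    """Label each paper as exceeding or falling below its journal's IF.

    papers: list of (title, citation_count, journal_impact_factor) tuples.
    A crude yardstick only -- the IF is a journal-level two-year mean,
    not a per-paper prediction.
    """
    return [(title, "exceeds IF" if cites > impact_factor else "below IF")
            for title, cites, impact_factor in papers]

# Hypothetical record: two papers in the same IF-34.5 journal.
my_papers = [("Paper A", 50, 34.5), ("Paper B", 12, 34.5)]
print(flag_against_if(my_papers))
# → [('Paper A', 'exceeds IF'), ('Paper B', 'below IF')]
```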
Inevitably, we turn to the (in)famous h index. You’ve got to admire the succinctness of this index, with the entire text of the abstract of Hirsch’s original paper reading:
I propose the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher
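Hirsch’s one-line definition translates directly into a few lines of code. A minimal sketch (function name mine): sort a researcher’s per-paper citation counts in descending order and find the largest h such that h papers each have at least h citations.

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each (Hirsch 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list; the i-th best paper must have >= i citations.
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with at least 4 citations)
```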
It also led to my favourite Nature headline, sometime in 2007:
Hirsch index valuable, says Hirsch
And it probably is, despite problems with irregular citation patterns. For instance, when I checked the citation record of Hubbell’s Unified Neutral Theory of Biodiversity and Biogeography, one of the most influential ecological works of the last decade (and published, would you believe, as an old-fashioned book!), I found that incorrect spellings and permutations of name, title etc. have resulted in this single work being cited in more than 60 different ways over the course of its >1100 citations!
Anyway, I’ve not kept up with the bibliometrics literature, but wonder if anyone has proposed the following modification of h: what is your h score per paper published? In other words, have you achieved an h of 20 due to 20 brilliant papers, or 200 mostly mediocre ones?
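For what it’s worth, the modification is trivial to compute: divide h by the total number of papers. A quick sketch (names and numbers hypothetical, not a metric from the literature):

```python
def h_per_paper(citations):
    """Hirsch's h divided by total papers published -- a hypothetical metric."""
    if not citations:
        return 0.0
    ranked = sorted(citations, reverse=True)
    # h = number of ranks i at which the i-th best paper has >= i citations.
    h = sum(1 for i, c in enumerate(ranked, start=1) if c >= i)
    return h / len(citations)

# 20 brilliant papers vs. 200 mostly mediocre ones, both giving h = 20:
brilliant = [20] * 20
mediocre = [20] * 20 + [5] * 180
print(h_per_paper(brilliant))  # → 1.0
print(h_per_paper(mediocre))   # → 0.1
```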
Finally, a quick note on game playing: if you haven’t seen it, check out Andy Purvis’s cheeky demonstration of how preferentially citing those of your papers that hover just below the h threshold can be beneficial…