Still more on bibliometrics

Impact Factors have hit the NN blogs this week, and my intended comments on these interesting posts unintentionally swelled to this… Some of the below is reworked from an article I wrote as part of a feature on publications in the British Ecological Society Bulletin. Can we state often enough and clearly enough how lazy it is to use journal Impact Factors to measure individual performance? First, in the super-high IF multidisciplinary journals (e.g. Nature, IF = 34.5, Science, IF = 29.7), different disciplines do not contribute equally. As a specific example, the ‘impact’ of evolutionary papers appearing in these journals is lower than the journal IF would suggest (although still pretty high for the discipline). So, Nature papers in my field (ecology) are piggy-backing on the IF generated by papers in more highly-cited fields. (Although of course, IFs do become self-fulfilling: I must cite a Nature paper in order to make my work sound important.)

In probably the best summary I’ve read of the use and misuse of bibliometrics, Brian Cameron puts it nicely:

Publication… in a high-impact factor journal does not mean that an article will be highly cited, influential, or high quality

Given that it’s as easy to find out the number of citations to an individual paper as it is to obtain a journal’s IF, it seems odd to judge a paper on the journal-level figure which it may possibly exceed (although it probably will not). As a (humbling?) exercise, should we maybe highlight which of our papers exceed (or do not) the citation pattern predicted by the IF of the journal in which they appear?
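
For what it’s worth, here is a minimal sketch of what that exercise might look like (Python, with entirely invented numbers, and using the crude assumption that a paper ‘matches’ its journal’s IF when its mean citations per year equal the IF – not how IFs are formally calculated):

    # Hypothetical example: flag papers that beat the citation rate implied by
    # their journal's Impact Factor. All figures below are invented.
    papers = [
        # (title, citations to date, years since publication, journal IF)
        ("Paper A", 120, 5, 34.5),
        ("Paper B", 18, 4, 4.2),
        ("Paper C", 3, 6, 34.5),
    ]

    for title, cites, years, journal_if in papers:
        per_year = cites / years
        verdict = "exceeds" if per_year > journal_if else "falls below"
        print(f"{title}: {per_year:.1f} citations/year {verdict} a journal IF of {journal_if}")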

Inevitably, we turn to the (in)famous h index. You’ve got to admire the succinctness of this index, with the entire text of the abstract of Hirsch’s original paper reading:

I propose the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher

It also led to my favourite Nature headline, sometime in 2007:

Hirsch index valuable, says Hirsch

And it probably is, despite problems with irregular citation patterns. For instance, when I checked the citation record of Hubbell’s Unified Neutral Theory of Biodiversity and Biogeography, one of the most influential ecological works of the last decade (and published, would you believe, as an old-fashioned book!), I found that incorrect spellings and permutations of name, title etc. have resulted in this single work being cited in more than 60 different ways over the course of its >1100 citations!

Anyway, I’ve not kept up with the bibliometrics literature, but wonder if anyone has proposed the following modification of h: what is your h score per paper published? In other words, have you achieved an h of 20 due to 20 brilliant papers, or 200 mostly mediocre ones?
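
To make the idea concrete, here is a minimal sketch of the calculation (Python, with invented citation records for two hypothetical researchers):

    # Minimal sketch of the h index and the 'h per paper' modification
    # mused about above; all citation counts are invented.
    def h_index(citations):
        # h is the largest number such that h papers each have >= h citations
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    def h_per_paper(citations):
        # the proposed modification: h divided by total papers published
        return h_index(citations) / len(citations) if citations else 0.0

    focused = [250, 180, 90, 60, 45] + [25] * 15    # 20 well-cited papers
    prolific = [30] * 20 + [2] * 180                # 200 papers, mostly little cited
    print(h_index(focused), h_per_paper(focused))   # h = 20 from 20 papers: ratio 1.0
    print(h_index(prolific), h_per_paper(prolific)) # h = 20 from 200 papers: ratio 0.1

Both researchers end up with the same h, but the ratio makes the difference between them rather more obvious.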

Finally, a quick note on game playing: if you haven’t seen it, check out Andy Purvis’s cheeky demonstration of how preferentially citing those of your papers that hover just below the h threshold can be beneficial…

Sciency Fiction

I’ve been thinking about science in fiction recently. Prompted by Jennifer Rohn’s recent piece in Nature, I checked out her excellent LabLit site. Cath Ennis has also been blogging here about books, but most enticingly I have a stack of novels ready for my holiday next month. Time, then, for a personal selection of some science-themed fiction I’ve enjoyed in the last ten or so years… Although I was a science fiction nut as a boy, and still enjoy the odd foray into epic space operas – Iain M. Banks, principally – most of what I read these days comes off the general fiction shelves. And I’ll be honest, often I read fiction to escape from the day job, so I’m not really a massive consumer of LabLit. In fact, interpreting the genre (overly) literally, I can think of only two books I’ve read that actually feature a laboratory – Simon Mawer’s Mendel’s Dwarf and Atomised by Michel Houellebecq.

These two novels demonstrate nicely though that even when science and scientific ideas are important in a book, it is their qualities as works of fiction that are most important to me. Chief among these are character and style (possibly one of the reasons I didn’t get on with Ian McEwan’s Saturday was my intense dislike of the charmless Henry Perowne – although neither Michel nor Bruno in Atomised is exactly sympathetic and I loved that…). I’m less fussed about plot. Indeed some of my favourite books, by the likes of David Mitchell, Haruki Murakami, or Geoff Dyer, either leave large chunks of plot unresolved, or (in the case of Dyer especially) simply have little of much consequence happen in them.

I perhaps don’t tend to go for classic LabLit then, but I do like a book with a scientific sensibility, ‘sciency fiction’, if you will. Specifically, as an ecologist I like to read good authors writing about the natural world. Sometimes this is overtly scientific – Hope Clearwater, protagonist in William Boyd’s Brazzaville Beach, is a professional ecologist, and distilled through Boyd’s literary eye, her descriptions of field work in Dorset and Africa are especially vivid.

More often though, the science is more subtle. I loved Being Dead by Jim Crace, for example: the story of an elderly couple, murdered amid remote sand dunes, and slowly decomposing, it sounds horribly morbid but the tiny ecological details make it strangely beautiful. Given that I’m writing as Mola mola, I should probably mention Gould’s Book of Fish by Richard Flanagan too, although the focus is less on fish biology than on the grotesque deprivations of life in the penal colonies of Tasmania. There’s a nice fishy theme in Luis Fernando Verissimo’s wonderful The Club of Angels too, specifically the joys of eating the deadly fugu – more gastronomy than science, but a diversion well worth taking!

I’m a sucker for geekiness, and I like authors who assume of their readers a certain facility with numbers. Many writers think nothing of inserting, untranslated, phrases in Latin or Greek, German or Spanish. Why not then credit readers with knowledge of calculus or t-tests? So, while some may think it indulgent, I was rather charmed by the pages of pi reproduced in Douglas Coupland’s JPod, and the mathematical ‘calcae’ (appendices) at the end of Neal Stephenson’s Anathem.

Neal Stephenson is an interesting case, the only author above who remains confined to the Science Fiction and Fantasy shelves in my local bookshop – a barrier sufficient to deter many potential readers. I maintain, however, that his sprawling Baroque Cycle is as effective an evocation of time and place (17th- and early 18th-century Europe and beyond) as any I’ve read, encompassing wars and disease as well as the foundations of modern economics and natural scientific enquiry – in short, it is a work of historical fiction, albeit one with a scientific bent. I’m not claiming his works are worthy of the major literary prizes (indeed I’ve often felt that some of his tomes might benefit from losing a hundred pages here and there), but it has become something of a mission for me to convince those friends and family who loved, for instance, Hilary Mantel’s Wolf Hall that they would find something to enjoy in Stephenson’s epic. That my missionary zeal has yet to catch on only makes me more determined!

Making Impact Plans make more impact

I’m in the midst of putting together a grant application at the moment, to the Natural Environment Research Council (NERC). Part of this involves a document describing the activities I plan to undertake in order to fully realise the broader societal impact of my work. Given the fuss that UK academics have made regarding the introduction of this requirement across all UK Research Councils (RCs) a couple of years ago – long enough, indeed, for two name changes already, from ‘Knowledge Exchange Plan’ to ‘Impact Plan’ to ‘Pathways to Impact’ – it was something of a surprise to read in Nature that NSF’s new impact scheme is the first of its kind. Now personally, I have no objection to being asked about the wider impact of my work, even though it’s actually quite hard to demonstrate any kind of economic benefit (which ultimately is what the RCs are after) deriving from the kind of biodiversity research I do. Actually, it’s a positive thing to think about the ways in which one’s idle musings may actually be of some use to someone else. And particularly in this recession, with the looming presence of an austerity budget here towards the end of this month, it’s simply not realistic to expect the taxpayer to fund me to sit and think, regardless of how culturally sophisticated that would make us as a nation.

So broadly, I’m in favour of impact activities, especially as they actually offer the chance of more money, to fund some fun stuff – in my field, generally initiatives in web-based data-sharing, or stakeholder involvement, but also potentially seeding spin-out companies, partnerships with industry, and so on.

The issue I have is in the way that impact has crept into the assessment process of responsive mode (i.e. ‘blue skies’) funding proposals. For example, NERC initially stated quite categorically that – providing they reached a certain minimum standard – Impact Plans would have no bearing on the decision to fund research, which would be entirely based on the scientific excellence of the main proposal.

Of course, although that may have been the order from on high, it is not what happened in the panel meetings convened to make funding decisions. These panels have a really tough job assessing an increasing number of proposals with a limited pot of money. Clearly, any additional means that they can find to separate the funded from the also-rans will be used. And given that they (and all reviewers of the grant proposals) had full access to the impact plans, these quickly became an additional way of ranking proposals. Great science, crap impact? Sorry, we’ll give the money to the proposal with great science and great impact.

NERC now state:

Research grant proposals will continue to be assessed on science excellence. Pathways to impact will be included for assessment as part of the usual review process and will be considered as a secondary criterion alongside risk/reward and cost effectiveness.

At least this is out in the open now, and we know where we stand. The remaining problem lies with how these plans are assessed: within the standard review process, by scientists who do not necessarily (in fact, almost certainly do not) have any expertise in matters of ‘impact’. This leaves plenty of room for frustration if your proposal happens to land on the desk of someone with a different view of impact from your own.

So here’s my proposition. No impact plan to be submitted with any responsive mode grant proposal. Then, should your proposal be selected for funding, a condition of receiving the money is that you submit a formal ‘pathways to impact’ application which must meet a certain standard. Because the field would be so much smaller at this stage, these plans could be reviewed by a smaller panel of experts in the field of knowledge exchange and impact. This would save regular reviewers time. It would also, I believe, drive up the standard of impact activities: if you already know you’re getting the funding, but could get some extra money for a top impact plan, the incentive is there to do a really good job of writing it.

Just an idea, but I’ve yet to come up with a downside.