Privatising the Peer Review Process?

I spent last week scouting for sunfish and generally enjoying the beautiful Pembrokeshire coastline, despite the idiosyncratic Welsh summer weather (which eventually led to me abandoning my tent for the second time in my life!). So, relaxed and windblown, I thought I’d ease back into things by posting on my colleague Owen Petchey’s plan to fix the peer review process by privatising the reviewer commons, just published in the Bulletin of the Ecological Society of America. Owen and Jeremy Fox start by outlining their concerns with peer review in its current form:

The peer review system is breaking down and will soon be in crisis: increasing numbers of submitted manuscripts mean that demand for reviews is outstripping supply. This is a classic “tragedy of the commons,” in which individuals have every incentive to exploit the “reviewer commons” by submitting manuscripts, but little or no incentive to contribute reviews. The result is a system increasingly dominated by “cheats” (individuals who submit papers without doing proportionate reviewing), with increasingly random and potentially biased results as more and more manuscripts are rejected without external review.

Their solution is to privatise these commons through a system of PubCreds: researchers would earn credits for reviewing manuscripts, which they would then use to ‘pay’ for submitting their own papers:

We propose that authors “pay” for their submissions with credits, called PubCreds, “earned” by doing reviews. Submission of a manuscript costs three PubCreds, while a completed peer review pays one PubCred. Every individual would have an account held in the central “PubCred Bank.” Their account would be credited when they carry out a peer review, and debited when a manuscript is submitted. Individuals could view their account balance and transaction history on the PubCred Bank web site. We suggest that the PubCred Bank also log requests to review that have been declined, and the reason for declining (the reasons for this are explained below). Critically, submission of a manuscript to a journal would be possible only if an individual’s account balance contained sufficient PubCreds…
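
As an aside, the accounting described in that passage is simple enough to sketch in a few lines of code. Here is a minimal illustration (my own, in Python; the three-to-one exchange rate and the logging of declined requests come from the quote above, everything else is assumption) of how a PubCred account might behave:

```python
# A toy PubCred ledger, sketching the accounting in Fox and Petchey's description:
# a completed review credits one PubCred, a submission debits three, and a
# submission is only allowed if the balance covers it. Declined review requests
# are logged along with the reason, as the proposal suggests.

SUBMISSION_COST = 3  # PubCreds debited per manuscript submitted
REVIEW_PAYMENT = 1   # PubCreds credited per completed review

class PubCredAccount:
    def __init__(self, researcher):
        self.researcher = researcher
        self.balance = 0
        self.history = []            # transaction log, viewable by the account holder
        self.declined_requests = []  # review requests declined, with reasons

    def complete_review(self, manuscript_id):
        self.balance += REVIEW_PAYMENT
        self.history.append(("review", manuscript_id, +REVIEW_PAYMENT))

    def decline_review(self, manuscript_id, reason):
        self.declined_requests.append((manuscript_id, reason))

    def submit_manuscript(self, manuscript_id):
        if self.balance < SUBMISSION_COST:
            raise ValueError("Insufficient PubCreds to submit this manuscript")
        self.balance -= SUBMISSION_COST
        self.history.append(("submission", manuscript_id, -SUBMISSION_COST))
```

Even this toy version makes the central tension obvious: you need three completed reviews in the bank before you can submit anything.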

Jeremy and Owen develop this idea in more detail, and consider how some of the technical and philosophical obstacles to implementation may be overcome (for instance, the possibility of overdrafts could ensure that publication of important work was not delayed because of a shortage of recent reviews). But they emphasise that:

…potential drawbacks to our proposed system must be weighed against the actual drawbacks of the current system, which are widely recognized and increasingly serious.

I think it’s an interesting idea, and, despite a congenital knee-jerk opposition to all forms of privatisation (I grew up in Thatcher’s Britain…) I suspect I would probably do OK from it – my reviewing balances my writing reasonably well at the moment. It does seem to be all-or-nothing, though: it’s difficult to see how it could ever be trialled without the full involvement of at least a large majority of the journals within a field.

Also, just as academics play games with impact factors, inevitably strategies would develop here too. For instance, given that PubCreds can be shared between co-authors, might we see clubs of authors develop, some of whom specialise in reviewing, whilst others do the research? But, having relatively recently taken up editorial positions with a couple of journals, I do agree that finding suitable reviewers is a pain, and so some kind of initiative to incentivise the process would be welcome.

In defence of indiscreet emails

The saga of the climate emails just never seems to end, and one of the things that amuses me most is the shock regularly expressed by journalists that scientists occasionally stray from objective and impartial assessments of the work of their rivals. The response from many scientists and science publications has been contrite. Well, I object to this hand-wringing. I want to defend the role of the private email in academia, and our right to be indiscreet. A couple of years ago the publication of a contentious paper in a leading ecological journal resulted in a flurry of incredulous emails passing through my inbox:

…have a look at this, the latest ramblings from X…; …knowing the authors there must be a bad statistical error; It might be a good exercise for students to read this and then to list the 100 biggest errors in order of magnitude (although it might be tough to narrow it down to 100…)

A few of us decided to go a step further, and put together a response to the paper in question. The tactics of publication were again discussed frankly:

in the submission letter you could… say that we are extremely surprised that this has made publication given the highly dubious nature of their results; I would just submit it today… as you say, it would be unbelievable if no one else spotted how crap that paper is!

Yes, I admit it. I regularly try to suppress the publication and dissemination of work that I do not rate. Interpret this, if you like, as a shady conspiracy of unscrupulous scientists to undermine the peer-review process and push a specific agenda (probably in cahoots with world leaders and funding bodies). But in fact, I was simply doing what university scientists have done for years: using email to bitch, gossip and whinge when the person with whom I wished to bitch, gossip or whinge happened to be in a different building, city or time zone. I have absolutely no qualms about this, because I am satisfied that any criticism that I choose to place into the public domain, in publications, peer reviews, and so on, will be thoroughly professional and well-reasoned.

Here’s another thing: I use tricks too. Not that I’ve used the word in an email – I was once a student at UEA, so cannot be too careful – but it would be entirely appropriate to do so. In a recent piece of work, we were looking for the perfect ‘trick’ to bring graphical order out of the chaos of 7 million data points, including everything down to the scaling of axes and choice of colour scheme. It is ludicrous to suppose that we shouldn’t manipulate and statistically interrogate complex datasets in order to better reveal patterns. That is a large part of my job. Presumably the realisation that it is easier to deny global warming if time begins in 2001 also dawned following similar private conversations among colleagues. The difference is that I’m comfortable enough with the tricks we used to have documented them all in the R code associated with the paper (still in press – I’ll blog about the science when it’s finally published!).

I have friends working in government science who won’t write in an email anything they wouldn’t want to see published. Paul Ehrlich recently made the same point, claiming that there is now no such thing as a private email. I just think it would be a damn shame if this attitude spread throughout academia. As I have previously commented, science requires a thick skin. Sometimes we crack, and take all the criticism personally, and then email offers an outlet for us to express to our mates our annoyance and distress, before we take some deep breaths, and get on with the business of being professional and dispassionate about it all.

After all, the forceful criticism of substandard work is essential to the progress of science. Are we not allowed to have a little fun along the way?

Still more on bibliometrics

Impact Factors have hit the NN blogs this week, and my intended comments on these interesting posts unintentionally swelled to this… Some of the below is reworked from an article I wrote as part of a feature on publications in the British Ecological Society Bulletin. Can we state often enough and clearly enough how lazy it is to use journal Impact Factors to measure individual performance? First, in the super-high-IF multidisciplinary journals (e.g. Nature, IF = 34.5, Science, IF = 29.7), different disciplines do not contribute equally. As a specific example, the ‘impact’ of evolutionary papers appearing in these journals is lower than the journal IF would suggest (although still pretty high for the discipline). So, Nature papers in my field (ecology) are piggy-backing on the IF generated by papers in more highly-cited fields. (Although of course, IFs do become self-fulfilling: I must cite a Nature paper in order to make my work sound important.)

In probably the best summary I’ve read of the use and misuse of bibliometrics, Brian Cameron puts it nicely:

Publication… in a high-impact factor journal does not mean that an article will be highly cited, influential, or high quality

Given that it’s as easy to find out the number of citations to an individual paper as it is to obtain a journal’s IF, it seems odd to judge a paper on a journal-level figure that the paper itself may exceed (although it probably will not). As a (humbling?) exercise, should we perhaps highlight which of our papers exceed, and which fall short of, the citation rate predicted by the IF of the journal in which they appear?

Inevitably, we turn to the (in)famous h index. You’ve got to admire the succinctness of this index, with the entire text of the abstract of Hirsch’s original paper reading:

I propose the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher
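
For concreteness, here is a quick illustration (my own, with made-up citation counts) of computing h exactly as that definition reads: sort a researcher’s papers by citations and find the largest rank at which the citation count still matches or exceeds the rank.

```python
# Compute the h index from a list of per-paper citation counts:
# the largest h such that h papers each have at least h citations.

def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation record: only five papers have at least 6 citations,
# so h cannot reach 6; here h = 5.
print(h_index([50, 30, 21, 8, 6, 5, 4, 1, 0]))  # -> 5
```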

It also led to my favourite Nature headline, sometime in 2007:

Hirsch index valuable, says Hirsch

And it probably is, despite problems with irregular citation patterns. For instance, when I checked the citation record of Hubbell’s Unified Neutral Theory of Biodiversity and Biogeography, one of the most influential ecological works of the last decade (and published, would you believe, as an old-fashioned book!), I found that incorrect spellings and permutations of name, title, etc. have resulted in this single work being cited in more than 60 different ways over the course of its >1100 citations!

Anyway, I’ve not kept up with the bibliometrics literature, but wonder if anyone has proposed the following modification of h: what is your h score per paper published? In other words, have you achieved an h of 20 due to 20 brilliant papers, or 200 mostly mediocre ones?
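
Having already sketched h above, the per-paper version I’m imagining is only a small extension (again, my own illustration of the idea, not something I know to exist in the literature):

```python
# Normalise h by the number of papers published: an h of 20 from 20 papers
# scores 1.0, while the same h from 200 papers scores 0.1.

def h_per_paper(citations):
    cites = sorted(citations, reverse=True)
    h = sum(1 for rank, count in enumerate(cites, start=1) if count >= rank)
    return h / len(cites)
```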

Finally, a quick note on game playing: if you haven’t seen it, check out Andy Purvis’s cheeky demonstration of how preferentially citing those of your papers that hover just below the h threshold can be beneficial…