
Science, Gender, and the Social Network

Some while ago, preparing a piece for the British Ecological Society’s Bulletin on the general scarcity of female ecology professors, we had the pleasure of interviewing Professor Anne Glover. (Shortly afterwards Anne went on to become EU Chief Scientist. Coincidence? You decide…) One of the things that Anne talked to us about was the importance of informal social networks in career progression within science. Business conducted after hours, over drinks. Basically Bigwig A asking Bigwig B if he (inevitably) could think of anyone suitable for this new high-level committee, or that new editorial board; Bigwig B responding that he knew just the chap. That kind of thing. In some ways this is one of the less tractable parts of the whole gender in science thing. Much harder to confront than the outright and unashamed misogyny of the likes of Tim Hunt, simply because it is so much harder to pin down. We know that all-male panels at conferences, for instance, are rarely the result of conscious discrimination, stemming more often from thoughtlessness, laziness, or implicit bias.

With something as public as a conference, of course, we can easily point out such imbalances, and smart conference organisers can take steps to avoid them. (My strategy, by the way, is to identify the top names in your field, and invite members of their research groups. Has worked wonders for workshops I have run.) But how to get more diversity out of those agreements made over a pint (or post-pint, at the urinals)?

One way is to take steps to help a wide range of early career scientists to raise their profile. Be nice to them online, invite them to give talks, promote their papers, and so on. But another way into prominence is through publishing. Not your own papers (though that helps, of course); but the process of publishing others. Get a reputation for reviewing manuscripts well, and invitations onto editorial boards will follow. From there, editorial board meetings and socials, and your name starts to gain currency among influential people.

All of which is fine, but peer review is an invitation-only club. If you’re not invited, you’re not coming in.

Which brings me to the point of this post. I’m on a couple of editorial boards - Journal of Animal Ecology and Biology Letters. As a handling editor, I am responsible, among other things, for inviting referees to review manuscripts. And when I do this, you can bet your life that I will be calling on those potential reviewers nominated by the authors. Not exclusively, but certainly they will figure.

And I started to wonder what kind of gender balance there might be among these suggestions. 34 papers in, here’s your answer. (I should stress that the identity of the journals has no bearing on the following, all statistics are purely the result of choices made by submitting authors.) Over 40% of submitting authors did not suggest any female referees, with female suggested referees exceeding males on only 2 occasions, and a median proportion of 15% female suggestions. The number of suggested female referees does not increase with the total number of referees suggested, neither is there any relationship between the proportion of female authors (median in this sample of 1/3) and proportion of female suggested referees (correlation of 0.05, if you want numbers). Here’s a couple of figures:

Frequency distribution of the proportion of female suggested reviewers from 34 papers (left), and the number of female reviewers against the total number of suggested reviewers (right), where the diagonal line indicates parity.
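For anyone curious how such summaries come together, they amount to only a few lines of analysis. Below is a minimal Python sketch of the sort of calculation involved; the file name and column names (one row per submitted manuscript) are hypothetical stand-ins, not the actual data behind the figures above.

```python
# Minimal sketch of the summary statistics described above.
# "suggested_referees.csv" and its columns are hypothetical; one row per manuscript.
import pandas as pd

papers = pd.read_csv("suggested_referees.csv")

prop_female_suggested = papers["female_suggested"] / papers["total_suggested"]
prop_female_authors = papers["female_authors"] / papers["total_authors"]

# Share of papers whose authors suggested no female referees at all
print(f"No female suggestions: {(papers['female_suggested'] == 0).mean():.0%}")

# Papers where female suggestions outnumber male suggestions
print("Female > male:", (papers["female_suggested"] > papers["total_suggested"] / 2).sum())

# Median proportion of female suggestions, and its correlation with author gender balance
print(f"Median proportion female: {prop_female_suggested.median():.0%}")
print(f"Correlation with female authorship: {prop_female_suggested.corr(prop_female_authors):.2f}")
```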


What’s the message here? Maybe we need to start thinking more carefully about the lists of names we come up with, not just when these choices will be public - speakers at a conference, for example - but also, perhaps especially, when they will not. And not just because of the benefits that reviewers may or may not eventually receive in terms of board membership and so on. We get quickly jaded about the whole process of reviewing manuscripts, and forget too soon what a confidence boost it can be to be asked.

And just a coda: I’ve been thinking about this blog post for some time, a year at least. What is depressing is the number of occasions over that year - Hunt’s ridiculous outburst merely the most recent - when I have thought ‘I must get that post written, it’s so topical right now.’ How many years since Anne Glover outlined all these issues to us? (Eight, and counting.) How much has actually changed?

Well, one thing has, at least - the rise of new social networks, the online community that can be cruel but can also be incredibly supportive, providing a voice for those whom certain public figures would prefer to remain mute. These networks are open, no longer dependent - thank goodness - on 1950s values, beer-fuelled patronage, and old school ties.

A Case for Anonymous Open Review

I recently reviewed a manuscript for the pioneering journal PeerJ. This presented me with a quandary. PeerJ’s experiment in open reviewing is nicely outlined in their recent post, and includes two options: reviewers can sign their reports, and authors can publish the review history alongside their accepted paper. My quandary was this: I love the second idea, and think it is an important step forward in opening up the peer review process; but I don’t like to sign my reviews. Not because I want to hide behind anonymity - clearly, writing this post shows that I’m not going to any great lengths to hide my identity from the authors of the PeerJ manuscript - but rather because I think remaining anonymous makes me, personally, a better reviewer. So, on this occasion - despite producing what I consider to be a ‘good’ review, in that it was both pretty thorough and very positive - I declined to sign. To explain why, here’s some history.

It started with so-called ‘double-blind’ review, whereby manuscripts are anonymised before being sent to review. Or rather, it started with an argument about double-blind review. A paper said it benefited female authors. We disputed the evidence, and, although I know I’m predisposed to come down on my side of the argument, I honestly cannot see how anyone else can fail to agree with us - just look at our figure!!! And anyway, at a practical level how can it help, when only reviewers are blinded but editors make all the key decisions?

But I digress…

Thinking about double-blind review in turn led me to think about what I’d prefer to see in peer review, and openness seemed the way forward. At that time, only the first of PeerJ’s options was available, and for a while I started to sign all my reviews.

Well, I say ‘all’, but I noticed a trend: I was reluctant to sign my most critical reviews. This seems like basic human nature - it’s still evident at PeerJ, where reviewers are far less likely to sign reviews recommending rejection (see fig 5 here) - but is perhaps worth exploring more closely.

My particular field is relatively small, and I often know the authors of the manuscripts I review, at least well enough to say ‘hello’ to at conferences, sometimes much better than that. I have never seen this as a conflict of interest - I provide honest reviews whoever the author, and I have absolutely panned the work of some senior authors of very high standing - as well as some quite good friends - whose work I usually respect. I am much more comfortable doing this anonymously, not because there is anything in my comments that I would not, if forced, say to the face of the lead author; but simply because I would rather not be placed in that situation.

Yes, it all comes down to avoiding socially awkward situations. I will do almost anything to avoid face-to-face awkwardness. I am not one of those people who delights in pointing out a fatal flaw in someone’s work in the Q&A after a talk. I will find a million euphemisms for ‘crap’ if asked to comment on a (hypothetical, of course!) colleague’s substandard work. Whether you see that as a good or a bad quality in me probably depends on your cultural upbringing, but the simple fact is that I find the option of anonymity very appealing.

And so, having come to the conclusion that I preferred to remain anonymous when writing critical reviews, I felt the only morally consistent position for me to take was not to sign any reviews. Sometimes this is difficult. If I write an especially insightful (read: long) review of a piece by someone I really admire, it’s definitely tempting to sign. But no. Joey doesn’t share food, and Tom doesn’t sign reviews. Frankly - and I’m not suggesting for a moment that this is true of everyone - I think this makes me a better reviewer.

The other reason given for signing reviews is that it enables you to gain appropriate credit for your reviewing activity. I don’t really buy this - what kind of credit are you expecting? And how much? Let’s face it, writing a review can be hard work, but it’s much less demanding than writing the damn paper in the first place. My worry is that chasing formal credit encourages early career researchers to spend too long on reviews. I reviewed for Science a while back, and treated it with due seriousness: my review was several pages long, and really thorough, I thought. The other review stated, essentially: “Nah, not a Science paper”. I’m not saying this second review is something to aspire to, but you do need to learn to apportion time appropriately, and if you think a manuscript has very little merit, you probably don’t need six pages to say so.

Also: from whom are you expecting this credit for reviewing? You can already easily summarise your reviewing activity on your CV; I’m simply not convinced that adding a DOI for each review will drastically increase your employment prospects or standing in the community. Or at least, it’s not something I feel I need at this point. For those who want credit, and feel that a DOI gives them that, it is of course great to have the option.

I wouldn’t want any of the above to suggest that I am in any way against openness in peer review, which has numerous benefits. I would be delighted to see my (anonymous) reviews appended to published papers. There is of course an editorial issue here - it’s probably more useful to publish an essay-style review, à la Peerage of Science, than a numbered list of typos; and my experience is that many reviews themselves are riddled with spelling and grammatical errors. Who will review the reviews?! But in principle, yes, let’s open up the process. Transfer of reviews between journals - another form of openness, adding memory to the review process - is becoming more common too, especially within publishing houses, which is great, and ought to help avoid the kind of situation I wrote about here.

My point is that open, civil, and constructive reviews can still be conducted under anonymity. For the sake of us shrinking violets who value its protection, I hope the publishing pioneers at PeerJ and elsewhere retain it as an option.

Judgement vs Accountability

What with one thing and another, it’s taken me a while to sit down to write this, and the event that triggered it – the furore over this year’s GCSE results – already seems like old news. But it got me thinking more broadly, and I hope those thoughts are still relevant these several news cycles later. So: on a lively Newsnight debate about the GCSEs, someone suggested that exams at 16 were unnecessary, and said something like ‘teachers are professionals, they can use their professional judgement to assess their students at that age without the need for external examining bodies’. I don’t have particularly strong views on this particular topic (although I’m happy my results did not depend on my chemistry teacher, who once graded – apparently without noticing – a pile of French essays that we handed in for a joke), but the underlying issue of the (not always complementary) relationship between professional judgement and rigid accountability seems to me highly relevant to academia, in several ways.

Most obviously, of course, in teaching. In general the days of simply sticking a grade on a paper with no justification have passed, and with them rumours of dubious practices (the famous ‘chuck a pile of essays down the stairs and rank them by where they fall’). This is surely a good thing, and is the least that students should expect now that they have a more personal sense of what their education is costing.

But, partly as a consequence of increasingly assertive students, I’m getting more and more questions about the marks I give for undergraduate essays. Not disputing the marks, but asking what they would have needed to do to get that 72 rather than 68, or 75 rather than 72… Now I do try to set out an explicit marking scheme, and to provide ample feedback, but sometimes it’s tempting just to say ‘I just thought it was a solid 2:1’; or ‘What do you need to do to get 80? Just write a fantastic essay!’; or ‘What makes a great essay? Not sure, but I know one when I read one…’ The strict accountability introduced by rigid marking schemes can be your friend when you have 150 exam scripts to process, but when you’re marking half a dozen tutorial essays it can get in the way of a more subjective judgement.

Something similar happens in the peer review process for both papers and grant proposals. For papers, especially when acting as an editor and rejecting work without sending it for full review, I frequently justify the decision with bits copied and pasted from the journal’s aims and scope, so that it looks suitably accountable. But usually what I’m really saying (except on those occasions when I’m saying: ‘this is crap’) is, ‘Nah, sorry, didn’t really float my boat’. Or to couch the same sentiment in more formal language, ‘In my professional judgement, I don’t think this work merits publication in journal X’. Full stop. I think this has some similarities to a GP’s diagnosis – one hopes that it is founded in a good understanding of the subject, but one need not document every single step ruling out all other possible diagnoses.

Finally, in reviewing grant proposals you can be forced to be more prescriptive than perhaps you would like. Certain boxes must be filled in, for instance on what you perceive to be the main strengths and weaknesses of the proposed work, which forces you to break down the proposal in a way which may not match your gut feeling (to use another term for professional judgement). So something that you thought was eminently fundable is scuppered because you happened to list more in the weaknesses column than in the strengths – regardless of your overall impression.

Accountability is of course absolutely essential to the process of science – the audit trail which leads from raw data to published results is arguably more important than the results themselves. But in the assessment of its worth? I’m not so sure.