What with one thing and another, it’s taken me a while to sit down to write this, and the event that triggered it – the furore over this year’s GCSE results – already seems like old news. But it got me thinking more broadly, and I hope those thoughts are still relevant these several news cycles later. So: in a lively Newsnight debate about the GCSEs, someone suggested that exams at 16 were unnecessary, and said something like ‘teachers are professionals, they can use their professional judgement to assess their students at that age without the need for external examining bodies’. I don’t have particularly strong views on this topic (although I’m happy my results did not depend on my chemistry teacher, who once graded – apparently without noticing – a pile of French essays that we handed in for a joke), but the underlying issue of the (not always complementary) relationship between professional judgement and rigid accountability seems to me highly relevant to academia, in several ways.
Most obviously, of course, in teaching. In general, the days of simply sticking a grade on a paper with no justification have passed, and with them the rumours of dubious practices (the famous ‘chuck a pile of essays down the stairs and rank them by where they fall’). This is surely a good thing, and is the least that students should expect now that they have a more personal sense of what their education is costing.
But, partly as a consequence of increasingly assertive students, I’m getting more and more questions about the marks I give for undergraduate essays. Not disputing the marks, but asking what they would have needed to do to get that 72 rather than 68, or 75 rather than 72… Now I do try to set out an explicit marking scheme, and to provide ample feedback, but sometimes it’s tempting just to say ‘I just thought it was a solid 2:1’; or ‘What do you need to do to get 80? Just write a fantastic essay!’; or ‘What makes a great essay? Not sure, but I know one when I read one…’ The strict accountability introduced by rigid marking schemes can be your friend when you have 150 exam scripts to process, but when you’re marking half a dozen tutorial essays it can get in the way of a more subjective judgement.
Something similar happens in the peer review process for both papers and grant proposals. For papers, especially when acting as an editor and rejecting work without sending it for full review, I frequently defend my decision in an accountable fashion, using bits copied and pasted from the journal’s aims and scope. But usually what I’m really saying (except on those occasions when I’m saying ‘this is crap’) is ‘Nah, sorry, didn’t really float my boat’. Or, to couch the same sentiment in more formal language, ‘In my professional judgement, I don’t think this work merits publication in journal X’. Full stop. I think this has some similarities to a GP’s diagnosis – one hopes that it is founded in a good understanding of the subject, but one need not document every single step taken to rule out all other possible diagnoses.
Finally, when reviewing grant proposals you can be forced to be more prescriptive than perhaps you would like. Certain boxes must be filled in, for instance on what you perceive to be the main strengths and weaknesses of the proposed work, which forces you to break down the proposal in a way that may not match your gut feeling (to use another term for professional judgement). So something you thought was eminently fundable is scuppered because you happened to list more in the weaknesses column than in the strengths – regardless of your overall impression.
Accountability is of course absolutely essential to the process of science – the audit trail that leads from raw data to published results is arguably more important than the results themselves. But in assessing the worth of that science? I’m not so sure.