Ben Goldacre’s Bad Science column has recently been moved to the inside back page of the Saturday Guardian, which means I read it over breakfast (like most people with a passing interest in sport, and old enough still to read news on paper, I always read newspapers back to front, even when the sport is in a separate section…). Last week, he wrote about the dearth of evidence in politics – specifically, about the resistance to actually finding out, by collecting and analysing evidence, whether policies do what they were intended to do. I couldn’t agree more, and it’s an issue that has been frustrating me for a while. Those of us involved in environmental science (and I suspect it’s the same in other areas) are constantly bombarded with calls to feed into the ‘evidence-based policy’ process. Now, basing policy on evidence seems to me a very good idea (although I suspect that ‘evidence-based policy’ is usually just a stalling mechanism – a way to avoid making difficult decisions by constantly calling for more evidence before acting), and one result of this push is that the evidence base for phenomena like climate change, biodiversity loss, etc., is fast becoming exceptional.
But I’ve often felt that whereas ‘scientific’ policy is held to very high standards of evidence, the same is not true of ‘social’ policy (nor of economic policy, which may be the subject of a future blog…).
Rather, when considering which policy to advocate, politicians seem as likely to be swayed by a snappily-titled book as by any substantive body of evidence. Titles like Blink (‘the power of thinking without thinking’), Nudge (‘improving decisions about health, wealth and happiness’) and Sneeze (‘harnessing the creative power of hayfever’) are ideal (OK, so I may have made one of those up): a (sometimes good) idea is stretched well beyond its limits, and a hodge-podge of facts is crammed into this shaky framework. The Big Society beloved of Mr Cameron falls into this category: a scheme which nobody has tested, but on the basis of which incredibly important decisions are now being made. (For my money, the ‘big’ is redundant anyway; all that’s being described is what we used to call society (when such a thing still existed…).)
So, yes, Ben Goldacre is absolutely right: let’s get evidence into the policy process, and put some numbers behind big decisions (such as changing the voting system). If, say, we make wholesale changes to the NHS, triple university tuition fees, or whatever, we must carefully record the outcomes of those interventions so that, in years to come, we have a fighting chance of deciding whether or not they succeeded.
Where I depart slightly from Goldacre is in how we do this. He (like most medics) is a firm believer in the randomised controlled trial, a tremendously powerful way to assess the efficacy of a medical procedure. In some cases, it may be feasible to perform analogous trials in social policy, but this will rarely be so – you can’t, for instance, change the whole governance structure of one hospital in a region without changing the others; and if you then end up comparing regions, the randomisation is lost, as regions will differ in all sorts of other ways too.
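A toy simulation makes the point concrete (a sketch only – the six regions, twenty hospitals apiece and all the effect sizes below are invented for illustration). Randomise a policy that does nothing at all across the regions, then analyse the hospital-level data as though hospitals were independent, and the test cries ‘effect!’ far more often than the nominal 5% of the time, simply because regions differ among themselves anyway:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_regions, hosp_per_region = 6, 20
n_sims, alpha = 2000, 0.05
false_positives = 0

for _ in range(n_sims):
    # regions differ among themselves regardless of any policy
    region_baseline = rng.normal(0, 1, n_regions)
    # randomise a policy that does NOTHING across the six regions
    treated = rng.permutation([True] * 3 + [False] * 3)
    treated_y, control_y = [], []
    for r in range(n_regions):
        # hospital outcomes = region baseline + noise; no policy effect at all
        y = region_baseline[r] + rng.normal(0, 0.5, hosp_per_region)
        (treated_y if treated[r] else control_y).append(y)
    # naive analysis: treat hospitals as independent, ignoring the clustering
    _, p = ttest_ind(np.concatenate(treated_y), np.concatenate(control_y))
    false_positives += p < alpha

# far above the nominal 5%, despite the policy doing nothing
print(f"naive false-positive rate: {false_positives / n_sims:.0%}")
```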
I should add that Goldacre’s column is predicated on two books about randomised trials in social policy, which I haven’t yet read. My scepticism is derived more from my experience in applied ecology, where there has recently been a move to adopt medical methods – specifically, systematic reviews – to assess the outcomes of conservation interventions. The problem is, ecosystem manipulations are not clinical trials. Often, there is no standard intervention, and even when there is, it may be applied to very different systems (differing in species composition and all kinds of physical characteristics, not least spatial extent). And often, too, there is no agreed-upon outcome – I could increase the species richness in my garden, for instance (at least for a while), by introducing Japanese knotweed, but few would call that a ‘good’ conservation outcome. In medicine, you treat a patient, and they get better or they don’t, which makes comparisons between trials much more straightforward.
The solutions environmental scientists have generally come up with are highly sophisticated statistical methods that allow us to draw powerful inferences from nasty, heterogeneous data. Similar methods have of course been applied to social systems, but somehow they don’t seem to feed through to policy, at least not as often as they should; and even when they do, they risk being ignored if their message is politically undesirable.
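To give a flavour of one such method (a sketch only, with made-up data – ‘outcome’, ‘treated’ and ‘region’ are placeholder names, not any real dataset): a mixed-effects model of the kind ecologists routinely fit lets every region carry its own baseline, so the messy region-to-region differences are modelled rather than ignored, and the intervention’s effect can still be estimated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_regions, per_region = 8, 25
region = np.repeat(np.arange(n_regions), per_region)
baseline = rng.normal(0, 1, n_regions)[region]   # each region has its own level
treated = rng.integers(0, 2, region.size)        # messy, unbalanced 'treatment'
outcome = baseline + 0.4 * treated + rng.normal(0, 0.5, region.size)

df = pd.DataFrame({"outcome": outcome, "treated": treated, "region": region})

# random intercept per region; fixed effect for the intervention
fit = smf.mixedlm("outcome ~ treated", df, groups=df["region"]).fit()
print(fit.summary())   # the 'treated' coefficient should land near the true 0.4
```

The same machinery applies just as naturally to schools, hospitals or local authorities as it does to field sites.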
To return to the original point: improving social and environmental policy requires that we know what has worked, and what has not, in the past and elsewhere. So solving this evidence problem – gathering the evidence, and communicating it – should be a top priority for both the natural and the social sciences.