To AV or to AV not?

We go to the polls in the UK tomorrow, in a national referendum to decide whether we should change the voting system we use in parliamentary elections. The choice is between our current system, First Past the Post (FPTP), and the Alternative Vote (AV) system. There’s a good, non-partisan explanation of the alternatives here, but briefly: under FPTP each voter gets a single choice, and the candidate with the most votes wins – even if they receive rather a low percentage of all votes cast. Under AV, voters rank candidates in order of preference; if no candidate has a majority of first preferences, the last-placed candidate is eliminated and their ballots transferred to each voter’s next preference, repeating until one candidate has at least 50% of the votes. Proponents of the AV system argue that it will result in fewer ‘wasted’ votes – so I could vote, say, Green, but give Labour as a second preference to indicate that I would rather they got in than the Conservatives. Opponents balk at the idea that someone can win the first round (i.e., get the most votes) but not get elected. I can see pros and cons in both systems, but the debate has descended into pettiness and misinformation, particularly on the part of the ‘No’ (No to AV, that is) campaign.
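(As an aside for the programmers: the counting procedure is simple enough to sketch in a few lines. Everything below – the ballots, the party labels, the av_winner function – is made up for illustration, not any official counting code.)

```python
from collections import Counter

def av_winner(ballots):
    """A toy AV (instant-runoff) count.

    Each ballot is a list of candidate names in order of preference.
    Every round, each live ballot counts for its highest-ranked
    surviving candidate; if no one has a strict majority, the
    last-placed candidate is eliminated and we count again.
    """
    surviving = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(
            next(c for c in ballot if c in surviving)
            for ballot in ballots
            if any(c in surviving for c in ballot)  # skip exhausted ballots
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # strict majority of live ballots
            return leader
        surviving.discard(min(tally, key=tally.get))  # drop the last-placed

# A made-up electorate of 100 voters:
ballots = (
    [["Conservative"]] * 40
    + [["Labour", "Green"]] * 35
    + [["Green", "Labour"]] * 25
)
# The Conservative leads on first preferences (40 v 35 v 25), but once
# the Greens are eliminated their transfers give Labour 60 and the seat.
print(av_winner(ballots))  # -> Labour
```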

In particular, the No campaign have used a range of sporting analogies to suggest that it would be ridiculous in a race for someone to cross the line in first position, yet not be declared the winner. My gut feeling was that this was actually unlikely to happen very often in practice, but I hadn’t seen any data from countries which use AV to back this up.

The Yes campaign have, however, used Australia as an example of a country which has used AV for years without any greater tendency towards unstable government (coalitions, hung parliaments, etc.) than we’ve had in the UK under FPTP. But what I wanted to know was: how often does a candidate finish second or lower on first preferences, yet end up getting elected?

Turns out, there’s a ton of data easily available to look at this. I found information on the 1998 federal election which stated that “99 of the 148 electorates in the House of Representatives required the distribution of preferences. In 7 of these seats… the candidate who led on primary votes lost after the distribution of preferences.” So in only 7 of 148 seats (about 5%) did the caricature of the No campaign – a ‘loser’ winning – actually happen.

I had a look at the 2010 election too. The definitive data are available to download here, for anyone wanting to do a thorough job of this – if, for example, they were employed on a campaign in favour of AV. I have neither the time nor the inclination to do a proper job, so I relied on the ABC report of the election results.

Looking through each of the 150 seats, I counted 139 (93%) in which the candidate who led on first preferences ended up winning the seat. Of the 11 seats where the first-preference leader lost after the distribution of preferences, 10 were won by the candidate who came second on first preferences. The remaining seat was won by the originally third-placed candidate. Excluding this 11th seat, the candidate who ended up winning was on average about 3.1% behind after the first round (range 0.1–9.5%); including all 11 seats, this changes to 3.7% (range 0.1–14.5%).
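If anyone did want to do the thorough job from the downloadable data, the sums are trivial. Here’s a minimal sketch, assuming a hypothetical CSV with one row per seat and invented column names (first_pref_leader, winner, margin_pct) – the real AEC files are no doubt structured rather differently:

```python
import csv

# Hypothetical file and columns, for illustration only: one row per seat,
# with the first-preference leader, the eventual winner, and how far the
# winner trailed on first preferences (in percentage points).
with open("house_results_2010.csv", newline="") as f:
    seats = list(csv.DictReader(f))

leader_won = [s for s in seats if s["first_pref_leader"] == s["winner"]]
leader_lost = [s for s in seats if s["first_pref_leader"] != s["winner"]]

print(f"First-preference leader won: {len(leader_won)}/{len(seats)} "
      f"({100 * len(leader_won) / len(seats):.0f}%)")

gaps = [float(s["margin_pct"]) for s in leader_lost]
if gaps:
    print(f"Where the leader lost, the eventual winner trailed by "
          f"{sum(gaps) / len(gaps):.1f}% on average "
          f"(range {min(gaps):.1f}-{max(gaps):.1f}%)")
```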

So, in general, Australia offers little evidence that candidates who don’t come first on first preferences will often end up winning seats, except in a (very) few pretty close-run races. Of course, AV may change voter (and campaigning) behaviour in all kinds of ways, some of which may not be desirable. But to base a campaign on such a straw man as ‘losers’ winning is pretty disingenuous.

In the end, it looks like I will be making my decision on this issue based on my distaste for a misleading and negative campaign, rather than through any great enthusiasm engendered by a campaign for positive change.

Slacking

On Friday I have the day off, thanks to the wedding of a fantastically privileged prince to, as Tim Dowling nicely put it (and appropriately for this blog),

…Future princess, common Kate;
As common as the common skate,
Which is to say, quite rare these days…

Add this to the rather more proletarian May Day holiday on Monday, and last weekend’s Easter break (Good Friday? Yes, and good Monday too, ta very much), and that makes 11 workdays in three weeks.

Which led me to wonder, how many other academic scientists will do as I have done, and take off every minute offered? I’ve been thinking a bit recently about the culture of ridiculous working hours in science, triggered by Rachel Bowden’s Nature Jobs blog a couple of weeks back. More particularly, I’ve been working myself into a state of some annoyance about what I perceive to be a bit of macho posturing, of bragging about the hours one works.

The biologist Edward O. Wilson – whose work and writing on biodiversity I respect and admire very much – has been quoted (I can’t find the source) as saying that a good scientist should expect to work 80 hours a week: 40 for research, 40 for admin, teaching, etc. This figure seems to be bandied around almost as something to aspire to, and I suppose that during your PhD there is something of a rite of passage about really putting in the hours, in the manner of junior hospital doctors. But, perhaps because I gave up trying to maintain anything like that pace a while ago (still more so since the birth of my son), it aggravates me that anyone thinks that, for a normal person, an 80-hour working week could possibly be sustained over a period of years, decades even.

Let’s do the maths (as Mark insists to Jeremy in an episode of Peep Show, the ‘s’ on the end there is non-negotiable).

There are 168 hours in the week, so 88 after your 80 hours of work.

Let’s assume you sleep for 7 hours a night – personally I’d prefer more, but 7 sounds reasonable. So, you now have 39 waking hours left.

Some of that will be filled with the business of staying alive. Let’s say 2 hours a day for preparing and eating food. More will be taken up with remaining vaguely hygienic in body and home – does an average of an hour a day for washing self and clothes, cleaning the house, doing any essential DIY tasks that arise, tidying the garden, buying groceries, etc., sound about right?

OK. You have 18 hours left. You’ll spend some of those commuting to work and back – an hour a day if you’re lucky – which, assuming you spread your 80 hours over 6 days, brings you down to 12 hours.

12 hours a week, every week, to do everything else. To socialise, chat to family and friends on the phone, play with your kids (assuming you ever found the time to perform the various tasks necessary to produce any), exercise, watch TV, whatever.
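If you prefer your sums as code, here’s the whole budget in one go – nothing below but the assumptions already stated above:

```python
# Weekly time budget under an 80-hour working week.
# Every allowance here is just the assumption from the text above.

HOURS_IN_WEEK = 24 * 7   # 168

work    = 80             # the quoted Wilson figure
sleep   = 7 * 7          # 7 hours a night
food    = 2 * 7          # preparing and eating food
chores  = 1 * 7          # washing, cleaning, shopping, DIY, etc.
commute = 1 * 6          # an hour a day over a 6-day working week

remaining = HOURS_IN_WEEK - work - sleep - food - chores - commute
print(f"Hours left for everything else: {remaining}")  # -> 12
```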

It’s worth laying these sums out in black and white because I know that people do feel pressured into working stupid hours (as a matter of course, I mean, not just for the occasional mad week before a deadline, which we all do). My message is that there are two ways to survive such a punishing schedule.

First, you could live to work. If you have no interest in food as anything other than fuel; if you are prepared to forego hobbies, exercise, a social life of any real meaning; then I suppose you could just about cram it in.

Second – and this, I suspect, is the route taken by most senior academics who are proud of their long hours at the desk – you can perform an accounting trick. You acquire a spouse who is prepared to do all of the work of making life run smoothly, and you simply appropriate their 40 hours. 80 hours’ work between a couple? (With sole credit to the partner in paid employment, of course.) Easy enough. But it’s not a path that many of us particularly want to follow.

So, I’ll be enjoying my bank holiday weekends. And working damn hard – during office hours – when I get back.

Thinker's Block

One of the challenges of managing the progression from being ‘somebody’s post-doc’ to an autonomous PI is to maintain the flow of new ideas. To start with, this is easy. After all, if you’re anything like me, you will have spent a considerable portion of the preceding few years developing and honing a whole stock of ideas, trying to convince employers and fellowship panels of their worth. The moment someone finally agrees to fund you, you’re up and running, free and gloriously independent to bring these thoughts to fruition. The first cracks in this blissful state of being start to appear a couple of years later. You’ve tried out those things that you were sure would make your name. They may have worked, to an extent – you’ve probably published a few papers of which you’re rather proud. And no doubt your initial research has suggested all kinds of routes for future work.

The first decision you need to make is: should I be pursuing these new ideas myself? Or should I be chasing the money, aiming to employ people to do the work for me? On the one hand, your institution will be desperate for the overheads that a big grant will bring (conveniently overlooking the generous overheads that your fellowship is already delivering…). But you also need to think: how much do I want to be a manager? Especially when you know, deep down, that the person best qualified to do the work quickly and to the requisite standard is, well, you.

The second big mental shift you need to make is to recognise that there is no end point. There’s no final report, no thesis to hand in, after which you can go out and celebrate, relax. No. Even if you’ve achieved everything you set out to, got it all written up and published; even then, you’ve got to come into work the next day, face that blank page, and think what to do next. All the while well aware that the most productive researchers in your field have maintained an average publication rate of a paper every two weeks for over 10 years.

This is when the doubts creep in. Are you up to this after all? What if your last good idea really was just that: your last good idea. What’s next on the list? And after that? Where are your next 6 papers coming from? Your next big grant application?

It’s easy, on such occasions, to feel swamped, to feel the sheer enormity of the amount of stuff you need to know, to read and absorb and understand in order to make progress in any one of the disparate areas that interest you; to wish, sometimes, that you could be happy making steady, incremental progress in one tiny area, rather than constantly seeking an exciting new challenge in a discipline whose history and literature remain (to you) as yet unknown. Even the most meticulously ordered hierarchical to-do list can struggle to cope with this.

Two things can help here. Dripping taps. And walking.

I can’t remember where I picked up the dripping tap analogy, but the idea, when feeling flooded, is to imagine this flood emanating from a vast array of dripping taps, each one representing something that you should be doing. And then you simply walk methodically through this landscape, concentrating on one tap at a time, turning each off in turn. You won’t stem the tide, of course, but you may keep your head above water.

And walking. Nietzsche said that “all truly great thoughts are conceived while walking”, but the quote I cling to more is one half-remembered, I’m pretty sure from Roger Deakin’s lovely Wildwood, about “working it out while walking”. The rhythm of walking seems to coax disparate thoughts out into the open, encouraging them to coalesce into something more tangible. (In passing, you can crowbar this into theories of creativity if you like; for instance, James Webb Young’s classic A Technique for Producing Ideas involves thinking very, very hard about something, then removing yourself from the work environment – by going for a walk, say – to let the doughy information you’ve absorbed prove into a nice elastic idea.)

So if you see me marching around Weston Park in Sheffield, chances are I’m tackling my thinker’s block. And incidentally, this walk home seems to have done the trick of unclogging my blogger’s block, too.