Big data for big ecology

As buzz words go, ‘big data’ is right up there just now. It seems that every question you care to think of, in every field from public policy to evolutionary biology, can be hit with the big data hammer. Add an ‘omics’ or two, and you’re laughing. So I’m slightly ashamed that we decided to call our workshop at the British Ecological Society’s Annual Meeting ‘Big Data for Big Ecology’. But when I say ‘we’ I mean the BES Macroecology Special Interest Group, and macroecology is – as its name suggests – ‘big ecology’, so it seemed natural to combine this with the buzz word du jour.

And as it turned out, I think we were vindicated. We held the first of two one-hour workshops in a room that could comfortably seat 50. Over 100 people squeezed in, and we had to turn some away. So clearly the interest is there, perhaps at least partly because ecological ‘big data’ differ from the data collected in other fields, and we’re still feeling our way towards how best to deal with issues of storage, access, and analysis. This contrasts with some other fields. For instance, sequence data take a pretty standard form, and it’s relatively straightforward to design a system to collate all sequence data – Genbank is testament to this. Ecological data are much more heterogeneous – people measure different things in different systems, there’s no universally agreed common unit of measurement, people work at different spatial scales, in different habitats and environments, and so on.

There is also the matter of what we mean by ‘big’. Again, there’s a contrast here with genomics, where a million sequences is now almost a trivially small number. I think in ecology we’re much more likely to be dealing with records in the thousands or hundreds of thousands, so again the computational challenges are different: doing something clever with a large quantity of complex data, rather than with an absolutely huge amount of simpler (or at least, relatively standard) data.

The aim of this first workshop was to introduce a couple of major ecological datasets, then to discuss the issues associated with sharing data. Importantly, by involving figshare, we were able to present some solutions rather than simply rehashing the same old (perceived) problems. I posted a storify of this first hour here, but briefly, we heard from Paula Lightfoot, data access officer for the UK’s National Biodiversity Network Trust. The NBN holds >80 million distribution records from around 150 data providers, consisting of almost 800 individual datasets. The data cover a very wide range of taxa, although birds, lepidoptera and flowering plants make up ¾ of the records. The NBN gateway has always been a fantastic public-facing portal to biodiversity data (go and have a play if you want to confirm this), but these data are underused in research. So for me it was particularly interesting to learn about recent improvements to the NBN’s data delivery system that try to address concerns such as those raised by a BES working group involving several of the Macroecology group (including myself and group chair Nick Isaac). Some of the data on the NBN are sensitive or otherwise restricted-access, but you can now trigger a single request that goes to all the relevant data owners. Likewise, you can download information from multiple datasets as a single text file – which is often all that we, as ecological data analysts, want.
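That single-file download lends itself to simple scripted summaries. As a sketch only – the tab-delimited layout and column names below are invented for illustration, not the NBN’s actual export schema – here is how you might count each source dataset’s contribution to a combined download:

```python
import csv
import io
from collections import Counter

# Hypothetical extract of a combined multi-dataset download: one
# tab-delimited file, with each record labelled by its source dataset.
# Column names are illustrative, not the NBN's real schema.
sample = """species\tdataset\tyear\tgrid_ref
Pieris brassicae\tButterfly Atlas\t2011\tSK35
Turdus merula\tGarden Birds\t2012\tSK36
Pieris brassicae\tButterfly Atlas\t2012\tSK35
"""

def records_per_dataset(text):
    """Count how many occurrence records each source dataset contributes."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return Counter(row["dataset"] for row in reader)

counts = records_per_dataset(sample)
print(counts["Butterfly Atlas"])  # 2
```

The point is less the counting than the workflow: one flat file in, standard tools out, with no per-provider wrangling.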

Charly Griffiths from the Marine Biological Association data team then gave an overview of the data holdings in Plymouth, which I think was really valuable in raising awareness of some of these phenomenal datasets among the overwhelmingly terrestrial community of the BES. Things like the Continuous Plankton Recorder data held by SAHFOS, which, at over 80 years, is among the longest-running and most spatially extensive ecological time series in existence. Or the Western Channel Observatory, one of the very few long-term programmes to collect information across an entire community (“from genes to fish, from seconds to centuries”).

Then we changed tack, from talking about where we might find data to what we should do with our own. A quick show of hands revealed that almost everyone in the room had used other people’s data in their work; rather fewer had shared their own. Mark Hahnel from figshare gave a quick demo to show how easy it can be to share all kinds of outputs – from static figures to code to very large datasets – on the figshare platform, where each output instantly gains a DOI, and thus becomes citable.

Given how easy this process is, why don’t more people share their data? Our discussion identified two main objections. First, people remain highly protective of their data, and suspicious that there are armies of people just waiting for them to become public so that they can do the same analyses (only faster) that the data owner had planned. I think this is understandable – ecological data are often collected in pretty extreme environments, involving a huge amount of hard work, and it is natural to want to get the full benefit of this toil before others are able to profit.

There are two counters to this. First, the idealistic one: in most cases you were paid to collect your data, very often with public money; the data are not yours to hoard; you were not funded to advance your career, but to advance science. Second, more pragmatically: it’s unlikely that many people are especially interested in what you do. Only a small fraction of those who are will have both the time to work on your data and the expertise to do anything useful with them. Fewer still will be inclined to screw you over, especially (and this is important) if you have taken the step of laying out your stall in public (on figshare or wherever). And academic karma will sort them out soon anyway…

The second issue, that of data ownership, is harder to address, regardless of any mandate to make data available. This is a particular problem for someone like me, who uses other people’s data all the time. The value that I add lies in combining existing datasets and analysing them in novel ways. Often I have had to secure various permissions to use the data in the first place, and the extent to which what I have produced is an original data product is not clear. So while my inclination is to share everything, I do have to be very careful that I’m not sharing anything where I have previously signed an agreement to say that I won’t. Even in these cases though it is still possible to share extensive metadata and the code used to access and analyse the data.
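One way to make that concrete is to structure the shared analysis code so that it documents exactly what data it expects, without bundling the data themselves. A minimal sketch (the filename and message below are hypothetical):

```python
from pathlib import Path

# Pattern for sharing analysis code when the underlying data cannot be
# shared: the script states exactly what it expects, and anyone who has
# secured their own access can drop their copy in. Filename is illustrative.
DATA_FILE = Path("restricted_survey_data.csv")

def load_or_explain(path=DATA_FILE):
    """Return the file's contents if present, else explain how to obtain it."""
    if path.exists():
        return path.read_text()
    raise FileNotFoundError(
        f"{path} not found: request access from the data provider, "
        "then place the file alongside this script."
    )
```

The analysis itself stays fully public and reproducible; only the restricted input has to be sourced independently.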

Scott Chamberlain, who delivered the second workshop, touched on some of these kinds of issues, as well as potential solutions. Scott and the rest of the rOpenSci team use APIs to access large datasets, and it is perfectly possible for a data provider to restrict access to their data via this API route. In that case, one can publish a load of R code documenting how data were accessed, manipulated and analysed, which could be replicated by anyone with the same data-access privileges as you (often gained through personal contact with the data provider). This could be a really neat solution to accessing multiply-owned datasets. Scott’s presentation is online here, and if you have any interest in accessing data using R, it is a must-read, and highly endorsed by all of the 100 or so of us who were at the workshop (see some of the comments in my second storify).
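To illustrate the pattern (in Python rather than R, and with an invented endpoint, parameter names and token – this is not any real provider’s API): the shared script constructs the authenticated request explicitly, so anyone holding their own credentials can re-run exactly the same access step.

```python
import os
from urllib.parse import urlencode
from urllib.request import Request

# A documented, replicable data-access step. The endpoint and query
# parameters are hypothetical -- substitute your provider's real API.
API_BASE = "https://data.example.org/api/occurrences"

def build_request(species, token):
    """Construct the authenticated request; the query itself is public,
    but only holders of a valid token can execute it."""
    url = API_BASE + "?" + urlencode({"species": species, "limit": 100})
    return Request(url, headers={"Authorization": f"Bearer {token}"})

# The token stays out of the shared code, supplied via the environment.
req = build_request("Pieris brassicae", os.environ.get("PROVIDER_TOKEN", "demo"))
print(req.full_url)
```

Everything except the credential is in the open, which is precisely what makes an analysis of restricted data auditable without the data themselves ever being republished.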

So where do we go from here? That’s a genuine question: we clearly hit a nerve and got a huge amount of interest, so we want to take it forward. But how? Should we be writing a set of standards for ecological data? A catalogue of existing datasets? A set of tutorials? I appreciate that we are far from the only people interested in this, and don’t want to replicate the efforts of others – so maybe a list of these other efforts would be a good place to start? Any thoughts gratefully received, either in the comments here or via Twitter (@besmacroecol, @tomjwebb, #besbigdata) or our Facebook group.