
Biostumblematic

A biophysicist teaches himself how to code

I’ve been thinking a bit more about open science lately, given the outside chance that I’ll be able to attend the upcoming Science Commons symposium in Seattle. It’s a topic that I’ve unfortunately pushed to the back burner a bit while I’ve been getting settled in my post-doc.

Once again I’ve been trying to decide what I think is the key issue in developing a culture of sharing scientific data. At the moment I feel the main problem is data management: labs have a hard time keeping track of their data internally, let alone “preparing” it for broader release.

For example, my lab generates a relatively small amount of data (easily quantifiable files, like the results of instrument runs): on the order of 1 GB/month. Even though this is probably about average for a science lab, it’s surprisingly difficult to keep organized and readily accessible, because it’s being produced by several largely independent students working on distinct projects. In addition, the tools we have for analyzing this data are clunky, prone to crashes, and using them is an exercise in caveats and “magic numbers”. Combining and parsing data across multiple experiments is a major operation.
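
To give a flavor of what that means in practice, here is a rough sketch of the kind of one-off script someone ends up writing to pull per-student instrument runs into a single table. It’s Python with pandas, and the folder layout, file names, and metadata tags are all invented for illustration; nothing here reflects a real system.

    # Ad-hoc merging of instrument output scattered across per-student folders.
    # The layout under DATA_ROOT is hypothetical.
    from pathlib import Path
    import pandas as pd

    DATA_ROOT = Path("/shared/lab_data")   # made-up common-use location

    frames = []
    for csv_file in DATA_ROOT.glob("*/instrument_runs/*.csv"):
        df = pd.read_csv(csv_file)
        # The files themselves rarely carry provenance, so tag each run
        # with who produced it and which file it came from.
        df["student"] = csv_file.parts[-3]
        df["source_file"] = csv_file.name
        frames.append(df)

    combined = pd.concat(frames, ignore_index=True)
    combined.to_csv(DATA_ROOT / "combined_runs.csv", index=False)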

Two things are worth noting here. First, this is actually a better situation than in other labs I’ve been in: at least here there are some common repositories, in the form of a few spreadsheets saved on common-use computers, from which one can find pointers to the raw data files. Second, I think this example illustrates the kind of ad-hoc system in place in many academic labs. There is often a desire to implement something better, but not the drive, dedication, and resources required to build it with the tools that are available.

Perhaps we can take a lesson from industry, where data management has financial and legal ramifications. Although my experience in this environment is somewhat limited, I believe the difference is largely a matter of resources. Industrial labs might have a Technical Information Manager on staff and/or use a Laboratory Information Management System (LIMS). Why hasn’t either of these taken hold in academia?

One issue is the separation between IT and scientists in many departments. Often the IT department is lightly staffed and spends a large portion of its time doing desktop support for individual users (cleaning viruses, updating software, etc.). When possible, it may take on larger projects like deploying a server or managing a common datastore. The key is that almost all of these activities are more or less completely decoupled from the actual science: they are IT issues, handled by the IT folks. Meanwhile, the professors (or more often their students) are generating and analyzing data on the infrastructure that IT has provided, and this is equally decoupled from IT. They use the computers, and when the computers break they call IT.

The result is that there is no guidance on good practices in data management. It’s an area that falls through the cracks, and is often only addressed as an afterthought or after a major computer failure. Individual professors don’t have the resources (or the workload) to justify hiring a full-time technical information manager to fill this gap, and it isn’t a position I’ve ever seen at the departmental level in academia.

The other option is to use a software system that can automate the data management. The term for this software, “LIMS”, has been tarnished by an abundance of clunky, overpriced, closed-source products developed at fly-by-night software houses. I’m sure not all LIMS producers fall under this umbrella, but an unfortunate number do. So what would a good LIMS look like? I think there are just a few simple criteria (a rough sketch of what the core might look like follows the list):

  • It has to be simple & flexible. Getting your data into the LIMS needs to be easier than not doing it. Students are incredibly busy, and will resist anything that involves extra work.
  • It has to be open source, to leverage the power of the community. No development team can anticipate the needs of every lab (or even department), so an easily-extensible core with freely available code is the only way to encourage widespread adoption and contribution.
  • It has to be trustworthy. The data store has to be rock-solid, and backups need to be bulletproof. This data is the highly valuable output of labs, and no one will touch a system that has a whiff of instability.
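
To make that a bit more concrete, here is a minimal sketch of the kind of core data model I have in mind, using Python’s built-in SQLite purely for illustration; every table and column name here is hypothetical. The point is just that each raw file becomes a first-class record with an owner, a pointer, and a checksum, rather than a stray row in somebody’s spreadsheet.

    # Minimal, hypothetical LIMS core: experiments plus pointers to raw files.
    import sqlite3

    conn = sqlite3.connect("lims.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS experiments (
        id        INTEGER PRIMARY KEY,
        project   TEXT NOT NULL,
        student   TEXT NOT NULL,
        performed DATE NOT NULL,
        notes     TEXT
    );

    CREATE TABLE IF NOT EXISTS data_files (
        id            INTEGER PRIMARY KEY,
        experiment_id INTEGER NOT NULL REFERENCES experiments(id),
        path          TEXT NOT NULL,      -- pointer to the raw instrument output
        checksum      TEXT,               -- so backups can be verified
        uploaded      TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    """)
    conn.commit()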

I think these can all be accomplished. Many open-source projects have already found acceptance, such as the Open Bioinformatics Foundation member projects, PyMOL, and many others. One key will be developing a package that can be deployed on existing hardware (i.e. as close to a standard LAMP stack as possible), to ease the burden on the IT people who will need to do the on-site support. A web-based tool will also help with ease of use: if a student can add their data from their own laptop at the coffee shop, it’s a lot more likely to happen than if they need to fight for time on a certain cluttered common-use machine in the lab.
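
As a rough illustration of that web-based workflow, something like the following would let a student push a run file from anywhere and have the pointer recorded in the tables sketched above. I’m using Flask purely as a stand-in, and the endpoint, form fields, and storage path are all invented.

    # Hypothetical upload endpoint: store the file, record a pointer in the LIMS.
    import sqlite3
    from pathlib import Path
    from flask import Flask, request
    from werkzeug.utils import secure_filename

    app = Flask(__name__)
    STORAGE = Path("/srv/lims/uploads")   # made-up server-side datastore

    @app.route("/upload", methods=["POST"])
    def upload():
        datafile = request.files["datafile"]
        experiment_id = request.form["experiment_id"]
        dest = STORAGE / experiment_id / secure_filename(datafile.filename)
        dest.parent.mkdir(parents=True, exist_ok=True)
        datafile.save(str(dest))
        # Record the pointer in the same data_files table sketched earlier.
        with sqlite3.connect("lims.db") as conn:
            conn.execute(
                "INSERT INTO data_files (experiment_id, path) VALUES (?, ?)",
                (experiment_id, str(dest)),
            )
        return "ok"

    if __name__ == "__main__":
        app.run()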

This type of tool would aid in the larger studies that many open science proponents are interested in. How great would it be if you wanted to do a meta-study of the published results from several labs, and all it took to get the data into a consistent format was a simple MySQL statement (or, if the software is coded properly, a couple of button clicks)? What if, when you were reviewing a paper for publication, you could quickly get all of the source data, again in a format that is immediately accessible and easily parsed? What if, as a professor, all the data collected by your summer undergraduate from four years back was available with a few clicks? It’s possible. It will take a bit of work by a few intelligent people, but the payoff would be worth it many times over.
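
For instance, against the hypothetical schema sketched above, pulling every data file a particular student produced over a given summer really would be a single query; the student name and dates below are made up.

    # One query to recover a summer's worth of raw-data pointers.
    import sqlite3

    conn = sqlite3.connect("lims.db")
    rows = conn.execute(
        """
        SELECT e.project, e.performed, d.path
        FROM data_files d
        JOIN experiments e ON e.id = d.experiment_id
        WHERE e.student = ? AND e.performed BETWEEN ? AND ?
        ORDER BY e.performed
        """,
        ("summer_undergrad", "2006-05-01", "2006-08-31"),
    ).fetchall()

    for project, performed, path in rows:
        print(project, performed, path)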
