
Biostumblematic

A biophysicist teaches himself how to code

Although he blogs almost as rarely as I do, John Wilbanks (VP of Science Commons) tends to inspire me with many of the things he writes.

Back at the end of 2009, he had a few posts on why the Open Source metaphor doesn’t work well when talking about science. While he’s speaking in this case more generally about science as a whole, his comments reflect directly on my post from yesterday on data management. I wanted to summarize a few of his key points and my thoughts on them.

Before I do so, however, I’ll put in another plug for the Science Commons Symposium, taking place on February 20th in Seattle. John Wilbanks will be there, along with a host of other strong voices interested in knowledge sharing. It should be a great event. If you can’t make it in person, it will be streamed live at http://chris.pirillo.com/live.

If you’re interested in reading his posts in their entirety, you can find them in parts 1, 2, & 3. In order to stick to a more continuous story, I’ll just be pulling quotes at random out of all three of John’s posts.

Several of the comments here yesterday pointed out some specific LIMS projects that have been started. I can see why (given how tightly I focus on a LIMS at the end of my post) people would latch onto this idea, but what I really had in mind was something more like the following:

We need the biological equivalent of the C compiler, of Emacs […] These tools need to be democratized to bring the beginning of distributed knowledge creation into labs, with the efficiencies we know from eBay and Amazon

Because of the complex and variable nature of “DATA” being generated in science labs, I think making one LIMS to rule them all would be nearly impossible. What I’d rather see are some tools that are accessible to the average bench scientist which can be easily modified and expanded upon by the technically gifted scientist. These tools would (if they are to be truly useful) automate some annotation/tagging/parsing of the data as a precursor to deposition in shared repositories such as:

[OpenWetWare and the Registry of Standard Biological Parts] are resources and toolchains that absolutely support distribution of capability and increase capacity, which are fundamental to early-stage distributed innovation.

Above the meat-space layer where the science is actually being done and data is being collected, we need decentralized places to store and share the “functional information units” – i.e. the data that other scientists can use. Unfortunately:

science is like writing code in the 1950s – if you didn’t work at a research institution then, you probably couldn’t write code, and if you did, you were stuck with punch cards. Science is in the punch cards stage, and punch cards aren’t so easy to turn into GNU/Linux.

I think John stretches the metaphor a bit here, but I see where he is going. The punch-card comparison has more to do with the controlling influence of the institution than with the day-to-day practice of science. The key point is that there are interests who will resist a freer distribution of scientific knowledge, for a variety of reasons.

He goes on to summarize his argument:

I propose that the point of this isn’t to replicate “open source” as we know it in software. The point is to create the essential foundations for distributed science so that it can emerge in a form that is locally relevant and globally impactful

and

it’s not something that’s enabled by an open source license, a code version repository, and other hallmarks of open source software. It’s users saying, “screw this, I can do better” – and doing it. It’s users who know the problem best and design the best solutions.

I couldn’t agree more, and I think this is what we’re seeing from the blog posts and conversations that are taking place. There is a subset of people, doing science or avidly interested in aiding the practice of science, who feel they can do better than the current system. These people (probably most people reading this blog, especially if you’ve gotten this far) are the ones who have to effect change. It will take more than writing and talking about it, although those are important as well. I’d also like to see a nascent, community-driven project which we can point to and say “it will be like this, but better”.

One final word from John:

Data and databases are another place where the underlying property regimes don’t work as well for open source as in software. But that’s difficult enough to merit its own post. Suffice to say if Open Data had a facebook page, its relationship status with the law would be “It’s Complicated.”


I’ve been thinking a bit more about open science lately, given the outside chance that I’ll be able to attend the upcoming Science Commons symposium in Seattle. It’s a topic that I’ve unfortunately pushed to the back burner a bit while I’ve been getting settled in my post-doc.

Again, I’ve been trying to decide what I think is the key issue in developing a culture of sharing scientific data. At the moment I feel the main problem is data management. What I mean is that labs have a hard time keeping track of their data internally, let alone “preparing” it for broader release.

For example, my lab generates a relatively small amount of DATA (easily quantifiable files, like the results of instrument runs), on the order of 1 GB/month. Even though this is probably about average for a science lab, it’s surprisingly difficult to keep organized and readily accessible, because it’s being produced by several largely independent students on distinct projects. In addition, the tools we have for analyzing this data are clunky, prone to crashes, and using them is an exercise in caveats and “magic numbers”. Combining and parsing data across multiple experiments is a major operation.

I’d like to make a couple of points here. First, this is actually a better situation than in other labs I’ve been in: at least here there are some common repositories, in the form of a few spreadsheets saved on common-use computers, from which one can find pointers to the raw data files. Second, I think this example illustrates the kind of ad-hoc system in place in many academic labs. There is often a desire to implement a better system, but not the drive, dedication, and resources required to implement one with the tools that are available.

Perhaps we can take a lesson from industry, where data management has financial and legal ramifications. Although my experience in that environment is somewhat limited, I believe the difference is largely a matter of resources. Industrial labs might have a Technical Information Manager on staff and/or use a Laboratory Information Management System (LIMS). Why hasn’t either of these taken hold in academia?

One issue is the separation between IT and scientists in many departments. Often the IT department is lightly staffed and spends a large portion of its time doing desktop support for individual users (cleaning viruses, updating software, etc.). When possible, it may take on some larger projects like deploying a server, managing a common datastore, or things of this nature. The key is that almost all of these activities are more or less completely decoupled from the actual science. They are IT issues, and are handled by the IT folks. Meanwhile, the professors (or more often their students) are generating and analyzing data on the infrastructure that IT has provided. Again, this is decoupled from IT: they use the computers, and when the computers break they call IT. The problem is that no one provides guidance on good practices in data management. It’s an area that falls through the cracks, and is often only addressed as an afterthought or after a major computer failure. Individual professors don’t have the resources (or the workload) to hire a full-time technical information manager to fill this gap, and it isn’t a position I’ve ever seen at the departmental level in academia.

The other option is to use a software system that can automate the data management. The term for this kind of software, “LIMS”, has been tarnished by an abundance of clunky, overpriced, closed-source products developed at fly-by-night software houses. I’m sure not all LIMS producers fall under this umbrella, but an unfortunate number do. So what would a good LIMS look like? I think there are just a few simple criteria:

  • It has to be simple & flexible. Getting your data into the LIMS needs to be easier than not doing it. Students are incredibly busy, and will resist anything that involves extra work.
  • It has to be open source, to leverage the power of the community. No development team can anticipate the needs of every lab (or even department), so an easily-extensible core with freely available code is the only way to encourage widespread adoption and contribution.
  • It has to be trustworthy. The data store has to be rock-solid, and backups need to be bulletproof. This data is the highly valuable output of labs, and no one will touch a system that has a whiff of instability.

I think these can all be accomplished. Many open-source projects have already found acceptance, such as the Open Bioinformatics member projects, PyMol, and many others. One key will be developing a package that can be deployed on existing hardware (i.e. as close to a standard LAMP stack as possible), to ease the burden on the IT people who will need to do the on-site support. A web-based tool will also help with ease of use: if a student can add their data from their own laptop at the coffee shop, it’s a lot more likely to happen than if they need to fight for time on a certain cluttered common-use machine in the lab.

This type of tool would aid the larger studies that many open science proponents are interested in. How great would it be if you wanted to do a meta-study from the published results of several labs, and all it took to get the data into a consistent format was a simple MySQL statement (or, if the software is coded properly, a couple of button clicks)? What if, when reviewing a paper for publication, you could quickly get all of the source data, again in a format that is immediately accessible and easy to parse? What if, as a professor, all the data collected by your summer undergraduate from 4 years back was available with a few clicks? It’s possible. It will take a bit of work by a few intelligent people, but the payoff would be worth it many times over.
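As a purely hypothetical illustration (no such shared database exists yet; the table and column names are invented, and I’m using Python’s built-in sqlite3 in place of MySQL), the kind of query I have in mind might look like this:

# Hypothetical sketch: pull every measurement for one protein across labs
# from an imagined shared 'results' table. Names here are placeholders.
import sqlite3

conn = sqlite3.connect('shared_lims.db')   # placeholder database file
cursor = conn.execute(
    "SELECT lab, experiment_date, value "
    "FROM results WHERE protein = ?", ('alpha hemolysin',))
for lab, date, value in cursor.fetchall():
    print lab, date, value
conn.close()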

I’ve started doing some Mass Spec, and one of the issues we have is parsing our results.

The basic workflow is to convert the raw instrument files into peak lists, then use MASCOT to identify the proteins present in the sample. Unfortunately the MASCOT results themselves can be a bit tedious to work with.

I’ve quickly written a couple of scripts to speed up some common things that I need to do, namely to subtract the proteins identified in a control experiment from those identified in a sample, and to compare two lists of hits.

Here they are for your enjoyment. I’m trying to get back to a better organization of my scripts, so these are once again available on GitHub as well.
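As a rough sketch of the core idea (this is not the GitHub code, and it assumes a simplified input of one protein accession per line in each file), the subtraction and comparison boil down to set operations:

# Rough sketch: subtract control hits from sample hits.
# Usage (hypothetical): python subtract_hits.py sample.txt control.txt
import sys

def read_accessions(path):
    # One accession per line is an assumption, not the MASCOT export format
    with open(path) as f:
        return set(line.strip() for line in f if line.strip())

sample = read_accessions(sys.argv[1])
control = read_accessions(sys.argv[2])

print 'Unique to sample:'
for accession in sorted(sample - control):   # set difference
    print accession

print 'Shared by both lists:'
for accession in sorted(sample & control):   # set intersection
    print accession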


Yeah, I'm procrastinating.

One of my favorite things to do is to try new ways to visually represent (in 2D) a complex three-dimensional protein structure. It’s an interesting challenge because it’s all too easy to end up with a protein backbone that looks like a pile of spaghetti from which limited useful information can be drawn.

For some time my go-to package was VMD. I especially appreciated the ambient occlusion lighting effects that have been incorporated into its Tachyon ray tracer, which can generate really stunning figures.

Ribosome rendered with VMD

Another “rendering mode” that I’m a big fan of is the illustration style of David Goodsell, most familiar from the PDB Molecule of the Month feature.

RNA polymerase by David Goodsell


Although these illustrations are often low in information content, they have a simplistic beauty that really appeals to me.

So recently I have been searching for a way to combine these two methods in a form I can further modify to fit my needs. VMD has recently added “outline rendering” in a manner similar to the Goodsell illustration style, but unfortunately it requires graphics hardware that I don’t have on my main machine. I’m also not a fan of VMD from the manipulation standpoint: atom selections, accurate rotations (e.g. exactly 180 degrees), and the like are complex operations in that package.

I’ve gone back to using another wonderful visualization package, PyMol. I find that it hits the sweet spot between easy setup of the scene I’d like and generating nice figures.

The specific feature that I’ve come to rely on quite heavily is the built-in ray tracer. There are three available ray tracing modes in addition to the default, each of which has its uses. Mode 1 will place a black outline around your structure, which can help make the secondary structure elements visually distinct. Mode 2 is really interesting, in that it only renders the outline. I find this especially helpful if I want to show something in an overlay without obscuring what is behind it. Mode 3 produces “quantized” color in addition to the outline, giving your figure a very cartoonish appearance. I find that this one has to be used with care 🙂

Anyhow, let’s make a few figures just for kicks. As usual I’ll be using my favorite protein structure, alpha hemolysin (PDB code 7AHL). You can load files directly from the PDB using the built-in PDB Loader plugin of PyMol. For this demo I’m rendering all but one chain as a gray cartoon, and rendering the last as a blue molecular surface. I also turn off specular reflections (Display -> Specular Reflections) because I don’t like them.
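For reference, roughly equivalent setup can be typed at the PyMOL prompt instead of using the menus; this is only a sketch (I’m picking chain A for the surface here, and 7AHL has chains A through G, so use whichever you like; fetch pulls the structure straight from the PDB, much like the plugin does):

# sketch of the scene setup; chain A is an arbitrary choice
fetch 7ahl
hide everything
show cartoon
color grey80
# swap chain A from cartoon to a blue surface
hide cartoon, chain A
show surface, chain A
color marine, chain A
set specular, off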

Here are the commands I enter on the command line to generate the image:

bg_color white
set antialias, 2
ray 600, 600

This took about 5 minutes to render on my underpowered laptop. You write out the image with (e.g.):

png mode_0.png

This gives the following result:

Default ray tracing mode

Now let’s look at the other fun modes 🙂 Just enter set ray_trace_mode, 1 into the command line and repeat the ray tracing and png saving steps above. Iterate through the three modes and you end up with the following figures (click for larger versions):

Ray tracing mode 1


Ray tracing mode 2


Ray tracing mode 3


You can see that each of these has a different look, which may or may not be useful depending on the figure you are trying to produce. I’m finding that it’s especially useful to do a couple of renders (e.g. one in mode 2 and another in mode 1) and combine them via a little bit of post-processing in the GIMP.

In continuing my slow migration away from “office-like” tools for working with my data, I’ve been taking a look lately at matplotlib. I’ve banged together a rough script to do some simple data plotting with a bit of flexibility:

#! /usr/bin/env python
# https://biostumblematic.wordpress.com

# An interface to matplotlib

# Import modules
import csv, sys
import matplotlib.pyplot as plt
import numpy as np

# Introduce the program
print '-'*60
print 'Your data should be in CSV format, with Y-values'
print 'in odd columns and X-values in even columns.'
print 'If your file contains a header row, these will be'
print 'automatically detected'
print '-'*60

# Open the data
datafile = sys.argv[1]
f = open(datafile, 'r')

# Check whether the file starts with a header row or with data:
has_header = csv.Sniffer().has_header(f.read(1024))
f.seek(0)
reader = csv.reader(f)

# Assign the data to series via a dict
if has_header:
	reader.next() # Move down a line to skip headers
else:
	pass

series_dict = {}
for row in reader:
	i = 0
	for column in row:
		i += 1
		if series_dict.has_key(i):
			try:
				series_dict[i].append(float(column))
			except ValueError:
				pass
		else:
			series_dict[i] = [float(column)]
# Plot each data series
num_cols = len(series_dict)
i = 1 
while i < num_cols:
	plt.plot(series_dict[i], series_dict[i+1], 'o')
	i += 2 

# Get axis labels
xaxis_label = raw_input('X-axis label > ')
yaxis_label = raw_input('Y-axis label > ')

# Show the plot
plt.ylabel(yaxis_label)
plt.xlabel(xaxis_label)
plt.show()

# Enter loop for customizing appearance

# Stop
f.close()

As-is this will read in a CSV file of any number of columns and plot them as Y values/X values (alternating).

Some things that feel nasty:

  • Having to use dictionaries to assemble the column data; I feel like the csv reader module should have a “transpose” function (see the sketch after this list)
  • The section near the end where I’m generating the different plots by iterating over the number of columns.
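On the first point, here is a rough sketch of the workaround I have in mind: zip(*rows) effectively transposes a list of CSV rows into columns (Python 2, same as the script above; the filename is a placeholder):

# Sketch only: transpose CSV rows into columns with zip(*rows)
import csv

with open('data.csv') as f:        # 'data.csv' is a placeholder name
    rows = [row for row in csv.reader(f) if row]

columns = zip(*rows)               # one tuple per column
y_series = columns[0::2]           # odd columns, as in the script above
x_series = columns[1::2]           # even columns
# values are still strings here (possibly including a header row),
# so they would still need float() conversion before plotting
print len(y_series), 'series read'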

Some things that would be nice to implement, but I haven’t figured out yet:

  • More differentiation of the appearance for each series’ plot
  • Automatic generation of a legend using headers for the X-values from the initial file (or else requested from the user at run-time if not in the file); a rough sketch of the matplotlib side of this follows below
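The matplotlib side of the legend is simple once each series has a name; a minimal, self-contained sketch with toy data (the series name ‘control’ is made up, standing in for a value read from the header row or asked for at run-time):

# Minimal legend sketch with toy data and a made-up series name
import matplotlib.pyplot as plt

x_values = [1.0, 2.0, 3.0]
y_values = [2.0, 4.0, 6.0]

plt.plot(x_values, y_values, 'o', label='control')
plt.legend(loc='best')             # matplotlib chooses an uncluttered corner
plt.show()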

No posts for a while because I haven’t actually been writing anything new. Biopython has solved many of my day-to-day problems, and I’m in love with SeqIO.

Today is Canada Day and it’s pretty quiet around the lab, so I thought I’d try to write something that would let me do two things: first, view my twitter feed using conky, and second, send updates from the console.

This also gives me the chance to work on some fundamentals like interfacing with APIs and passing options to scripts from the command line. There, I totally justified it!

To start off, I installed python-setuptools (from the Ubuntu repo), simplejson and the python-twitter API interface. To install the last two you just download the archives, extract them, and then run the following two commands from within their folders:

python setup.py build
sudo python setup.py install

Let’s start off with a pretty basic framework. This script should print out your latest five friends’ updates to the terminal, and the struts are in place to eventually add post capability:

#! /usr/bin/env python
# https://biostumblematic.wordpress.com
# Simple twitter interface

# Change the following two lines with your credentials
user = 'username'
pw = 'password'

num_statuses = 5 # Change this to set the number of statuses to show

import sys, os, twitter
api = twitter.Api(username=user, password=pw)

if sys.argv[1] == '-l':
	timeline = api.GetFriendsTimeline(user)
	i = 0
	while i < num_statuses:
		print timeline[i].user.name
		print timeline[i].text
		print '\n'
		i += 1
elif sys.argv[1] == '-p':
	pass
else:
	print 'Invalid input'
	print 'Allowed options are:'
	print '-p (to post an update)'
	print '-l (to list friend statuses)'
	sys.exit(2)

Adding the update functionality is facile. Just change the code as follows:

elif sys.argv[1] == '-p':
	status = api.PostUpdate(sys.argv[2])
	print 'Twitter status updated'

The only caveat in doing it this way is that the status update entered at the command line must be passed as a string, with quotation marks around it. Otherwise this will post a one-word update, which is terse even by twitter standards.

This is already fully functional in the terminal, so the last step is dumping out the statuses to a file which conky can read. Here’s the code I used to make the text file (it uses os.environ, which is why os is in the imports at the top):

if sys.argv[1] == '-l':
	timeline = api.GetFriendsTimeline(user)
	i = 0
	output = open(os.environ['HOME'] + '/tweets.txt', 'w')
	while i < num_statuses:
		output.write(timeline[i].user.name + '\n')
		output.write(timeline[i].text + '\n')
		output.write('\n')
		i += 1
	output.close()

We then have conky read it using a file like this (I named it .conkytweets and placed it in my home directory; make sure to change your home directory below):

use_xft yes
xftfont MyriadPro-Regular:size=8
alignment top_left
xftalpha 0.8
own_window yes
own_window_type override
own_window_transparent yes
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
double_buffer yes
draw_shades no
draw_outline no
draw_borders no
stippled_borders 10
border_margin 4
border_width 0
default_shade_color black
default_outline_color black
use_spacer right
no_buffers no
uppercase no
default_color 222222
maximum_width 200
minimum_size 200 5
gap_y 400
gap_x 10
text_buffer_size 1024



TEXT
${font size=9}Latest Tweets:
${color}${font}${execi 600 cat /home/jason/tweets.txt | fold -w 35}

Here is a screenshot of the output on my monitor (sorry for the blur over the tasks, there are some research details in there I just didn’t feel like posting for the whole world atm)

demo of conkytweets script


There are a few things that don’t work very well. To my knowledge, you can’t include clickable links in conky, so URLs in tweets don’t do anything. The textwrap in conky is also a bit wonky, but I don’t know that there is a nice fix for that. I suppose one option would be to modify the text file that the twitter script generates, but I’ll leave that as an exercise to the reader.
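As a starting point for that exercise, here is a rough sketch using Python’s textwrap module to pre-wrap the file the twitter script writes (the 35-character width matches the fold -w 35 in the conky config above):

# Rough sketch (not part of the script above): pre-wrap each line of
# tweets.txt so conky doesn't have to handle the wrapping.
import os
import textwrap

path = os.environ['HOME'] + '/tweets.txt'   # same file the twitter script writes
with open(path) as f:
    lines = f.read().splitlines()

wrapped = '\n'.join(textwrap.fill(line, 35) for line in lines)

with open(path, 'w') as f:
    f.write(wrapped + '\n')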

The simplest way to use this is to add a symlink somewhere on your PATH. For me it was:

cd /usr/bin/
sudo ln -s ~/scripts/pytwit.py pytwit

Then you can use it from anywhere with either:

pytwit -p 'My awesome twitter post'

or

pytwit -l

For extra points, you can add the listing to your crontab as follows:

sudo gedit /etc/crontab
*/15 *	* * *	jason pytwit -l

This will update the statuses every 15 minutes.

The full script is available on github, and I welcome any additions/modifications/improvements as always.