
Biostumblematic

A biophysicist teaches himself how to code

Category Archives: Python

I’ve started doing some Mass Spec, and one of the issues we have is parsing our results.

The basic workflow is to convert the raw instrument files into peak lists, then use MASCOT to identify the proteins present in the sample. Unfortunately the MASCOT results themselves can be a bit tedious to work with.

I’ve quickly written a couple of scripts to speed up some common tasks: subtracting the proteins identified in a control experiment from those identified in a sample, and comparing two lists of hits.

Here they are for your enjoyment. I’m trying to get back to a better organization of my scripts, so these are once again available on GitHub as well.
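As a rough illustration of the subtraction step, here’s a minimal sketch (not the version on GitHub) that assumes each MASCOT result has already been reduced to a plain text file with one protein accession per line; the filenames are invented for the example.

# Minimal sketch: subtract control hits from sample hits.
# Assumes one protein accession per line; filenames are placeholders.

def read_hits(filename):
    """Return the set of non-empty accessions listed in a file."""
    with open(filename) as handle:
        return set(line.strip() for line in handle if line.strip())

sample = read_hits('sample_hits.txt')
control = read_hits('control_hits.txt')

# Proteins found in the sample but not in the control
for accession in sorted(sample - control):
    print(accession)

# Comparing two lists of hits is the same idea with an intersection:
# shared = sample & read_hits('other_sample_hits.txt')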

Continue reading this article ›

In continuing my slow migration away from “office-like” tools for working with my data, I’ve been taking a look lately at matplotlib. I’ve banged together a rough script to do some simple data plotting with a bit of flexibility:

#! /usr/bin/env python
# https://biostumblematic.wordpress.com

# An interface to matplotlib

# Import modules
import csv, sys
import matplotlib.pyplot as plt
import numpy as np

# Introduce the program
print '-'*60
print 'Your data should be in CSV format, with X-values'
print 'in odd columns and Y-values in even columns.'
print 'If your file contains a header row, it will be'
print 'detected automatically.'
print '-'*60

# Open the data
datafile = sys.argv[1]
f = open(datafile, 'r')

# Check whether the file starts with a header row or data
has_header = csv.Sniffer().has_header(f.read(1024))
f.seek(0)
reader = csv.reader(f)
if has_header:
	reader.next() # Skip the header row

# Assign the data to series via a dict

series_dict = {}
for row in reader:
	i = 0
	for column in row:
		i += 1
		if series_dict.has_key(i):
			try:
				series_dict[i].append(float(column))
			except ValueError:
				pass
		else:
			series_dict[i] = [float(column)]
# Plot each data series
num_cols = len(series_dict)
i = 1 
while i < num_cols:
	plt.plot(series_dict[i], series_dict[i+1], 'o')
	i += 2 

# Get axis labels
xaxis_label = raw_input('X-axis label > ')
yaxis_label = raw_input('Y-axis label > ')

# Show the plot
plt.ylabel(yaxis_label)
plt.xlabel(xaxis_label)
plt.show()

# Enter loop for customizing appearance

# Stop
f.close()

As-is, this will read in a CSV file with any number of columns and plot them as alternating X/Y column pairs.

Some things that feel nasty:

  • Having to use the dictionaries to get the column data assembled. I feel like the CSV reader module should have a “transpose” function (see the sketch after this list)
  • The section near the end where I’m generating the different plots by iterating over the number of columns.
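Here’s one way the transposition could go without the dictionary bookkeeping, just as a sketch: zip(*rows) flips a list of rows into a list of columns (the filename is a placeholder, and header handling is left out).

import csv

# 'data.csv' is a placeholder name; drop rows[0] first if the file has a header row
with open('data.csv') as handle:
    rows = list(csv.reader(handle))

# zip(*rows) turns the list of rows into a list of columns,
# so each series comes out as one sequence of strings
columns = [[float(value) for value in column] for column in zip(*rows)]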

Some things that would be nice to implement, but I haven’t figured out yet:

  • More differentiation of the appearance for each series’ plot
  • Automatic generation of a legend using headers for the X-values from the initial file (or else requested from the user at run-time if not in the file); a rough sketch of both ideas follows this list
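Neither of those is in the script above, but here is a rough sketch of how both might look: cycle through a few matplotlib format strings for variety, and pass a label to each plot call so plt.legend() can build the legend. The header names and data here are invented for the example.

import itertools
import matplotlib.pyplot as plt

headers = ['series A', 'series B']                    # placeholder labels (would come from the CSV header row)
formats = itertools.cycle(['o', 's--', '^-', 'x:'])   # marker/line styles to rotate through

series = [
    ([1, 2, 3], [1, 4, 9]),                           # (x, y) pairs; toy data
    ([1, 2, 3], [2, 3, 5]),
]

for (x, y), fmt, label in zip(series, formats, headers):
    plt.plot(x, y, fmt, label=label)

plt.legend(loc='best')
plt.show()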

No posts for a while because I haven’t actually been writing anything new. Biopython has solved many of my day-to-day problems, and I’m in love with SeqIO.

Today is Canada Day and it’s pretty quiet around the lab, so I thought I’d try to write something that would let me do two things:
First, I want to be able to view my twitter feed using conky, and secondly I’d like to be able to send updates from the console.

This also gives me the chance to work on some fundamentals like interfacing with APIs and passing options to scripts from the command line. There, I totally justified it!

To start off, I installed python-setuptools (from the Ubuntu repo), simplejson and the python-twitter API interface. To install the last two you just download the archives, extract them, and then run the following two commands from within their folders:

python setup.py build
sudo python setup.py install

Let’s start off with a pretty basic framework. This script prints your friends’ five most recent updates to the terminal, and the struts are in place to eventually add posting capability:

#! /usr/bin/env python
# https://biostumblematic.wordpress.com
# Simple twitter interface

# Change the following two lines with your credentials
user = 'username'
pw = 'password'

num_statuses = 5 # Changes number of statuses to show

import os, sys, twitter
api = twitter.Api(username=user, password=pw)

if sys.argv[1] == '-l':
    timeline = api.GetFriendsTimeline(user)
    i = 0
    while i < num_statuses:
        print timeline[i].user.name
        print timeline[i].text
        print '\n'
        i += 1
elif sys.argv[1] == '-p':
    pass
else:
    print 'Invalid input'
    print 'Allowed options are:'
    print '-p (to post an update)'
    print '-l (to list friend statuses)'
    sys.exit(2)

Adding the update functionality is easy. Just change the code as follows:

elif sys.argv[1] == '-p':
    status = api.PostUpdate(sys.argv[2])
    print 'Twitter status updated'

The only caveat in doing it this way is that the status update entered at the command line must be passed as a string, with quotation marks around it. Otherwise this will post a one-word update, which is terse even by twitter standards.

This is already fully functional in the terminal, so the last step is dumping the statuses out to a file which conky can read. Here's the code I used to write the text file:

if sys.argv[1] == '-l':
    timeline = api.GetFriendsTimeline(user)
    i = 0
    output = open(os.environ['HOME']+'/tweets.txt', 'w')
    while i < num_statuses:
        output.write(timeline[i].user.name+'\n')
        output.write(timeline[i].text+'\n')
        output.write('\n')
        i += 1
    output.close()

We then have conky read it using a config file like this (I named it .conkytweets and placed it in my home directory; make sure to change the home directory path below):

use_xft yes
xftfont MyriadPro-Regular:size=8
alignment top_left
xftalpha 0.8
own_window yes
own_window_type override
own_window_transparent yes
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
double_buffer yes
draw_shades no
draw_outline no
draw_borders no
stippled_borders 10
border_margin 4
border_width 0
default_shade_color black
default_outline_color black
use_spacer right
no_buffers no
uppercase no
default_color 222222
maximum_width 200
minimum_size 200 5
gap_y 400
gap_x 10
text_buffer_size 1024



TEXT
${font size=9}Latest Tweets:
${color}${font}${execi 600 cat /home/jason/tweets.txt | fold -w 35}

Here is a screenshot of the output on my monitor (sorry for the blur over the tasks; there are some research details in there I just didn’t feel like posting for the whole world at the moment).

[Screenshot: demo of the conkytweets script]

There are a few things that don’t work very well. To my knowledge, you can’t include clickable links in conky, so URLs in tweets don’t do anything. The textwrap in conky is also a bit wonky, but I don’t know that there is a nice fix for that. I suppose one option would be to modify the text file that the twitter script generates, but I’ll leave that as an exercise to the reader.
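For what it’s worth, here’s a sketch of that option: pre-wrap each tweet with Python’s textwrap module before it goes into tweets.txt, so conky only has to display it. The 35-character width just mirrors the fold -w 35 above, and the snippet assumes the timeline and num_statuses variables from the script.

import os, textwrap

# Wrap each tweet to 35 characters before writing, instead of relying on conky/fold
output = open(os.environ['HOME'] + '/tweets.txt', 'w')
for status in timeline[:num_statuses]:    # 'timeline' comes from api.GetFriendsTimeline(user) above
    output.write(status.user.name + '\n')
    output.write(textwrap.fill(status.text, width=35) + '\n')
    output.write('\n')
output.close()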

The simplest way to use this is to add a link to your path. For me it was:

cd /usr/bin/
sudo ln -s ~/scripts/pytwit.py pytwit

Then you can use it from anywhere with either:

pytwit -p 'My awesome twitter post'

or

pytwit -l

For extra points, you can add the listing to your crontab as follows:

sudo gedit /etc/crontab
*/15 *	* * *	jason pytwit -l

This will update the statuses every 15 minutes.

The full script is available on github, and I welcome any additions/modifications/improvements as always.

For some reason it seems that every program which will output a percent identity between two proteins insists on aligning them itself, thereby screwing up any alignment you’ve already made. I knocked together a short script over the weekend which reads in a FASTA-formatted alignment and outputs the percent identity of each protein in it relative to the first one in the file.

I couldn’t find a built-in way to do all of this in BioPython, but I did use it to parse the sequences out of the alignment. The rest of the work is just brute-force string crunching.

#!/usr/bin/env python
# https://biostumblematic.wordpress.com

import string
from Bio import AlignIO

# change input.fasta to match your alignment
input_handle = open("input.fasta", "rU")
alignment = AlignIO.read(input_handle, "fasta")

j=0 # counts positions in first sequence
i=0 # counts identity hits 
for record in alignment:
    for amino_acid in record.seq:
        if amino_acid == '-':
            pass
        else:
            if amino_acid == alignment[0].seq[j]:
                i += 1
        j += 1
    j = 0
    seq = str(record.seq)
    gap_strip = seq.replace('-', '')
    percent = 100*i/len(gap_strip)
    print record.id+' '+str(percent)
    i=0

I didn’t implement similarity here, but it gets the basic job done. This script is available on GitHub as seqhomology.py
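For comparison, the same per-record identity can be computed a bit more compactly by walking the reference and query columns in parallel with zip(); this is just a sketch of an alternative, using the alignment object from the script above, not what’s on GitHub.

# Sketch: same identity calculation, comparing aligned columns in parallel
ref = str(alignment[0].seq)
for record in alignment:
    query = str(record.seq)
    hits = sum(1 for r, q in zip(ref, query) if q != '-' and q == r)
    length = len(query.replace('-', ''))    # ungapped length of this record
    print(record.id + ' ' + str(100 * hits / length))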

I’m not sure if this behavior is normal, but I find that I learn a new system best by choosing a real-life problem that I need to solve and applying the new method to solve it. This inevitably means that I’ll probably be doing things in an inefficient way (since I’m a noobie), but code can always be refined later.

Here is the problem I have in front of me today: I have a series of proteins from which I’d like to isolate (via cloning) a certain domain. The cDNA clones of the full length proteins are available from the IMAGE consortium. Unfortunately these aren’t completely “clean” cDNAs; there tends to be some extraneous sequence on both ends of the gene.

The plan of action goes something like this:
The starting materials are the cDNA sequence, the amino acid sequence of the protein, and the residue ranges of the domain of interest. So what I’d like to do is to check each frame of the cDNA to find the one matching the translated protein sequence, then extract just the cDNA coding for the domain I’d like to isolate. I can then (independently) design PCR primers for this domain.

You’re probably thinking that this could be done manually (and of course that’s true), but I find this painstaking work. Also it gives me a chance to play around with the SeqIO functions of BioPython a bit 🙂

Enough introduction, let’s get to work. The protein I’ll be using for this exercise is IKBKB. This is a 756 amino acid protein; I’ll be trying to get the cDNA for the protein kinase domain from residues 15-300. The IMAGE clone ID is 5784717.

Baby step 1 – find the ORF we’re interested in

#! /usr/bin/env python

# https://biostumblematic.wordpress.com

# Extraction of the cDNA
# for a given protein domain

import re
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC

input_cdna = raw_input('Paste your cDNA sequence >> ')
input_search = raw_input('What are the first amino acids of the protein? >> ')

cdna = Seq(input_cdna, IUPAC.unambiguous_dna)

i=0
while i < 3:
    frame = cdna[i:150]
    trans = frame.translate()
    orf_find = re.search(input_search, str(trans))
    if orf_find:
        orf_frame = i+1
    else:
        pass
    i += 1
print 'The protein is coded in frame '+str(orf_frame)
Given the input of the cDNA and the first 4 residues (MSWS), this outputs the right answer:
The protein is coded in frame 3
Note that I'm only translating the first 150 bases of the cDNA (the cdna[i:150] slice), so only the first 50 or so residues of each frame are checked. Hopefully this is enough to catch the protein of interest (it would be a lot of extraneous 5' sequence if not).

Obviously this is not sexy enough for Biopython. It's cumbersome to have to type in the starting sequence of the protein you're interested in, so why don't we let SeqIO handle that for us via the SwissProt code? Also, we'll change a couple of things to enable automation of the full list later.

First, I made a file called 'test.csv' which has a single line consisting of SwissProt ID,cDNA:
O14920,atagccccggg[...]
Then I modified the script like so:

import re, csv
from Bio import ExPASy, SeqIO
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC

reader = csv.reader(open('test.csv'))
for row in reader:
    input_prot = row[0]
    get_prot = ExPASy.get_sprot_raw(input_prot)
    prot_obj = SeqIO.read(get_prot, "swiss")
    get_prot.close()
    prot_seq = prot_obj.seq
    prot_start = prot_seq[0:4]

    cdna = Seq(row[1], IUPAC.unambiguous_dna)

    i=0
    while i < 3:
        frame = cdna[i:150]
        trans = frame.translate()
        orf_find = re.search(str(prot_start), str(trans))
        if orf_find:
            orf_frame = i+1
        else:
            pass
        i += 1
    print 'The protein is coded in frame '+str(orf_frame)
Biopython grabs the protein sequence from the web using the SwissProt ID.  The prot_start variable takes just the first few residues and uses that as the search term for the regular expression later on.  Now there is no command line input, as everything is done via the CSV file.  This will iterate over lines in the CSV file to do multiple proteins.  Right now, however, we would just get a long list of "The protein is coded in frame X" lines, which is less than useful.  Time to take care of that.

In this case the domain is annotated in SwissProt already. This means that I could use the built-in parsing functions of BioPython (http://biopython.org/DIST/docs/tutorial/Tutorial.html#chapter:swiss_prot) to select the domain; however, I have some custom annotations for other proteins in my list that make that approach unsuitable here. Instead, let's just make some minor modifications to our input CSV and the existing script. The new CSV includes the start and stop residues of interest:
O14920,15,300,atagccccgggttt[...]
Now I just modify the top of the script to take into account the new structure of the CSV:

reader = csv.reader(open('test.csv'))
for row in reader:
    input_prot = row[0]
    get_prot = ExPASy.get_sprot_raw(input_prot)
    prot_obj = SeqIO.read(get_prot, "swiss")
    get_prot.close()
    prot_seq = prot_obj.seq
    prot_domain = prot_seq[int(row[1])-1:int(row[2])]
    cdna = Seq(row[3], IUPAC.unambiguous_dna)

and adjust what happens if the script finds a match to the domain sequence:

        if orf_find:
            trans_split = re.split('('+str(prot_domain)+')', str(trans))
            cdna_start = len(trans_split[0])*3
            cdna_stop = cdna_start + len(trans_split[1])*3
            cdna_extracted = frame[cdna_start:cdna_stop]
            print cdna_extracted

And this gets the job done! This prints a cDNA sequence which, when translated back, matches the domain of interest.

The last part there feels sloppy. All I’m doing is counting the number of amino acids that come out of the translation before the start of the domain, multiplying by three, and using that as the cDNA start. I feel like there should be a way to transition more effectively between protein and cDNA sequence.
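One tidier possibility, just as a sketch using the trans, prot_domain and frame variables from above (and assuming frame covers the full coding sequence rather than only the first 150 bases), is to locate the domain with str.find and convert that protein offset into a cDNA offset by multiplying by three, which avoids the re.split bookkeeping:

offset = str(trans).find(str(prot_domain))    # position of the domain within the translated frame
if offset != -1:
    cdna_start = offset * 3
    cdna_stop = cdna_start + len(prot_domain) * 3
    cdna_extracted = frame[cdna_start:cdna_stop]
    print(cdna_extracted)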

The entire script, slightly modified to write out the results to a new CSV file, is available over on GitHub. I hope you found the post interesting and look forward to your comments.

Yeah, I’ve heard of it. Biopython is a Python package (thanks Chris) written to help with doing computational biology. To my utter dismay, I had somewhat ignored it, being the “I’ll just brew it myself” type. What a mistake.

Today I was trying to wrangle some DNA and protein sequences and realized that this might be something covered by Biopython. It’s even better than that. You want tasty yum yums? How about a reverse complementer in 3 lines of code? I even formatted it so it looks nice on the terminal:

from Bio.Seq import Seq
sequence = Seq(raw_input('Paste your DNA sequence >> '))
print '\nReverse Complement\n------------------\n'+sequence.reverse_complement()

The very next bit of code in the tutorial replaced a ~50 line program I had cobbled together (and which still wasn’t working exactly the way I wanted) with this beauty:

#! /usr/bin/env python

# Biopython can automatically parse FASTA
# as well as many other "standard" biological formats

from Bio import SeqIO
inputfile = open('myproteins_fasta.txt')

for seq_record in SeqIO.parse(inputfile, 'fasta'):
    print seq_record.id
    print repr(seq_record.seq)
    print len(seq_record)
inputfile.close()

BOOM, FASTA reader.

I’m just getting started on reading the documentation, but so far I’m really impressed (and not a little bit sheepish at my previous obstinacy). Expect to see some Biopython examples in the coming days.


I’ve written about ElementTree before, and it really is a handy tool. I took the NCBI GI numbers output in my previous post and used them in concert with the ID mapper at UniProt to get a listing of the proteins. UniProt kindly allows you to download this subset in XML, which I did in order to quickly extract the information I was interested in.
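As a flavour of that kind of extraction, here’s a minimal ElementTree sketch. It assumes the standard UniProt XML namespace and that each entry carries accession and recommendedName/fullName elements; the filename is a placeholder for the downloaded subset.

import xml.etree.ElementTree as ET

ns = '{http://uniprot.org/uniprot}'    # default namespace used by UniProt XML
tree = ET.parse('uniprot_subset.xml')  # placeholder filename

for entry in tree.getroot().findall(ns + 'entry'):
    accession = entry.findtext(ns + 'accession')
    name = entry.findtext(ns + 'protein/' + ns + 'recommendedName/' + ns + 'fullName')
    print((accession or '') + '\t' + (name or ''))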

Here’s what a bit of the XML looks like:

Continue reading this article ›