My Work Setup

I just got a new laptop, so I thought I would detail my work setup for those who care (most likely me the next time I have to set up a laptop…).  Sorry PC folks, but at this point, if I am going to work, I do so exclusively on OS X.

System Setup

iTerm2: The first thing I always install is iTerm2.  It is a terminal emulator that I prefer to the default terminal that ships with OS X.  After downloading it, make sure to manually check for updates the first time to get the latest bits.  I prefer to use docked terminal windows but always forget how to set that up.  To do so, under the “Profiles” tab of the Preferences screen, in the Windows section change the style to “Top of Screen”.


Homebrew:  I can’t recommend Homebrew enough.  If you are installing packages manually, or even worse, using MacPorts, then Homebrew will change your life.  To install it just run

ruby -e "$(curl -fsSkL"

from any terminal window and follow the on screen prompts.  Once the installation is complete, update your brew recipes with

brew update

Xcode:  If we want brew to actually work, we will need to install the Command Line Tools for Xcode so that we can compile things from source.  To do so, go to Apple’s Developer Center and download the command line tools appropriate for your operating system.

ZSH:  Next I install ZShell in place of the default Bash shell.  You can do this with:

brew install zsh

After the installation is complete, add the brew-installed shell to the list of approved shells and set it as your default (the plain chsh -s /bin/zsh would point at the system zsh, not the one you just installed):

sudo sh -c 'echo /usr/local/bin/zsh >> /etc/shells'
chsh -s /usr/local/bin/zsh

Next install Oh-My-ZSH.  Check out the github repo for more info, but you can use the automatic installer

curl -L | sh

To customize your prompt, edit the ~/.zshrc file.  I normally change my theme to nebirhos, but any theme that has good git/rvm integration will work well.  At the end of your .zshrc file you can also set the plugins to include.  I normally use

plugins=(brew bundler cap gem git git-flow osx pip pow rails rails3 ruby rvm sublime)

Git:  While you already have git installed by default, I like to manage my version of git with brew.  So install it

brew install git

Note that you may need to change the order of your path variables.  After your installation is done, try

which git

and if it returns anything other than /usr/local/bin/git then make sure that the first entry of your path variable is /usr/local/bin in your .zshrc file.  This will ensure that brew installed binaries are used.

Academic Work

Sublime Text 2:  Sublime Text 2 is my go-to for any type of code editing or authoring.  I am a somewhat recent convert (from TextMate), but glad to have made the switch.  It is built on Python, so it can be configured using Python, and all of the plugins are written in Python as well, which makes it easy to customize.

The first few things to setup after you get Sublime installed are: Package Control for easier package management (just press cmd+shift+p and type ‘install’ to access it), and of course your theme.  I am a big fan of Solarized (Dark).

To run R or Python code from Sublime I use the SublimeREPL package.

LaTeX:  First install MacTeX so that you have the command line tools to build LaTeX documents.  It is big (around 2GB).  Additionally, install Skim if you want a PDF reader that integrates nicely into this setup.

Next install the LaTeXTools package for Sublime Text 2.  This will give you everything you need to build from ST2 by pressing Cmd+B.

Python:  While OS X ships with a version of Python installed, I manage my Python installations with brew. For some of the advantages of doing it this way, see this wiki article.

brew install python

Open a new terminal tab and install any python packages that you will need, e.g.

pip install tweepy

Installing scipy and matplotlib can sometimes be a pain, so again I try to use homebrew to accomplish this.  First you will need to tap some external homebrew recipes, and then you will need to install some prerequisites:

brew tap samueljohn/python
brew tap homebrew/science
brew install gfortran
pip install nose
brew install scipy
brew install matplotlib

When you are all done, open a python console and check that you can import scipy and matplotlib to verify.
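That verification step can also be scripted.  Here is a small sketch (the `check_imports` helper is my own illustration, not part of any package):

```python
import importlib.util

def check_imports(names):
    """Return a dict mapping each package name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# After the brew/pip steps above, both of these should come back True:
print(check_imports(["scipy", "matplotlib"]))
```

Run it from a fresh terminal tab so the newly installed packages are on your path.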

R:  Before we can install R, we must first install XQuartz so that R can draw its graphs.  After XQuartz is done installing (just use the dmg), you can install R using brew

brew install R
sudo ln -s "/usr/local/opt/r/R.framework" /Library/Frameworks


RVM: I use RVM to both handle compiling different Ruby versions as well as create project specific gemsets.  I’ve heard good things about rbenv, but thus far seen no reason to switch.  I also like to set my system ruby to 1.9.3

curl -L | bash -s stable --ruby
rvm install 1.9.3
rvm use 1.9.3 --default

The above should install rubygems by default.  Using that you can install Bundler.

gem install bundler

Some versions of Ruby require the autoconf and automake build tools, which are not installed by default.  To make sure that you can build those versions, install them first

brew install autoconf automake

SSH Keys:  In order to securely interact with github and other remote servers we need to set up SSH keys for our machine.  Github has a good tutorial here.

cd ~/.ssh
ssh-keygen -t rsa -C ""

Git-Flow:  For my development projects (especially web based ones) I like to use git-flow. Read more about it on the website, but it is a simple, logical way to manage branching for applications that need to remain relatively stable.

brew install git-flow

Postgres:  Use brew to install postgres on your local machine.  Follow the instructions at the end of the installation if you would like postgres to start when you restart your machine.  I also create a ‘rails’ user for my local rails apps.

brew install postgresql
initdb /usr/local/var/postgres -E utf8
createuser rails -s

If you want a GUI to interface with the DB check out pgAdmin.

POW:  To run rails applications locally I use POW.  This allows me to serve my rails apps locally by just adding a symlink in the ~/.pow directory.

curl | sh

ElasticSearch:  I find that ElasticSearch is my current default for fulltext search needs.  It is Lucene based and really easy to work with.

brew install elasticsearch
ln -sfv /usr/local/opt/elasticsearch/*.plist ~/Library/LaunchAgents
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist

Useful Apps

Dropbox: I use Dropbox for syncing and sharing files between my machines.  I am trying to get my colleagues to move away from Dropbox as a coauthoring technique and rely on distributed source control, but occasionally use it for that as well.

1Password:  For storing, syncing and generating credentials across my various machines I use 1Password.  When used in tandem with dropbox it is easy to use lengthy random passwords for all of my various accounts without memorizing (or even knowing) them.

Skitch: For getting screenshots and annotating them I still use Skitch.

MailPlane:  Almost all of my email accounts are Gmail based, so I use MailPlane to manage them.  You get the gmail interface but some nice extra features as well.

Skype:  For video chat and screen shares I use Skype.  It’s free and it works.

That is it for now.  With all of this installed, I can finally get back to work….

2012 Spring Workshop on Computational Social Science

The Spring Workshop on Computational Social Science, a joint effort put together by Harvard, MIT and Northeastern, will be going on this week Wednesday, Thursday and Friday.  I will be attending the Thursday session on network visualization and am looking forward to meeting the other attendees as well as getting my hands dirty with Gephi.  For those who can’t make it, it will be live streamed.  To watch, either use the previous link or check out this embedded stream:

Course Complete!

So… it has been a while since my last update.  This spring semester wrapped up well both for me and for the students in my Python for social scientists course.

The feedback that I received was that the course was very useful for the students involved, some of whom have already written a number of programs that they are using in their research, though the syllabus could use some minor tweaks.  The syllabus as it is written on the website will be updated with some of the feedback that I received, namely, to spend a little bit more time on databases.  I tried to cram both relational database theory as well as “here is how to use sql” into one lecture and I don’t think it was covered with enough depth.  I  think that in the future I may also cover the use of linux tools to facilitate large file manipulation, as this is something that data scientists frequently have to do.

One topic that I was not able to cover, but I think could be useful, would be an introduction to map-reduce, possibly with a hands on example using EMR.  It was definitely outside of the scope of this course, but seems like something that people that work with large corpora of text will run into eventually.

While the course did not use a textbook, next time, if the course were taught in Python (which it probably will be), I will suggest that students buy Think Complexity: Complexity Science and Computational Modeling by Allen Downey.  While it does not devote time to web scraping, it does have some good coverage of Python concepts and has some great examples of how to implement well known models in Python.  If I could somehow justify teaching the course in Ruby, The Bastards Book of Ruby would be a perfect supplement.

The students’ final projects were great and covered a wide range of technical areas.  One group built an application that integrated with the Twitter and Google Translate APIs to create a dataset about world leader communication.  A different student started out building a web crawler and then realized that all of the data was in PDF format, so they got to learn how to use PDFMiner to extract the information that was needed.  Another discovered that web scraping is much harder with ajax-heavy websites and ended up learning to use Selenium to solve this problem.  Finally, one student jumped into the world of Python UI and built a GUI for playing a game that used Lanchester’s Laws to simulate the outcome.  The goal for the final project was to do something that could ultimately end up turning into a paper, and I think on that front all of the groups did well.  I am excited to see where these projects end up.


Computation Frameworks for the Social Sciences (aka I’m teaching a class)

This spring I am teaching my first course!  It is a pretty small seminar, 8-10 graduate students, but they all seem excited about the material.  It is a “Programming for Political Scientists” course that will use Python to both teach people how to write good software as well as show them how people are using software in the discipline currently.  I hope to spend the first half of the course covering basic software engineering and computer science concepts before moving on to some specific applications.  Hopefully, by the end of the course all of the students will have built something that they can use to further their research agendas (e.g. a web scraper to supplement a data set).

The class website is up; there you can find my schedule outline as well as the homework assignments.  One thing that has worked well so far was requiring that everyone work through Zed Shaw’s Learn Python the Hard Way before class.  This got everyone (note that most had not done any serious programming before) on the same page and ready to start tackling some more advanced ideas.

For those of you that have experience in the space, I would welcome any feedback on the syllabus (or anything else).  As the course moves along I will try to post about any changes that I make, findings, etc. for those interested in teaching a similar course elsewhere.

Line numbers on embedded Gists

For all of you bloggers out there that like embedding gists but are frustrated by the lack of line numbers, I found a nice CSS solution.  After a little googling, I found this solution by potch.  It looks as though the structure of an embedded gist has changed slightly so I modified the css to look like the following:

Include this somewhere in your stylesheets and voilà, you get nice line numbers for all of your embedded gists (in modern browsers at least).

UPDATE: This no longer appears to be needed!


As I said in my last post, I am working on setting up a statistical web service using R.  I decided to use the Rook library to do so and wanted to give a brief overview of how Rook works for others who might be interested.

The best way to learn is often to just dig right in and go line by line, so here is a simple rook application:

Line 6 creates a new rook webserver object.  This is what will respond to HTTP requests from the browser.

Line 7 adds an “application” to the webserver.  When you add an application to a Rook server you need to name a route and specify what should happen when a user requests that action.  The route is specified by line 8 and tells the Rook server that when a user requests “summarize” it should execute the code specified in lines 9-21.  For some reason Rook prefaces all routes with the word “custom”, so the url to access the route we specified would be http://server:port/custom/summarize

The interesting part of the application occurs in the function that is assigned to app on line 9.  Rook wraps the parameters of the HTTP request in some nice accessors which can be used as seen in line 10.  While the docs specify all of the information available, the important part of the HTTP request for our app is the params() method.  This returns the union of any variables passed via query string and POST.  For the rails developers it is exactly the Rails params hash.

Line 12 parses the input.  This application is expecting an array of numbers separated by commas assigned to the numbers parameter.  Note that if this isn’t specified the app will break in its current state.

After parsing the input parameters we compute some simple summary statistics on this array of numbers and store them in the list results (14-16).

Rook uses the Rook::Response object to fashion a proper HTTP response.  After instantiating the response on line 18, we call the write() method on the response object.  This sends whatever string it is passed to the output stream which is returned to the requestor (in our case the web browser).  Note that I am returning the results in JSON format and should probably set HTTP headers as well if I were going to deploy this to production.  Line 20 flushes the output stream and returns it to the requestor; call this when you are done constructing your response.

Finally, to start our server and see the thing in action we can use the browse() method (line 24).  We are specifying the action that we want to browse to as the parameter.  This should pop open a web browser pointing to your Rook application–and you should get an error:

Error in strsplit(req$params()$numbers, ",") : non-character argument

This is because we didn’t validate the input and our application is expecting an array of numbers.  So, let’s pass them through the query string.  Append the following to the URL in your browser: “?numbers=1,2,3,4,5” and refresh.  Now you should get the following:


And you have a functioning Rook server!

If you want to add other functionality you can just make additional add() calls on the rook object and the new applications will be added.  If you want to change the functionality of an existing application, make sure that you remove it (rook$remove(‘summarize’)) and then re-add it (or just restart everything).
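For readers without R handy, the same request/compute/respond pattern can be sketched as a minimal Python WSGI app.  This is a comparison sketch only, not Rook itself; the JSON field names and the choice of summary statistics are my own:

```python
import json
from urllib.parse import parse_qs

def summarize(environ, start_response):
    # Pull the query-string parameters, analogous to Rook's req$params()
    params = parse_qs(environ.get("QUERY_STRING", ""))
    raw = params.get("numbers", [""])[0]
    try:
        numbers = [float(n) for n in raw.split(",")]
        body = json.dumps({
            "mean": sum(numbers) / len(numbers),
            "min": min(numbers),
            "max": max(numbers),
        })
        status = "200 OK"
    except ValueError:
        # Unlike the Rook example above, validate the input instead of erroring
        body = json.dumps({"error": "pass ?numbers=1,2,3"})
        status = "400 Bad Request"
    start_response(status, [("Content-Type", "application/json")])
    return [body.encode()]

# To serve it locally, uncomment:
# from wsgiref.simple_server import make_server
# make_server("", 8000, summarize).serve_forever()
```

Requesting /?numbers=1,2,3,4,5 returns the summary statistics as JSON, mirroring the Rook app's /custom/summarize route.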

Rook and R Webservices

Recently I have been working on setting up a webservice that does some non-trivial statistical work.  Normally, my go-to when building web services is Ruby/Rails due to ease of use, and I offload anything computationally intensive to something more optimized (e.g. a C or Java application on the same box).  In this case however, partly because of my co-author’s skills, partly because of discipline norms, and in large part because R is awesome for this sort of thing, the statistical work is going to be done in R.

While it would have been possible to still build the webservice wrapper in Ruby and then either use one of the existing Ruby wrappers for R (or even spin up an R process on its own), I wanted to see if I could build the whole thing in R.  As is almost always the case, I was not the first person to think of this, and most of the hard work has already been done.

Rook is an R package on CRAN written by Jeffrey Horner.  For those of you familiar with Ruby and Rack, Rook is very similar.  It provides an interface, using R’s built-in web server, for handling HTTP requests, routing, etc.  The more I read about it, the more I was convinced that it was a great solution to the problem I was working on.  Now if only I could get it to run on Heroku….

Well, running Rook on Heroku was surprisingly simple thanks to Noah Lorang’s example which you can find here.

So, how do you get started?  Almost everything that you’ll need is in the README in Noah’s repository, but there are a couple of tricky things to note.

First make sure that you have a Heroku account.  It is free to sign up and you get one free full-time process per project (one single web server in our case).  There are numerous resources (including their excellent help files) to get you through this part.

Next you can either walk through the instructions in Noah’s example (which I ended up doing), or you can do the much easier thing by cloning his repository.  If you do this, then you should be able to deploy it directly.

After you get a running instance of Rook, you will want to write some R code.  To run your own custom R script, replace the “/app/demo.R” in the rackup file with the path of your script.  Otherwise, you can just put your code in demo.R.

Because the Heroku file system is read-only, you will need to include any R packages in your source tree so that they are “installed” when you deploy.  Initially you will just have R and Rook installed (if you cloned the existing project).  Because some packages require native compilation, you really should do that compilation on one of Heroku’s servers.  In order to do this, you need to ssh into your app server:

heroku run bash

Once you are in the bash shell on your app server you can load R.  From there install any packages that will be dependencies for your project.  When you exit the R shell, do not log out of the ssh session.  If you do, the app will reset and you will need to start over (remember it is read-only, so the file system changes persist only as long as your session).  You then need to figure out how you want to get these changes off of your heroku instance.  First zip them up (I zipped up the whole bin directory):

tar -cvzf mybin.tar.gz ./bin

Then you can either scp it off of the machine (as Noah suggests):

scp mybin.tar.gz

Or, if you do not have access to a destination that you can scp to (heroku does not have ftp installed), you can do the roundabout method of setting up github as a remote, checking in the tarball, pushing it to github, and then cloning that repository locally.  Once you have the tarball on your machine, just untar it into your repository’s bin directory, check it in, and deploy.  You will now have access to those packages in R.

I’ll writeup how to actually use Rook in my next post.


Naive Bayes with Laplacean Smoothing

In the online AI class, we just covered Naive Bayesian Classifiers, and it couldn’t have been more perfectly timed.  Prior to that lecture series, one of the projects that I am working on required that I build a classifier for a large body of data that was getting funneled into the system.  I spent quite a bit of time searching for the best way to do this, hoping that there would be a rubygem that could save me some effort, but much to my chagrin, nothing quite fit the bill–so I started in on building my own.

The basic idea behind a Naive Bayes classifier is that we have some set of documents that have been categorized (into n categories) and want to use this information about our existing labeled documents to predict the category of new, not yet labeled, documents.  It is a pretty direct use of Bayes rule and is probably best understood through an example.

Say you have 5 documents:

  • {subject: ‘Must read!’, text: ‘Get Viagra cheap!’, label: ‘spam’}
  • {subject: ‘Gotta see this’, text: ‘Viagra.  You can get it at cut rates’, label: ‘spam’}
  • {subject: ‘Call me tomorrow’, text: ‘We need to talk about scheduling.  Call me.’, label: ‘not spam’}
  • {subject: ‘That was hilarious’, text: ‘Just saw that link you sent me’, label: ‘not spam’}
  • {subject: ‘dinner at 7’, text: ‘I got us a reservation tomorrow at 7’, label: ‘not spam’}
We have 2 spam messages and 3 real messages.  Each of these messages has a subject and some text that we can use to train our classifier.
Given a new message:
  • {subject: ‘See it to believe it’, text: ‘Best rates you’ll see’, label: ?}
What is the probability that it is a spam message?  Using Bayes rule we can compute it in the following way:

P(spam|document)=\frac{P(document|spam)P(spam)}{P(document)}

All of these values can be computed by inspection of the previous documents.  Note that Naive Bayes assumes independence of your variables (which is probably not true, given that the English language is structured), so the likelihood factors into a product over the words:

P(subject|spam)=\prod_{word \in subject}P(word|spam)

So for example, in the document we want to classify:

P('see' \in subject|spam) = \frac{1}{5}

You will note that the document to classify has some words that are not in any of the existing classified documents (e.g. ‘believe’).  This gives those conditional probabilities a value of 0, making the whole numerator 0, even though there is definitely a greater than 0 chance of this item being spam.

The solution to this problem is known as Laplacean smoothing.  To perform smoothing you pick some parameter k; in our case we set k=1.  This smoothing parameter is added to every count as the probabilities are calculated, and a normalizing constant is added to the denominator so the result remains a valid probability.

Thus, with a smoother of size 1:

P('believe' \in subject|spam)=\frac{0 + 1}{5+5*1}

Where does the 5*1 come from in the denominator?  We have a smoothing factor of 1 and 5 distinct known words in the spam subjects, so in order to keep the known values a true probability distribution we need to add that to the denominator (so everything still sums to 1).

Like I said, the math here is pretty straightforward if you can buy the assumptions.  And even if you can’t, it seems to work pretty well.
So, how did I end up using it in my app?  I built a pretty simple gem to do classification called Classyfier.  It was based loosely on Bayes Motel, but I cleaned up and reorganized some things (as well as added smoothing).  I anticipate that I will be adding more features to this package as my need for more sophisticated classification grows.  For more info on how to use the gem, see the example below or just check out the test file.
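To make the mechanics above concrete, here is a small sketch of the same idea in Python (not the gem's actual Ruby API).  It follows the post's denominator convention of using the per-label vocabulary size in the smoothing term:

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with Laplacean smoothing (illustrative sketch)."""

    def __init__(self, k=1):
        self.k = k                               # smoothing parameter
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> document count

    def train(self, words, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(words)

    def p_word(self, word, label):
        # (count + k) / (total words + vocab * k), vocab per label as in the post
        counts = self.word_counts[label]
        total = sum(counts.values())
        return (counts[word] + self.k) / (total + len(counts) * self.k)

    def score(self, words, label):
        p = self.label_counts[label] / sum(self.label_counts.values())
        for word in words:
            p *= self.p_word(word, label)
        return p

    def classify(self, words):
        return max(self.label_counts, key=lambda label: self.score(words, label))

# Train on the example subject lines from above
clf = NaiveBayes(k=1)
clf.train(["must", "read"], "spam")
clf.train(["gotta", "see", "this"], "spam")
clf.train(["call", "me", "tomorrow"], "not spam")
clf.train(["that", "was", "hilarious"], "not spam")
clf.train(["dinner", "at", "7"], "not spam")

# An unseen word gets the smoothed probability (0 + 1) / (5 + 5 * 1) = 0.1
print(clf.p_word("believe", "spam"))
```

In a real classifier you would of course train on subjects and bodies together and work in log space to avoid underflow, but the smoothing arithmetic is exactly what the equations above describe.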


Exponential growth and resource consumption

In August, while on the road moving to Duke, I heard a great interview on NPR with David Suzuki.  During this interview he talked about the relationship between anything that grows exponentially and the resources that it consumes.  In particular he focused on how we are likely to misperceive the remaining level of resources because humans are bad at understanding exponential growth (see anything by Kurzweil).

I wanted to retell his analogy because it has really stuck with me and then provide a visualization that I found helpful.

Imagine 2 bacteria in a petri dish.  The petri dish is many, many times bigger than they are.  To keep the math simple let’s say that it is 1 billion times larger.  The bacteria in the dish look around and realize that they have a bounty of agar on which to live and thus anticipate much prosperity.  So the bacteria, being bacteria, replicate.  After 1 unit of time (let’s say they replicate every minute), there are now 4 bacteria.  These 4 bacteria look around and see that they have this vast petri dish at their disposal and thus keep replicating.  Since replicating is fun, this goes on for a while.  Eventually, after 20 turns of doubling in size, there are 1,048,576 (2^{20} ) bacteria.  They realize that they have used up just about 0.1% of their resources, and there is much rejoicing and still more replicating.

By the time that 29 minutes have passed there are now 536,870,912 bacteria.  They have used over half of their resources, but they look around and see that they still have half of them left and it took them 29 turns to get here, so they have to have a little time to figure out how to get out of this mess.  On the 30th turn, the population doubles again and now there are 1,073,741,824 and they are out of room.  When a population grows by doubling in size, it by definition will go from 50% resource utilization to 100% in one time period.  That was both obvious (mathematically) and shocking to me.  How fast is the human population growing?  How close are we to running out of resources on earth?
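The story's arithmetic is easy to check directly, taking the dish's capacity to be the population reached on the 30th minute (2^30):

```python
CAPACITY = 2 ** 30  # population at which the dish is full, per the story

for minute in (20, 29, 30):
    population = 2 ** minute
    print(f"minute {minute}: {population:,} bacteria, "
          f"{population / CAPACITY:.1%} of the dish used")
```

This prints roughly 0.1%, 50.0%, and 100.0% utilization: the jump from half-full to completely full happens in a single doubling period.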

Well, if there was any doubt about whether human population growth was exponential here is a graph of estimated growth over the past 12,000 years (courtesy of wikipedia):

Human Growth Chart

And the CIA factbook estimates the current population growth to be about 1.092% year over year.

Here is a graph that just visualizes the thought experiment proposed by Suzuki:

The point of this graph is just to show how quickly the resource level can plunge to zero and how as an inductive species it is easy to fall into the trap of not understanding the rate at which exponential growth can take off by using historical data to improperly predict the future.

Note: For the graph I just picked a time period of 1000 and then decided to drive it to 0 at that point.  Given that there is a finite amount of resources you will get the same graph form, one way or the other after fixing the resource size.  Below is a gist in R to play around with and see how the curve forms change under different growth rates and how the warning time (as a percentage of the species’ history) shrinks as the size of the resource goes up.
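The R gist itself is not embedded here, so as a stand-in, this short Python sketch (the function names are my own) generates the same utilization curve and measures the "warning time" as the fraction of the species' history spent past half-capacity:

```python
def utilization(capacity, rate=2.0):
    """Resource utilization per period until the capacity is exhausted."""
    series, pop = [], 1.0
    while pop < capacity:
        series.append(pop / capacity)
        pop *= rate
    series.append(1.0)  # the period in which the resource runs out
    return series

def warning_fraction(series, threshold=0.5):
    """Fraction of the history spent above the warning threshold."""
    above = sum(1 for u in series if u > threshold)
    return above / len(series)

# The bigger the resource, the smaller the relative warning window
for capacity in (10 ** 3, 10 ** 6, 10 ** 9):
    series = utilization(capacity)
    print(f"capacity {capacity:>10,}: {len(series)} periods, "
          f"warning fraction {warning_fraction(series):.3f}")
```

Playing with the rate and capacity shows the pattern Suzuki describes: the curve form is the same either way, and the warning window shrinks (as a share of history) as the resource grows.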

When I found out about the class this fall, I was really excited.  It is an online Intro to AI course taught by Sebastian Thrun and Peter Norvig out of Stanford University.  It is free, they are smart, and I hadn’t thought about AI problems since undergrad.  Well, it has finally started–and so far so good.

There were a few technical glitches as things got rolling, but that was mostly due to the insanely high demand for the course.  At one point, according to their twitter account, they were getting over 7000 web requests per second.  They now claim to have over 160,000 students registered for the course.  As of today everything seems to be running very smoothly.

When I first heard the numbers regarding how big the course was, I wasn’t sure how they were going to administer homework at that scale.  Well, today I turned in my first homework and it is rather cleverly done.  All of the lectures are short video clips (1-6 minutes so far), and at the end of each of them they pose some sort of quiz question.  The way the questions are presented is clever, in that they draw out the question in the video via pen and paper and then superimpose an HTML form on top of it so that you can submit your responses.  For example, here is a multiple choice quiz question:

This ends up working surprisingly well.  So, for the homework assignments they do the same thing.  One of the presenters draws out a question (a maze, or graph, etc) and then asks some questions about it, the form gets superimposed and you submit your answers.  Thus far it has been really great and I highly recommend that anyone who is interested try it out next time the courses are offered.  They also have a Machine Learning course being taught by Andrew Ng as well as a Database course by Jennifer Widom all being done in the same format.