boot2docker Wrapper Script

I put together this simple wrapper function to simplify boot2docker interactions on my MacBook.

Note: I refer to .bashrc as the target since bash is the more common shell. However, this has been tested in zsh, which is what I personally use.

Basic Usage

$ docker reload
# purge current boot2docker environment

$ docker up
# start boot2docker and export environment

$ docker reset|restart|reup
# stop and restart boot2docker

$ docker clean
# remove all orphaned images and all containers that aren't running - DANGEROUS

$ docker [etc]
# all other arguments are passed directly through to docker

The Functions

# file: ~/.bashrc

# Function -- Docker/Boot2Docker
# NOTE: closing braces, case patterns and a few function bodies below were
# reconstructed from the usage notes above; treat them as a sketch.
function docker_shellinit {
  local _shellinit="$(boot2docker shellinit)"
  eval "$(echo ${_shellinit})"
  echo "${_shellinit}" > ~/.boot2dockerrc
}

function docker_reup {
  echo "+ running vpn fix"

  echo "+ resetting vbox route"

  local _iface="$(VBoxManage showvminfo boot2docker-vm --machinereadable | grep hostonlyadapter | cut -d '"' -f 2)"
  echo "++ sudo route -n add -net -interface ${_iface}"

  sudo route -n add -net -interface ${_iface} && \
    docker_down && docker_up
}

function docker_reset {
  echo "+ clearing docker variables"
  unset DOCKER_HOST DOCKER_CERT_PATH DOCKER_TLS_VERIFY
}

function docker_up {
  echo "+ starting boot2docker"
  boot2docker up
  local b2dSTATUS=$?
  docker_shellinit
  return $b2dSTATUS
}

function docker_down {
  echo "+ stopping boot2docker"
  boot2docker down
  return 0
}

function docker_clean {
  echo "+ clean containers"
  docker ps -a | grep 'Exited ' | awk '{ print $NF }' | xargs docker rm
  docker ps -a | grep -v 'Up ' | awk '{ print $NF }' | xargs docker rm

  echo "+ clean images"
  docker images | grep '^<none>' | awk '{ print $3 }' | xargs docker rmi
}

function b2d {
  case "$@" in
    reload)
      docker_reset
      return 0;;
    up|start)
      docker_up
      return $?;;
    down|stop)
      docker_down
      return $?;;
    reset|restart|reup)
      docker_reup
      return $?;;
    clean)
      docker_clean
      return $?;;
  esac
  boot2docker $@
}

docker_exec="$(which docker)"
function docker {
  case "$@" in
    reload)
      docker_reset
      return 0;;
    up|start)
      docker_up
      return $?;;
    down|stop)
      docker_down
      return $?;;
    reset|restart|reup)
      docker_reup
      return $?;;
    clean)
      docker_clean
      return $?;;
  esac
  $docker_exec $@
}

$ curl -s >> ~/.bashrc

Executing BASH from Python

Recently I've been playing with Python, so I thought I would toss up this writeup I found on executing commands via Python, as a follow-up to my Executing BASH Commands in Ruby post.

Continue reading

Docker Tips

Here are a few tips that I've found useful while delving into Docker. For an introduction to Docker, see my post on the YP Engineering Blog. Enjoy!

Making smaller images.

Docker image size does matter. The larger your image, the more unwieldy it starts to feel. Pulling a 50MB image is far preferable to pulling a 2GB image. Some tips on building smaller images:

  • Use the smallest linux distro that meets your needs; busybox < debian < centos < ubuntu. I try to use progrium/busybox whenever possible (which isn't all that often without serious work); otherwise, I tend to use debian.
  • Install as little as possible to meet your needs -- apt-get install build-essential is going to bloat your image; don't use it unless absolutely necessary.
  • Do as much as you can in a single RUN, as opposed to breaking things up. The downside is longer builds with less caching; however, it can make a huge difference in the resulting image size. I once took an image from 1.3G to 555MB just by collapsing all my commands into a single RUN. Additionally, clean up after yourself in that same RUN if possible. Example:

      # BAD
      RUN apt-get install git
      RUN apt-get install wget
      RUN apt-get install build-essential
      RUN tar xzf somefile.tgz
      RUN cd somefile
      RUN ./configure && make && make install
      # GOOD
      RUN \
          apt-get install -y git wget build-essential && \
          curl -sSL -O && \
          tar xzf somefile.tgz && \
          cd somefile && ./configure && make && make install && \
          cd - && rm -rf somefile somefile.tgz && \
          apt-get remove -y build-essential && \
          apt-get autoremove -y && apt-get clean
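One gotcha worth calling out: the reason a separate `RUN cd somefile` (as in the BAD example) doesn't do what you'd hope is that every RUN executes in a fresh shell. The same behavior is easy to see with a subshell on your own machine:

```shell
# a directory change inside a subshell (like inside one RUN)...
( cd /usr )

# ...does not affect the next command (like the next RUN)
pwd
```

This is why the GOOD example chains `cd somefile && ./configure && ...` inside one RUN.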

Search private registry.

sudo docker search <private domain>/<term>

Removing images and containers in bulk.

# remove all containers
sudo docker rm $(sudo docker ps -a -q)
#... or ...
sudo docker ps -aq | xargs sudo docker rm

# remove all images
sudo docker rmi $(sudo docker images -q)
#... or ...
sudo docker images -q | xargs sudo docker rmi

# remove specific images in bulk
sudo docker rmi myimage:{tagone,tagtwo,tagfive}

# remove image containing TERM
sudo docker rmi $(sudo docker images | grep TERM | awk '{ print $3 }')
#... or ...
sudo docker images | grep TERM | awk '{ print $3 }' | xargs sudo docker rmi

# remove all non-running containers
sudo docker ps -a | grep Exited | awk '{ print $NF }' | xargs sudo docker rm
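These pipelines are easy to sanity-check without touching the daemon. Feeding grep/awk some canned `docker ps -a`-style output (the sample data below is made up) shows how container names are isolated before being handed to `xargs`:

```shell
# canned sample mimicking `docker ps -a` output; NAMES is the last column
ps_output='CONTAINER ID  IMAGE    STATUS                      NAMES
1a2b3c4d5e6f  debian   Exited (0) 10 minutes ago   old_web
6f5e4d3c2b1a  debian   Up 10 minutes               live_web'

# same filter as above: only exited containers survive
echo "$ps_output" | grep 'Exited ' | awk '{ print $NF }'
# => old_web
```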

Interacting with the most recent container started.

# view last container
sudo docker ps -l 

# view last container sha only
sudo docker ps -lq

# stop, start, attach, logs, etc. last container
# $ sudo docker <action> $(sudo docker ps -lq)
sudo docker start $(sudo docker ps -lq)
sudo docker stop $(sudo docker ps -lq)
sudo docker logs $(sudo docker ps -lq)
sudo docker attach $(sudo docker ps -lq)

Pushing to a private registry.

# assuming image 'jmervine/centos6-nodejs'
#               <current image name>    <private registry>:<port>/<image name>
sudo docker tag jmervine/centos6-nodejs <private registry>:<port>/jmervine/centos6-nodejs
sudo docker push <private registry>:<port>/jmervine/centos6-nodejs

# I then recommend removing your old image to avoid accidentally pushing it to the public registry.
sudo docker rmi jmervine/centos6-nodejs


Running with randomly assigned ports.

# running, randomly assigning a host port
sudo docker run -d -p 3000 image/name

# running with exposed ports randomly assigned on host
sudo docker run -d -P image/name

# printing randomly assigned ports (only)
sudo docker port <container_id|name> | awk -F':' '{ print $NF }'
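For reference, `docker port` prints `container-port -> host-ip:host-port` pairs, so the awk above just keeps everything after the last colon. A quick check against sample output (made up here):

```shell
# canned sample of `docker port <container>` output
port_output='3000/tcp -> 0.0.0.0:49153
8080/tcp -> 0.0.0.0:49154'

# same extraction as above: split on ':' and keep the last field
echo "$port_output" | awk -F':' '{ print $NF }'
# => 49153
#    49154
```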

Copying Files TO Containers

# Directly in to a running container.
sudo docker exec -it <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt"

# When running a container.
sudo docker run -i <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt; /bin/bash ./"

# Via Docker volume.
# - where 'file.txt' is /path/to/host/dir/file.txt
sudo docker run -v /path/to/host/dir:/path/to/container/dir <container_id|name>

#... or ...
sudo docker run -v /path/to/host/dir:/path/to/container/dir <container_id|name>
cp /path/to/host/file.txt /path/to/host/dir/file.txt

# Via file system -- untested as of yet.
sudo cp -v /path/to/host/file.txt \
    /var/lib/docker/aufs/mnt/$(sudo docker inspect -f '{{.Id}}' <container_id|name>)/root/path/to/container/file.txt
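The `exec` variant works because `$(cat ...)` is expanded by the host shell before the command string ever reaches the container. The same trick can be tried locally (the paths here are throwaway examples):

```shell
# host side: create a file to copy
echo "hello from host" > /tmp/host_file.txt

# the $(cat ...) expands on the host *before* the inner bash runs,
# so the file's contents travel inside the command string itself
bash -c "echo \"$(cat /tmp/host_file.txt)\" > /tmp/copied_file.txt"

cat /tmp/copied_file.txt
# => hello from host
```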

Based on comments in

Building Docker Machine

What's Docker Machine?

Machine makes it really easy to create Docker hosts on local hypervisors and cloud providers. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

This wasn't as clear as I was hoping, so here's what I did.


$ uname -sm
Darwin x86_64

$ docker version
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/
$ cd $GOPATH/src/
$ git clone
$ cd machine

$ make test
$ ./script/build -os=darwin -arch=amd64
$ mv docker-machine_darwin-amd64 $GOBIN/docker-machine


$ uname -sio
Linux x86_64 GNU/Linux

$ sudo docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/
$ cd $GOPATH/src/
$ git clone
$ cd machine

$ sudo make test
$ sudo ./script/build -os=linux -arch=amd64
$ sudo chown $USER: docker-machine_linux-amd64
$ mv docker-machine_linux-amd64 $GOBIN/docker-machine

Ruby / RoR... Why not?

In response to common question: "I'd be curious as to why your friends want to move away from the RoR/Ruby space..."

Continue reading

Published on in Ruby

Notes on Performance Testing

A couple weeks ago I did an ad hoc talk at the LAWebSpeed Meetup, hosted by MaxCDN, on general performance testing in the web world. I was asked to put together a list of the tools I spoke about and some brief notes on them...

Continue reading

Node.js Hello Framework

A far from complete collection of Hello World examples for various Node.js web frameworks.

Continue reading

YSlow.js: Release 0.3.1

Links: readme | package | source | tests


Previous Versions

  • 0.3.1
  • 0.3.0
    • Fixing error handling in run. Now passes both error and results to callback. See README examples for details. This update is not backwards compatible with previous releases.
  • 0.2.1
    • Removing unused dependencies.
  • 0.2.0
    • Top down refactor using updated Phapper.
    • Includes better pathing support for finding included yslow.js.
    • Downloads and installs yslow.js if it can't be found, which should never happen.
    • Adding limited support for Windows.
  • 0.1.2
    • Fixing critical issue in NODE_PATH search when working with global installations.
  • 0.1.1
    • Refactored to use Phapper, way cleaner and less code.
    • Refactored tests for change to Phapper.
    • Refactored stubs.
    • Adding functional tests.
  • 0.0.1
    • Initial release.

Phapper.js: Release 0.1.9

Links: readme | package | source | tests


  • Fixing a minor bug with the install script.

Previous Versions

  • 0.1.8
    • Updating PhantomJS version to 1.9.7.
  • 0.1.6
    • Replacing exec-sync with execSync for easier Mac installation.
  • 0.1.5
    • Removing unused dependencies.
  • 0.1.4
    • Fixing small issue with passed in arguments on init.
    • Added ability to pass exec object, see readme examples.
    • Cleaned up tests, added more.
    • Cleaned up make test / npm test.
    • Allowing for passing of cwd to sync function.
  • 0.1.3
    • Adding windows handling and phantomjs version override.
    • Updating readme.
  • 0.1.2
    • Adding phantomjs install.
    • Adding better phantomjs path support.
  • 0.1.1
    • Refactored to not require JSON stdout parse.
    • Refactored run and runSync return values, see readme.
  • 0.0.1
    • Initial release.

Github Webhooks with git-fish

I wrote git-fish – a GitHub Webhook listener – to provide a simple and modular method for executing an autodeployment when adding or updating a post. I designed it to be as simple and as modular as possible. While written in Node.js, I tend to use it to execute simple bash scripts, like the deployment script:


cd /home/jmervine/
make deploy/soft

With this combination, I can use GitHub as my pseudo-CMS to create and update posts; when I save an addition or change, it becomes visible on the site in seconds (including updating code and purging cache).

For detailed information on setting up and using git-fish, see my git-fish project page.


HTTPerf.js: Release 0.1.0

Removing runSync. Refactoring run to support sending spawned process SIGINT to capture current report from httperf and exit.

Continue reading

Forking in Node.js / Threading HTTPerf with HTTPerf.js

Occasionally, we want to generate load beyond what a single httperf thread can handle, especially when working in Node.js, where the connection limits can get very high. The code sample below does that, but also serves as an example of how to use the cluster module to fork actions and collect the resulting data. Enjoy!

Continue reading

Simple Timing in Node.js

I just stumbled upon a cool feature of Node.js for adding timing to applications: console.time and console.timeEnd.

// pointless example that shows all parts
console.time('timer');
setTimeout(function() {
  console.timeEnd('timer');
}, 500);

// => timer: 511ms

Note: I've heard (and in some cases proven) that most console.* methods are not asynchronous (i.e. they block) and therefore should never be used in production code. Notice that in the example above, console.time and console.timeEnd appear to add about 11ms of overhead on my machine.

NPM Registries

I'm starting this list with the plan of adding as many as I can find. Please shoot me any known public registries in the comments below.


npm install --registry npm-foo

Published on in Node.js

Jade Bootstrap Layout Template

After using the Express command-line generation utility, you get a very basic layout.jade. Here are the standard modifications I make for use with BootstrapCDN.

Continue reading

Notes: Mosh IPTable Rules

I recently picked up a Note 3, and with the larger screen I found myself wanting to use it to shell in to my machines. So I started playing with Mosh on one of my boxes. I (like, hopefully, most of you) set strict IPTables rules to keep things locked down as much as possible. I quickly found that (obviously) things weren't working because of this.

To make things work, I added this line to /etc/sysconfig/iptables:

-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT

Here's the diff:

diff --git a/tmp/iptables.old b/etc/sysconfig/iptables
index d4229ca..b950f1f 100644
--- a/tmp/iptables.old
+++ b/etc/sysconfig/iptables
@@ -8,6 +8,7 @@
 -A INPUT -p icmp -j ACCEPT 
 -A INPUT -i lo -j ACCEPT 
 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT 
+-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
 -A INPUT -j REJECT --reject-with icmp-host-prohibited 
 -A FORWARD -j REJECT --reject-with icmp-host-prohibited 

Once you've added the line, simply restart IPTables like so:

sudo /etc/init.d/iptables condrestart 


Bundle Faster, Save Time

Bundler just announced 1.4.0.pre.1 with --jobs support, which allows for multithreaded gem installs -- I haven't looked at the code, but my guess is it's making use of the JOBS flag support in gmake (which multithreads C compilation) for native libs.

Anyway, here's my quick timing comparison on bundling a very large project with hundreds of gems:

rm -rf vendor/bundle
bundle install --path vendor/bundle
# time: 5:31.40 total

rm -rf vendor/bundle
gem install bundler -v 1.4.0.pre.1 --pre
bundle install --jobs=4 --path vendor/bundle
# time: 3:10.38 total
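The win here is just parallelism. A toy illustration of the same idea (timings are approximate and machine-dependent):

```shell
# serial: two one-second tasks take ~2s
time ( sleep 1; sleep 1 )

# parallel: the same two tasks take ~1s
time ( sleep 1 & sleep 1 & wait )
```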


Published on in Ruby

Tweet: Node.js, Eating Crow

Published on

Pretty Sleep using Node.js

$ node -e 'n=0; setInterval(function(){n++;if(n>=20){console.log(".");process.exit(0);}process.stdout.write(".");},1000);'

Twitter Bootstrap Theme for Hexo

Just finished building a Twitter Bootstrap theme for Hexo.

Published on

Command-line Google Search via `find`

Just for fun...

$ find -type m -name "glendale, ca coffee"
$ find -type m -name "glendale, ca smog check"
$ find -type g -name "cool bash commands"

The code:

# "Install" (I use that term loosely)
# - Paste the function below in your .bashrc / .profile / .zshrc / etc.
# Usage: find /usr/local -type [m|g] -name [KEYWORD]
# * -type m : google maps search
# * -type g : google search
# * all other types pass through to find
# Notes:
# Tested on Ubuntu with ZSH. Comments, suggestions, etc. welcome.
# (The search URLs and the Darwin browser fallback below are assumed;
#  they were stripped from the original.)
function find {
  local browser query
  if [ `uname -s` = "Darwin" ]; then
    browser="open"
  fi
  test "$browser" || browser=`which chromium-browser`
  test "$browser" || browser=`which google-chrome`
  test "$browser" || browser=`which firefox`

  query="`echo "$@" | sed -e 's:^[a-z\/\~\.]* ::' -e 's/-type [mg]//' -e 's/-name//'`"

  if [[ $@ =~ "-type m" ]]; then
    $browser "https://maps.google.com/?q=${query}" 2>&1 > /dev/null &
  elif [[ $@ =~ "-type g" ]]; then
    $browser "https://www.google.com/search?q=${query}" 2>&1 > /dev/null &
  else
    /usr/bin/find $@
  fi
}

get the gist

Published on

Benchmarking with YSlow.js on Node.js

In my last post on this topic (Benchmarking with HTTPerf.js and NodeUnit) I covered benchmarking application render times from the server to first byte. In this post, I'm going to cover basic client benchmarking using YSlow and PhantomJS via YSlow.js on Node.js.

Continue reading

RT: Benchmarking with HTTPerf.js and NodeUnit

Published on

Nginx Build Script

A simple (no frills) Nginx build script.

# This script has been tested on CentOS 5 && Ubuntu 12.04.3
# By: Joshua Mervine <joshua at mervine dot net>
# Note: Does not run `make install` unless run with `INSTALL=true`.

set -e
test "$DEBUG" && set -x


PAGESPEED=true # set to false to disable pagespeed


# parallelize builds
export JOBS=8

# We don't want to compile against random artifacts
export PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin

# bail if no yum or apt-get
(which apt-get || which yum) || exit 1
which apt-get && sudo apt-get install build-essential zlib1g-dev libpcre3 libpcre3-dev libssl-dev wget
which yum && sudo yum install gcc-c++ pcre-dev pcre-devel zlib-devel make openssl-devel wget

# delete libs if rebuilding
rm -rf nginx-* pcre-* zlib-* release-* *.tar.gz ngx_pagespeed-release-*

if $PAGESPEED; then
  wget$ -O release-$
  unzip release-$
  cd ngx_pagespeed-release-$PSVER/
  tar -xzvf $PSOLVER.tar.gz
  cd ..
fi

tar xzf nginx-$VERSION.tar.gz

# for rewrite_module
wget$PCREVER/pcre-$PCREVER.tar.gz/download \
        -O pcre-$PCREVER.tar.gz
tar xzf pcre-$PCREVER.tar.gz

# for gzip module
tar xzf zlib-$ZLIBVER.tar.gz

cd nginx-$VERSION

OPTIONS=" --with-pcre=../pcre-$PCREVER --with-zlib=../zlib-$ZLIBVER"

if $PAGESPEED; then
        OPTIONS+=" --add-module=$BUILD_ROOT/ngx_pagespeed-release-$PSVER"
fi

./configure $OPTIONS

make

test $INSTALL && sudo make install

get the gist

Published on

Benchmarking with HTTPerf.js and NodeUnit

I covered this in the HTTPerf.js README a bit, but wanted to take a deeper look at how I'm using HTTPerf.js to benchmark web applications.

Continue reading