mervine.net

Testing with Docker

Here are a few tips that I've found useful while delving into Docker.

Gather Windows on Ubuntu

On my Ubuntu laptop, I found that when the display goes to sleep, I would consistently have an issue where some of my active windows would end up on a hidden workspace. Super annoying, especially because I found myself having to kill the app to get them back. After a little googling, I found the following script, which, when coupled with a custom keybinding, did the trick.

I placed the following file in $HOME/bin, which I've already added to my path, and named it gather.

#!/bin/bash
# From: https://github.com/mezga0153/offscreen-window-restore
#
# Moves any window that has wandered off-screen back onto the visible desktop.

# current display width in pixels
width=$(xrandr | grep current | awk '{print $8}')

# for every window (ignoring Unity's own), if its x offset falls outside
# the visible width, move it back on screen
wmctrl -l -G | awk -v w=$width '{
    if ($8 != "unity-dash" && $8 != "Hud") {
        if ($3 >= w || $3 < 0) {
            system("wmctrl -i -r " $1 " -e 0," sqrt($3*$3) % w ",-1,-1,-1");
        }
    }
}'

I then ensured that it was executable with chmod 755 $HOME/bin/gather. Once that was done, I added a keybinding (Super-g for me) via System Settings > Keyboard > Shortcuts > Custom Shortcuts.
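If you'd rather script that last step, something like the following should create the same shortcut via gsettings on a GNOME/Unity desktop. Treat it as a sketch: the schema path is an assumption about your desktop version, and the first command replaces any existing custom shortcut list.

# note: this overwrites any existing custom shortcut list
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ name 'gather'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ command "$HOME/bin/gather"
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ binding '<Super>g'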

Note:

I had to install wmctrl as well for this to work, with:

apt-get install -y wmctrl

Enjoy!

boot2docker Wrapper Script

I put together this simple wrapper function to simplify boot2docker interactions on my MacBook.

Note: I refer to .bashrc as the target since Bash is the more common shell. However, this has been tested in zsh, which is what I personally use.

Basic Usage

$ docker reload
# purge current boot2docker environment

$ docker up
# start boot2docker and export environment

$ docker reset|fix|reup
# stop and restart boot2docker

$ docker clean
# remove all orphaned image shards and all containers that aren't running - DANGEROUS

$ docker [etc]
# all other arguments are passed directly through to docker

The Functions

# file: ~/.bashrc

#############################################################
# Function -- Docker/Boot2Docker
#############################################################
function docker_shellinit {
  local _shellinit="$(boot2docker shellinit)"
  eval "$(echo ${_shellinit})"
  echo "${_shellinit}" > ~/.boot2dockerrc
}

function docker_reup {
  echo "+ running vpn fix"
  docker_down

  echo "+ resetting vbox route"

  local _iface="$(VBoxManage showvminfo boot2docker-vm --machinereadable | grep hostonlyadapter | cut -d '"' -f 2)"
  echo "++ sudo route -n add -net 192.168.59.0/24 -interface ${_iface}"

  sudo route -n add -net 192.168.59.0/24 -interface ${_iface} && \
    docker_up
}

function docker_reset {
  echo "+ clearing docker variables"
  unset DOCKER_HOST
  unset DOCKER_CERT_PATH
  unset DOCKER_TLS_VERIFY
  docker_shellinit
}

function docker_up {
  echo "+ starting boot2docker"
  boot2docker up
  b2dSTATUS=$?
  docker_reset
  return $b2dSTATUS
}

function docker_down {
  echo "+ stopping boot2docker"
  boot2docker down
  return 0
}

function docker_clean {
  echo "+ clean containers"
  docker ps -a | grep 'Exited ' | awk '{ print $NF }' | xargs docker rm
  docker ps -a | grep -v 'Up ' | awk '{ print $NF }' | xargs docker rm

  echo "+ clean images"
  docker images | grep '^<none>' | awk '{ print $3 }' | xargs docker rmi
}

function b2d {
  case "$@" in
  reload)
    docker_reset
    return 0;;
  reset|fix|reup|fuck)
    docker_reup
    return $?;;
  up)
    docker_up
    return $?;;
  down)
    docker_down
    return $?;;
  clean)
    docker_clean
    return $?;;
  esac
  boot2docker $@
}

docker_exec="$(which docker)"
function docker {
  case "$@" in
  reload)
    docker_reset
    return 0;;
  reset|fix|reup|fuck)
    docker_reup
    return $?;;
  up)
    docker_up
    return $?;;
  down)
    docker_down
    return $?;;
  clean)
    docker_clean
    return $?;;
  esac
  $docker_exec $@
}

Installation

$ curl -s https://gist.githubusercontent.com/jmervine/6713d10ab05fecd6e1aa/raw/5c5f7020696e23dffa6f046816239574f42767ee/boot2dockerrc.sh >> ~/.bashrc
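One optional addition: docker_shellinit caches the exported environment in ~/.boot2dockerrc, so new shells can pick it up without waiting on boot2docker. A line like the following in ~/.bashrc is my assumption of how to use that cache; it isn't part of the gist:

# reuse the cached boot2docker environment, if present
test -f ~/.boot2dockerrc && source ~/.boot2dockerrc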

Executing BASH from Python

Recently, I've been playing with Python, so I thought I would toss up this writeup I found on executing commands via Python, as a follow-up to my Executing BASH Commands in Ruby post.

Docker Tips

Here are a few tips that I've found useful while delving into Docker. For an introduction to Docker, see my post on the YP Engineering Blog. Enjoy!

Making smaller images.

Docker image size does matter. The larger your image, the more unwieldy it starts to feel. Pulling a 50MB image is far preferable to pulling a 2GB image. Some tips on building smaller images:

  • Use the smallest Linux distro that meets your needs; busybox < debian < centos < ubuntu. I try to use progrium/busybox whenever possible (which isn't all that often without serious work); otherwise, I tend to use debian.
  • Install as little as possible to meet your needs -- installing build-essential is going to bloat your image, so don't use it unless absolutely necessary.
  • Do as much as you can in a single RUN, as opposed to breaking things up. The downside to this is longer builds with less caching; however, it can make a huge difference in the resulting image size. I once took an image from 1.3G to 555MB just by collapsing all my commands into a single RUN. Additionally, clean up after yourself in that same RUN if possible. Example:

      # BAD
      RUN apt-get install git
      RUN apt-get install wget
      RUN apt-get install build-essential
      ADD http://somesite.com/somefile.tgz somefile.tgz
      RUN tar xzf somefile.tgz
      # note: each RUN is a fresh shell, so this cd won't even carry over
      RUN cd somefile
      RUN ./configure && make && make install

      # GOOD
      RUN \
          apt-get install -y git wget curl build-essential && \
          curl -sSL -O http://somesite.com/somefile.tgz && \
          tar xzf somefile.tgz && \
          cd somefile && ./configure && make && make install && \
          cd - && rm -rf somefile somefile.tgz && \
          apt-get remove -y build-essential && \
          apt-get autoremove -y && apt-get clean
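To see where the bulk is actually coming from, docker history breaks an image down layer by layer, which makes it easy to spot the step worth collapsing:

# overall image sizes
sudo docker images

# size added by each layer of a given image
sudo docker history <image_name>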

Search private registry.

sudo docker search <private domain>/<term>

Removing images and containers in bulk.

# remove all containers
sudo docker rm $(sudo docker ps -a -q)
#... or ...
sudo docker ps -aq | xargs sudo docker rm

# remove all images
sudo docker rmi $(sudo docker images -q)
#... or ...
sudo docker images -q | xargs sudo docker rmi

# remove specific images in bulk
sudo docker rmi myimage:{tagone,tagtwo,tagfive}

# remove image containing TERM
sudo docker rmi $(sudo docker images | grep TERM | awk '{ print $3 }')
#... or ...
sudo docker images | grep TERM | awk '{ print $3 }' | xargs sudo docker rmi

# remove all non-running containers
sudo docker ps -a | grep Exited | awk '{ print $NF }' | xargs sudo docker rm

Interacting with the most recently started container.

# view last container
sudo docker ps -l 

# view last container sha only
sudo docker ps -lq

# stop, start, attach, logs, etc. last container
#
# $ sudo docker <action> $(sudo docker ps -lq)
sudo docker start $(sudo docker ps -lq)
sudo docker stop $(sudo docker ps -lq)
sudo docker logs $(sudo docker ps -lq)
sudo docker attach $(sudo docker ps -lq)

Pushing to a private registry.

# assuming image 'jmervine/centos6-nodejs'
#
#               <current image name>    <private registry>:<port>/<image name>
sudo docker tag jmervine/centos6-nodejs docker.myregistry.com:5000/jmervine/centos6-nodejs
sudo docker push docker.myregistry.com:5000/jmervine/centos6-nodejs

# I then recommend removing your old image to avoid accidentally pushing it to the public registry.
sudo docker rmi jmervine/centos6-nodejs
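Pulling the image back down on another host works the same way, using the fully qualified name:

sudo docker pull docker.myregistry.com:5000/jmervine/centos6-nodejs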

Ports

# run with container port 3000 published on a randomly assigned host port
sudo docker run -d -p 3000 image/name

# run with all exposed ports randomly assigned on the host
sudo docker run -d -P image/name

# print the randomly assigned host ports (only)
sudo docker port <container_id|name> | awk -F':' '{ print $NF }'
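To look up the host port mapped to one specific container port, rather than listing them all:

# e.g. the host port bound to container port 3000
sudo docker port <container_id|name> 3000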

Copying Files TO Containers

# Directly into a running container.
sudo docker exec -it <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt"

# When starting a new container (docker run takes an image, not a container).
sudo docker run -i <image_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt; /bin/bash ./start.sh"

# Via a Docker volume.
# - where 'file.txt' is /path/to/host/dir/file.txt
sudo docker run -v /path/to/host/dir:/path/to/container/dir <image_id|name>

#... or mount the directory and copy the file in from the host at any time ...
sudo docker run -v /path/to/host/dir:/path/to/container/dir <image_id|name>
cp /path/to/host/file.txt /path/to/host/dir/file.txt

# Via the file system -- untested as of yet.
sudo cp -v /path/to/host/file.txt \
    /var/lib/docker/aufs/mnt/$(sudo docker inspect -f '{{.Id}}' <container_id|name>)/root/path/to/container/file.txt

Based on comments in http://stackoverflow.com/questions/22907231/copying-files-from-host-to-docker-container
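Whichever method you use, here's a quick way to confirm the file actually landed in the container:

sudo docker exec <container_id|name> cat /path/to/container/file.txt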

Building Docker Machine

What's Docker Machine?

Machine makes it really easy to create Docker hosts on local hypervisors and cloud providers. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

This wasn't as clear as I was hoping, so here's what I did.

Mac

$ uname -sm
Darwin x86_64

$ docker version
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/github.com/docker
$ cd $GOPATH/src/github.com/docker
$ git clone https://github.com/docker/machine.git
$ cd machine

$ make test
$ ./script/build -os=darwin -arch=amd64
$ mv docker-machine_darwin-amd64 $GOBIN/docker-machine

Linux

$ uname -sio
Linux x86_64 GNU/Linux

$ sudo docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/github.com/docker
$ cd $GOPATH/src/github.com/docker
$ git clone https://github.com/docker/machine.git
$ cd machine

$ sudo make test
$ sudo ./script/build -os=linux -arch=amd64
$ sudo chown $USER: docker-machine_linux-amd64
$ mv docker-machine_linux-amd64 $GOBIN/docker-machine
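Once the binary is in place, a quick smoke test looks something like the following; the virtualbox driver is just an example, and flags may vary between machine releases:

$ docker-machine create -d virtualbox dev
$ docker-machine ls
$ docker-machine ip dev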

Ruby / RoR... Why not?

In response to a common question: "I'd be curious as to why your friends want to move away from the RoR/Ruby space..."

Published in Ruby

Notes on Performance Testing

A couple weeks ago I did an ad hoc talk at the LAWebSpeed Meetup, hosted by MaxCDN, on general performance testing in the web world. I was asked to put together a list of the tools I spoke about and some brief notes on them...

Node.js Hello Framework

A far from complete collection of Hello World examples for various Node.js web frameworks.

YSlow.js: Release 0.3.1

Links: readme | package | source | tests

0.3.2

Previous Versions

  • 0.3.1
  • 0.3.0
    • Fixing error handling in run. Now passes both error and results to callback. See README examples for details. This update is not backwards compatible with previous releases.
  • 0.2.1
    • Removing unused dependencies.
  • 0.2.0
    • Top down refactor using updated Phapper.
    • Includes better pathing support for finding included yslow.js.
    • Downloads and installs yslow.js if it can't be found, which should never happen.
    • Adding limited support for Windows.
  • 0.1.2
    • Fixing critical issue in NODE_PATH search when working with global installations.
  • 0.1.1
    • Refactored to use Phapper, way cleaner and less code.
    • Refactored tests for change to Phapper.
    • Refactored stubs.
    • Adding functional tests.
  • 0.0.1
    • Initial release.

Phapper.js: Release 0.1.9

Links: readme | package | source | tests

0.1.9

  • Fixing a minor bug with the install script.

Previous Versions

  • 0.1.8
    • Updating PhantomJS version to 1.9.7.
  • 0.1.6
    • Replacing exec-sync with execSync for easier Mac installation.
  • 0.1.5
    • Removing unused dependencies.
  • 0.1.4
    • Fixing small issue with passed in arguments on init.
    • Added ability to pass exec object, see readme examples.
    • Cleaned up tests, added more.
    • Cleaned up make test / npm test.
    • Allowing for passing of cwd to sync function.
  • 0.1.3
    • Adding Windows handling and phantomjs version override.
    • Updating readme.
  • 0.1.2
    • Adding phantomjs install.
    • Adding better phantomjs path support.
  • 0.1.1
    • Refactored to not require JSON stdout parse.
    • Refactored run and runSync return values, see readme.
  • 0.0.1
    • Initial release.

Github Webhooks with git-fish

I wrote git-fish, a GitHub webhook listener, to provide a simple and modular way to run an autodeployment on mervine.net when adding or updating a post. I designed it to be as simple and as modular as possible. While written in Node.js, I tend to use it to execute simple bash scripts, like the mervine.net deployment script:

#!/bin/bash

cd /home/jmervine/mervine.net
make deploy/soft

With this combination, I can use GitHub as my pseudo-CMS to create and update posts; when I save an addition or change, it becomes visible on the site in seconds (including updating code and purging cache).

For detailed information on setting up and using git-fish, see my git-fish project page.

Enjoy!

HTTPerf.js: Release 0.1.0

Removing runSync. Refactoring run to support sending the spawned process SIGINT to capture the current report from httperf and exit.

Forking in Node.js / Threading HTTPerf with HTTPerf.js

Occasionally, we want to generate load beyond what a single httperf thread can handle, especially when working in Node.js, where the connection limits can get very high. The code sample below does that, but also serves as an example of how to use the cluster module to fork actions and collect the resulting data. Enjoy!

Simple Timing in Node.js

I just stumbled upon a cool feature of Node.js for adding timing to applications using console.time and console.timeEnd.

// pointless example that show all parts
console.time('timer');
setTimeout(function() {
    console.timeEnd('timer');
}, 500);

// => timer: 511ms

Note: I've heard (and in some cases proven) that most console.* methods are not asynchronous (i.e. they block) and therefore should never be used in production code. Notice that in the above example, console.time and console.timeEnd appear to add about 11ms of overhead on my machine.

NPM Registries

I'm starting this list with the plan of adding as many as I can find. Please shoot me any known public registries in the comments below.

http://registry.npmjs.vitecho.com/
http://npm.nodejs.org.au:5984/registry/_design/app/_rewrite (AUS)

Usage:

npm install --registry http://registry.npmjs.vitecho.com/ npm-foo
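To point npm at one of these permanently, rather than per install (using the first mirror above as the example):

npm config set registry http://registry.npmjs.vitecho.com/

# verify
npm config get registry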

Published in Node.js

Jade Bootstrap Layout Template

After using the Express command line generation utility, you get a very basic layout.jade. Here are the standard modifications I make for use with BootstrapCDN.

Notes: Mosh IPTable Rules

I recently picked up a Note 3, and with the larger screen I found myself wanting to use it for shelling in to my machines. So I started playing with Mosh on one of my boxes. I (like hopefully most of you) set strict iptables rules to keep things locked down as much as possible, and I quickly found that (obviously) things weren't working because of them.

To make things work (Mosh uses UDP ports in the 60000-61000 range by default), I added this line to /etc/sysconfig/iptables:

-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT

Here's the diff:

diff --git a/tmp/iptables.old b/etc/sysconfig/iptables
index d4229ca..b950f1f 100644
--- a/tmp/iptables.old
+++ b/etc/sysconfig/iptables
@@ -8,6 +8,7 @@
 -A INPUT -p icmp -j ACCEPT 
 -A INPUT -i lo -j ACCEPT 
 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT 
+-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
 -A INPUT -j REJECT --reject-with icmp-host-prohibited 
 -A FORWARD -j REJECT --reject-with icmp-host-prohibited 
 COMMIT

Once you've added the line, simply restart iptables like so:

sudo /etc/init.d/iptables condrestart 
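To confirm the rule actually took after the restart:

sudo iptables -L INPUT -n | grep 60001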

Enjoy!

Bundle Faster, Save Time

Bundler just announced 1.4.0.pre.1 with --jobs support, which allows for multithreading gem installs -- I haven't looked at the code, but my guess is it's making use of the jobs (-j) flag in gmake (which multithreads C compilation) for native libs.

Anyway, here's my quick timing comparison on bundling a very large project with hundreds of gems:

rm -rf vendor/bundle
bundle install --path vendor/bundle
# time: 5:31.40 total

rm -rf vendor/bundle
gem install bundler -v 1.4.0.pre.1 --pre
bundle install --jobs=4 --path vendor/bundle
# time: 3:10.38 total
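If you don't want to pass --jobs every time, Bundler's config should be able to pin it, though I haven't timed this path myself:

bundle config jobs 4
bundle install --path vendor/bundle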

Enjoy!

Published in Ruby

Tweet: Node.js, Eating Crow


Pretty Sleep using Node.js

$ node -e 'n=0; setInterval(function(){n++;if(n>=20){console.log(".");process.exit(0);}process.stdout.write(".");},1000);'

Twitter Bootstrap Theme for Hexo

Just finished building a Twitter Bootstrap theme for Hexo.


Command-line Google Search via `find`

Just for fun...

$ find -type m -name "glendale, ca coffee"
$ find -type m -name "glendale, ca smog check"
$ find -type g -name "cool bash commands"

The code:

# "Install" (I use that term loosely)
# - Paste the function below in your .bashrc / .profile / .zshrc / etc.
# Usage: find /usr/local -type [m|g] -name [KEYWORD]
# * -type m : google maps search
# * -type g : google search
# * all other types pass through to find
# Notes: 
# Tested on Ubuntu with ZSH. Comments, suggestions, etc. welcome.
function find {
  if [ `uname -s` = "Darwin" ]; then
    browser="open"
  fi
  test "$browser" || browser=`which chromium-browser`
  test "$browser" || browser=`which google-chrome`
  test "$browser" || browser=`which firefox`
  query="`echo "$@" | sed -e 's:^[a-z\/\~\.]* ::' -e 's/-type [mg]//' -e 's/-name//'`"
  if [[ $@ =~ "-type m" ]]; then
    $browser "http://maps.google.com/?q=$query" 2>&1 > /dev/null &
  elif [[ $@ =~ "-type g" ]]; then
    $browser "http://www.google.com/search?q=$query" 2>&1 > /dev/null &
  else
    /usr/bin/find $@
  fi
}

get the gist


Benchmarking with YSlow.js on Node.js

In my last post on this topic (Benchmarking with HTTPerf.js and NodeUnit) I covered benchmarking application render times from the server to first byte. In this post, I'm going to cover basic client benchmarking using YSlow and PhantomJS via YSlow.js on Node.js.

RT: Benchmarking with HTTPerf.js and NodeUnit
