Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released 5 October 1991 by Linus Torvalds.

Articles on Linux

Testing with Docker

Here are a few tips that I've found useful while delving into Docker.

Gather Windows on Ubuntu

On my Ubuntu laptop, I found that when the display went to sleep, some of my active windows would consistently end up on a hidden workspace. Super annoying, especially because I found myself having to kill the app to get them back. After a little googling, I found the following script, which, when coupled with a custom keybinding, did the trick.

I placed the following file in $HOME/bin, which I already have added to my path, and named it gather.

# From:

width=$(xrandr | grep current | awk '{print $8}')

wmctrl -l -G | awk -v w=$width '{
    if ($8 != "unity-dash" && $8 != "Hud") {
        if ($3 >= w || $3 < 0) {
            system("wmctrl -i -r " $1 " -e 0," sqrt($3*$3) % w ",-1,-1,-1");
        }
    }
}'

I then ensured that it was executable with chmod 755 $HOME/bin/gather. Once done, I added a keybind, for me Super-g, via System Settings > Keyboard > Shortcuts > Custom Shortcuts.


I had to install wmctrl as well for this to work, with:

apt-get install -y wmctrl


boot2docker Wrapper Script

I put together this simple wrapper function to simplify boot2docker interactions on my MacBook.

Note: I refer to .bashrc as the target as it's the more common shell. However, this has been tested and I personally use it within zsh.

Basic Usage

$ docker reload
# purge current boot2docker environment

$ docker up
# start boot2docker and export environment

$ docker reset|restart|reup
# stop and restart boot2docker

$ docker clean
# remove all orphaned image shards and all containers that aren't running - DANGEROUS

$ docker [etc]
# all other arguments are passed directly through to docker

The Functions

# file: ~/.bashrc
#
# NOTE: the original snippet was truncated; closing braces, case
# patterns, and a few trailing commands below are reconstructed
# from the Basic Usage notes above.

# Function -- Docker/Boot2Docker
function docker_shellinit {
  local _shellinit="$(boot2docker shellinit)"
  eval "$(echo ${_shellinit})"
  echo "${_shellinit}" > ~/.boot2dockerrc
}

function docker_reup {
  echo "+ running vpn fix"
  docker_down

  echo "+ resetting vbox route"

  local _iface="$(VBoxManage showvminfo boot2docker-vm --machinereadable | grep hostonlyadapter | cut -d '"' -f 2)"
  echo "++ sudo route -n add -net -interface ${_iface}"

  sudo route -n add -net -interface ${_iface} && \
    docker_up
}

function docker_reset {
  echo "+ clearing docker variables"
  # (reconstructed) clear the variables boot2docker shellinit exports
  unset DOCKER_HOST DOCKER_CERT_PATH DOCKER_TLS_VERIFY
}

function docker_up {
  echo "+ starting boot2docker"
  boot2docker up
  local b2dSTATUS=$?
  docker_shellinit
  return $b2dSTATUS
}

function docker_down {
  echo "+ stopping boot2docker"
  boot2docker down
  return 0
}

function docker_clean {
  echo "+ clean containers"
  docker ps -a | grep 'Exited ' | awk '{ print $NF }' | xargs docker rm
  docker ps -a | grep -v 'Up ' | awk '{ print $NF }' | xargs docker rm

  echo "+ clean images"
  docker images | grep '^<none>' | awk '{ print $3 }' | xargs docker rmi
}

function b2d {
  case "$@" in
    reload)
      docker_reset
      return 0;;
    up|start)
      docker_up
      return $?;;
    down|stop)
      docker_down
      return $?;;
    reset|restart|reup)
      docker_reup
      return $?;;
    clean)
      docker_clean
      return $?;;
  esac
  boot2docker $@
}

docker_exec="$(which docker)"
function docker {
  case "$@" in
    reload)
      docker_reset
      return 0;;
    up|start)
      docker_up
      return $?;;
    down|stop)
      docker_down
      return $?;;
    reset|restart|reup)
      docker_reup
      return $?;;
    clean)
      docker_clean
      return $?;;
  esac
  $docker_exec $@
}


$ curl -s >> ~/.bashrc

Docker Tips

Here are a few tips that I've found useful while delving into Docker. For an introduction to Docker, see my post on the YP Engineering Blog. Enjoy!

Making smaller images.

Docker image size does matter. The larger your image, the more unwieldy it starts to feel. Pulling a 50MB image is far preferable to pulling a 2GB image. Some tips on building smaller images:

  • Use the smallest Linux distro that meets your needs; busybox < debian < centos < ubuntu. I try to use progrium/busybox whenever possible (which isn't all that often without serious work); otherwise, I tend to use debian.
  • Install as little as possible to meet your needs -- apt-get install build-essential is going to bloat your image, so don't use it unless absolutely necessary.
  • Do as much as you can in a single RUN, as opposed to breaking things up. The downside to this is longer builds with less caching. However, it can make a huge difference in resulting image size. I once took an image from 1.3G to 555MB, just by collapsing all my commands to a single RUN. Additionally, clean up after yourself in that same RUN if possible. Example:

      # BAD
      RUN apt-get install git
      RUN apt-get install wget
      RUN apt-get install build-essential
      RUN tar xzf somefile.tgz
      RUN cd somefile
      RUN ./configure && make && make install
      # GOOD
      RUN \
          apt-get install -y git wget build-essential && \
          curl -sSL -O && \
          tar xzf somefile.tgz && \ 
          cd somefile && ./configure && make && make install && \
          cd - && rm -rf somefile somefile.tgz && \
          apt-get remove -y build-essential && \
          apt-get autoremove -y && apt-get clean

Search private registry.

sudo docker search <private domain>/<term>

Removing images and containers in bulk.

# remove all containers
sudo docker rm $(sudo docker ps -a -q)
#... or ...
sudo docker ps -aq | xargs sudo docker rm

# remove all images
sudo docker rmi $(sudo docker images -q)
#... or ...
sudo docker images -q | xargs sudo docker rmi

# remove specific images in bulk
sudo docker rmi myimage:{tagone,tagtwo,tagfive}

# remove image containing TERM
sudo docker rmi $(sudo docker images | grep TERM | awk '{ print $3 }')
#... or ...
sudo docker images | grep TERM | awk '{ print $3 }' | xargs sudo docker rmi

# remove all non-running containers
sudo docker ps -a | grep Exited | awk '{ print $NF }' | xargs sudo docker rm

Interacting with the most recent container started.

# view last container
sudo docker ps -l 

# view last container sha only
sudo docker ps -lq

# stop, start, attach, logs, etc. last container
# $ sudo docker <action> $(sudo docker ps -lq)
sudo docker start $(sudo docker ps -lq)
sudo docker stop $(sudo docker ps -lq)
sudo docker logs $(sudo docker ps -lq)
sudo docker attach $(sudo docker ps -lq)

Pushing to a private registry.

# assuming image 'jmervine/centos6-nodejs'
#               <current image name>    <private registry>:<port>/<image name>
sudo docker tag jmervine/centos6-nodejs
sudo docker push

# I then recommend removing your old image to avoid accidentally pushing it to the public registry.
sudo docker rmi jmervine/centos6-nodejs


# running, mapping container port 3000 to a randomly assigned host port
sudo docker run -d -p 3000 image/name

# running with exposed ports randomly assigned on host
sudo docker run -d -P image/name

# printing randomly assigned ports (only)
sudo docker port container_name | awk -F':' '{ print $NF }'

Copying Files TO Containers

# Directly in to a running container.
sudo docker exec -it <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt"

# When running a container.
sudo docker run -i <image_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt; /bin/bash ./"

# Via Docker volume.
# - where 'file.txt' is /path/to/host/dir/file.txt
sudo docker run -v /path/to/host/dir:/path/to/container/dir <image_id|name>

#... or ...
sudo docker run -v /path/to/host/dir:/path/to/container/dir <image_id|name>
cp /path/to/host/file.txt /path/to/host/dir/file.txt

# Via file system -- untested as of yet.
sudo cp -v /path/to/host/file.txt \
    /var/lib/docker/aufs/mnt/**$(sudo docker inspect -f '{{.Id}}' <container_id|name>)**/root/path/to/container/file.txt

Based on comments in

Building Docker Machine

What's Docker Machine?

Machine makes it really easy to create Docker hosts on local hypervisors and cloud providers. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

This wasn't as clear as I was hoping, so here's what I did.


$ uname -sm
Darwin x86_64

$ docker version
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/
$ cd $GOPATH/src/
$ git clone
$ cd machine

$ make test
$ ./script/build -os=darwin -arch=amd64
$ mv docker-machine_darwin-amd64 $GOBIN/docker-machine


$ uname -sio
Linux x86_64 GNU/Linux

$ sudo docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/
$ cd $GOPATH/src/
$ git clone
$ cd machine

$ sudo make test
$ sudo ./script/build -os=linux -arch=amd64
$ sudo chown $USER: docker-machine_linux-amd64
$ mv docker-machine_linux-amd64 $GOBIN/docker-machine

Notes: Mosh IPTable Rules

I recently picked up a Note 3, and with the larger screen I found myself wanting to use it for shelling in to my machines. So I started playing with Mosh on one of my boxes. I (like hopefully most of you) set strict IPTables rules to keep things locked down as much as possible. I quickly found that (obviously) things weren't working due to this.

To make things work, I added this line to /etc/sysconfig/iptables:

-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT

Here's the diff:

diff --git a/tmp/iptables.old b/etc/sysconfig/iptables
index d4229ca..b950f1f 100644
--- a/tmp/iptables.old
+++ b/etc/sysconfig/iptables
@@ -8,6 +8,7 @@
 -A INPUT -p icmp -j ACCEPT 
 -A INPUT -i lo -j ACCEPT 
 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT 
+-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
 -A INPUT -j REJECT --reject-with icmp-host-prohibited 
 -A FORWARD -j REJECT --reject-with icmp-host-prohibited 

Once you've added the line, simply restart IPTables like so:

sudo /etc/init.d/iptables condrestart 


Pretty Sleep using Node.js

$ node -e 'n=0; setInterval(function(){n++;if(n>=20){console.log(".");process.exit(0);}process.stdout.write(".");},1000);'

Notes: Disable SSH Strict Host Checking

When automating tasks with ssh, it can be annoying to be prompted to confirm the authenticity of a host.

The authenticity of host ' (NN.NN.NN.NN)' can't be established.
RSA key fingerprint is af:78:f8:fb:8a:ae:dd:55:f0:40:51:29:68:27:7e:7c.
Are you sure you want to continue connecting (yes/no)? 

Here's a simple way around that:

# This automatically adds the fingerprint to 
# ~/.ssh/known_hosts
ssh -o StrictHostKeyChecking=no

# This doesn't add fingerprint
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
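If you only need this for particular hosts, the same options can live in ~/.ssh/config instead of on every command line. A minimal sketch; the "test-*" Host pattern is a made-up example:

```shell
# Append a per-host block to ~/.ssh/config; "test-*" is a placeholder
# pattern -- adjust it to match your disposable/automation hosts.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host test-*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
```

Any host matching the pattern then skips the prompt without polluting your real known_hosts.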

Simple Rules for Stable, Performant and Maintainable Apps

Here are some simple and basic rules I've found for building applications that are maintainable and perform well, both in speed and stability. These rules, at one level or another, can apply to smaller modules and libraries, scripts, as well as fully featured large scale applications (both web and otherwise). In this post I will focus more on web applications, since that's where I've spent most of my time. At a higher level, they should apply to almost all areas of software development.

Notes: Finding Older Files

In this example, I'm finding all files older than 14 days and removing them.

$ find . -type f -mtime +14 | xargs rm

In this example, I'm finding top-level directories in /foo/bar/bah that are more than 30 days old and removing them.

$ find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 | xargs rm -rf

I typically use this for removing, which is why I've used that as an example. This type of thing is great for a cronjob...

# remove old files daily at 1am
0 1 * * * find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 | xargs rm -rf
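As an aside, GNU find can do the deletion itself via -delete, which avoids piping to xargs and handles filenames with spaces safely. A quick sketch in a throwaway directory (the paths here are made up for the demo):

```shell
# Demo in a scratch directory; GNU touch -d backdates the mtime.
mkdir -p /tmp/prune-demo && cd /tmp/prune-demo
touch -d '20 days ago' old.txt
touch new.txt

# -mtime +14 matches files modified more than 14 days ago; -delete removes them.
find . -type f -mtime +14 -delete

ls   # only new.txt remains
```

Note that -delete implies -depth and must come after the tests, or it deletes everything find visits.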

WiFi/Bluetooth Hack for the HP Folio 13

Someone asked for this recently, I'm not going to go in to great detail at this time, but I found it while looking through my gists and thought I would share.

Simply replace your /etc/rc.local with the following and it should unlock the hardware airplane mode on startup.

#!/bin/sh -e
# rc.local
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
# In order to enable or disable this script just change the execution
# bits.
# By default this script does nothing.

rfkill unblock wifi
sleep 5
rmmod -w -s hp-wmi
#modprobe -r -s hp-wmi

exit 0

get the gist

Installing Node.js from Source

Note: This has been tested on Ubuntu 12.04. It should also work on most CentOS (Red Hat) versions and Macs, but I haven't personally tested it.

Copy and save this to a script, then run it with bash. It expects sudo access.

#!/usr/bin/env bash
set -x
cd /tmp
rm -rf node

set -ue
git clone git://

cd node
git checkout v0.10.20

./configure --prefix=/usr
sudo make install

# vim: ft=sh:

Get the gist.

Installing Nginx with Lua Module

Download the script.

Or run the following command:

bash < <(curl -s

Note: This script has been tested on Ubuntu 12.04.2 LTS but should work on just about any unix based distro, as everything is compiled from source.

Requires wget and basic build essentials.

What's it do?

  • Download LuaJIT 2.0.1
  • Install LuaJIT 2.0.1
  • Download Nginx Development Kit (NDK)
  • Download lua-nginx-module
  • Download Nginx 1.2.8
  • Configure Nginx with lua-nginx-module
  • Install Nginx

Binary Results:

  • /opt/nginx/sbin/nginx
  • /usr/local/bin/lua
  • /usr/local/bin/luajit

Lib Results:

  • /usr/local/lib/*lua*

Include Results:

  • /usr/local/include/luajit-2.0/*

Starting Nginx

LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH /opt/nginx/sbin/nginx -c /path/to/nginx.conf

Update existing Nginx init:

Stop Nginx: sudo /etc/init.d/nginx stop

Patch /etc/init.d/nginx like so:

< PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
< DAEMON=/usr/sbin/nginx
> export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
> # ensure default configuration location
> test "$DAEMON_OPTS" || DAEMON_OPTS="-c /etc/nginx/nginx.conf"
> PATH=/opt/nginx/sbin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
> DAEMON=/opt/nginx/sbin/nginx

Note: the above may not be the best way, but it's what I had to do to get it to work and I didn't have a ton of time to mess with it.

Install MySQL on CentOS 6

A simple script to install MySQL on CentOS 6.

# sudo bash < <(curl -s

set -x
cd /tmp

rpm -qa | grep mysql-libs && yum remove -y mysql-libs

yum install -y MySQL-shared-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-client-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-server-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-devel-5.6.10-1.el6.i686.rpm

Link to gist.

Using curl to Test HTTPS with Self Signed Certs

$ curl https://<domain>/path/to.html --insecure

Also see: HTTPS: Creating Self-signed Certs.

HTTPS: Creating Self-signed Certs

Occasionally, I need to create self-signed certs when testing applications over HTTPS. This isn't really the best way to do it, as it will require anyone visiting to confirm a security exception, but it's useful in a pinch.
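A minimal sketch using openssl; the filenames, validity period, and -subj values are all placeholder choices to adjust:

```shell
# Generate an unencrypted 2048-bit RSA key and a self-signed cert in one shot.
# -nodes skips key encryption so a server can start without a passphrase;
# CN should match the test domain you'll be hitting (localhost here).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -subj "/C=US/ST=CA/O=Testing/CN=localhost"
```

Point your web server at server.key/server.crt, then test with the curl --insecure trick above.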

Installing Node.js on CentOS

I wrote the following script to install Node.js on CentOS to handle a Rails "missing a JavaScript runtime" error.

#!/usr/bin/env bash
set -ue
sudo echo "Ensure sudo access."
sudo touch /etc/yum.repos.d/naulinux-extras.repo
sudo sh -c "echo '[naulinux-extras]
name=NauLinux Extras
' > /etc/yum.repos.d/naulinux-extras.repo"
sudo yum --enablerepo=naulinux-extras install nodejs

Securing Redis via IPTables

Here's a simple script to secure Redis via IPTables (tested on CentOS 6.3):

#!/usr/bin/env bash


# this script will add an ip address to iptables
# allowing the ip address to connect to redis

# should be run with localhost first

IPADDRESS="${1:-}"

if ! test "$IPADDRESS"; then
    echo "Please enter the IP address you want to be able to connect to Redis."
    exit 1
fi

sudo iptables -A INPUT -s $IPADDRESS -p tcp -m tcp --dport 6379 -j ACCEPT
sudo bash -c 'iptables-save > /etc/sysconfig/iptables'

Then run as follows:

$ ./ localhost
$ ./ 555.555.555.555 # < your ip goes here

Published on in Linux

Killing Caps Lock on Ubuntu

  1. Create xmodmap file:

     $ xmodmap -pke > ~/.xmodmap
  2. Edit the newly created ~/.xmodmap file, changing the line starting with keycode 66 = to map to a key of your choice. Here's an example where I'm mapping Caps Lock to the Escape key:

     keycode  66 = Escape NoSymbol Escape
  3. Load your new map, disabling Caps Lock:

     xmodmap ~/.xmodmap
  4. (optionally) You can set this to autostart when you launch Unity by creating the following file:

     $ cat .config/autostart/xmodmap.desktop
     [Desktop Entry]
     Type=Application
     Name=xmodmap
     Exec=xmodmap ~/.xmodmap

Fast Hostname Completion with ZSH

Credit goes to this post for this:

In your ~/.zshrc

local knownhosts
knownhosts=( ${${${${(f)"$(<$HOME/.ssh/known_hosts)"}:#[0-9]*}%%\ *}%%,*} )
zstyle ':completion:*:(ssh|scp|sftp):*' hosts $knownhosts

In your ~/.ssh/config

HashKnownHosts no

Minimum Version Checking with BASH/ZSH

Thanks to @retr0h:

[[ $(zsh --version | awk '{print $2}') > 4.3.17 ]]

# usage

if [[ $(zsh --version | awk '{print $2}') > 4.3.17 ]]; then
    # do something that only higher zsh versions support
else
    # do something else for low versions
fi

This was my original (not so sexy) solution.

The following line will print zsh version information if the version is greater than or equal to 4.3.17; otherwise it will return blank:

zsh --version | awk '{print $2}' | awk -F'.' ' ( $1 > 4 || ( $1 == 4 && $2 > 3 ) || ( $1 == 4 && $2 == 3 && $3 >= 17 ) ) '

An example usage would be something like:

#!/usr/bin/env bash
if test "$( zsh --version | awk '{print $2}' | awk -F'.' ' ( $1 > 4 || ( $1 == 4 && $2 > 3 ) || ( $1 == 4 && $2 == 3 && $3 >= 17 ) ) ' )"; then
    # do something that only higher zsh versions support
else
    # do something else for low versions
fi
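If GNU coreutils is available, sort -V gives a less eye-watering alternative; the helper name below is my own, not from the original post:

```shell
# version_ge HAVE NEED -- succeeds when HAVE >= NEED.
# sort -V orders version strings numerically, so if NEED sorts first
# (or ties with HAVE), the requirement is satisfied.
version_ge() {
    local have="$1" need="$2"
    [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]
}

version_ge "$(zsh --version | awk '{print $2}')" "4.3.17" && \
    echo "zsh is new enough"
```

This handles multi-digit components (4.3.17 vs 4.10.2) correctly, which the plain string comparison above does not.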

Fixing Backlight on the HP Folio 13 when using Linux

This is something I ran into when originally setting up my Folio, which I did not post on. However, today a co-worker asked me how to solve this problem, so I thought I should jot it down for future reference.

  1. Open your grub config: sudo vi /etc/default/grub.
  2. Update the line containing GRUB_CMDLINE_LINUX_DEFAULT, adding acpi_backlight=vendor to the end. It should look something like this when you're done:

     # file: /etc/default/grub
     # ...
     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=on acpi_backlight=vendor"
     # ...
  3. Save and update grub with sudo update-grub.

  4. Reboot and you should have a display.

Note: Until this option is in place, you will need to use an external monitor, or drop in to grub's preboot command line and add the option there, to be able to see your screen.

Published on in Linux

RVM, irb, readline and Ubuntu

It's as easy as:

sudo apt-get install libreadline-gplv2-dev
rvm remove ruby-1.9.3-p194
rvm install ruby-1.9.3-p194



  1. Do not use rvm pkg install readline.
  2. "1.9.3-p194" is an example, should work with most versions.

Published on in Ruby, Linux

Web Pasting with "gistcli"

Found this simple Python script which allows for cli gist posts -- (thanks to pranavk).

You can install it like so:

$ mkdir ~/bin
$ echo "export PATH=~/bin:$PATH" >> ~/.zshrc
$ cd ~/bin
$ wget
$ chmod 755 gistcli
$ source ~/.zshrc

Usage examples

# simple echo to gist
echo "test gist" | gistcli

# file to gist
gistcli -f myfile.txt

# private
echo "ssssh, don't tell anyone!" | gistcli -p

# from tty, EOF from '.' on its own line
gistcli -
Foo, bar bah bing!
.

Note: There's also a slightly more mature Ruby gist cli tool at but I had issues getting it to work with my RVM setup.

IPTables Rules Examples

25 Most Frequently Used Linux IPTables Rules Examples (external)

"At a first glance, IPTables rules might look cryptic. In this article, I’ve given 25 practical IPTables rules that you can copy/paste and use it for your needs. These examples will act as a basic templates for you to tweak these rules to suite your specific requirement."

How-To: Redirecting network traffic to a new IP using IPtables (external)

"While doing a server migration, it happens that some traffic still go to the old machine because the DNS servers are not yet synced or simply because some people are using the IP address instead of the domain name.... By using iptables and its masquerade feature, it is possible to forward all traffic to the old server to the new IP. This tutorial will show which command lines are required to make this possible. In this article, it is assumed that you do not have iptables running, or at least no nat table rules for chain PREROUTING and POSTROUTING."

Published on in Linux

Fixing Wireless on the HP Folio 13 when using Linux

Okay, this post is a bit off topic but I spent almost two days non-stop working on this to figure it out and it's nowhere out there on the web.

Published on in Linux

BASH Arrays

Basic Array handling in BASH, because I always forget.

# basic iteration
items=( a b c d e )

for i in "${items[@]}"; do
    echo -n $i
done

#=> abcde

# update specific array slot
items[1]="foo"

# access specific array slot
echo ${items[1]}

#=> foo
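A few more operations in the same vein, standard bash:

```shell
items=( a b c d e )

# array length
echo ${#items[@]}     #=> 5

# append to an array
items+=( f )
echo ${items[5]}      #=> f

# slice: two elements starting at index 1
echo ${items[@]:1:2}  #=> b c
```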