Random notes go here!
This might feel a little redundant with the hacks section, and truthfully, a lot of what I add here I might also include there. The main difference is that I'll strive to only put things here that are set up in a more reusable way. If that doesn't work out, I'll probably end up combining the sections later.
Here are a few tips that I've found useful while delving into Docker. For an introduction to Docker, see my post on the YP Engineering Blog. Enjoy!
Docker image size does matter. The larger your image, the more unwieldy it starts to feel. Pulling a 50MB image is far preferable to pulling a 2GB image. Some tips on building smaller images:
In rough order of size: busybox < debian < centos < ubuntu. I try to use progrium/busybox whenever possible (which isn't all that often without serious work); otherwise, I tend to use debian.
apt-get install build-essential is going to bloat your image; don't use it unless absolutely necessary. Do as much as you can in a single RUN, as opposed to breaking things up. The downside to this is longer builds with less caching. However, it can make a huge difference in the resulting image size. I once took an image from 1.3GB to 555MB just by collapsing all my commands into a single RUN. Additionally, clean up after yourself in that same RUN if possible. Example:
# BAD
RUN apt-get install -y git
RUN apt-get install -y wget
RUN apt-get install -y build-essential
ADD http://somesite.com/somefile.tgz /
RUN tar xzf somefile.tgz
RUN cd somefile && ./configure && make && make install

# GOOD
RUN \
    apt-get install -y git wget build-essential && \
    curl -sSL -O http://somesite.com/somefile.tgz && \
    tar xzf somefile.tgz && \
    cd somefile && ./configure && make && make install && \
    cd - && rm -rf somefile somefile.tgz && \
    apt-get remove -y build-essential && \
    apt-get autoremove -y && apt-get clean
Searching a private registry for an image:

sudo docker search <private domain>/<term>
# remove all containers
sudo docker rm $(sudo docker ps -a -q)
# ... or ...
sudo docker ps -aq | xargs sudo docker rm

# remove all images
sudo docker rmi $(sudo docker images -q)
# ... or ...
sudo docker images -q | xargs sudo docker rmi

# remove specific images in bulk
sudo docker rmi myimage:{tagone,tagtwo,tagfive}

# remove images containing TERM
sudo docker rmi $(sudo docker images | grep TERM | awk '{ print $3 }')
# ... or ...
sudo docker images | grep TERM | awk '{ print $3 }' | xargs sudo docker rmi

# remove all non-running containers
sudo docker ps -a | grep Exited | awk '{ print $NF }' | xargs sudo docker rm
# view last container
sudo docker ps -l

# view last container sha only
sudo docker ps -lq

# stop, start, attach, logs, etc. the last container
#
#     sudo docker <action> $(sudo docker ps -lq)
#
sudo docker start $(sudo docker ps -lq)
sudo docker stop $(sudo docker ps -lq)
sudo docker logs $(sudo docker ps -lq)
sudo docker attach $(sudo docker ps -lq)
# assuming image 'jmervine/centos6-nodejs'
#
#     <current image name> <private registry>:<port>/<image name>
sudo docker tag jmervine/centos6-nodejs docker.myregistry.com:5000/jmervine/centos6-nodejs
sudo docker push docker.myregistry.com:5000/jmervine/centos6-nodejs

# I then recommend removing your old image to avoid
# accidentally pushing it to the public registry.
sudo docker rmi jmervine/centos6-nodejs
# run, randomly assigning a host port to the exposed port
sudo docker run -d -p 3000 image/name

# run with all exposed ports randomly assigned on the host
sudo docker run -d -P image/name

# print the randomly assigned ports (only)
sudo docker port container_name | awk -F':' '{ print $NF }'
# Directly, into a running container.
sudo docker exec -it <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt"

# When running a container.
sudo docker run -i <image_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt; /bin/bash ./start.sh"

# Via a Docker volume.
# - where 'file.txt' is /path/to/host/dir/file.txt
sudo docker run -v /path/to/host/dir:/path/to/container/dir <image_id|name>
# ... or ...
sudo docker run -v /path/to/host/dir:/path/to/container/dir <image_id|name> \
    cp /path/to/container/dir/file.txt /path/to/container/file.txt

# Via the host file system -- untested as of yet.
sudo cp -v /path/to/host/file.txt \
    /var/lib/docker/aufs/mnt/$(sudo docker inspect -f '{{.Id}}' <container_id|name>)/root/path/to/container/file.txt
Based on comments in http://stackoverflow.com/questions/22907231/copying-files-from-host-to-docker-container
Machine makes it really easy to create Docker hosts on local hypervisors and cloud providers. It creates servers, installs Docker on them, then configures the Docker client to talk to them.
This wasn't as clear as I was hoping, so here's what I did.
$ uname -sm
Darwin x86_64

$ docker version
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/github.com/docker
$ cd $GOPATH/src/github.com/docker
$ git clone https://github.com/docker/machine.git
$ cd machine
$ make test
$ ./script/build -os=darwin -arch=amd64
$ mv docker-machine_darwin-amd64 $GOBIN/docker-machine
$ uname -sio
Linux x86_64 GNU/Linux

$ sudo docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/github.com/docker
$ cd $GOPATH/src/github.com/docker
$ git clone https://github.com/docker/machine.git
$ cd machine
$ sudo make test
$ sudo ./script/build -os=linux -arch=amd64
$ sudo chown $USER: docker-machine_linux-amd64
$ mv docker-machine_linux-amd64 $GOBIN/docker-machine
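Once the binary is in place, usage looks roughly like this. Note the virtualbox driver and the machine name "dev" are just examples, and the exact flags vary between docker-machine versions:

# create a Docker host on a local hypervisor
$ docker-machine create -d virtualbox dev

# list machines and their state
$ docker-machine ls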
I just stumbled upon a cool feature of Node.js for adding timing to applications: console.time and console.timeEnd.
// pointless example that shows all the parts
console.time('timer');

setTimeout(function() {
    console.timeEnd('timer');
}, 500);

// => timer: 511ms
Note: I've heard (and in some cases proven) that most console.* methods are not asynchronous (i.e. blocking) and therefore should never be used in production code. Notice that in the above example, console.time and console.timeEnd appear to have about 11ms of overhead on my machine.
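If you want to sanity-check that overhead yourself, here's a rough sketch comparing against process.hrtime(); the 'timer' label is arbitrary:

// time the same 500ms timeout with both mechanisms
var start = process.hrtime();
console.time('timer');

setTimeout(function() {
    console.timeEnd('timer');
    var diff = process.hrtime(start);
    // convert [seconds, nanoseconds] to milliseconds
    console.log('hrtime: %dms', (diff[0] * 1e9 + diff[1]) / 1e6);
}, 500);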
After using the Express command line generation utility, you get a very basic layout.jade. Here are the standard modifications I make for use with BootstrapCDN, sketched below.
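This is just a sketch assuming Bootstrap 3 pulled from BootstrapCDN; the version paths and the local stylesheet name are examples, not gospel:

doctype html
html
  head
    title= title
    //- BootstrapCDN stylesheet (version path is an example)
    link(rel='stylesheet', href='//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css')
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    block content
    //- Bootstrap's JS requires jQuery
    script(src='//code.jquery.com/jquery-1.12.4.min.js')
    script(src='//maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js')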
I recently picked up a Note 3, and with the larger screen I found myself wanting to use it for shelling into my machines. So I started playing with Mosh on one of my boxes. I (like hopefully most of you) set strict IPTables rules to keep things locked down as much as possible. I quickly found that (obviously) things weren't working because of this.
To make things work, I added this line to /etc/sysconfig/iptables:
-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
Here's the diff:
diff --git a/tmp/iptables.old b/etc/sysconfig/iptables
index d4229ca..b950f1f 100644
--- a/tmp/iptables.old
+++ b/etc/sysconfig/iptables
@@ -8,6 +8,7 @@
 -A INPUT -p icmp -j ACCEPT
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
+-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
 -A INPUT -j REJECT --reject-with icmp-host-prohibited
 -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 COMMIT
Once you've added the line, simply restart IPTables like so:
sudo /etc/init.d/iptables condrestart
Enjoy!
// median of a numeric array; note that sort() mutates the
// array passed in
function median(values) {
    // numeric sort; the default sort is lexicographic
    values.sort(function(a, b) { return a - b; });

    var half = Math.floor(values.length / 2);

    if (values.length % 2) {
        // odd length: the middle element
        return values[half];
    } else {
        // even length: mean of the two middle elements
        return (values[half - 1] + values[half]) / 2.0;
    }
}
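For example:

console.log(median([3, 1, 2]));    // => 2
console.log(median([4, 1, 3, 2])); // => 2.5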
When automating tasks with ssh, it can be annoying to be prompted to confirm the authenticity of the host.
The authenticity of host 'host.example.com (NN.NN.NN.NN)' can't be established.
RSA key fingerprint is af:78:f8:fb:8a:ae:dd:55:f0:40:51:29:68:27:7e:7c.
Are you sure you want to continue connecting (yes/no)?
Here's a simple way around that:
# This automatically adds the fingerprint to
# ~/.ssh/known_hosts
ssh -o StrictHostKeyChecking=no host.example.com

# This doesn't add the fingerprint
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no host.example.com
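To make that permanent for a given host, the same options can live in ~/.ssh/config:

Host host.example.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null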
In this example, I'm finding all files older than 14 days and removing them.
$ find . -type f -mtime +14 | xargs rm
In this example, I'm finding top level directories in /foo/bar/bah that are more than 30 days old and removing them.
$ find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 | xargs rm -rf
I typically use this for removing old files and directories, which is why I've used that as an example. This type of thing is great for a cron job...
# remove old directories daily at 1am
0 1 * * * find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 | xargs rm -rf
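One caveat: plain xargs mangles names containing spaces; if that's a possibility, the null-delimited form is safer:

$ find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 -print0 | xargs -0 rm -rf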
Being a DevOps guy, I write gems, scripts, and hacks that tend to hook into system binaries. I know, it would probably be better to write C bindings, but heretofore that hasn't been something I've wanted to tackle. Additionally, I almost always write tests for my stuff. I've come across the problem where my tests pass locally, but not on Travis-CI.
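The usual culprit is a system binary that exists on my box but not on Travis's workers. I won't claim this is exactly what I did, but one generic way to cope is to detect missing binaries and skip the tests that need them; here's a sketch using minitest and a hypothetical have_binary? helper:

require 'minitest/autorun'

# true when `bin` is on the PATH; `which` exits non-zero otherwise
def have_binary?(bin)
  system("which #{bin} > /dev/null 2>&1")
end

class CurlWrapperTest < Minitest::Test
  def test_shells_out_to_curl
    skip "curl not installed on this box" unless have_binary?('curl')
    assert system('curl --version > /dev/null')
  end
end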
Someone asked for this recently. I'm not going to go into great detail at this time, but I found it while looking through my gists and thought I would share.
Simply replace your /etc/rc.local with the following and it should unlock the hardware airplane mode on startup.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

rfkill unblock wifi
sleep 5
rmmod -w -s hp-wmi
#modprobe -r -s hp-wmi

exit 0
restart :: restart application using forever
# This adds your local directory to your NODE_PATH
NODE_EXEC = NODE_PATH=.:$(NODE_PATH)

# This is for local (non `-g`) npm installs.
NODE_MODS = ./node_modules/.bin

# Some good `forever` options.
FOREVER_OPTS = -p ./logs \
               -l server_out.log \
               -o ./logs/server_out.log \
               -e ./logs/server_err.log \
               --append \
               --plain \
               --minUptime 1000 \
               --spinSleepTime 1000

start: setup/dirs
	# starting app in server mode
	$(NODE_EXEC) $(NODE_MODS)/forever start $(FOREVER_OPTS) server.js

stop:
	# stopping app in server mode
	$(NODE_EXEC) $(NODE_MODS)/forever stop $(FOREVER_OPTS) server.js

restart: setup/dirs
	# restarting app in server mode
	$(NODE_EXEC) $(NODE_MODS)/forever restart $(FOREVER_OPTS) server.js

setup/dirs:
	# creating required directories for `forever`
	mkdir -p logs pids
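With that Makefile in place (and assuming forever is installed locally, not globally), usage is just:

$ npm install forever
$ make start
$ make restart
$ make stop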
Note: This has been tested on Ubuntu 12.04. It should also work on most CentOS (Red Hat) versions and Macs, but I haven't personally tested it.
Copy and save this to a script called something like install_node.sh and run bash install_node.sh. It expects sudo access.
#!/usr/bin/env bash
set -x

cd /tmp
rm -rf node

set -ue
git clone git://github.com/joyent/node.git
cd node
git checkout v0.10.20
./configure --prefix=/usr
make
sudo make install

# vim: ft=sh:
require 'sinatra'

get '/' do
  "Hello World"
end
var express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send("Hello World!");
});

app.listen(8000);
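To run them (hello.rb and hello.js are hypothetical file names; Sinatra comes from gem install sinatra, Express from npm install express):

$ ruby hello.rb    # Sinatra on its default port (4567)
$ node hello.js    # Express on port 8000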
A simple script to install MySQL on CentOS 6.
#!/usr/bin/env bash
#
# sudo bash < <(curl -s https://gist.github.com/jmervine/5373441/raw/)
set -x

cd /tmp

wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-shared-5.6.10-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-client-5.6.10-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-server-5.6.10-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-devel-5.6.10-1.el6.i686.rpm/from/http://cdn.mysql.com/

rpm -qa | grep mysql-libs && yum remove -y mysql-libs

yum install -y MySQL-shared-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-client-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-server-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-devel-5.6.10-1.el6.i686.rpm
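Once that finishes, you'll want to start the server and lock it down. I believe the Oracle RPMs register the init service as 'mysql' on CentOS 6, but verify with chkconfig --list if unsure:

sudo service mysql start
sudo mysql_secure_installation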