On my Ubuntu laptop, I found that when the display goes to sleep, some of my active windows would consistently end up on a hidden workspace. Super annoying, especially because I found myself having to kill the apps to get them back. After a little googling, I found the following script, which, coupled with a custom keybinding, did the trick.
I placed the following file in $HOME/bin, which I already have added to my path, and named it gather.
#!/bin/bash
# From: https://github.com/mezga0153/offscreen-window-restore

width=`xrandr | grep current | awk '{print $8}'`

wmctrl -l -G | awk -v w=$width '{
    if ($8 != "unity-dash" && $8 != "Hud") {
        if ($3 >= w || $3 < 0) {
            system("wmctrl -i -r " $1 " -e 0," sqrt($3*$3) % w ",-1,-1,-1");
        }
    }
}'
I then ensured that it was executable with chmod 755 $HOME/bin/gather. Once done, I added a keybinding, for me Super-g, via System Settings > Keyboard > Shortcuts > Custom Shortcuts.
Note: I had to install wmctrl as well for this to work, with: sudo apt-get install -y wmctrl
Enjoy!
I put together this simple wrapper function to simplify boot2docker interactions on my MacBook.
Note: I refer to .bashrc as the target, as bash is the more common shell. However, this has been tested with, and I personally use it in, zsh.
$ docker reload         # purge current boot2docker environment
$ docker up             # start boot2docker and export environment
$ docker reset|fix|reup # stop and restart boot2docker
$ docker clean          # remove all orphaned image shards and all
                        # containers that aren't running - DANGEROUS
$ docker [etc]          # all other arguments are passed directly
                        # through to docker
# file: ~/.bashrc
#############################################################
# Function -- Docker/Boot2Docker
#############################################################
function docker_shellinit {
    local _shellinit="$(boot2docker shellinit)"
    eval "$(echo ${_shellinit})"
    echo "${_shellinit}" > ~/.boot2dockerrc
}

function docker_reup {
    echo "+ running vpn fix"
    docker_down

    echo "+ resetting vbox route"
    local _iface="$(VBoxManage showvminfo boot2docker-vm --machinereadable | grep hostonlyadapter | cut -d '"' -f 2)"
    echo "++ sudo route -n add -net 192.168.59.0/24 -interface ${_iface}"
    sudo route -n add -net 192.168.59.0/24 -interface ${_iface} && \
        docker_up
}

function docker_reset {
    echo "+ clearing docker variables"
    unset DOCKER_HOST
    unset DOCKER_CERT_PATH
    unset DOCKER_TLS_VERIFY
    docker_shellinit
}

function docker_up {
    echo "+ starting boot2docker"
    boot2docker up
    b2dSTATUS=$?
    docker_reset
    return $b2dSTATUS
}

function docker_down {
    echo "+ stopping boot2docker"
    boot2docker down
    return 0
}

function docker_clean {
    echo "+ clean containers"
    docker ps -a | grep 'Exited ' | awk '{ print $NF }' | xargs docker rm
    docker ps -a | grep -v 'Up ' | awk '{ print $NF }' | xargs docker rm

    echo "+ clean images"
    docker images | grep '^<none>' | awk '{ print $3 }' | xargs docker rmi
}

function b2d {
    case "$@" in
        reload)
            docker_reset
            return 0;;
        reset|fix|reup|fuck)
            docker_reup
            return $?;;
        up)
            docker_up
            return $?;;
        down)
            docker_down
            return $?;;
        clean)
            docker_clean
            return $?;;
    esac

    boot2docker $@
}

docker_exec="$(which docker)"
function docker {
    case "$@" in
        reload)
            docker_reset
            return 0;;
        reset|fix|reup|fuck)
            docker_reup
            return $?;;
        up)
            docker_up
            return $?;;
        down)
            docker_down
            return $?;;
        clean)
            docker_clean
            return $?;;
    esac

    $docker_exec $@
}
$ curl -s https://gist.githubusercontent.com/jmervine/6713d10ab05fecd6e1aa/raw/5c5f7020696e23dffa6f046816239574f42767ee/boot2dockerrc.sh >> ~/.bashrc
Here are a few tips that I've found useful while delving into Docker. For an introduction to Docker, see my post on the YP Engineering Blog. Enjoy!
Docker image size does matter. The larger your image, the more unwieldy it starts to feel. Pulling a 50MB image is far preferable to pulling a 2GB image. Some tips on building smaller images:
busybox < debian < centos < ubuntu. I try to use progrium/busybox whenever possible (which isn't all that often without serious work); otherwise, I tend to use debian.
apt-get install build-essential is going to bloat your image; don't use it unless absolutely necessary. Do as much as you can in a single RUN, as opposed to breaking things up. The downside to this is longer builds with less caching. However, it can make a huge difference in resulting image size. I once took an image from 1.3GB to 555MB, just by collapsing all my commands into a single RUN. Additionally, clean up after yourself in that same RUN if possible. Example:
# BAD
RUN apt-get install git
RUN apt-get install wget
RUN apt-get install build-essential
ADD http://somesite.com/somefile.tgz somefile.tgz
RUN tar xzf somefile.tgz
RUN cd somefile
RUN ./configure && make && make install

# GOOD
RUN \
    apt-get install -y git wget build-essential && \
    curl -sSL -O http://somesite.com/somefile.tgz && \
    tar xzf somefile.tgz && \
    cd somefile && ./configure && make && make install && \
    cd - && rm -rf somefile somefile.tgz && \
    apt-get remove -y build-essential && \
    apt-get autoremove -y && apt-get clean
sudo docker search <private domain>/<term>
# remove all containers
sudo docker rm $(sudo docker ps -a -q)
# ... or ...
sudo docker ps -aq | xargs sudo docker rm

# remove all images
sudo docker rmi $(sudo docker images -q)
# ... or ...
sudo docker images -q | xargs sudo docker rmi

# remove specific images in bulk
sudo docker rmi myimage:{tagone,tagtwo,tagfive}

# remove images containing TERM
sudo docker rmi $(sudo docker images | grep TERM | awk '{ print $3 }')
# ... or ...
sudo docker images | grep TERM | awk '{ print $3 }' | xargs sudo docker rmi

# remove all non-running containers
sudo docker ps -a | grep Exited | awk '{ print $NF }' | xargs sudo docker rm
# view last container
sudo docker ps -l

# view last container sha only
sudo docker ps -lq

# stop, start, attach, logs, etc. last container
#
#   $ sudo docker <action> $(sudo docker ps -lq)
sudo docker start $(sudo docker ps -lq)
sudo docker stop $(sudo docker ps -lq)
sudo docker logs $(sudo docker ps -lq)
sudo docker attach $(sudo docker ps -lq)
# assuming image 'jmervine/centos6-nodejs'
#
#   <current image name> <private registry>:<port>/<image name>
sudo docker tag jmervine/centos6-nodejs docker.myregistry.com:5000/jmervine/centos6-nodejs
sudo docker push docker.myregistry.com:5000/jmervine/centos6-nodejs

# I then recommend removing your old image to avoid accidentally
# pushing it to the public registry.
sudo docker rmi jmervine/centos6-nodejs
# run, randomly assigning a host port
sudo docker run -d -p 3000 image/name

# run with all exposed ports randomly assigned on the host
sudo docker run -d -P image/name

# print randomly assigned ports (only)
sudo docker port container_name | awk -F':' '{ print $NF }'
# Directly in to a running container.
sudo docker exec -it <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt"

# When running a container.
sudo docker run -i <container_id|name> \
    bash -c "echo \"$(cat /path/to/host/file.txt)\" > /path/to/container/file.txt; /bin/bash ./start.sh"

# Via Docker volume.
# - where 'file.txt' is /path/to/host/dir/file.txt
sudo docker run -v /path/to/host/dir:/path/to/container/dir <container_id|name>
# ... or ...
sudo docker run -v /path/to/host/dir:/path/to/container/dir <container_id|name> \
    cp /path/to/host/file.txt /path/to/host/dir/file.txt

# Via file system -- untested as of yet.
sudo cp -v /path/to/host/file.txt \
    /var/lib/docker/aufs/mnt/$(sudo docker inspect -f '{{.Id}}' <container_id|name>)/root/path/to/container/file.txt
Based on comments in http://stackoverflow.com/questions/22907231/copying-files-from-host-to-docker-container
Machine makes it really easy to create Docker hosts on local hypervisors and cloud providers. It creates servers, installs Docker on them, then configures the Docker client to talk to them.
This wasn't as clear as I was hoping, so here's what I did.
$ uname -sm
Darwin x86_64

$ docker version
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/github.com/docker
$ cd $GOPATH/src/github.com/docker
$ git clone https://github.com/docker/machine.git
$ cd machine
$ make test
$ ./script/build -os=darwin -arch=amd64
$ mv docker-machine_darwin-amd64 $GOBIN/docker-machine
$ uname -sio
Linux x86_64 GNU/Linux

$ sudo docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ mkdir -p $GOPATH/src/github.com/docker
$ cd $GOPATH/src/github.com/docker
$ git clone https://github.com/docker/machine.git
$ cd machine
$ sudo make test
$ sudo ./script/build -os=linux -arch=amd64
$ sudo chown $USER: docker-machine_linux-amd64
$ mv docker-machine_linux-amd64 $GOBIN/docker-machine
I recently picked up a Note 3, and with the larger screen I found myself wanting to use it for shelling in to my machines. So I started playing with Mosh on one of my boxes. I (like hopefully most of you) set strict IPTables rules to keep things locked down as much as possible. I quickly found that (obviously) things weren't working because of this.
To make things work, I added this line to /etc/sysconfig/iptables:
-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
Here's the diff:
diff --git a/tmp/iptables.old b/etc/sysconfig/iptables
index d4229ca..b950f1f 100644
--- a/tmp/iptables.old
+++ b/etc/sysconfig/iptables
@@ -8,6 +8,7 @@
 -A INPUT -p icmp -j ACCEPT
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
+-A INPUT -p udp -m udp --dport 60001:61000 -j ACCEPT
 -A INPUT -j REJECT --reject-with icmp-host-prohibited
 -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 COMMIT
Once you've added the line, simply restart IPTables like so:
sudo /etc/init.d/iptables condrestart
Enjoy!
$ node -e 'n=0; setInterval(function(){n++;if(n>=20){console.log(".");process.exit(0);}process.stdout.write(".");},1000);'
When automating tasks with ssh, it can be annoying to be prompted to confirm the authenticity of the host.
The authenticity of host 'host.example.com (NN.NN.NN.NN)' can't be established.
RSA key fingerprint is af:78:f8:fb:8a:ae:dd:55:f0:40:51:29:68:27:7e:7c.
Are you sure you want to continue connecting (yes/no)?
Here's a simple way around that:
# This automatically adds the fingerprint to
# ~/.ssh/known_hosts
ssh -o StrictHostKeyChecking=no host.example.com

# This doesn't add the fingerprint
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no host.example.com
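If you want this behavior permanently for certain hosts -- throwaway test VMs, say -- the same options can live in ~/.ssh/config. The host pattern below is just an example; scope it as narrowly as you can, since skipping host key checks weakens ssh's protection against man-in-the-middle attacks:

```
# file: ~/.ssh/config
Host *.test.example.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```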
Here are some simple and basic rules I've found for building applications that are maintainable and perform well, both in speed and stability. These rules, at one level or another, can apply to smaller modules and libraries, scripts, as well as fully featured large scale applications (both web and otherwise). In this post I will focus more on web applications, since that's where I've spent most of my time. At a higher level, they should apply to almost all areas of software development.
In this example, I'm finding all files older than 14 days and removing them.
$ find . -type f -mtime +14 | xargs rm
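One caveat: plain xargs splits on whitespace, so filenames containing spaces can break the pipeline. A slightly safer variant (assuming GNU find and xargs) uses NUL-delimited output; here's a small self-contained demo in a scratch directory:

```shell
# demo in a scratch directory
cd "$(mktemp -d)"
touch -d "20 days ago" "old file.txt"   # backdated file (GNU touch)
touch "new file.txt"

# -print0 / -0 are NUL-delimited, so filenames with spaces are handled safely
find . -type f -mtime +14 -print0 | xargs -0 rm -f

ls   # only "new file.txt" remains
```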
In this example, I'm finding top level directories in /foo/bar/bah that are more than 30 days old and removing them.
$ find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 | xargs rm -rf
I typically use this for removing, which is why I've used that as an example. This type of thing is great for a cronjob...
# remove old directories daily at 1am
0 1 * * * find /foo/bar/bah -mindepth 1 -maxdepth 1 -type d -mtime +30 | xargs rm -rf
Someone asked for this recently. I'm not going to go into great detail at this time, but I found it while looking through my gists and thought I would share.
Simply replace your /etc/rc.local with the following and it should unlock the hardware airplane mode on startup.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

rfkill unblock wifi
sleep 5
rmmod -w -s hp-wmi
#modprobe -r -s hp-wmi

exit 0
Note: This has been tested on Ubuntu 12.04. It should also work on most CentOS (Red Hat) versions and Macs, but I haven't personally tested it.
Copy and save this to a script called something like install_node.sh and run bash install_node.sh. It expects sudo access.
#!/usr/bin/env bash
set -x
cd /tmp
rm -rf node

set -ue
git clone git://github.com/joyent/node.git
cd node
git checkout v0.10.20
./configure --prefix=/usr
make
sudo make install

# vim: ft=sh:
Or run the following command:
bash < <(curl -s https://gist.github.com/jmervine/5407622/raw/nginx_w_lua.bash)
Note: This script has been tested on Ubuntu 12.04.2 LTS, but should work on just about any Unix-based distro, as everything is compiled from source. It requires wget and basic build essentials.
This configures Nginx with lua-nginx-module.

Binary results:
  /opt/nginx/sbin/nginx
  /usr/local/bin/lua
  /usr/local/bin/luajit

Lib results:
  /usr/local/lib/*lua*

Include results:
  /usr/local/include/luajit-2.0/*
LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH /opt/nginx/sbin/nginx -c /path/to/nginx.conf
Stop Nginx: sudo /etc/init.d/nginx stop
Patch /etc/init.d/nginx like so:
13,14c13,18
< PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
< DAEMON=/usr/sbin/nginx
---
> export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
>
> # ensure default configuration location
> test "$DAEMON_OPTS" || DAEMON_OPTS="-c /etc/nginx/nginx.conf"
> PATH=/opt/nginx/sbin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
> DAEMON=/opt/nginx/sbin/nginx
Note: the above may not be the best way, but it's what I had to do to get it to work and I didn't have a ton of time to mess with it.
A simple script to install MySQL on CentOS 6.
#!/usr/bin/env bash
#
# sudo bash < <(curl -s https://gist.github.com/jmervine/5373441/raw/)
set -x
cd /tmp

wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-shared-5.6.10-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-client-5.6.10-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-server-5.6.10-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
wget http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-devel-5.6.10-1.el6.i686.rpm/from/http://cdn.mysql.com/

rpm -qa | grep mysql-libs && yum remove -y mysql-libs

yum install -y MySQL-shared-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-client-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-server-5.6.10-1.el6.x86_64.rpm
yum install -y MySQL-devel-5.6.10-1.el6.i686.rpm
$ curl https://<domain>/path/to.html --insecure
Also see: HTTPS: Creating Self-signed Certs.
Occasionally, I need to create self-signed certs when testing application through https. This isn't really the best way to do it, as it will require anyone visiting to confirm a security exception, but it's useful in a pinch.
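For reference, a self-signed cert for local testing can be generated with a single openssl command. The CN and filenames here are just examples:

```shell
# Generate a 2048-bit key and a self-signed cert valid for one year,
# with no passphrase on the key (-nodes), non-interactively (-subj).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=localhost" \
    -keyout server.key -out server.crt
```

Point your server at server.key and server.crt, then hit it with curl --insecure as above.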
I wrote the following script to install Node.js on CentOS, to handle a Rails "missing a JavaScript runtime environment" error.
#!/usr/bin/env bash
set -ue

sudo echo "Ensure sudo access."

sudo touch /etc/yum.repos.d/naulinux-extras.repo
sudo sh -c "echo '[naulinux-extras]
name=NauLinux Extras
baseurl=http://downloads.naulinux.ru/pub/NauLinux/6.2/\$basearch/Extras/RPMS/
enabled=0
gpgcheck=1
gpgkey=http://downloads.naulinux.ru/pub/NauLinux/RPM-GPG-KEY-linux-ink
' > /etc/yum.repos.d/naulinux-extras.repo"

sudo yum --enablerepo=naulinux-extras install nodejs
Here's a simple script to secure Redis via IPTables (tested on CentOS 6.3):
#!/usr/bin/env bash
# redis_secure.sh
#
# this script will add an ip address to iptables
# allowing the ip address to connect to redis
#
# should be run with localhost first

IPADDRESS="$1"

if ! test "$IPADDRESS"; then
    echo "Please enter the IP address you want to be able to connect to Redis."
    exit 1
fi

sudo iptables -A INPUT -s $IPADDRESS -p tcp -m tcp --dport 6379 -j ACCEPT
sudo bash -c 'iptables-save > /etc/sysconfig/iptables'
Then run as follows:
$ ./redis_secure.sh localhost
$ ./redis_secure.sh 555.555.555.555 # < your ip goes here
Create an xmodmap file:
$ xmodmap -pke > ~/.xmodmap
Edit the newly created ~/.xmodmap file, changing the line starting with keycode 66 = to map to a key of your choice. Here's an example where I'm mapping Caps Lock to the Escape key:
keycode 66 = Escape NoSymbol Escape
Load your new map, disabling Caps Lock:
xmodmap ~/.xmodmap
Optionally, you can set this to autostart when you launch Unity by creating the following file:
$ cat .config/autostart/xmodmap.desktop
[Desktop Entry]
Type=Application
Exec=xmodmap ~/.xmodmap
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name[en_US]=xmodmap
Name=xmodmap
Comment[en_US]=
Comment=
Credit goes to this post for this: http://nion.modprobe.de/blog/archives/521-hostname-completion-with-zsh.html
In your ~/.zshrc:
local knownhosts
knownhosts=( ${${${${(f)"$(<$HOME/.ssh/known_hosts)"}:#[0-9]*}%%\ *}%%,*} )
zstyle ':completion:*:(ssh|scp|sftp):*' hosts $knownhosts
In your ~/.ssh/config:
HashKnownHosts no
[[ $(zsh --version | awk '{print $2}') > 4.3.17 ]]

# usage
if [[ $(zsh --version | awk '{print $2}') > 4.3.17 ]]; then
    : # do something that only higher zsh versions support
else
    : # do something else for lower versions
fi
This was my original (not so sexy) solution.
The following line will print the zsh version information if the version is greater than or equal to 4.3.17; otherwise it will print nothing:

zsh --version | awk '{print $2}' | awk -F'.' ' ( $1 > 4 || ( $1 == 4 && $2 > 3 ) || ( $1 == 4 && $2 == 3 && $3 >= 17 ) ) '

An example usage would be something like:
#!/usr/bin/env bash
if test "$( zsh --version | awk '{print $2}' | \
    awk -F'.' ' ( $1 > 4 || ( $1 == 4 && $2 > 3 ) || ( $1 == 4 && $2 == 3 && $3 >= 17 ) ) ' )"
then
    : # do something that only higher zsh versions support
else
    : # do something else for lower versions
fi
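One thing worth noting: the [[ ... > ... ]] comparison in the first solution is lexicographic, so it would misorder versions like 4.10 against 4.3. As an alternative sketch (assuming GNU sort with -V support; version_ge is a hypothetical helper name, not a standard command):

```shell
# version_ge A B -- succeeds if version A >= version B.
# sort -V orders by numeric version components, so 4.10 > 4.3 as expected.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge 4.10.0 4.3.17; then
    echo "new enough"   # prints, since 4.10.0 >= 4.3.17
fi
```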
This is something I ran in to when originally setting up my Folio, which I did not post on. However, today a co-worker asked me how to solve this problem, so I thought I should jot it down for future reference.
Open the grub config with sudo vi /etc/default/grub. Update the line containing GRUB_CMDLINE_LINUX_DEFAULT, adding acpi_backlight=vendor to the end. It should look something like this when you're done:
# file: /etc/default/grub
# ...
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=on acpi_backlight=vendor"
# ...
Save and update grub with sudo update-grub.
Note: Until this option is added, you will need to use an external monitor, or drop in to grub's preboot command line and add it there, in order to see your screen.
It's as easy as:
sudo apt-get install libreadline-gplv2-dev
rvm remove ruby-1.9.3-p194
rvm install ruby-1.9.3-p194
Done.
rvm pkg install readline
Found this simple Python script which allows for cli gist posts -- thanks to pranavk.
You can install it like so:
$ mkdir ~/bin
$ echo 'export PATH=~/bin:$PATH' >> ~/.zshrc
$ cd ~/bin
$ wget https://raw.github.com/pranavk/gist-cli/master/gistcli
$ chmod 755 gistcli
$ source ~/.zshrc
Usage examples:
# simple echo to gist
echo "test gist" | gistcli

# file to gist
gistcli -f myfile.txt

# private
echo "ssssh, don't tell anyone!" | gistcli -p

# from tty, EOF from '.' on its own line
gistcli -
Foo, bar bah bing!
.
Note: There's also a slightly more mature Ruby gist cli tool at github.com/defunkt/gist, but I had issues getting it to work with my RVM setup.
"At a first glance, IPTables rules might look cryptic. In this article, I’ve given 25 practical IPTables rules that you can copy/paste and use it for your needs. These examples will act as a basic templates for you to tweak these rules to suite your specific requirement."
"While doing a server migration, it happens that some traffic still go to the old machine because the DNS servers are not yet synced or simply because some people are using the IP address instead of the domain name.... By using iptables and its masquerade feature, it is possible to forward all traffic to the old server to the new IP. This tutorial will show which command lines are required to make this possible. In this article, it is assumed that you do not have iptables running, or at least no nat table rules for chain PREROUTING and POSTROUTING."
Okay, this post is a bit off topic but I spent almost two days non-stop working on this to figure it out and it's nowhere out there on the web.
Basic Array handling in BASH, because I always forget.
# basic iteration
items=( a b c d e )
for i in "${items[@]}"
do
    echo -n $i
done
#=> abcde

# update specific array slot
items[1]="foo"

# access specific array slot
echo ${items[1]}
#=> foo
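A few more array operations along the same lines, since these are the ones I forget most (assuming bash 3.1+ for the += append syntax):

```shell
items=( a b c d e )

# append an element
items+=( f )

# array length
echo ${#items[@]}     #=> 6

# slice: 3 elements starting at index 1
echo ${items[@]:1:3}  #=> b c d

# iterate with indexes
for i in "${!items[@]}"; do
    echo "$i=${items[$i]}"
done
```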