PostgreSQL Backup and Restore Between Docker-composed Containers

The importance of backup and recovery really only becomes clear in the face of catastrophic data loss. I’ve got a slick little Padrino app that’s starting to generate traffic (and ad revenue). As such, it would be a real shame if my data got lost and I had to start from scratch.

docker-compose

This is what I’m working with:

# docker-compose.yml
nginx:
  restart: always
  build: ./
  volumes:
    # Page content
    - ./:/home/app/webapp
  links:
    - postgres
  environment:
    - PASSENGER_APP_ENV=production
    - RACK_ENV=production
    - VIRTUAL_HOST=example.com
    - LETSENCRYPT_HOST=example.com
    - LETSENCRYPT_EMAIL=daniel@example.com
postgres:
  restart: always
  image: postgres
  environment:
    - POSTGRES_USER=root
    - POSTGRES_PASSWORD=secretpassword
  volumes_from:
    - myapp_data

It’s the old Compose Version 1 syntax, but what follows should still apply. As with all such compositions, I write database data to a data-only container. Though the data persists apart from the Dockerized Postgres container, that container still needs to be running (e.g., docker-compose up -d) for the commands below to work.

Dump the data

Assuming the containers are up and running, the appropriate command looks like this:

docker-compose exec -u <your_postgres_user> <postgres_service_name> pg_dump -Fc <database_name_here> > db.dump

Given the composition above, the command I actually execute is this:

docker-compose exec --user root postgres pg_dump -Fc myapp_production > db.dump

At this point, the db.dump file can be transferred to a remote server through whatever means are appropriate (I set this all up in Capistrano to make it super easy).
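
If Capistrano feels like overkill, a plain scp to a backup host works just as well (the user and host below are hypothetical placeholders):

scp db.dump user@backup.example.com:~/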

Restore the data

Another assumption: a new database is up and running on the remote backup machine (ideally using the same docker-compose.yml file above).

The restore command looks like this:

docker-compose exec -i -u <your_postgres_user> <postgres_service_name> pg_restore -C -d postgres < db.dump

The command I execute is this:

docker-compose exec -i -u root postgres pg_restore -C -d postgres < db.dump
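
To sanity-check the restore, listing the databases through the same service should show the restored database (this assumes the same root user as in the composition above):

docker-compose exec -u root postgres psql -l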

Done!

Nginx Proxy, Let's Encrypt Companion, and Docker Compose Version 3

I recently discovered that I don’t need to manually create data-only containers with docker-compose anymore. A welcome feature, but one that comes with all the usual migration overhead. I rely heavily on nginx-proxy and letsencrypt-nginx-proxy-companion. Getting it all to work in the style of docker-compose version 3 took a bit of doing.

My previous tried and true approach is getting pretty stale. It is time to up my Docker game…

My Site

nginx-proxy can proxy multiple sites, but for demonstration purposes, I’m only serving up one with nginx. I like to put all my individual Docker compositions in their own directories:

mkdir mysite && cd mysite

Optional

The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Hello, world!</title>
  </head>
  <body>
    Hello, world!
  </body>
</html>

docker-compose

It’s awesome that I can create data-only containers in my docker-compose.yml, but now I’ve got to manually create a network bridge:

docker network create nginx-proxy
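
To confirm the bridge exists before wiring any containers to it, a quick check:

docker network ls | grep nginx-proxy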

Proxied containers also need to know about this network in their own docker-compose.yml files…

Copy and paste the code below:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: always
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=site.example.com
      - LETSENCRYPT_EMAIL=email@example.com
    volumes:
      - ./:/usr/share/nginx/html
networks:
  default:
    external:
      name: nginx-proxy

This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).

Start docker-compose:

docker-compose up -d

The site won’t be accessible yet. That comes next.

nginx-proxy

As before, put the nginx-proxy Docker composition in its own directory:

cd ..
mkdir nginx-proxy && cd nginx-proxy

Create a directory in which to store the Let’s Encrypt certificates:

mkdir certs

Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
volumes:
  vhost:
networks:
  default:
    external:
      name: nginx-proxy

This allows nginx-proxy to combine forces with letsencrypt-nginx-proxy-companion, all in one docker-compose file.

Start docker-compose:

docker-compose up -d

If all is well, you should be able to access your site at the address configured.
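
A quick way to confirm both the proxying and the certificate from the command line (substitute your own domain for the placeholder):

curl -I https://example.com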

A Home-Based Ubuntu 16.04 Production Server with Salvaged Equipment

Preface

As I’ve often griped before, my cloud service provider (cloudatcost.com) is not exactly reliable. I’m currently on Day 3 waiting for their tech support to address several downed servers. Three days isn’t even that bad considering I’ve waited up to two weeks in the past. In any case, though I wish them success, I’m sick of their nonsense and am starting to migrate my servers out of the cloud and into my house. Given that this is a cloud company that routinely loses its customers’ data, it’s prudent to prepare for its likely bankruptcy and closure.

I have an old smashed-up AMD Quad Core laptop I’m going to use as a server. The screen was totally broken, so as a laptop it’s kind of useless anyway. It’s a little lightweight on resources (only 4 GB of RAM), but this is much more than I’m used to. I used unetbootin to create an Ubuntu 16.04 bootable USB and installed the base system.

What follows are the common steps I take when setting up a production server. This bare minimum approach is a process I repeat frequently, so it’s worth documenting here. Once the OS is installed…

Change the root password

The install should put the created user into the sudo group. Change the root password with that user:

sudo su
passwd
exit

Update OS

An interesting thing happened during install… I couldn’t install additional software (e.g., OpenSSH), so I skipped it. When it came time to install vim, I discovered I didn’t have access to any of the repositories. The answer here shed some light on the situation, but didn’t really resolve anything.

I ended up copying the example sources.list from the documentation to fix the problem:

sudo cp /usr/share/doc/apt/examples/sources.list /etc/apt/sources.list

I found out later that this contained all repositories for Ubuntu 14.04. So I ended up manually pasting this in /etc/apt/sources.list:

# deb cdrom:[Ubuntu 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.1)]/ xenial main restricted
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
# deb http://archive.ubuntu.com/ubuntu xenial-proposed main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner

After that, update/upgrade worked (before replacing sources.list, first confirm that these commands actually fail for you):

sudo apt update
sudo apt upgrade

Install OpenSSH

The first thing I do is configure my machine for remote access. As above, I couldn’t install OpenSSH during the OS installation, for some reason. After sources.list was sorted out, it all worked:

sudo apt-get install openssh-server

Check the official Ubuntu docs for configuration tips.

While I’m here, though, I need to set a static IP…

sudo vi /etc/network/interfaces

Paste this (or similar) under # The primary network interface, as per lewis4u.

auto enp0s25
iface enp0s25 inet static
address 192.168.0.150
netmask 255.255.255.0
gateway 192.168.0.1

Then flush, restart, and verify that the settings are correct:

sudo ip addr flush enp0s25
sudo systemctl restart networking.service
ip add

Start ssh

The SSH service didn’t start running automatically after install. I did this to make it run on startup:

sudo systemctl enable ssh

And then I did this, which actually starts the service:

sudo service ssh start
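
To confirm the daemon is actually up before moving on:

sudo systemctl status ssh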

Open a port on the router

This step, of course, depends entirely on the make and model of router behind which the server is operating. For me, I access the administrative control panel by logging in at 192.168.0.1 on my LAN.

I found the settings I needed to configure on my Belkin router under Firewall -> Virtual Servers. I want to serve up web apps (both HTTP and HTTPS) and allow SSH access. As such, I configured three port-forwarding rules by providing the following information for each:

  1. Description
  2. Inbound ports (i.e., 22, 80, and 443)
  3. TCP traffic type (no UDP)
  4. The private/static address I just set on my server
  5. Inbound private ports (22, 80, and 443 respectively)

Set up DNS

Again, this depends on where you registered your domain. I pointed a domain I have registered with GoDaddy to my modem’s IP address, which now receives requests and forwards them to my server.
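
Once the DNS record has propagated, the domain should resolve to the router’s public IP. From any machine with dig installed (placeholder domain shown):

dig +short example.com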

Login via SSH

With my server, router, and DNS all properly configured, I don’t need to be physically sitting in front of my machine anymore. As such, I complete the following steps logged in remotely.

Set up app user

I like to have one user account control app deployment. Toward that end, I create an app user and add him to the sudo group:

sudo adduser app
sudo adduser app sudo

Install the essentials

git

Won’t get far without git:

sudo apt install git

vim

My favourite editor is vim, which is not installed by default.

sudo apt install vim

NERDTree

My favourite vim plugin:

mkdir -p ~/.vim/autoload ~/.vim/bundle
cd ~/.vim/autoload
wget https://raw.github.com/tpope/vim-pathogen/HEAD/autoload/pathogen.vim
vim ~/.vimrc

Add this to the .vimrc file:

call pathogen#infect()
map <C-n> :NERDTreeToggle<CR>
set softtabstop=2
set expandtab

Save and exit. Then install NERDTree itself:

cd ~/.vim/bundle
git clone https://github.com/scrooloose/nerdtree.git

Now, when running vim, hit ctrl-n to toggle the file tree view.

Docker

Current installation instructions can be found here. The distilled process is as follows:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

You can verify the key fingerprint:

sudo apt-key fingerprint 0EBFCD88

Which should return something like this:

pub 4096R/0EBFCD88 2017-02-22
Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <docker@docker.com>
sub 4096R/F273FCD8 2017-02-22

Add repository and update:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update

Install:

sudo apt install docker-ce

Create a docker user group:

sudo groupadd docker

Add yourself to the group:

sudo usermod -aG docker $USER

Add the app user to the group as well:

sudo usermod -aG docker app

Log out, log back in, and test Docker without sudo:

docker run hello-world

If everything works, you should see the usual Hello, World! message.

Configure docker to start on boot:

sudo systemctl enable docker

docker-compose

This downloads the current stable version. Cross reference it with that offered here.

su
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Install command completion while still root:

curl -L https://raw.githubusercontent.com/docker/compose/master/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
exit

Test:

docker-compose --version

node

These steps are distilled from here.

cd ~
curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh

Now install:

sudo apt-get install nodejs build-essential
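
A quick version check confirms the NodeSource package was picked up:

nodejs -v
npm -v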

Ruby

The steps I followed conclude with installing Rails. I only install Ruby:

sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev nodejs

Using rbenv:

cd
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL
rbenv install 2.4.0

That last step can take a while…

rbenv global 2.4.0
ruby -v

Install Bundler:

gem install bundler

Done!

There it is… all my usual favourites on some busted up piece of junk laptop. I expect it to be exactly 1000% more reliable than cloudatcost.com.

mongodump and mongorestore Between Docker-composed Containers

I’m trying to refine the process by which I backup and restore Dockerized MongoDB containers. My previous effort is basically a brute-force copy-and-paste job on the container’s data directory. It works, but I’m concerned about restoring data between containers installed with different versions of MongoDB. Apparently this is tricky enough even with the benefit of recovery tools like mongodump and mongorestore, which is what I’m using below.

In short, I need to dump my data from a data-only MongoDB container, bundle the files uploaded to my Express application, and restore it all on another server. Here’s how I did it…

Dump the data

I’m a big fan of docker-compose. I use it to manage all my containers. The following method requires that the composition be running so that mongodump can be run against the running Mongo container (which, in turn, accesses the data-only container). Assuming the name of the container is myapp_mongo_1…

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongodump --host $MONGO_PORT_27017_TCP_ADDR'

This will create a root-owned directory called myapp-mongo-dump in your current directory. It contains all the BSON and JSON meta-data for this database. For convenience, I change ownership of this resource:

sudo chown -R user:user myapp-mongo-dump

Then, for transport, I archive the directory:

tar zcvf myapp-mongo-dump.tar.gz myapp-mongo-dump

Archive the uploaded files

My app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar zcvf uploads.tar.gz uploads

Now I have two archived files: myapp-mongo-dump.tar.gz and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp myapp-mongo-dump.tar.gz uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar zxvf uploads.tar.gz
tar zxvf myapp-mongo-dump.tar.gz

Then I restore the data to the data-only container through the running Mongo instance (assumed to be called myapp_mongo_1):

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongorestore --host $MONGO_PORT_27017_TCP_ADDR'

With that, all data is restored. I didn’t even have to restart my containers to begin using the app on its new server.
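
For peace of mind, a quick query through the running container (using the same --link pattern as above) should list the restored database among the others:

docker run --rm --link myapp_mongo_1:mongo mongo bash -c 'mongo --host $MONGO_PORT_27017_TCP_ADDR --eval "printjson(db.getMongo().getDBNames())"'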

MongoDB backup and restore between Dockerized Node apps

My bargain-basement cloud service provider, CloudAtCost, recently lost one of my servers and all the data on it. This loss was exacerbated by the fact that I didn’t back up my MongoDB data somewhere else. Now I’m working out the exact process after the fact so that I don’t suffer this loss again (it’s happened twice now with CloudAtCost, but hey, the price is right).

The following is a brute-force backup and recovery process. I suspect this approach has its weaknesses in that it may depend upon version consistency between the MongoDB containers. This is not ideal for someone like myself who always installs the latest version when creating new containers. I aim to develop a more flexible process soon.

Context

I have a server running Ubuntu 16.04, which, in turn, is serving up a Dockerized Express application (Nginx, MongoDB, and the app itself). The MongoDB data is backed up in a data-only container. To complicate matters, the application allows file uploads, which are being stored on the file system in the project’s root.

I need to dump the data from the data-only container, bundle the uploaded files, and restore it all on another server. Here’s how I did it…

Dump the data

I use docker-compose to manage my containers. To obtain the name of the MongoDB data-only container, I simply run docker ps -a. Assuming the name of the container is myapp_mongo_data…

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data/db

This will put a file called backup.tar in the app’s root directory. It may belong to the root user. If so, run sudo chown user:user backup.tar.

Archive the uploaded files

The app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar -zcvf uploads.tar.gz uploads

Now I have two archived files: backup.tar and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp backup.tar uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar -zxvf uploads.tar.gz

Then I restore the data to the data container:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar xvf /backup/backup.tar

Remove and restart containers

The project containers don’t need to be running when you restore the data in the previous step. If they are running, however, once the data is restored, remove the running containers and start again with docker-compose:

docker-compose stop
docker-compose rm
docker-compose up -d

I’m sure there is a reasonable explanation as to why removing the containers is necessary, but I don’t know what it is yet. In any case, removing the containers isn’t harmful because all the data is on the data-only container anyway.

Warning

As per the introduction, this process probably depends on version consistency between MongoDB containers.

Silhouette Cameo 3, Inkscape, InkCut, and Ubuntu

My wife and I had a hard time deciding between the Silhouette Cameo 3 and the Cricut Explore Air 2. We went with the Silhouette machine because it seemed easier to get running on Ubuntu 16.04, though neither machine is particularly open-source friendly. We had to dust off our old MacBook Air to upgrade the cutter’s firmware.

System and dependencies

I did the usual system prep before adding the software upon which Inkscape and InkCut depend:

sudo apt-get update
sudo apt-get upgrade

You don’t realize you need these packages until you try using InkCut for the first time and it crashes:

sudo apt-get install python-pip python-gtk2-dev python-cups
pip install pyserial

Inkscape

I have no long-term interest in using the hokey design software that comes with the Cameo machine, preferring to use the open source Inkscape vector graphics tool instead.

Add the Inkscape repository and install:

sudo add-apt-repository ppa:inkscape.dev/stable
sudo apt-get update
sudo apt-get install inkscape

Make sure you run Inkscape at least once before installing InkCut:

inkscape

This’ll create the .config folder into which the InkCut file will be moved (below).

InkCut

There are a few ways to send vector data straight from Inkscape to the Silhouette Cameo machine. I chose InkCut because it seems like the most popular and viable option. This is also why I chose the Cameo over the Cricut machine… InkCut doesn’t work with Cricut.

cd ~
wget https://downloads.sourceforge.net/project/inkcut/InkCut-1.0.tar.gz
tar -xzvf InkCut-1.0.tar.gz -C .config/inkscape/extensions/
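
If the archive landed where Inkscape expects it, the InkCut files should show up in the extensions directory:

ls ~/.config/inkscape/extensions/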

This InkCut tutorial was very helpful, but it still took a bit of trial and error to figure out how to get the software to talk to the Silhouette Cameo…

Ubuntu 16.04

Having never really owned a conventional printer, I hadn’t realized I would need to install any drivers. Turns out that’s how I got it working.

Open your System Settings:

[Open System Settings]

Open the Printers option:

[Click Printers]

Add a printer:

[Add Printer]

Hopefully you see your device in the list:

[Find device in list]

The drivers for generic printing devices will suffice in this situation:

[Select Generic] [Text-only driver]

Change your cutter’s name, if you like. I left these settings untouched:

[Printer description]

Not sure what would happen if you attempted to print a test page. I cancelled:

[Cancel test page]

If all is well, you should see the device you just added:

[Silhouette device added]

Send Vector to Silhouette Cameo

Now that Inkscape and InkCut are installed and the OS and the Silhouette Cameo are friends, it’s time to configure InkCut to send data to the cutter.

The following is a simple project I whipped up for demo purposes. I want to use the Silhouette pen (instead of the cutter) to draw this curve.

In order for this to work with InkCut, you need to select all the curves you want to plot. I simply hit Ctrl-A to select everything (also Edit > Select All):

[Select all the curves you want to plot]

Select InkCut from the Inkscape menu as shown

[Extensions > Cutter/Plotter > InkCut v1.0]

The InkCut General tab gives you some options to configure in order to accommodate your cutting/drawing medium. It also allows you the opportunity to preview your vector. Click the Properties button:

[Click Properties]

The settings with which I was successful are shown below:

[Successful settings shown]

I show the Serial settings here just for reference. I have not been able to get this working successfully without installing the printer in Ubuntu first. Make sure the machine is turned on and click Test Connection. The Silhouette machine should activate and perform a few short moves.

I copied these settings from the InkCut tutorial linked above:

[InkCut tutorial settings]

Finally, send your vector to the plotter/cutter:

[Commence plotting/cutting!]

Conclusion

It was quite a thrill to finally get the Silhouette Cameo working with Ubuntu, though there are still some kinks to work out. I noticed that the curve I plotted had lines drawn when the pen should have been lifted off the paper. My preliminary research indicates that this is a problem in translation between the HPGL and GPGL protocols. More to come…

Nginx, Let's Encrypt, and Docker Compose

DEPRECATED!

As of 2017-07-31, this no longer suits my purposes. Here’s my updated docker-compose version 3 approach.

Introduction

I’ve used StartSSL for years. Then I discovered Let’s Encrypt. All my old StartSSL certificates are expiring, so I needed to work out a process by which I could swap them out for Let’s Encrypt certs.

jwilder’s excellent nginx-proxy has been my go-to for easy certificate configuration for some time now. I was relieved to learn that I am still able to leverage this tool with the help of the docker-letsencrypt-nginx-proxy-companion container.

This document outlines the process by which Let’s Encrypt certificates are managed for a single nginx container behind an nginx-proxy accompanied by the docker-letsencrypt-nginx-proxy-companion. docker-compose is used to manage the overall configuration. It was proven on Ubuntu 16.04. Naturally, it is assumed that Docker and Compose are already installed. Copying and pasting the commands provided should lead to a successful deployment.

My Site

nginx-proxy can proxy multiple sites. I’m only serving one with nginx. I like to put all my individual Docker compositions in their own directories:

mkdir mysite && cd mysite

Optional

The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Hello, world!</title>
  </head>
  <body>
    Hello, world!
  </body>
</html>

docker-compose

This configures docker-compose to serve up your site with nginx. Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
nginx:
  image: nginx
  restart: always
  environment:
    - VIRTUAL_HOST=example.com
    - LETSENCRYPT_HOST=site.example.com
    - LETSENCRYPT_EMAIL=email@example.com
  volumes:
    - ./:/usr/share/nginx/html

This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).

Start docker-compose:

docker-compose up -d

The site won’t be accessible yet. That comes next.

nginx-proxy

As before, put the nginx-proxy Docker composition in its own directory:

cd ..
mkdir nginx-proxy && cd nginx-proxy

Create a directory in which to store the Let’s Encrypt certificates:

mkdir certs

Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
nginx-proxy:
  image: jwilder/nginx-proxy
  restart: always
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./current/public:/usr/share/nginx/html
    - ./certs:/etc/nginx/certs:ro
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - /var/run/docker.sock:/tmp/docker.sock:ro
letsencrypt:
  image: jrcs/letsencrypt-nginx-proxy-companion
  restart: always
  volumes:
    - ./certs:/etc/nginx/certs:rw
    - /var/run/docker.sock:/var/run/docker.sock:ro
  volumes_from:
    - nginx-proxy

This allows nginx-proxy to combine forces with docker-letsencrypt-nginx-proxy-companion, all in one docker-compose file.

Start docker-compose:

docker-compose up -d

If all is well, you should be able to access your site at the address configured.

Dockerized Etherpad-Lite with PostgreSQL

There is surprisingly little information out there on how to deploy etherpad-lite with postgres. There is,
on the other hand, quite a bit of information on how to deploy etherpad with docker. At the time of writing,
there is nothing on how to do it with docker-compose specifically. I rarely use docker apart from
docker-compose, and will go to great lengths to ensure I can dockerize my composition, be it for
etherpad or any other docker-appropriate application.

The following comprises my collection of notes on what it took to build an etherpad docker image and link
it to a postgres container with docker-compose. Much of it was inspired by (i.e., shamelessly plagiarized from)
the fine work done by tvelocity on GitHub.

The following assumes that the required software (e.g., docker, docker-compose, etc.) is installed on an
Ubuntu 16.04 machine. It’s meant to be simple enough that the stated goal can be accomplished by simply
copying and pasting content into the various required files and executing the commands specified.

Setup

Create a directory in which to organize your docker composition.

mkdir my-etherpad && cd my-etherpad

Dockerfile

Using your favourite text editor (mine’s vim), copy and paste the following into your Dockerfile:

FROM node:0.12
MAINTAINER Some Guy, someguy@example.com
# For postgres
RUN apt-get update
RUN apt-get install -y libpq-dev postgresql-client
# Clone the latest etherpad version
RUN cd /opt && git clone https://github.com/ether/etherpad-lite.git etherpad
WORKDIR /opt/etherpad
RUN bin/installDeps.sh && rm settings.json
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 9001
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bin/run.sh", "--root"]

The node:0.12 image upon which this image is built was chosen purposefully. At the moment, etherpad-lite
does not run on node versions 6.0 and 6.1, which is
what you get if you build off the latest node image.

Also note the ENTRYPOINT ["/entrypoint.sh"] line. This implies we’ll need this entrypoint.sh file to
run every time we fire up an image.

entrypoint.sh

Create a file called entrypoint.sh and paste the following:

#!/bin/bash
set -e
if [ -z "$POSTGRES_PORT_5432_TCP_ADDR" ]; then
echo >&2 'error: missing POSTGRES_PORT_5432_TCP environment variable'
echo >&2 ' Did you forget to --link some_postgres_container:postgres ?'
exit 1
fi
# If we're linked to PostgreSQL, and we're using the root user, and our linked
# container has a default "root" password set up and passed through... :)
: ${ETHERPAD_DB_USER:=root}
if [ "$ETHERPAD_DB_USER" = 'root' ]; then
: ${ETHERPAD_DB_PASSWORD:=$POSTGRES_ENV_POSTGRES_ROOT_PASSWORD}
fi
: ${ETHERPAD_DB_NAME:=etherpad}
ETHERPAD_DB_NAME=$( echo $ETHERPAD_DB_NAME | sed 's/\./_/g' )
if [ -z "$ETHERPAD_DB_PASSWORD" ]; then
echo >&2 'error: missing required ETHERPAD_DB_PASSWORD environment variable'
echo >&2 ' Did you forget to -e ETHERPAD_DB_PASSWORD=... ?'
echo >&2
echo >&2 ' (Also of interest might be ETHERPAD_DB_USER and ETHERPAD_DB_NAME.)'
exit 1
fi
: ${ETHERPAD_TITLE:=Etherpad}
: ${ETHERPAD_PORT:=9001}
: ${ETHERPAD_SESSION_KEY:=$(
node -p "require('crypto').randomBytes(32).toString('hex')")}
# Check if database already exists
RESULT=`PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
-c "\l ${ETHERPAD_DB_NAME}"`
if [[ "$RESULT" != *"$ETHERPAD_DB_NAME"* ]]; then
# postgres database does not exist, create it
echo "Creating database ${ETHERPAD_DB_NAME}"
PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U${ETHERPAD_DB_USER} -h postgres \
-c "create database ${ETHERPAD_DB_NAME}"
fi
OWNER=`PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
-c "SELECT u.usename
FROM pg_database d
JOIN pg_user u ON (d.datdba = u.usesysid)
WHERE d.datname = (SELECT current_database());"`
if [[ "$OWNER" != *"$ETHERPAD_DB_USER"* ]]; then
# database owner does not match, set it
echo "Setting database owner to ${ETHERPAD_DB_USER}"
PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
-c "alter database ${ETHERPAD_DB_NAME} owner to ${ETHERPAD_DB_USER}"
fi
if [ ! -f settings.json ]; then
cat <<- EOF > settings.json
{
"title": "${ETHERPAD_TITLE}",
"ip": "0.0.0.0",
"port" :${ETHERPAD_PORT},
"dbType" : "postgres",
"dbSettings" : {
"user" : "${ETHERPAD_DB_USER}",
"host" : "postgres",
"password": "${ETHERPAD_DB_PASSWORD}",
"database": "${ETHERPAD_DB_NAME}"
},
EOF
if [ $ETHERPAD_ADMIN_PASSWORD ]; then
: ${ETHERPAD_ADMIN_USER:=admin}
cat <<- EOF >> settings.json
"users": {
"${ETHERPAD_ADMIN_USER}": {
"password": "${ETHERPAD_ADMIN_PASSWORD}",
"is_admin": true
}
},
EOF
fi
cat <<- EOF >> settings.json
}
EOF
fi
exec "$@"

This does a whole bunch of stuff. Most importantly, it creates the settings.json file which
configures the whole etherpad-lite application. The settings produced above connect the application
to the postgres database and set up an administrative user. There’s a whole bunch of stuff that
could be done to make this more configurable (e.g., setting up SSL certs). I’m all ears.

docker-compose.yml

You may have noticed a bunch of environment variables in the previous entrypoint.sh file.
Those get set in docker-compose.yml. Create that file and paste the following:

etherpad:
  restart: always
  build: ./
  ports:
    - "9001:9001"
  links:
    - postgres
  environment:
    - ETHERPAD_DB_USER=etherpad
    - ETHERPAD_DB_PASSWORD=foobarbaz
    - ETHERPAD_DB_NAME=store
    - ETHERPAD_ADMIN_PASSWORD=foobarbaz
postgres:
  restart: always
  image: postgres
  environment:
    - POSTGRES_USER=etherpad
    - POSTGRES_PASSWORD=foobarbaz
  volumes_from:
    - etherpad_data

This sets everything up so that the etherpad Dockerfile gets built and linked to a
postgres container… well, everything except for one bit: it doesn’t create
the required etherpad_data data-only container.

Data-only container

Create the data-only container from the command line so that your data won’t be erased if your
other containers get smoked by accident or on purpose.

docker create --name etherpad_data -v /var/lib/postgresql/data postgres /bin/true

Finally, fire ‘er all up!

docker-compose up

If you did everything correctly, your etherpad-lite application will be accessible on http://localhost:9001.
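
A quick check from the host confirms the pad is answering:

curl -I http://localhost:9001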

Bootstrapping a Rails 5 application with Rspec and Capybara

This is a guide for my Lighthouse Labs students who asked me to demonstrate unit testing. Whenever I initialize a new project, I have to relearn the whole setup process, so this serves as a handy primer for both my students and myself.

Assuming Rails 5 is all set up and ready to go…

Create a new project

rails new myapp -T
cd myapp

The -T option skips creation of the default test files.

Add all the testing dependencies

There are a bunch of gems I regularly use for testing. Add them to the development/test group in your application’s Gemfile:

# Gemfile
# ...
group :development, :test do
  gem 'rspec-rails', '~> 3.5'
  gem 'shoulda-matchers'
  gem 'capybara'
  gem 'factory_girl_rails'
end

Install, of course:

bundle install

Configuration

Naturally, each of these new testing gems requires a bit of configuration to get working.

rspec

rspec is one of many great test suites. I like it the best, but that’s probably just because it’s the one I’m used to. It’s also great because a lot of people use it. So if you don’t know how to test something, chances are someone on StackOverflow does.

The rspec-rails gem comes with a handy generator that does a bunch of the setup work for you:

rails generate rspec:install

This added a new spec/ directory with a couple of helper files to your project. All your tests are stored in this directory. Run your rspec tests like this:

bundle exec rspec

We haven’t written any tests yet so you should see something like this:

No examples found.
Finished in 0.00028 seconds (files took 0.16948 seconds to load)
0 examples, 0 failures

shoulda-matchers

shoulda-matchers makes testing common Rails functionality much faster and easier. You can write identical tests in rspec all by itself, but with shoulda-matchers these common tests are reduced to one line.

To set this up, paste the following into your spec/rails_helper.rb file:

Shoulda::Matchers.configure do |config|
  config.integrate do |with|
    with.test_framework :rspec
    with.library :rails
  end
end

The tests you write with shoulda-matchers get executed when you run your rspec tests.

capybara

capybara tests the totality of your application’s functionality by simulating how a real-world agent actually interacts with your app.

First, paste the following into your spec/rails_helper.rb file:

require 'capybara/rails'

Now, paste the following into your spec/spec_helper.rb file:

require 'capybara/rspec'

factory_girl_rails

This provides a nice way to spoof the models you create in your application. To use it, you need to add a couple of things to your spec/spec_helper.rb file.

First, paste the following requirement:

require 'factory_girl_rails'

Then, add the following to the RSpec.configure block:

# RSpec
RSpec.configure do |config|
  # ...
  config.include FactoryGirl::Syntax::Methods
  # ...
end

Try it out

If everything is configured correctly, all the necessary test files will be created each time you generate some scaffolding. To see the options available, run

rails generate

This will show you a list of installed generators, including all the rspec and factory_girl stuff.

See what happens when you run

rails generate scaffold Agent

Take a peek in the spec/ directory now. You’ll see a bunch of boilerplate test code waiting for you to fill in the blanks with meaningful tests.

Before you run the tests, though, you’ll need to migrate the database:

bin/rails db:migrate RAILS_ENV=test

Now you can run the tests:

bundle exec rspec

You’ll see a whole bunch of pending tests like this:

Pending: (Failures listed here are expected and do not affect your suite's status)
1) AgentsController GET #index assigns all agents as @agents
# Add a hash of attributes valid for your model
# ./spec/controllers/agents_controller_spec.rb:40

Get testing!

Populating Dockerized PostgreSQL with dumped data from host

I needed to populate a local SQLite database to do some development testing for a client. Her web app has a lot of moving parts and left something to be desired in terms of testing. Most of these steps were adapted from this wonderful post.

Long story short: I needed to recreate an issue discovered in production, but didn’t have any data with which to write tests or even recreate the issue manually. The whole thing is a huge pain in the butt, so I dumped the production database from Heroku and attempted a brute-force reconstruction of the issue. The Heroku instructions produced a file called latest.dump.
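
For reference, capturing and downloading that dump looked roughly like the following (check heroku pg:backups --help for the exact syntax your CLI version expects; the app name is a placeholder):

heroku pg:backups:capture --app my-heroku-app
heroku pg:backups:download --app my-heroku-app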

Create/run the container

I like using Docker for this kind of stuff. I don’t even run any databases on my host machine anymore.

From the Docker PostgreSQL documentation…

docker run --name some-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres

Restore to local database

This is where the dumped data gets written to the Dockerized PostgreSQL database.

pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres --create --dbname=postgres latest.dump

Login

There are a couple of ways to find the database name. Either log in as follows, or go to the Heroku database dashboard. The password was set to secret when the container was created. It’s a good idea to log in anyway, if only to make sure the database is actually there.

psql -h localhost -p 5432 -U postgres --password

Dump the local database

I didn’t want to mess with the previous developer’s setup, so I took the wussy way out and used the existing SQLite configuration to run the app locally. This requires creating an SQLite DB from the PostgreSQL data dump. This command dumps that data:

pg_dump -h localhost -p 5432 -U postgres --password --data-only --inserts YOUR_DB_NAME > dump.sql

Create SQLite database

Again, these steps were adapted from this post.

Modify dump.sql

My favourite editor is vim. I used it to modify dump.sql.

Remove all the SET statements at the top of the file.

It’ll look something like this:

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;

Get rid of it all.

Remove all the setval queries

The post that I’m plagiarizing says these queries handle ID autoincrementing, which SQLite doesn’t need. Find and remove everything that looks like this (i.e., everything with a call to setval):

SELECT pg_catalog.setval('friendly_id_slugs_id_seq', 413, true);

Replace true with 't' and false with 'f'

Anything that looks like this:

INSERT INTO article_categories VALUES (4, 9, 13, true, '2011-11-22 06:29:07.966875', '2011-11-22 06:29:07.966875');
INSERT INTO article_categories VALUES (26, 14, NULL, false, '2011-12-07 09:09:52.794238', '2011-12-07 09:09:52.794238');

Needs to look like this:

INSERT INTO article_categories VALUES (4, 9, 13, 't', '2011-11-22 06:29:07.966875', '2011-11-22 06:29:07.966875');
INSERT INTO article_categories VALUES (26, 14, NULL, 'f', '2011-12-07 09:09:52.794238', '2011-12-07 09:09:52.794238');

Wrap it all in a single transaction

If you don’t do this, the import will take forever. At the top of the file put

BEGIN;

and at the very bottom of the file put

END;
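
All of the hand edits above (dropping the SET lines, removing the setval queries, quoting the booleans, and wrapping everything in a transaction) can also be scripted. Here’s a rough sed-based sketch of the same clean-up; it only catches booleans that sit between commas, so it’s worth eyeballing the result before importing:

# drop the SET statements and setval queries
sed -i "/^SET /d; /setval/d" dump.sql
# quote booleans for SQLite
sed -i "s/, true,/, 't',/g; s/, false,/, 'f',/g" dump.sql
# wrap the whole file in one transaction
sed -i "1i BEGIN;" dump.sql
echo "END;" >> dump.sql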

Create the SQLite database

Finally, after all that hard work:

sqlite3 development.sqlite < dump.sql

The app was already set up to work with the development.sqlite database. There were a couple of error messages when creating it, but they didn’t seem to impact the app. Everything worked fine, and I found a nifty new way to use Docker.