Dockerized Etherpad-Lite with PostgreSQL

There is surprisingly little information out there on how to deploy etherpad-lite with postgres. There is,
on the other hand, quite a bit of information on how to deploy etherpad with docker. At the time of writing,
there is nothing on how to do it with docker-compose specifically. I rarely use docker apart from
docker-compose, and will go to great lengths to ensure I can dockerize my composition, be it for
etherpad or any other docker-appropriate application.

The following comprises my collection of notes on what it took to build an etherpad docker image and link
it to a postgres container with docker-compose. Much of it was inspired (i.e., shamelessly plagiarized) from
the fine work done by tvelocity on GitHub.

The following assumes that the required software (e.g., docker, docker-compose, etc.) is installed on an
Ubuntu 16.04 machine. It’s meant to be simple enough that the stated goal can be accomplished by simply
copying and pasting content into the various required files and executing the commands specified.

Setup

Create a directory in which to organize your docker composition.

mkdir my-etherpad && cd my-etherpad

Dockerfile

Using your favourite text editor (mine’s vim), copy and paste the following into your Dockerfile:

FROM node:0.12
MAINTAINER Some Guy, someguy@example.com
# For postgres
RUN apt-get update
RUN apt-get install -y libpq-dev postgresql-client
# Clone the latest etherpad version
RUN cd /opt && git clone https://github.com/ether/etherpad-lite.git etherpad
WORKDIR /opt/etherpad
RUN bin/installDeps.sh && rm settings.json
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 9001
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bin/run.sh", "--root"]

The node:0.12 image upon which this image is built was chosen purposefully. At the time of writing, etherpad-lite does not run on Node versions 6.0 and 6.1, which is what you get if you build off the latest node image.

Also note the ENTRYPOINT ["/entrypoint.sh"] line. This implies we’ll need this entrypoint.sh file to
run every time we fire up an image.

entrypoint.sh

Create a file called entrypoint.sh and paste the following:

#!/bin/bash
set -e

if [ -z "$POSTGRES_PORT_5432_TCP_ADDR" ]; then
  echo >&2 'error: missing POSTGRES_PORT_5432_TCP_ADDR environment variable'
  echo >&2 '  Did you forget to --link some_postgres_container:postgres ?'
  exit 1
fi

# If we're linked to PostgreSQL, and we're using the root user, and our linked
# container has a default "root" password set up and passed through... :)
: ${ETHERPAD_DB_USER:=root}
if [ "$ETHERPAD_DB_USER" = 'root' ]; then
  : ${ETHERPAD_DB_PASSWORD:=$POSTGRES_ENV_POSTGRES_ROOT_PASSWORD}
fi
: ${ETHERPAD_DB_NAME:=etherpad}
ETHERPAD_DB_NAME=$( echo $ETHERPAD_DB_NAME | sed 's/\./_/g' )

if [ -z "$ETHERPAD_DB_PASSWORD" ]; then
  echo >&2 'error: missing required ETHERPAD_DB_PASSWORD environment variable'
  echo >&2 '  Did you forget to -e ETHERPAD_DB_PASSWORD=... ?'
  echo >&2
  echo >&2 '  (Also of interest might be ETHERPAD_DB_USER and ETHERPAD_DB_NAME.)'
  exit 1
fi

: ${ETHERPAD_TITLE:=Etherpad}
: ${ETHERPAD_PORT:=9001}
: ${ETHERPAD_SESSION_KEY:=$(
  node -p "require('crypto').randomBytes(32).toString('hex')")}

# Check if the database already exists
RESULT=`PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
  -c "\l ${ETHERPAD_DB_NAME}"`
if [[ "$RESULT" != *"$ETHERPAD_DB_NAME"* ]]; then
  # The postgres database does not exist, so create it
  echo "Creating database ${ETHERPAD_DB_NAME}"
  PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
    -c "create database ${ETHERPAD_DB_NAME}"
fi

OWNER=`PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
  -c "SELECT u.usename
      FROM pg_database d
      JOIN pg_user u ON (d.datdba = u.usesysid)
      WHERE d.datname = (SELECT current_database());"`
if [[ "$OWNER" != *"$ETHERPAD_DB_USER"* ]]; then
  # The database is not owned by our user, so change the owner
  echo "Setting database owner to ${ETHERPAD_DB_USER}"
  PGPASSWORD=${ETHERPAD_DB_PASSWORD} psql -U ${ETHERPAD_DB_USER} -h postgres \
    -c "alter database ${ETHERPAD_DB_NAME} owner to ${ETHERPAD_DB_USER}"
fi

if [ ! -f settings.json ]; then
  cat <<- EOF > settings.json
{
  "title": "${ETHERPAD_TITLE}",
  "ip": "0.0.0.0",
  "port": ${ETHERPAD_PORT},
  "dbType": "postgres",
  "dbSettings": {
    "user": "${ETHERPAD_DB_USER}",
    "host": "postgres",
    "password": "${ETHERPAD_DB_PASSWORD}",
    "database": "${ETHERPAD_DB_NAME}"
  },
EOF
  if [ $ETHERPAD_ADMIN_PASSWORD ]; then
    : ${ETHERPAD_ADMIN_USER:=admin}
    cat <<- EOF >> settings.json
  "users": {
    "${ETHERPAD_ADMIN_USER}": {
      "password": "${ETHERPAD_ADMIN_PASSWORD}",
      "is_admin": true
    }
  },
EOF
  fi
  cat <<- EOF >> settings.json
}
EOF
fi

exec "$@"

This does a whole bunch of stuff. Most importantly, it creates the settings.json file that configures the whole etherpad-lite application. The settings produced above connect the application to the postgres database and set up an administrative user. There's a whole bunch more that could be done to make this configurable (e.g., setting up SSL certs, et al). I'm all ears.
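
For example, to have etherpad-lite serve HTTPS itself, the heredoc that writes settings.json could be extended with an ssl block before the closing brace is appended. A hedged sketch (the ETHERPAD_SSL_KEY and ETHERPAD_SSL_CERT variables are hypothetical, and the files they point at would have to be mounted into the container):

if [ -n "$ETHERPAD_SSL_KEY" ] && [ -n "$ETHERPAD_SSL_CERT" ]; then
  cat <<- EOF >> settings.json
  "ssl": {
    "key": "${ETHERPAD_SSL_KEY}",
    "cert": "${ETHERPAD_SSL_CERT}"
  },
EOF
fi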

docker-compose.yml

You may have noticed a bunch of environment variables in the previous entrypoint.sh file. Those get set in docker-compose.yml. Create that file and paste the following:

etherpad:
  restart: always
  build: ./
  ports:
    - "9001:9001"
  links:
    - postgres
  environment:
    - ETHERPAD_DB_USER=etherpad
    - ETHERPAD_DB_PASSWORD=foobarbaz
    - ETHERPAD_DB_NAME=store
    - ETHERPAD_ADMIN_PASSWORD=foobarbaz
postgres:
  restart: always
  image: postgres
  environment:
    - POSTGRES_USER=etherpad
    - POSTGRES_PASSWORD=foobarbaz
  volumes_from:
    - etherpad_data

This sets everything up so that the etherpad Dockerfile gets built and linked to a
postgres container… well, everything except for one bit: it doesn’t create
the required etherpad_data data-only container.

Data-only container

Create the data-only container from the command line so that your data won’t be erased if your
other containers get smoked by accident or on purpose.

docker create --name etherpad_data -v /dbdata postgres /bin/true

Finally, fire ‘er all up!

docker-compose up

If you did everything correctly, your etherpad-lite application will be accessible on http://localhost:9001.
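
A quick smoke test from the host (assuming curl is installed) should show etherpad answering:

curl -I http://localhost:9001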

Bootstrapping a Rails 5 application with Rspec and Capybara

This is a guide for my Lighthouse Labs students who asked me to demonstrate unit testing. Whenever I initialize a new project, I have to relearn the whole setup process, so this serves as a handy primer for both my students and myself.

Assuming Rails 5 is all set up and ready to go…

Create a new project

rails new myapp -T
cd myapp

The -T option skips creation of the default test files.

Add all the testing dependencies

There are a bunch of gems I regularly use for testing. Add them all to the development/test group in your application's Gemfile:

# Gemfile
# ...
group :development, :test do
  gem 'rspec-rails', '~> 3.5'
  gem 'shoulda-matchers'
  gem 'capybara'
  gem 'factory_girl_rails'
end

Install, of course:

bundle install

Configuration

Naturally, each of these new testing gems requires a bit of configuration to get working.

rspec

rspec is one of many great test suites. I like it the best, but that’s probably just because it’s the one I’m used to. It’s also great because a lot of people use it. So if you don’t know how to test something, chances are someone on StackOverflow does.

The rspec-rails gem comes with a handy generator that does a bunch of the setup work for you:

rails generate rspec:install

This adds a new spec/ directory with a couple of helper files to your project. All your tests are stored in this directory. Run your rspec tests like this:

bundle exec rspec

We haven’t written any tests yet so you should see something like this:

No examples found.
Finished in 0.00028 seconds (files took 0.16948 seconds to load)
0 examples, 0 failures

shoulda-matchers

shoulda-matchers makes testing common Rails functionality much faster and easier. You can write identical tests in rspec all by itself, but with shoulda-matchers these common tests are reduced to one line, e.g., it { should validate_presence_of(:name) }.

To set this up, paste the following into your spec/rails_helper.rb file:

Shoulda::Matchers.configure do |config|
  config.integrate do |with|
    with.test_framework :rspec
    with.library :rails
  end
end

The tests you write with shoulda-matchers get executed when you run your rspec tests.

capybara

capybara tests the totality of your application’s functionality by simulating how a real-world agent actually interacts with your app.

First, paste the following into your spec/rails_helper.rb file:

require 'capybara/rails'

Now, paste the following into your spec/spec_helper.rb file:

require 'capybara/rspec'

factory_girl_rails

This provides a nice way to spoof the models you create in your application; once configured, helpers like create(:agent) build and persist test records for you. To use it, you need to add a couple of things to your spec/spec_helper.rb file.

First, paste the following requirement:

require 'factory_girl_rails'

Then, add the following to the RSpec.configure block:

# RSpec
RSpec.configure do |config|
  # ...
  config.include FactoryGirl::Syntax::Methods
  # ...
end

Try it out

If everything is configured correctly, all the necessary test files will be created each time you generate some scaffolding. To see the options available, run

rails generate

This will show you a list of installed generators, including all the rspec and factory_girl stuff.

See what happens when you run

rails generate scaffold Agent

Take a peek in the spec/ directory now. You'll see a bunch of boilerplate test code waiting for you to fill in the blanks with meaningful tests.

Before you run the tests, though, you’ll need to migrate the database:

bin/rails db:migrate RAILS_ENV=test

Now you can run the tests:

bundle exec rspec

You’ll see a whole bunch of pending tests like this:

Pending: (Failures listed here are expected and do not affect your suite's status)
1) AgentsController GET #index assigns all agents as @agents
# Add a hash of attributes valid for your model
# ./spec/controllers/agents_controller_spec.rb:40

Get testing!

Populating Dockerized PostgreSQL with dumped data from host

I needed to populate a local SQLite database to do some development testing for a client. Her web app has a lot of moving parts and leaves something to be desired in terms of testing. Most of these steps were adapted from this wonderful post.

Long story short: I needed to recreate an issue discovered in production, but didn’t have any data with which to write tests or even recreate the issue manually. The whole thing is a huge pain in the butt, so I dumped the production database from Heroku and attempted a brute-force reconstruction of the issue. The Heroku instructions produced a file called latest.dump.

Create/run the container

I like using Docker for this kind of stuff. I don’t even run any databases on my host machine anymore.

From the Docker PostgreSQL documentation…

docker run --name some-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres

Restore to local database

This is where the dumped data gets written to the Dockerized PostgreSQL database.

pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres --create --dbname=postgres latest.dump

Login

There are a couple of ways to find the database name. Either log in as follows, or go to the Heroku database dashboard. The password was set to secret when the container was created. It's a good idea to log in anyway, if only to make sure the database is actually there.

psql -h localhost -p 5432 -U postgres --password
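
Alternatively, to list the databases without opening the psql prompt at all, the -l flag does the trick (same connection settings):

psql -h localhost -p 5432 -U postgres --password -l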

Dump the local database

I didn’t want to mess with the previous developer’s setup, so I took the wussy way out and used the existing SQLite configuration to run the app locally. This requires creating an SQLite DB with the PostgreSQL data dump. This command dumps that data:

pg_dump -h localhost -p 5432 -U postgres --password --data-only --inserts YOUR_DB_NAME > dump.sql

Create SQLite database

Again, these steps were adapted from this post.

Modify dump.sql

My favourite editor is vim. I used it to modify dump.sql.

Remove all the SET statements at the top of the file.

It’ll look something like this:

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;

Get rid of it all.

Remove all the setval queries

The post that I'm plagiarizing says these auto-increment the IDs. SQLite doesn't need them. Find and remove everything that looks like this (i.e., everything with a call to setval):

SELECT pg_catalog.setval('friendly_id_slugs_id_seq', 413, true);

Replace true with 't' and false with 'f'

Anything that looks like this:

INSERT INTO article_categories VALUES (4, 9, 13, true, '2011-11-22 06:29:07.966875', '2011-11-22 06:29:07.966875');
INSERT INTO article_categories VALUES (26, 14, NULL, false, '2011-12-07 09:09:52.794238', '2011-12-07 09:09:52.794238');

Needs to look like this:

INSERT INTO article_categories VALUES (4, 9, 13, 't', '2011-11-22 06:29:07.966875', '2011-11-22 06:29:07.966875');
INSERT INTO article_categories VALUES (26, 14, NULL, 'f', '2011-12-07 09:09:52.794238', '2011-12-07 09:09:52.794238');

Wrap it all in a single transaction

If you don’t do this, the import will take forever. At the top of the file put

BEGIN;

and at the very bottom of the file put

END;
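
If you'd rather not make all these edits by hand in vim, the whole cleanup can be scripted. A rough sketch with GNU sed (the boolean substitution only handles values preceded by a comma, so double-check the result):

sed -i '/^SET /d' dump.sql                             # remove the SET statements
sed -i '/setval/d' dump.sql                            # remove the setval queries
sed -i "s/, true/, 't'/g; s/, false/, 'f'/g" dump.sql  # swap the booleans
sed -i '1i BEGIN;' dump.sql                            # open the transaction...
echo 'END;' >> dump.sql                                # ...and close it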

Create the SQLite database

Finally, after all that hard work:

sqlite3 development.sqlite < dump.sql

The app was already set up to work with the development.sqlite database. There were a couple of error messages when creating it, but they didn’t seem to impact the app. Everything worked fine, and I found a nifty new way to use Docker.

Simple email form with Sinatra, nginx-passenger, and Docker Compose

It seems there’s no nice way to tie Nginx and Phusion Passenger together with Docker Compose. There still isn’t, but I cooked up a solution that works for my purposes. Here I document that process by deploying a simple email signup form with Sinatra.

The landing page

Nothing fancy.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Email signup</title>
  </head>
  <body>
    <form action='/mail/signup' method='post'>
      <label>Name</label>
      <input id='name' name='name' type='text' placeholder='Larry Wall'>
      <label>Email</label>
      <input id='email' name='email' type='email' placeholder='larry@example.com'>
      <label>Message</label>
      <textarea id='message' name='message' rows='5' placeholder='Keep me posted...'></textarea>
      <button id='send-email-button' type='submit'>Send</button>
    </form>
  </body>
</html>

Sinatra app

config.ru

# config.ru
require File.expand_path('app', File.dirname(__FILE__))
map('/mail') { run MailerController }

Gemfile

source "https://rubygems.org"
gem 'sinatra'
gem 'passenger'
gem 'pony'

Don’t forget to run bundle install.

app.rb

This is where all the mailer magic is set in motion:

# app.rb
require 'sinatra'
require 'pony'

class MailerController < Sinatra::Base
  post '/signup' do
    configure_pony
    name = params[:name]
    sender_email = params[:email]
    message = params[:message]
    logger.error params.inspect
    begin
      Pony.mail(
        :from => 'info@example.com',
        :to => 'info@example.com',
        :reply_to => "#{name}<#{sender_email}>",
        :subject => "#{name} wants to be added to the list",
        :body => "#{message}",
      )
      halt 200
    rescue
      @exception = $!
      erb :boom
    end
  end

  def configure_pony
    Pony.options = {
      :via => :smtp,
      :via_options => {
        :address => 'mail.server.com',
        :port => '587',
        :user_name => 'info@example.com',
        :password => 'secretpassword',
        :authentication => :plain,
        :enable_starttls_auto => true,
        :domain => 'example.com'
      }
    }
  end
end

boom.erb

Just in case something goes wrong… this will display any exceptions.

# boom.erb
No good!
<%= @exception %>

Docker

I hope someone comes up with a better way than this. It seems like this should be something achievable with Docker Compose alone. I can get close to that goal, but I still need to create a Dockerfile to make it all work.

Dockerfile

Phusion Passenger needs the app’s Gemfile. This Dockerfile points the image at that Gemfile and installs all the dependencies. It uses raphaeldelaghetto/nginx-passenger as its base.

# Dockerfile
FROM raphaeldelaghetto/nginx-passenger
MAINTAINER Some Guy
ADD Gemfile /usr/share/nginx/html/Gemfile
WORKDIR /usr/share/nginx/html
RUN bundle install

docker-compose.yml

This builds the Dockerfile above, which downloads the nginx-passenger image from the Docker repository. Optional SSL settings are commented out.

# docker-compose.yml
email-app:
  restart: always
  build: ./
  ports:
    - "80:80"
    #- "443:443"
  volumes:
    # Page content
    - ./:/usr/share/nginx/html
    # Certs
    #- /home/app/certs:/etc/nginx/ssl
    # default.conf
    - ./config:/etc/nginx/sites-enabled/

Nginx configuration

As you can see above, Docker Compose is going to look in the ./config directory for default.conf. Make sure the path exists:

mkdir config

Copy and paste the following into ./config/default.conf:

#default.conf
server {
  listen 80;
  #listen 443 ssl;
  server_name example.com;
  #ssl_certificate example.com.crt;
  #ssl_certificate_key example.com.key;

  root /usr/share/nginx/html;

  location /mail/ {
    passenger_enabled on;
    root /usr/share/nginx/html/public;
  }
}

Optional SSL settings are commented out once again.

Recap

These are the files described above:

/home/app/email-app/
▾ config/
default.conf
app.rb
boom.erb
config.ru
docker-compose.yml
Dockerfile
Gemfile
index.html

Once everything is in place, fire ‘er up!

docker-compose up

Backup, migration, and recovery with WordPress and Docker Compose

It seems that I’m recording an inordinate amount of information concerning WordPress and Docker. This is, as usual, at the behest of my wife. She likes WordPress and I love her, so what can I do?

The benefit of all this WordPress/Docker monkey business is that I’m slowly discovering my own best practices concerning both. What follows is the next installment of my self-education.

Context

I had to do a mass migration of my myriad web applications. My wife’s sites were not spared the inconvenience. They all had been previously Dockerized, so it was really just a matter of doing a backup of the WordPress files and MySQL data and sending it over to a new server in the cloud. The server, of course, had Docker, Compose, et al already installed. The whole setup looked a little like this:

[System Topology]

Backup

This is the process I followed to back up the sites hosted on the system to be turfed.

WordPress

This bundles up all the WordPress container files into a single, zipped tar ball.

docker run --rm --volumes-from mysitecom_wordpress_1 -v $(pwd):/backup wordpress tar zcvf /backup/mysitecom_wordpress.tar.gz /var/www/html

MySQL

I don’t bother copying all the container files here. I just need a dump of the WordPress database. Peripheral files are unnecessary and unwanted.

docker exec -i mysitecom_mysql_1 mysqldump -uroot -psecretp@ssword wordpress > mysitecom_mysql.sql

Setup

Prep the site’s new home. Here I put Docker Compose to good use managing the running containers. I set up two peripheral data-only containers that do not execute. These are not defined in the docker-compose.yml file as a way of shielding the site’s data from accidental deletion.

First, create a directory and docker-compose.yml:

mkdir mysite.com
cd mysite.com
vim docker-compose.yml

Copy and save the following:

wordpress:
  image: wordpress
  restart: always
  links:
    - mysql
  environment:
    - WORDPRESS_DB_PASSWORD=secretp@ssword
    - VIRTUAL_HOST=mysite.com
  expose:
    - 80
  volumes_from:
    - mysitecom_wordpress_data
mysql:
  image: mysql
  restart: always
  environment:
    - MYSQL_ROOT_PASSWORD=secretp@ssword
    - MYSQL_DATABASE=wordpress
  volumes_from:
    - mysitecom_mysql_data

Note the VIRTUAL_HOST variable. This site is (and was) running behind an nginx-proxy image.

Next, create the non-executing data-only containers. These are kept at arm’s length from the running containers so that our data doesn’t go up in smoke whilst monkeying around with image upgrades and whatnot.

MySQL

docker create --name mysitecom_mysql_data -v /var/lib/mysql mysql

WordPress

docker create --name mysitecom_wordpress_data -v /var/www/html wordpress

It’s now safe to fire up the running containers:

docker-compose up -d

At this point, if you were to visit the site URL, you should see WordPress inviting you to set everything up. I, however, have existing site data.

Recovery

The backup files created above have been copied over to the new server and into my working directory (i.e., the one containing docker-compose.yml). Write those files to the running containers. All the data actually gets sent to the non-executing, data-only containers because of how we set up docker-compose.yml.

WordPress

docker run --rm --volumes-from mysitecom_wordpress_1 -v $(pwd):/backup wordpress tar zxvf /backup/mysitecom_wordpress.tar.gz -C /

MySQL

docker exec -i mysitecom_mysql_1 mysql -uroot -psecretp@ssword wordpress < mysitecom_mysql.sql

You should now be able to see your old site on its new server.

iRedMail setup and GoDaddy DNS records

I had it in mind to Dockerize email services on an Ubuntu server. I quickly realized email is a gongshow and opted for the fastest, easiest solution. This turned out to be iRedMail, which still proved a bit tricky when it came time to set up my GoDaddy DNS records.

Here’s what I did…

The system

  • Ubuntu 14.04 server
  • 1 vCPU
  • 2 GB of RAM (as recommended here)
  • 20 GB of storage

I buy my VMs from cloudatcost.com. They’re reasonably reliable and reasonably priced.

A (Host) records

Once your machine (wherever it be) is online, set the DNS A (Host) records right away. My DNS stuff is all managed at GoDaddy.

[First host record]

Then create another A record and point it to the mail subdomain:

[mail subdomain host record]

Prepare the environment

CloudAtCost creates a root user and sets the password. I ssh in and change it right away:

ssh root@rockyvalley.ca
passwd

There may be a compelling reason to create a non-root user, but since iRedMail will be installed entirely as root, I'm going to skip that step until advised to do otherwise.

Set the domain name

First,

vim /etc/hostname

Change whatever’s inside to:

mail

and save. Then,

vim /etc/hosts

Change it to look like this:

127.0.0.1 mail.rockyvalley.ca mail localhost localhost.localdomain
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Change your domain name wherever appropriate (my example domain is rockyvalley.ca).

Reboot the machine.

Log back in:

ssh root@rockyvalley.ca

Execute

hostname -f

If you see something similar to

mail.rockyvalley.ca

then your server has been named appropriately.

Install iRedMail

Download the latest package.

cd /root
wget https://bitbucket.org/zhb/iredmail/downloads/iRedMail-0.9.2.tar.bz2
tar xjf iRedMail-0.9.2.tar.bz2
cd iRedMail-0.9.2

Execute the install script:

bash iRedMail.sh

This will install a bunch of stuff and then guide you through configuration. Press Enter to proceed past the intro screen.

Default mail storage path

[Default mail storage path]

Preferred web server

[Preferred web server]

Choose preferred backend used to store mail accounts

Use the space bar to select the database (here, PostgreSQL).

[Choose preferred backend used to store mail accounts]

Password for PostgreSQL administrator: postgres

[Password for PostgreSQL administrator: postgres]

Your first virtual domain

[Your first virtual domain]

Password for the administrator of your domain

[Password for the administrator of your domain]

Optional components

[Optional components]

Proceed with installation

[Proceed with installation]

I answered yes when asked:

< Question > Would you like to use firewall rules provided by iRedMail?
< Question > File: /etc/default/iptables, with SSHD port: 22. [Y|n]y

I answered no when asked:

< Question > Restart firewall now (with SSHD port 22)? [y|N]n

I figured it unwise to restart because I’m logged in to my server via ssh.

Upon successful completion, the installer will spit out some valuable information:

********************************************************************
* URLs of installed web applications:
*
* - Webmail:
* o Roundcube webmail: httpS://mail.rockyvalley.ca/mail/
*
* - Web admin panel (iRedAdmin): httpS://mail.rockyvalley.ca/iredadmin/
*
* You can login to above links with same credential:
*
* o Username: postmaster@rockyvalley.ca
* o Password: somesecretpassword
*
*
********************************************************************
* Congratulations, mail server setup completed successfully. Please
* read below file for more information:
*
* - /root/iRedMail-0.9.2/iRedMail.tips
*
* And it's sent to your mail account postmaster@rockyvalley.ca.
*
********************* WARNING **************************************
*
* Rebooting your system is required to enable mail services.
*
********************************************************************

Reboot now.

Set up DNS records

MX

The A records have already been set up. Create an MX record (I’m using GoDaddy, so I deleted the existing records before proceeding):

[MX record]

SPF

This gets set as a TXT record at GoDaddy:

[SPF record]

DKIM

Log back into your server:

ssh root@rockyvalley.ca

Execute the following to determine your DKIM keys:

amavisd-new showkeys

This will return something like this:

; key#1, domain rockyvalley.ca, /var/lib/dkim/rockyvalley.ca.pem
dkim._domainkey.rockyvalley.ca. 3600 TXT (
"v=DKIM1; p="
"MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApTNgVVL2+vIIcq9xioc5"
"B/ydJxaQRZ1eBKkO7mhz2ir5k3DdWl+y65GYR8TbP3z3essbwOnPocqnwX81RoW1"
"VAhPYlHU57OLSXnk3qYcRDHpT/UU/dOGdFclpuAXazUg0l8QhTgadtxsIRDlckKg"
"Vr6II7knZUrhfm84uJ3w858OIrzy8KOSXXfc8npTn48iy4okJGbHvVxE05m6f9/g"
"ie63Z5XkIZeJu7Nj6O/IOVitZh3uiKoOlBHULKqpNtHtPrnZHHX51OLkiezUBvG+"
"slHGPK710iW5ITDy5qm/VaANigXBnPrdF3S3sZMFprwa9GhGSkrnnJ40eCJVFgCm"
"FQIDAQAB")

All the stuff between the brackets needs to be put onto one line, like this:

v=DKIM1; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApTNgVVL2+vIIcq9xioc5B/ydJxaQRZ1eBKkO7mhz2ir5k3DdWl+y65GYR8TbP3z3essbwOnPocqnwX81RoW1VAhPYlHU57OLSXnk3qYcRDHpT/UU/dOGdFclpuAXazUg0l8QhTgadtxsIRDlckKgVr6II7knZUrhfm84uJ3w858OIrzy8KOSXXfc8npTn48iy4okJGbHvVxE05m6f9/gie63Z5XkIZeJu7Nj6O/IOVitZh3uiKoOlBHULKqpNtHtPrnZHHX51OLkiezUBvG+slHGPK710iW5ITDy5qm/VaANigXBnPrdF3S3sZMFprwa9GhGSkrnnJ40eCJVFgCmFQIDAQAB
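
Joining the lines by hand is error-prone; this rough one-liner produces the same single line by stripping the quotes and newlines from the showkeys output (assuming only one key is configured):

amavisd-new showkeys | grep -o '"[^"]*"' | tr -d '"\n'; echo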

All this gets set as another TXT record:

[DKIM record]

This may take some time to propagate (a couple hours even). These commands will help confirm that everything is set up okay:

dig -t txt dkim._domainkey.rockyvalley.ca
nslookup -type=txt dkim._domainkey.rockyvalley.ca

You’ll see the DKIM TXT record you just set once everything has propagated.

Verify public key availability:

amavisd-new testkeys

You should see this, if successful:

TESTING#1: dkim._domainkey.rockyvalley.ca => pass

SSL/TLS

At this point, assuming time allowed for propagation, you should be able to send and receive email from the postmaster account. However, the certificates iRedMail sets up for you are self-signed, which means you get an ugly warning whenever you try to access your webmail. To fix this, you’ll need to get certs from a trusted certificate authority. I like to use startssl.com because they’re free.

Once obtained, transfer the certificates to the mail server:

scp rockyvalley.ca.tar.gz root@rockyvalley.ca:~

Login,

ssh root@rockyvalley.ca

unpack, decrypt, and lock down:

tar -zxvf rockyvalley.ca.tar.gz
cd rockyvalley.ca
openssl rsa -in ssl.key -out iRedMail.key
chmod 400 iRedMail.key

Since I chose Nginx as my web server and StartSSL as my CA, I need to chain my ssl.crt with StartSSL’s intermediate certificate:

cat ssl.crt sub.class1.server.ca.pem > iRedMail.crt

The certificates are now ready to be put in place. The self-signed certificates are stored in:

  • /etc/ssl/certs/iRedMail.crt
  • /etc/ssl/private/iRedMail.key

The new certificates were already named appropriately during decryption and chaining, so now it is simply a matter of overwriting the existing self-signed certificates:

Copy the certs to the correct directories:

mv iRedMail.crt /etc/ssl/certs/
mv iRedMail.key /etc/ssl/private/

Reboot the machine.

I rebooted in lieu of restarting individual services. Once back online, test sending and receiving. Everything should be good to go.

Brute force Docker WordPress Nginx proxy demo

Dockerizing a dynamic Nginx-WordPress proxy is tricky business. I plan to bundle this all up in bash scripts, but for now I am simply documenting the steps I took to configure the following system in my local environment:

[System Topology]

What follows is not a production-ready path to deployment. Rather, it is a brute-force proof of concept.

MySQL

Start a detached MySQL container.

docker run -d -e MYSQL_ROOT_PASSWORD=secretp@ssword --name consolidated_blog_mysql_image mysql:5.7.8

This one probably won’t cause any trouble, so I don’t need to see any output.

Main WordPress

This is the WordPress instance you encounter when you land on the domain’s root.

docker run --rm --link consolidated_blog_mysql_image:mysql -e WORDPRESS_DB_NAME=main_blog -e WORDPRESS_DB_PASSWORD=secretp@ssword -p 8081:80 --name main_blog_wordpress_image wordpress:4

Secondary WordPress blog

This is the WordPress instance you encounter when you land on the domain's /blog path.

docker run --rm --link consolidated_blog_mysql_image:mysql -e WORDPRESS_DB_NAME=blog2 -e WORDPRESS_DB_PASSWORD=secretp@ssword -p 8083:80 --name blog2_wordpress_image wordpress:4

Notice the port. If I set it to 8082:80 instead of 8083:80, it redirects back to 8081, and I don't know why yet.

Nginx proxy

This is the tricky part. I need to obtain the IPs assigned to my WordPress containers and set them in my Nginx default.conf.

First, get the IP address of the running main_blog_wordpress_image container:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' main_blog_wordpress_image

This will output the IP. Make note of it, because it needs to be copied into Nginx's default.conf file.

172.17.0.181

Get the IP address of the running blog2_wordpress_image container:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' blog2_wordpress_image

There’s a good chance it will be the next IP in line:

172.17.0.182

Now, create a default.conf file:

vim default.conf

Copy and save the following:

server {
  listen 80;
  server_name localhost;

  # Main blog
  location / {
    proxy_pass http://172.17.0.181/;
  }

  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }

  # Secondary blog
  location /blog/ {
    proxy_pass http://172.17.0.182/;
  }
}

Change the proxy_pass IPs accordingly.
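
Since the IPs change every time the containers are recreated, it might be worth scripting the substitution instead of editing by hand. A sketch, assuming default.conf still contains the example IPs above:

MAIN_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' main_blog_wordpress_image)
BLOG_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' blog2_wordpress_image)
sed -i "s|http://172.17.0.181/|http://${MAIN_IP}/|; s|http://172.17.0.182/|http://${BLOG_IP}/|" default.conf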

Execute:

docker run --rm --name nginx-wordpress-proxy -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 nginx

The main blog should now be accessible at http://localhost and the secondary blog at http://localhost/blog. Set up different blogs on each WordPress instance to confirm the system is working as designed.
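
A couple of quick curl checks will confirm both locations answer before bothering to log in to either WordPress instance:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/blog/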

Unit testing bash with assert.sh and stub.sh

I’ve been having a lot of problems getting Docker Compose to link to data-only containers. I want to be able to set up a Dockerized WordPress/MySQL combo and be able to back up volume data and easily move sites between domains. After struggling with Compose, I finally decided to do this manually. I also want to be able to repeat this process with other Dockerized WordPress sites. Writing a bash script is the way to go, but how do I test it?

Context

Supposing I configure a docker-compose.yml file like this:

cd ~
mkdir mysite.com
cd mysite.com
vim docker-compose.yml

with contents that look like this:

wordpress:
  image: wordpress
  restart: always
  links:
    - mysql
  environment:
    - WORDPRESS_DB_PASSWORD=secretp@ssword
    - VIRTUAL_HOST=mysite.com
  expose:
    - 80
mysql:
  image: mysql
  restart: always
  environment:
    - MYSQL_ROOT_PASSWORD=secretp@ssword
    - MYSQL_DATABASE=wordpress

This will create two containers named mysitecom_wordpress_1 and mysitecom_mysql_1. Both of these write data to the host file system that needs to be backed up and restored if I ever want to move from mysite.com to someothersite.com.

Manual backup and recovery

Backup

MySQL

docker run --rm --volumes-from mysitecom_mysql_1 -v $(pwd):/backup mysql tar cvf /backup/mysitecom_mysql.tar /var/lib/mysql

WordPress

docker run --rm --volumes-from mysitecom_wordpress_1 -v $(pwd):/backup wordpress tar cvf /backup/mysitecom_wordpress.tar /var/www/html

Create new WordPress/MySQL data-only containers

MySQL

docker create --name someothersitecom_mysql_data -v /var/lib/mysql mysql

WordPress

docker create --name someothersitecom_wordpress_data -v /var/www/html wordpress

Write the data to the new data-only containers

MySQL

docker run --rm --volumes-from someothersitecom_mysql_data -v $(pwd):/backup mysql tar xvf /backup/mysitecom_mysql.tar

WordPress

docker run --rm --volumes-from someothersitecom_wordpress_data -v $(pwd):/backup wordpress tar xvf /backup/mysitecom_wordpress.tar /var/www/html -C /

Deploy new WordPress/MySQL

MySQL

docker run -d --restart always --volumes-from someothersitecom_mysql_data -e MYSQL_ROOT_PASSWORD=secretp@ssword -e MYSQL_DATABASE=wordpress --name someothersitecom_mysql_image mysql

WordPress

docker run -d --restart always --volumes-from someothersitecom_wordpress_data --link someothersitecom_mysql_image:mysql -e WORDPRESS_DB_PASSWORD=secretp@ssword -e VIRTUAL_HOST=someothersite.com -p 80 --name someothersitecom_wordpress_image wordpress

Yikes! Imagine typing all that in over and over again.

The test utilities

I picked the assert.sh/stub.sh combo because I saw no other easy way to stub out Docker commands. If there is an easy way that I missed, I’d love to hear about it.

assert.sh

I’ll make project and dependency directories and install the assert.sh dependency there:

cd ~
mkdir -p docker-wordpress-mysql-utils/deps
cd docker-wordpress-mysql-utils/deps
wget https://raw.github.com/lehmannro/assert.sh/v1.1/assert.sh

stub.sh

From the project deps/ directory:

wget https://raw.githubusercontent.com/jimeh/stub.sh/master/stub.sh

Now, I simply need to source these in my test scripts.
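
A minimal smoke test confirms the two utilities are wired up before writing anything real. Run from the project root (assuming the deps/ layout above), it stubs docker, calls it once, and asserts on the call count:

#!/bin/bash
# smoke_test.sh -- check that assert.sh and stub.sh are playing together
. deps/assert.sh
. deps/stub.sh

stub docker              # "docker" is now a harmless no-op
docker run hello-world   # captured by the stub, never actually executed
assert "stub_called_times docker" 1
assert_end smoke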

TDD

The backup and redeploy procedure will be broken down into a series of bash scripts, each reflecting the individual steps. A test script will be written for each corresponding script.

Host data backup

Tests

The most important thing to test is that the Docker commands are formatted correctly given the command line options. At this point, I’m not ambitious enough to ensure that Docker itself is executing properly. I’ll just have to trust that it works as intended.

Supposing I have two containers (mysitecom_wordpress_1 and mysitecom_mysql_1), I want to be able to provide the mysite.com domain and have their data tarred to a directory that I specify. The command will look like this:

./backup_wordpress_and_mysql.sh mysite.com ~/backups

With this in mind, I need to be careful about how I structure my backup_wordpress_and_mysql.sh script. First I’ll create a tests directory in my project’s directory:

cd ~/docker-wordpress-mysql-utils
mkdir tests
cd tests
touch backup_wordpress_and_mysql_tests.sh
chmod 775 backup_wordpress_and_mysql_tests.sh
vim backup_wordpress_and_mysql_tests.sh

The tests look like this:

#!/bin/bash
## Source the test utilities
. ../deps/assert.sh
. ../deps/stub.sh
#
# Stub the docker command
#
stub docker
#
# Exit if not provided with exactly two command line arguments
#
assert_raises "bash ../backup_wordpress_and_mysql.sh" 1
assert_raises "bash ../backup_wordpress_and_mysql.sh mysite.com" 1
assert_raises "bash ../backup_wordpress_and_mysql.sh mysite.com backup extra" 1
# Make sure the Docker commands were called with the correct parameters
. ../backup_wordpress_and_mysql.sh mysite.com backup_dir
assert "stub_called_times docker" 2
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from mysitecom_mysql_1 -v backup_dir:/backup mysql tar cvf /backup/mysitecom_mysql.tar /var/lib/mysql"
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from mysitecom_wordpress_1 -v backup_dir:/backup wordpress tar cvf /backup/mysitecom_wordpress.tar /var/www/html"
assert_end backup_wordpress_and_mysql

Script

To get these tests to pass, I first need to create the script:

cd ~/docker-wordpress-mysql-utils
touch backup_wordpress_and_mysql.sh
chmod 775 backup_wordpress_and_mysql.sh
vim backup_wordpress_and_mysql.sh

The backup_wordpress_and_mysql.sh script looks like this:

#!/bin/bash
if [ $# -ne 2 ]
then
  echo "$0 source_domain destination_dir"
  exit 1
fi
# Remove periods from domain
prefix=${1//.}
docker run --rm --volumes-from "$prefix"_mysql_1 -v $2:/backup mysql tar cvf /backup/"$prefix"_mysql.tar /var/lib/mysql
docker run --rm --volumes-from "$prefix"_wordpress_1 -v $2:/backup wordpress tar cvf /backup/"$prefix"_wordpress.tar /var/www/html

To run these tests, I execute the following from the project’s test directory:

cd tests
./backup_wordpress_and_mysql_tests.sh

The remaining steps in the backup/recovery process follow a similar pattern…

Create data-only containers

Tests

As before, the only thing I’m really testing is that the Docker commands are formatted correctly given the command line option.

I follow the Compose-imposed naming conventions, even though I’m not using it to redeploy my site on a new domain. Here, assuming I want to move mysite.com to someothersite.com, I’ll provide the someothersite.com parameter to the new containers. The command will look like this:

./create_wordpress_and_mysql_data_only_containers.sh someothersite.com

Here are the tests:

#!/bin/bash
## Source the test utilities
. ../deps/assert.sh
. ../deps/stub.sh
#
# Stub the docker command
#
stub docker
#
# Exit if not provided with exactly one command line argument
#
assert_raises "bash ../create_wordpress_and_mysql_data_only_containers.sh" 1
assert_raises "bash ../create_wordpress_and_mysql_data_only_containers.sh someothersite.com extra" 1
# Make sure the Docker commands were called with the correct parameters
. ../create_wordpress_and_mysql_data_only_containers.sh someothersite.com
assert "stub_called_times docker" 2
assert_raises "stub_called_with_exactly_times docker 1 create --name someothersitecom_mysql_data -v /var/lib/mysql mysql"
assert_raises "stub_called_with_exactly_times docker 1 create --name someothersitecom_wordpress_data -v /var/www/html wordpress"
assert_end create_wordpress_and_mysql_data_only_containers

Script

#!/bin/bash
if [ $# -ne 1 ]
then
  echo "$0 domain"
  exit 1
fi
prefix=${1//.}
docker create --name "$prefix"_mysql_data -v /var/lib/mysql mysql
docker create --name "$prefix"_wordpress_data -v /var/www/html wordpress

To run these tests, I do as before and execute the following from the project’s test directory:

./create_wordpress_and_mysql_data_only_containers_tests.sh

Write tar files to data-only containers

Tests

This time I need to provide the mysite.com domain, the new someothersite.com domain, and the directory in which the tar files are contained. The command will look like this:

./write_wordpress_and_mysql_data_only_containers.sh mysite.com someothersite.com backup_source_dir

The tests:

#!/bin/bash
## Source the test utilities
. ../deps/assert.sh
. ../deps/stub.sh
#
# Stub the docker command
#
stub docker
#
# Exit if not provided with exactly three command line arguments
#
assert_raises "bash ../write_wordpress_and_mysql_data_only_containers.sh" 1
assert_raises "bash ../write_wordpress_and_mysql_data_only_containers.sh mysite.com" 1
assert_raises "bash ../write_wordpress_and_mysql_data_only_containers.sh mysite.com someothersite.com" 1
assert_raises "bash ../write_wordpress_and_mysql_data_only_containers.sh mysite.com someothersite.com backup_source_dir extra" 1
# Make sure the Docker commands were called with the correct parameters
. ../write_wordpress_and_mysql_data_only_containers.sh mysite.com someothersite.com backup_source_dir
assert "stub_called_times docker" 2
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from someothersitecom_mysql_data -v backup_source_dir:/backup mysql tar xvf /backup/mysitecom_mysql.tar"
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from someothersitecom_wordpress_data -v backup_source_dir:/backup wordpress tar xvf /backup/mysitecom_wordpress.tar /var/www/html -C /"
assert_end write_wordpress_and_mysql_data_only_containers

Script

#!/bin/bash
if [ $# -ne 3 ]
then
  echo "$0 old_domain new_domain backup_source_dir"
  exit 1
fi
old_prefix=${1//.}
new_prefix=${2//.}
docker run --rm --volumes-from "$new_prefix"_mysql_data -v $3:/backup mysql tar xvf /backup/"$old_prefix"_mysql.tar
docker run --rm --volumes-from "$new_prefix"_wordpress_data -v $3:/backup wordpress tar xvf /backup/"$old_prefix"_wordpress.tar /var/www/html -C /

To run these tests, execute the following from the project’s test directory:

./write_wordpress_and_mysql_data_only_containers_tests.sh

Deploy WordPress and MySQL containers

Tests

This time I need to provide the destination site’s domain. The command will look like this:

./deploy_new_wordpress_and_mysql_containers.sh someothersite.com

The tests:

#!/bin/bash
## Source the test utilities
. ../deps/assert.sh
. ../deps/stub.sh
#
# Stub the docker command
#
stub docker
#
# Exit if not provided with exactly one command line argument
#
assert_raises "bash ../deploy_new_wordpress_and_mysql_containers.sh" 1
assert_raises "bash ../deploy_new_wordpress_and_mysql_containers.sh someothersite.com extra" 1
# Make sure the Docker commands were called with the correct parameters
. ../deploy_new_wordpress_and_mysql_containers.sh someothersite.com
assert "stub_called_times docker" 2
assert_raises "stub_called_with_exactly_times docker 1 run -d --restart always --volumes-from someothersitecom_mysql_data -e MYSQL_ROOT_PASSWORD=secretp@ssword -e MYSQL_DATABASE=wordpress --name someothersitecom_mysql_image mysql"
assert_raises "stub_called_with_exactly_times docker 1 run -d --restart always --volumes-from someothersitecom_wordpress_data --link someothersitecom_mysql_image:mysql -e WORDPRESS_DB_PASSWORD=secretp@ssword -e VIRTUAL_HOST=someothersite.com -p 80 --name someothersitecom_wordpress_image wordpress"
assert_end deploy_new_wordpress_and_mysql_containers

Script

#!/bin/bash
if [ $# -ne 1 ]
then
  echo "$0 domain"
  exit 1
fi
# Remove periods from domain
prefix=${1//.}
docker run -d --restart always --volumes-from "$prefix"_mysql_data -e MYSQL_ROOT_PASSWORD=secretp@ssword -e MYSQL_DATABASE=wordpress --name "$prefix"_mysql_image mysql
docker run -d --restart always --volumes-from "$prefix"_wordpress_data --link "$prefix"_mysql_image:mysql -e WORDPRESS_DB_PASSWORD=secretp@ssword -e VIRTUAL_HOST=$1 -p 80 --name "$prefix"_wordpress_image wordpress

To run these tests, execute the following from the project’s test directory:

./deploy_new_wordpress_and_mysql_containers_tests.sh

Put it all together…

There are four steps described here to backing up and restoring a WordPress site and its MySQL data. You may have occasion to execute each script one at a time, but that would generally be too much typing. Combined, these scripts take a total of three parameters:

  • The old domain
  • The new domain
  • The backup directory

I’m going to make one script that will execute each individual script. The command looks like this:

./move_compose_configured_wordpress_and_mysql.sh mysite.com someothersite.com backups

Tests

#!/bin/bash
# Source the test utilities
. ../deps/assert.sh
. ../deps/stub.sh
#
# Stub the docker command
#
stub docker
#
# Exit if not provided with exactly three command line arguments
#
assert_raises "bash ../move_compose_configured_wordpress_and_mysql.sh" 1
assert_raises "bash ../move_compose_configured_wordpress_and_mysql.sh mysite.com" 1
assert_raises "bash ../move_compose_configured_wordpress_and_mysql.sh mysite.com someothersite.com" 1
assert_raises "bash ../move_compose_configured_wordpress_and_mysql.sh mysite.com someothersite.com backup_source_dir extra" 1
# Make sure the Docker commands were called with the correct parameters
. ../move_compose_configured_wordpress_and_mysql.sh mysite.com someothersite.com backup_dir
assert "stub_called_times docker" 8
## backup_wordpress_and_mysql_tests
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from mysitecom_mysql_1 -v backup_dir:/backup mysql tar cvf /backup/mysitecom_mysql.tar /var/lib/mysql"
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from mysitecom_wordpress_1 -v backup_dir:/backup wordpress tar cvf /backup/mysitecom_wordpress.tar /var/www/html"
# create_wordpress_and_mysql_data_only_containers_tests
assert_raises "stub_called_with_exactly_times docker 1 create --name someothersitecom_mysql_data -v /var/lib/mysql mysql"
assert_raises "stub_called_with_exactly_times docker 1 create --name someothersitecom_wordpress_data -v /var/www/html wordpress"
# write_wordpress_and_mysql_data_only_containers_tests
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from someothersitecom_mysql_data -v backup_dir:/backup mysql tar xvf /backup/mysitecom_mysql.tar"
assert_raises "stub_called_with_exactly_times docker 1 run --rm --volumes-from someothersitecom_wordpress_data -v backup_dir:/backup wordpress tar xvf /backup/mysitecom_wordpress.tar /var/www/html -C /"
# deploy_new_wordpress_and_mysql_containers_tests
assert_raises "stub_called_with_exactly_times docker 1 run -d --restart always --volumes-from someothersitecom_mysql_data -e MYSQL_ROOT_PASSWORD=secretp@ssword -e MYSQL_DATABASE=wordpress --name someothersitecom_mysql_image mysql"
assert_raises "stub_called_with_exactly_times docker 1 run -d --restart always --volumes-from someothersitecom_wordpress_data --link someothersitecom_mysql_image:mysql -e WORDPRESS_DB_PASSWORD=secretp@ssword -e VIRTUAL_HOST=someothersite.com -p 80 --name someothersitecom_wordpress_image wordpress"
assert_end move_compose_configured_wordpress_and_mysql

Script

#!/bin/bash
if [ $# -ne 3 ]
then
  echo "$0 old_domain new_domain backup_dir"
  exit 1
fi
# Locate the sibling scripts relative to this file
dir=$(dirname "${BASH_SOURCE[0]}")
# Source each step so they all run in this shell (which also lets the test stubs apply)
. "$dir"/backup_wordpress_and_mysql.sh "$1" "$3"
. "$dir"/create_wordpress_and_mysql_data_only_containers.sh "$2"
. "$dir"/write_wordpress_and_mysql_data_only_containers.sh "$1" "$2" "$3"
. "$dir"/deploy_new_wordpress_and_mysql_containers.sh "$2"

To run these tests, execute the following from the project’s test directory:

./move_compose_configured_wordpress_and_mysql_tests.sh

Testing is still pretty manual. Setting yourself up to do something like ./test.sh [test_file] takes a bit of reorganization…
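
As a starting point, a test.sh runner along these lines (a sketch, assuming the project layout above) would do:

#!/bin/bash
# test.sh [test_file] -- run one test file, or every *_tests.sh under tests/
cd tests
if [ $# -eq 1 ]; then
  ./"$1"
else
  for t in ./*_tests.sh; do
    "$t"
  done
fi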

Transfer a database between Docker MySQL images

I was feeling pretty pleased with myself having just figured out how to set up a private Docker registry, when I discovered an interesting thing about Docker’s official MySQL image: commits don’t persist database data! This is my fault for not understanding the documentation and how to work with data volumes.

In any case, my wife set up a Dockerized website in WordPress. We wanted to transfer it to a new domain. I set up a private registry to which to commit images of her data. I deployed everything only to discover that the data, both database and WordPress, are not stored in their respective images. This was no good for my purposes, so I set out to persist the database and WordPress data by creating and mounting two data volume containers.

Here's how I transferred everything between my Docker MySQL/WordPress images and their respective data volume containers…

Configure Docker Compose

Supposing we are transferring the website from originaldomain.com to somenewdomain.com, this is how to configure Compose:

cd ~
mkdir somenewdomain.com
cd somenewdomain.com
vim docker-compose.yml

Copy and save the following:

wordpress:
  image: wordpress
  links:
    - mysql
  environment:
    - WORDPRESS_DB_PASSWORD=secretp@ssword
    - VIRTUAL_HOST=somenewdomain.com
  expose:
    - 80
  volumes_from:
    - somenewdomaincom_wordpress_data
mysql:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=secretp@ssword
    - MYSQL_DATABASE=wordpress
  volumes_from:
    - somenewdomaincom_database_data

The volumes_from settings point to containers which point to volumes on the host system…

Create the database volume container

docker create -v /data --name somenewdomaincom_database_data mysql /bin/true

Create the WordPress volume container

docker create -v /data --name somenewdomaincom_wordpress_data wordpress /bin/true

Fire it up!

docker-compose up -d

The two data volume containers don’t contain any data yet. That comes next.

The MySQL database

An aside: some investigating

It was important that I be able to verify for myself that the data wasn’t being persisted, so I needed to examine the databases in each image…

First, I needed to find out what IP my MySQL container was listening on.

docker inspect originaldomaincom_mysql | grep IPAddr

Then I used my local mysql installation to connect to the database on the container.

mysql -u root -h 172.17.1.163 -p

From the mysql> prompt, I looked at the databases:

show databases;

The database was simply called wordpress, so I took a look in there:

use wordpress;
show tables;

I found a bunch of woocommerce tables, which I knew to be my wife’s WordPress data. I repeated the process for the existing (but broken) somenewdomain_mysql container and discovered that the WordPress database didn’t even exist. My investigation confirmed what I had suspected: the data wasn’t being committed to the image.

Export the database

I needed to get the data out of the MySQL container so that I could write it to a new container pointing to a data volume container. What? To do this, I first needed to know the IP that the original (originaldomain.com) database is listening on:

docker inspect originaldomaincom_mysql | grep IPAddr

Then, setting that IP with the -h option, this exports all of the tables:

mysqldump -u root -h 172.17.1.163 -p wordpress > originaldomaincom_mysql.sql

Import the database

Find the IP of the new MySQL image:

docker inspect somenewdomaincom_mysql | grep IPAddr

Create a wordpress database (if necessary):

mysql -u root -h 172.17.1.174 -p

From mysql> prompt, check to see if the wordpress database already exists:

show databases;

Create it, if it doesn’t:

create database wordpress;

Log out of the MySQL command line and execute from the host machine:

mysql -u root -h 172.17.1.174 -p wordpress < originaldomaincom_mysql.sql

The data has now been imported and is stored in a volume guarded by the somenewdomaincom_database_data container.

WordPress

Copy the WordPress data

All the WordPress templates and customizations are currently stored in a volume directly accessed by the originaldomaincom_wordpress container. I needed to find that directory on my host machine:

docker inspect originaldomaincom_wordpress

I looked under Mounts for Source. The paths looked something like this:

/var/lib/docker/volumes/2623fb3bc681407027c1ebdaca118d04b6e851448459d4e577b86105d694af6c/_data

I copied that data to my current working directory for safe keeping:

sudo cp -R /var/lib/docker/volumes/2623fb3bc681407027c1ebdaca118d04b6e851448459d4e577b86105d694af6c/_data .

Then I needed the Source path for the somenewdomaincom_wordpress_data container:

docker inspect somenewdomaincom_wordpress_data

It looked like this:

/var/lib/docker/volumes/c93e95c490dc2cc5e9dc226d16412f05ccd8f335d437237045632a8f46fda45c/_data

I was careful to note the configuration whose Destination was /var/www/html. There will be two mount points. The other destination (the wrong one) looks like this: /data.
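
Instead of eyeballing the inspect output, a Go template can pull out the Source for a given Destination directly. A sketch (assuming a Docker version that exposes the Mounts field):

docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/var/www/html" }}{{ .Source }}{{ end }}{{ end }}' somenewdomaincom_wordpress_data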

Knowing the destination path, I removed the entire _data directory, because I don’t want any weird stuff hanging around:

sudo rm -rf /var/lib/docker/volumes/c93e95c490dc2cc5e9dc226d16412f05ccd8f335d437237045632a8f46fda45c/_data

I copied the contents of the _data directory to the somenewdomaincom_wordpress_data volume container's source:

sudo cp -R _data /var/lib/docker/volumes/c93e95c490dc2cc5e9dc226d16412f05ccd8f335d437237045632a8f46fda45c/

With all the data transferred, I restarted Docker Compose:

docker-compose restart

Everything looked good, except for one thing. All the links on the homepage were still pointing to the old originaldomain.com domain.

To fix this, I simply edited the wp-config.php file contained in the _data/ directory I had just copied:

sudo vim /var/lib/docker/volumes/c93e95c490dc2cc5e9dc226d16412f05ccd8f335d437237045632a8f46fda45c/_data/wp-config.php

Then I appended and saved these two lines:

define('WP_HOME','https://somenewdomain.com');
define('WP_SITEURL','https://somenewdomain.com');

Restart again,

docker-compose restart

The site worked as it did before on its new domain.

Conclusion

I have a better understanding of how to work with Docker volumes, which renders my previous application of the Docker technology mostly inadequate.

As well, I suspect the whole import/export MySQL stuff may be unnecessary. It may be sufficient to simply copy the directory as with the WordPress data, but that has not yet been confirmed.
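
If I ever test that hunch, it would look something like this entirely unverified sketch: stop the containers, copy the raw volume from the old MySQL container into the new one's volume, and bring everything back up:

# UNTESTED: copy the MySQL volume directly instead of dump/import
SRC=$(docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/var/lib/mysql" }}{{ .Source }}{{ end }}{{ end }}' originaldomaincom_mysql)
DST=$(docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/var/lib/mysql" }}{{ .Source }}{{ end }}{{ end }}' somenewdomaincom_mysql)
docker-compose stop
sudo rsync -a --delete "$SRC"/ "$DST"/
docker-compose up -d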

Set up a private Docker registry

Having mastered deploying WordPress sites with Docker and Compose, I set up a blog for my lovely wife on one of our many demonstration/prototyping domains. Once her site was configured to her liking, I purchased a dedicated domain with the intent of moving her site over. This is simple enough, but I wanted a more comprehensive solution. One that would allow me to backup the changes she makes to her site (and database) periodically. As well, she wanted to establish a basic WordPress image from which she could launch new projects without having to go through the whole set up and configuration rigamarole over and over again.

With all that in mind, the following outlines how I set up our private Docker registry. The procedure was adapted and condensed from here.

Get a certificate for your domain

Always with the certificates!

I like to use startssl.com because they’re free. startssl.com provides an intermediate certificate, so remember to chain them. Alternatively, you can sign your own certificates.
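
For the self-signed route, something like this openssl invocation produces the pair (Docker clients will complain unless the certificate is trusted, of course):

mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/myregistrydomain.com.key \
  -x509 -days 365 -out certs/myregistrydomain.com.crt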

However you decide to obtain your certificates, install them somewhere on your registry server. E.g.,

cd ~
mkdir certs

You should have two certificates named something like this:

  • ~/certs/myregistrydomain.com.crt
  • ~/certs/myregistrydomain.com.key

Restrict access with password

Make an auth/ directory:

cd ~
mkdir auth

Then set a user named someguy whose password is someAlphaNum3r1cPassword (or whatever):

docker run --entrypoint htpasswd registry:2 -Bbn someguy someAlphaNum3r1cPassword > auth/htpasswd

Note: at the time of writing, the password must be alphanumeric. Special symbols do not work. Assuming all else is configured correctly, using non-alphanumerics will result in this error:

basic auth attempt to https://myregistrydomain.com:5000/v2/ realm "Registry Realm" failed with status: 401 Unauthorized

I’m not sure if this is by oversight or by design. Either way, all this stuff is pretty wild and woolly and will likely change as the Docker product continues to evolve.

Set up local file system storage

cd ~
docker run -d -p 5000:5000 --restart=always --name registry -v `pwd`/data:/var/lib/registry registry:2

Configure Compose

Create a directory in which to write your docker-compose.yml file:

cd ~
mkdir registry
cd registry
vim docker-compose.yml

Copy and save the following:

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_HTTP_SECRET: SomePseudoRandomString
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/myregistrydomain.com.crt
    REGISTRY_HTTP_TLS_KEY: /certs/myregistrydomain.com.key
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
    REGISTRY_AUTH: htpasswd
    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
  volumes:
    - /path/to/data:/var/lib/registry
    - /path/to/certs:/certs
    - /path/to/auth:/auth

Change the /path/to/ directory to point to your certs and auth directories. This will be your account’s home directory, if following the steps above to the letter (cf., cd ~).

Also, execute the following to generate a pseudo-random string for the REGISTRY_HTTP_SECRET option:

cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32

Start up the registry

From the ~/registry directory:

docker-compose up -d

Commit the images

I have two Docker images that need committing. These are hosted on a server different than my Docker registry server.

Obtain their container IDs:

docker ps

Supposing output similar to this:

CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS      NAMES
c8cb170d95bb   wordpress   "/entrypoint.sh apach"   15 minutes ago   Up 10 minutes   80/tcp     examplecom_wordpress
69b59e19aadc   mysql:5.7   "/entrypoint.sh mysql"   53 minutes ago   Up 10 minutes   3306/tcp   examplecom_mysql

First, I commit the WordPress image:

docker commit -p c8cb170d95bb somenewdomaincom_wordpress

Now I have a snapshot of the WordPress container saved as somenewdomaincom_wordpress.

Now commit the associated MySQL container:

docker commit -p 69b59e19aadc somenewdomaincom_mysql

Tag the images

Having committed the images, I now have two snapshots that need tagging for the registry. First, I tag the WordPress image:

docker tag somenewdomaincom_wordpress myregistrydomain.com:5000/somenewdomaincom_wordpress

Now I tag the associated MySQL image:

docker tag somenewdomaincom_mysql myregistrydomain.com:5000/somenewdomaincom_mysql

Push the images

Authentication has been set up, so log in first:

docker login myregistrydomain.com:5000

Then push:

docker push myregistrydomain.com:5000/somenewdomaincom_wordpress
docker push myregistrydomain.com:5000/somenewdomaincom_mysql

Redeploy (with Compose)

The whole purpose of this exercise was to move my wife’s site from one domain to another. We use an Nginx proxy to let us host a bunch of different WordPress sites on a single machine. Supposing that configuration with domain-appropriate security certificates pre-installed, I can use Compose to pull images from my new Docker registry.

First, create a directory on the host machine:

cd ~
mkdir somenewdomain.com
cd somenewdomain.com
vim docker-compose.yml

Copy and save the following:

wordpress:
  image: myregistrydomain.com:5000/somenewdomaincom_wordpress
  links:
    - mysql
  environment:
    - WORDPRESS_DB_PASSWORD=secretp@ssword
    - VIRTUAL_HOST=somenewdomain.com
  expose:
    - 80
mysql:
  image: myregistrydomain.com:5000/somenewdomaincom_mysql
  environment:
    - MYSQL_ROOT_PASSWORD=secretp@ssword
    - MYSQL_DATABASE=wordpress

Fire ‘er up!

docker-compose up -d