

An nginx-proxy/lets-encrypt Docker Composition

I was just doing a major redeployment when I realized I've never documented my approach to nginx-proxy and Let's Encrypt with Version 3 of docker-compose.

I like to deploy a bunch of web applications and static web sites behind a single proxy. What follows is meant to be copy-paste workable on an Ubuntu 16.04 server.

Organization

Set up your server’s directory structure:

mkdir -p ~/sites/nginx-proxy && cd ~/sites/nginx-proxy

Docker Compose

Paste the following into docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # Can anyone explain this sorcery?
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    logging:
      options:
        max-size: "4m"
        max-file: "10"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
    depends_on:
      - nginx-proxy
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy

volumes:
  vhost:

# Do not forget to 'docker network create nginx-proxy' before launch
# and to add '--network nginx-proxy' to proxied containers.
networks:
  default:
    external:
      name: nginx-proxy

Configuring the nginx in nginx-proxy

Sometimes you need to override the default nginx configuration contained in the nginx-proxy Docker image. To do this, you must build a new image using nginx-proxy as its base.

For example, an app might need to accept large file uploads. You would paste this into your Dockerfile:

# Cf., https://github.com/schmunk42/nginx-proxy#proxy-wide
FROM jwilder/nginx-proxy
RUN { \
      echo 'server_tokens off;'; \
      echo 'client_max_body_size 5m;'; \
    } > /etc/nginx/conf.d/my_proxy.conf

This sets the required configurations within the nginx-proxy container.
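
Once the proxy is rebuilt and running (the compose change is described next), you can confirm the override took effect by reading the generated file back out of the container. Assuming the service is named nginx-proxy, as it is throughout this post:

docker-compose exec nginx-proxy cat /etc/nginx/conf.d/my_proxy.conf

It should echo back the server_tokens and client_max_body_size lines set above.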

In this case you also need to modify the docker-compose.yml file to build the local Dockerfile. The first few lines will now look like this:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:

    # Change this:
    #image: jwilder/nginx-proxy

    # To this:
    build: .

    # as above...

Deploying sites and apps

With the proxy configured and deployed (docker-compose up -d), you can wire up all your sites and apps.
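
If you haven't already, create the shared network and bring the proxy up first. A typical sequence, matching the comment in the compose file above, looks like this:

# Run from ~/sites/nginx-proxy
docker network create nginx-proxy
docker-compose up -d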

Static Site

A static site deployed with nginx:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=you@example.com
    expose:
      - 80
    volumes:
      - ./_site:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"

networks:
  default:
    external:
      name: nginx-proxy

Deploy App

Requirements are going to vary app-by-app, but for a simple node application, use the following as a starting point:

# docker-compose.yml
version: '3'
services:
  node:
    build: .
    restart: unless-stopped
    ports:
      - 3000
    environment:
      - NODE_ENV=production
      - VIRTUAL_HOST=app.example.com
      - LETSENCRYPT_HOST=app.example.com
      - LETSENCRYPT_EMAIL=you@example.com
    volumes:
      - .:/home/node
      - /home/node/node_modules
    logging:
      options:
        max-size: "4m"
        max-file: "10"

networks:
  default:
    external:
      name: nginx-proxy

Dockerizing Tor to serve up multiple hidden web services

This post documents an improvement made to the method demonstrated in A Dockerized Torified Express Application Served with Nginx. The previous configuration only deploys one hidden Tor service. I want to be able to deploy a bunch of hidden services behind a general Tor proxy.

Here I use Docker and Compose to build a Tor container behind which multiple Express applications are served.

Express Apps

Let's suppose there are two express apps. Each will have its own Dockerfile and docker-compose.yml configuration.

Dockerfile

Assuming that each app is set up with all dependencies installed, a simple express Dockerfile might look like this:

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
EXPOSE 3000

# App setup
USER node
ENV HOME=/home/node

WORKDIR $HOME

ENV PATH $HOME/app/node_modules/.bin:$PATH

ADD package.json $HOME
RUN NODE_ENV=production npm install

CMD ["node", "./app.js"]

This defines the container in which the express app runs. Here, port 3000 will be open to apps on the network bridge (see below). Each app will need its own port. For example, the second app may EXPOSE 3001.

docker-compose.yml

docker-compose will build the express app image and serve it up on localhost. It will be connected to the same Docker network as the Tor container. A docker-compose.yml for a simple express app might look like this:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules

networks:
  default:
    external:
      name: torproxy_default

Deploy Apps

Once the apps have been Dockerized, each may be brought online with this:

docker-compose up -d

Tor

Tor will use the same Dockerfile/docker-compose.yml approach to deploying the service. This will provide the public (hidden) access point.

The Tor proxy container should be set up in its own directory apart from the apps. E.g.,

mkdir tor-proxy && cd tor-proxy

Docker

Paste the following into Dockerfile:

FROM debian
ENV NPM_CONFIG_LOGLEVEL warn

ENV DEBIAN_FRONTEND noninteractive

EXPOSE 9050

# `apt-utils` squelches a configuration warning
# `gnupg2` is required for adding the `apt` key
RUN apt-get update
RUN apt-get -y install apt-utils gnupg2

#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-25
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring

# The debian image does not create a default user
RUN useradd -m user
USER user

# Run the Tor service
CMD /usr/bin/tor -f /etc/tor/torrc

docker-compose.yml

This builds and deploys the Tor container. Paste into docker-compose.yml:

version: '3'
services:
  tor:
    build: .
    restart: always
    volumes:
      - ./config/torrc:/etc/tor/torrc

Configuration

As declared above (in docker-compose.yml), the container mounts the host file ./config/torrc at /etc/tor/torrc and sits on the torproxy_default network (the default network Compose creates for this project). It's in the torrc file that you set the ports for your hidden services. The network allows the external hidden apps to connect to the tor-proxy container. To find the hostnames for each hidden service, simply execute:

docker ps

You should see something like this:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
94816844b40b        torapp2_node        "npm start"         11 minutes ago      Up 11 minutes       3001/tcp            torapp2_node_1
8c11fb2c9167        torapp1_node        "npm start"         12 minutes ago      Up 12 minutes       3000/tcp            torapp1_node_1

The items listed under the NAMES column serve as your hostnames. So, in this two-app configuration, ./config/torrc looks like this:

HiddenServiceDir /home/user/.tor/hidden_app_1/
HiddenServicePort 80 torapp1_node_1:3000

HiddenServiceDir /home/user/.tor/hidden_app_2/
HiddenServicePort 80 torapp2_node_1:3001

Note the different ports on each of the hidden services. These correspond to the ports exposed in each app's Dockerfile.

Deploy Tor Container

Bring Tor online with this:

docker-compose up -d

If the container reports any sort of directory permissions issues, refer to the notes pertaining to the RUN usermod -u 1001 user command in the tor-proxy Dockerfile.
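
A quick way to check whether Tor was able to create and write its hidden service directories is to inspect them inside the running container (paths as configured in torrc above):

docker-compose exec tor ls -la /home/user/.tor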

Assuming everything is built and deployed correctly, you can find your .onion hostnames in the .tor directory in the container:

docker-compose exec tor cat /home/user/.tor/hidden_app_1/hostname
docker-compose exec tor cat /home/user/.tor/hidden_app_2/hostname

Assuming all goes well, welcome to the darkweb.


A Dockerized, Torified, Express Application

Dark Web chatter is picking up. I’m interested in providing cool web services anonymously. This is my first attempt at using Docker Compose to stay ahead of this trend.

Assumption: all the software goodies are set up and ready to go on an Ubuntu 16.04 server (node, docker, docker-compose, et al).

Set up an Express App

The Express Application Generator strikes me as a little bloated, but I use it anyway because I’m super lazy.

sudo npm install express-generator -g

Once installed, set up a vanilla express project:

express --view=ejs tor-app
cd tor-app && npm install

The express-generator will tell you to run the app like this:

DEBUG=tor-app:* npm start

This, of course, is only useful for development. From here, we’ll Dockerize for deployment and Torify for anonymity.

Tor pre-configuration

In anticipation of setting up the actual Torified app container, create a new file called config/torrc. This file will be used by Tor inside the Docker container to serve up our app. Paste the following into config/torrc:

HiddenServiceDir /home/node/.tor/hidden_service/
HiddenServicePort 80 127.0.0.1:3000

Docker

Copy and paste the following into a new file called Dockerfile:

FROM node:stretch
ENV NPM_CONFIG_LOGLEVEL warn

ENV DEBIAN_FRONTEND noninteractive

EXPOSE 9050

# `apt-utils` squelches a configuration warning
RUN apt-get update
RUN apt-get -y install apt-utils

#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-20
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring

#
# Tor raises some tricky directory permissions issues. Once started, Tor will
# write the hostname and private key into a directory on the host system. If
# the `node` user in the container does not have the same UID as the user on
# the host system, Tor will not be able to create and write to these
# directories. Execute `id -u` on the host to determine your UID.
#
# RUN usermod -u 1001 node

# App setup
USER node
ENV HOME=/home/node

WORKDIR $HOME

ENV PATH $HOME/app/node_modules/.bin:$PATH

ADD package.json $HOME
RUN NODE_ENV=production npm install

# Run the Tor service alongside the app itself
CMD /usr/bin/tor -f /etc/tor/torrc & npm start

Container/Host Permissions

Take special note of the comment posted above the RUN usermod -u 1001 node instruction in Dockerfile. If you get any errors on the container build/execute step described below, you'll need to make sure your host user's UID is the same as your container user's UID (i.e., the node user).

Usually the user in the container has a UID of 1000. To determine the host user’s UID, execute id -u. If it’s not 1000, uncomment the usermod instruction in Dockerfile and make sure the numbers match.
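
For example, a quick check on the host might go like this (the UID shown is hypothetical):

# On the host
id -u
# => 1001, say; then uncomment and adjust the Dockerfile instruction to match:
#    RUN usermod -u 1001 node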

Docker Compose

docker-compose does all of the heavy lifting for building the Dockerfile and start-up/shut-down operations. Paste the following into a file called docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
      - ./config/torrc:/etc/tor/torrc

Bring the whole thing online by running

docker-compose up -d

Every now and then I get an error trying to obtain the GPG key:

gpg: keyserver receive failed: Cannot assign requested address

This usually solves itself on subsequent calls to docker-compose up.
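
If it doesn't, explicitly re-running the build before bringing the composition up is usually enough:

docker-compose build
docker-compose up -d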

Assuming the build and execution was successful, you can determine your .onion address like this:

docker-compose exec node cat /home/node/.tor/hidden_service/hostname

You should now be able to access your app from your favourite Tor web browser.

If you’re interested in poking around inside the container, access the bash prompt like this:

docker-compose exec node bash

Notes

This is the first step in configuring and deploying a hidden service on the Tor network. Since working out the initial details, I’ve already thought of potential improvements to this approach. As it stands, only one hidden service can be deployed. It would be far better to create a Tor container able to proxy multiple apps. I will also be looking into setting up .onion vanity URLs and HTTPS.


cors-anywhere deployment with Docker Compose

My ad-tracker Express app serves up queued advertisements with a client-side call to Javascript’s fetch function. This, of course, raises the issue of Cross-Origin Resource Sharing.

I use the cors-anywhere node module to allow sites with ad-tracker advertisements to access the server. Naturally, docker-compose is my preferred deployment tool.

Set up the project

Create a project directory and initialize the application with npm:

mkdir -p sites/cors-anywhere-server && cd sites/cors-anywhere-server
npm init

Follow the npm init prompts.

Once initialized, add the cors-anywhere module to the project:

npm install cors-anywhere --save

Copy and paste the following into index.js (or whatever entry-point you specified in the initialization step):

// Listen on a specific host via the HOST environment variable
var host = process.env.HOST || '0.0.0.0';
// Listen on a specific port via the PORT environment variable
var port = process.env.PORT || 8080;

var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
  originWhitelist: [], // Allow all origins
  requireHeader: ['origin', 'x-requested-with'],
  removeHeaders: ['cookie', 'cookie2']
}).listen(port, host, function() {
  console.log('Running CORS Anywhere on ' + host + ':' + port);
});

This code is taken verbatim from the cors-anywhere documentation.

To execute the application:

node index.js

If it executes successfully, you should see:

Running CORS Anywhere on 0.0.0.0:8080

Exit the app.

Docker

To create the Dockerized application image, paste the following into Dockerfile:

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
EXPOSE 8080

# App setup
USER node
ENV HOME=/home/node

WORKDIR $HOME

ENV PATH $HOME/app/node_modules/.bin:$PATH

ADD package.json $HOME
RUN NODE_ENV=production npm install

CMD ["node", "./index.js"]

This will build cors-anywhere into a Docker node container.

Docker Compose

Paste the following into docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    ports:
      - "8080"
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules

Build the image and deploy in one step:

docker-compose up

The last line of the console output should read:

node_1  | Running CORS Anywhere on 0.0.0.0:8080

At this point, any request proxied through the cors-anywhere-server will be allowed access to cross-domain resources. Your client-side fetch calls can now leverage this functionality by prefixing the destination URL with the cors-anywhere-server URL. It may look something like this:

(function() {
  var CORS_SERVER = 'https://cors-server.example.com:8080';
  var AD_SERVER = 'https://ads.example.com';

  fetch(CORS_SERVER + '/' + AD_SERVER).then(function(response) {
    return response.json();    
  }).then(function(json) {     
    console.log('CORS request successful!');
    console.log(json); 
  });
})();

Done!


PostgreSQL Backup and Restore Between Docker-composed Containers

The importance of backup and recovery really only becomes clear in the face of catastrophic data loss. I’ve got a slick little Padrino app that’s starting to generate traffic (and ad revenue). As such, it would be a real shame if my data got lost and I had to start from scratch.

docker-compose

This is what I’m working with:

# docker-compose.yml
nginx:
  restart: always
  build: ./
  volumes:
    # Page content
    - ./:/home/app/webapp
  links:
    - postgres
  environment:
    - PASSENGER_APP_ENV=production
    - RACK_ENV=production
    - VIRTUAL_HOST=example.com
    - LETSENCRYPT_HOST=example.com
    - LETSENCRYPT_EMAIL=daniel@example.com
postgres:
  restart: always
  image: postgres
  environment:
    - POSTGRES_USER=root
    - POSTGRES_PASSWORD=secretpassword
  volumes_from:
    - myapp_data

It's the old Compose Version 1 syntax, but what follows should still apply. As with all such compositions, I write database data to a data-only container (a minimal sketch of one follows below). Though the data persists apart from the Dockerized Postgres container, the Postgres container still needs to be running (e.g., docker-compose up -d).
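
The myapp_data container referenced by volumes_from was created manually, outside the composition. A minimal sketch of that kind of data-only container (the name and data path are assumptions) looks like this:

docker create -v /var/lib/postgresql/data --name myapp_data postgres /bin/true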

Dump the data

Assuming the containers are up and running, the appropriate command looks like this:

docker-compose exec -u <your_postgres_user> <postgres_service_name> pg_dump -Fc <database_name_here> > db.dump

Given the composition above, the command I actually execute is this:

docker-compose exec --user root postgres pg_dump -Fc myapp_production > db.dump

At this point, the db.dump file can be transferred to a remote server through whatever means are appropriate (I set this all up in Capistrano to make it super easy).
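
Absent Capistrano, plain scp does the job; for example (the host is a placeholder):

scp db.dump user@backup.example.com:~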

Restore the data

Another assumption: a new database is up and running on the remote backup machine (ideally using the same docker-compose.yml file above).

The restore command looks like this:

docker-compose exec -i -u <your_postgres_user> <postgres_service_name> pg_restore -C -d postgres < db.dump

The command I execute is this:

docker-compose exec -i -u root postgres pg_restore -C -d postgres < db.dump

Done!


Nginx Proxy, Let's Encrypt Companion, and Docker Compose Version 3

I recently discovered that I don’t need to manually create data-only containers with docker-compose anymore. A welcome feature, but one that comes with all the usual migration overhead. I rely heavily on nginx-proxy and letsencrypt-nginx-proxy-companion. Getting it all to work in the style of docker-compose version 3 took a bit of doing.

My previous tried and true approach is getting pretty stale. It is time to up my Docker game…

My Site

nginx-proxy proxies multiple sites, but for demonstration purposes, I'm only serving up one with nginx. I like to put all my individual Docker compositions in their own directories:

mkdir mysite && cd mysite

Optional

The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Hello, world!</title>
</head>
<body>
Hello, world!
</body>
</html>

docker-compose

It’s awesome that I can create data-only containers in my docker-compose.yml, but now I’ve got to manually create a network bridge:

docker network create nginx-proxy

Proxied containers also need to know about this network in their own docker-compose.yml files…

Copy and paste the code below:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: always
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=site.example.com
      - LETSENCRYPT_EMAIL=email@example.com
    volumes:
      - ./:/usr/share/nginx/html

networks:
  default:
    external:
      name: nginx-proxy

This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).

Start docker-compose:

docker-compose up -d

The site won’t be accessible yet. That comes next.

nginx-proxy

As before, put the nginx-proxy Docker composition in its own directory:

cd ..
mkdir nginx-proxy && cd nginx-proxy

Create a directory in which to store the Let’s Encrypt certificates:

mkdir certs

Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html

volumes:
  vhost:

networks:
  default:
    external:
      name: nginx-proxy

This allows nginx-proxy to combine forces with letsencrypt-nginx-proxy-companion, all in one docker-compose file.

Start docker-compose:

docker-compose up -d

If all is well, you should be able to access your site at the address configured.
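
Once the companion has had a minute to obtain the certificate, a quick way to confirm everything is wired up is to request the site's headers (substitute your own domain):

curl -I https://example.com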


A Home-Based Ubuntu 16.04 Production Server with Salvaged Equipment

Preface

As I've often griped before, my cloud service provider (cloudatcost.com) is not exactly reliable. I'm currently on Day 3 waiting for their tech support to address several downed servers. Three days isn't even that bad considering I've waited up to two weeks in the past. In any case, though I wish them success, I'm sick of their nonsense and am starting to migrate my servers out of the cloud and into my house. Since this is a cloud company that routinely loses its customers' data, it's prudent to prepare for its likely bankruptcy and closure.

I have an old smashed-up AMD Quad Core laptop I’m going to use as a server. The screen was totally broken, so as a laptop it’s kind of useless anyway. It’s a little lightweight on resources (only 4 GB of RAM), but this is much more than I’m used to. I used unetbootin to create an Ubuntu 16.04 bootable USB and installed the base system.

What follows are the common steps I take when setting up a production server. This bare minimum approach is a process I repeat frequently, so it’s worth documenting here. Once the OS is installed…

Change the root password

The install should put the created user into the sudo group. Change the root password with that user:

sudo su
passwd
exit

Update OS

An interesting thing happened during install… I couldn't install additional software (i.e., OpenSSH), so I skipped it. When it came time to install vim, I discovered I didn't have access to any of the repositories. The answer here shed some light on the situation, but didn't really resolve anything.

I ended up copying the example sources.list from the documentation to fix the problem:

sudo cp /usr/share/doc/apt/examples/sources.list /etc/apt/sources.list

I found out later that this contained all repositories for Ubuntu 14.04. So I ended up manually pasting this in /etc/apt/sources.list:

# deb cdrom:[Ubuntu 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.1)]/ xenial main restricted
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
# deb http://archive.ubuntu.com/ubuntu xenial-proposed main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner

After that, update/upgrade worked (confirm that it doesn't already work before messing around with sources.list):

sudo apt update
sudo apt upgrade

Install open-ssh

The first thing I do is configure my machine for remote access. As above, I couldn't install OpenSSH during the OS installation, for some reason. After sources.list was sorted out, it all worked:

sudo apt-get install openssh-server

Check the official Ubuntu docs for configuration tips.

While I’m here, though, I need to set a static IP…

sudo vi /etc/network/interfaces

Paste this (or similar) under # The primary network interface, as per lewis4u.

auto enp0s25
iface enp0s25 inet static
  address 192.168.0.150
  netmask 255.255.255.0
  gateway 192.168.0.1

Then flush, restart, and verify that the settings are correct:

sudo ip addr flush enp0s25
sudo systemctl restart networking.service
ip add

Start ssh

My ssh didn’t start running automatically after install. I did this to make ssh run on startup:

sudo systemctl enable ssh

And then I did this, which actually starts the service:

sudo service ssh start

Open a port on the router

This step, of course, depends entirely on the make and model of router behind which the server is operating. For me, I access the administrative control panel by logging in at 192.168.0.1 on my LAN.

I found the settings I needed to configure on my Belkin router under Firewall -> Virtual Servers. I want to serve up web apps (both HTTP and HTTPS) and allow SSH access. As such, I configured three access points by providing the following information for each:

  1. Description
  2. Inbound ports (i.e., 22, 80, and 443)
  3. TCP traffic type (no UDP)
  4. The private/static address I just set on my server
  5. Inbound private ports (22, 80, and 443 respectively)

Set up DNS

Again, this depends on where you registered your domain. I pointed a domain I have registered with GoDaddy to my modem's IP address, which now receives requests and forwards them to my server.
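
To confirm the record has propagated, something like dig works (the domain is a placeholder):

dig +short example.com

It should print the public IP address of your modem.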

Login via SSH

With my server, router, and DNS all properly configured, I don’t need to be physically sitting in front of my machine anymore. As such, I complete the following steps logged in remotely.

Set up app user

I like to have one user account control app deployment. Toward that end, I create an app user and add him to the sudo group:

sudo adduser app
sudo adduser app sudo

Install the essentials

git

Won’t get far without git:

sudo apt install git

vim

My favourite editor is vim, which is not installed by default.

sudo apt install vim

NERDTree

My favourite vim plugin:

mkdir -p ~/.vim/autoload ~/.vim/bundle
cd ~/.vim/autoload
wget https://raw.github.com/tpope/vim-pathogen/HEAD/autoload/pathogen.vim
vim ~/.vimrc

Add this to the .vimrc file:

call pathogen#infect()
map <C-n> :NERDTreeToggle<CR>
set softtabstop=2
set expandtab

Save and exit:

cd ~/.vim/bundle 
git clone https://github.com/scrooloose/nerdtree.git

Now, when running vim, hit ctrl-n to toggle the file tree view.

Docker

Current installation instructions can be found here. The distilled process is as follows:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

You can verify the key fingerprint:

sudo apt-key fingerprint 0EBFCD88

Which should return something like this:

pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22

Add repository and update:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update

Install:

sudo apt install docker-ce

Create a docker user group:

sudo groupadd docker

Add yourself to the group:

sudo usermod -aG docker $USER

Add the app user to the group as well:

sudo usermod -aG docker app

Logout, login, and test docker without sudo:

docker run hello-world

If everything works, you should see the usual Hello, World! message.

Configure docker to start on boot:

sudo systemctl enable docker

docker-compose

This downloads the current stable version. Cross reference it with that offered here.

su
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Install command completion while still root:

curl -L https://raw.githubusercontent.com/docker/compose/master/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
exit

Test:

docker-compose --version

node

These steps are distilled from here.

cd ~
curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh

Now install:

sudo apt-get install nodejs build-essential
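
A quick sanity check on the installed versions:

node -v
npm -v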

Ruby

The steps I followed conclude with installing Rails; I only install Ruby:

sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev nodejs

Using rbenv:

cd
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL

git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL

rbenv install 2.4.0

That last step can take a while…

rbenv global 2.4.0
ruby -v

Install Bundler:

gem install bundler

Done!

There it is… all my usual favourites on some busted up piece of junk laptop. I expect it to be exactly 1000% more reliable than cloudatcost.com.


mongodump and mongorestore Between Docker-composed Containers

I’m trying to refine the process by which I backup and restore Dockerized MongoDB containers. My previous effort is basically a brute-force copy-and-paste job on the container’s data directory. It works, but I’m concerned about restoring data between containers installed with different versions of MongoDB. Apparently this is tricky enough even with the benefit of recovery tools like mongodump and mongorestore, which is what I’m using below.

In short, I need to dump my data from a data-only MongoDB container, bundle the files uploaded to my Express application, and restore it all on another server. Here’s how I did it…

Dump the data

I’m a big fan of docker-compose. I use it to manage all my containers. The following method requires that the composition be running so that mongodump can be run against the running Mongo container (which, in turn, accesses the data-only container). Assuming the name of the container is myapp_mongo_1

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongodump --host $MONGO_PORT_27017_TCP_ADDR'

This will create a root-owned directory called myapp-mongo-dump in your current directory. It contains all the BSON and JSON meta-data for this database. For convenience, I change ownership of this resource:

sudo chown -R user:user myapp-mongo-dump

Then, for transport, I archive the directory:

tar zcvf myapp-mongo-dump.tar.gz myapp-mongo-dump

Archive the uploaded files

My app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar zcvf uploads.tar.gz uploads

Now I have two archived files: myapp-mongo-dump.tar.gz and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp myapp-mongo-dump.tar.gz uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user's home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar zxvf uploads.tar.gz
tar zxvf myapp-mongo-dump.tar.gz

Then I restore the data to the data-only container through the running Mongo instance (assumed to be called myapp_mongo_1):

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongorestore --host $MONGO_PORT_27017_TCP_ADDR'

With that, all data is restored. I didn’t even have to restart my containers to begin using the app on its new server.
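
If you want to double-check the restore before pointing the app at it, listing the database names through the running Mongo container is a quick sanity test (container name as above):

docker exec myapp_mongo_1 mongo --eval 'db.getMongo().getDBNames()'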


MongoDB backup and restore between Dockerized Node apps

My bargain-basement cloud service provider, CloudAtCost, recently lost one of my servers and all the data on it. This loss was exacerbated by the fact that I didn't back up my MongoDB data somewhere else. Now I'm working out the exact process after the fact so that I don't suffer this loss again (it's happened twice now with CloudAtCost, but hey, the price is right).

The following is a brute-force backup and recovery process. I suspect this approach has its weaknesses in that it may depend upon version consistency between the MongoDB containers. This is not ideal for someone like myself who always installs the latest version when creating new containers. I aim to develop a more flexible process soon.

Context

I have a server running Ubuntu 16.04, which, in turn, is serving up a Dockerized Express application (Nginx, MongoDB, and the app itself). The MongoDB data is backed up in a data-only container. To complicate matters, the application allows file uploads, which are being stored on the file system in the project's root.

I need to dump the data from the data-only container, bundle the uploaded files, and restore it all on another server. Here’s how I did it…

Dump the data

I use docker-compose to manage my containers. To obtain the name of the MongoDB data-only container, I simply run docker ps -a. Assuming the name of the container is myapp_mongo_data

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data/db

This will put a file called backup.tar in the app’s root directory. It may belong to the root user. If so, run sudo chown user:user backup.tar.
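
Before shipping it anywhere, it's worth confirming the archive actually contains the Mongo data files:

tar tvf backup.tar | head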

Archive the uploaded files

The app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar -zcvf uploads.tar.gz uploads

Now I have two archived files: backup.tar and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp backup.tar uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user's home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar -zxvf uploads.tar.gz

Then I restore the data to the data container:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar xvf /backup/backup.tar

Remove and restart containers

The project containers don’t need to be running when you restore the data in the previous step. If they are running, however, once the data is restored, remove the running containers and start again with docker-compose:

docker-compose stop
docker-compose rm
docker-compose up -d

I’m sure there is a reasonable explanation as to why removing the containers is necessary, but I don’t know what it is yet. In any case, removing the containers isn’t harmful because all the data is on the data-only container anyway.

Warning

As per the introduction, this process probably depends on version consistency between MongoDB containers.


Nginx, Let's Encrypt, and Docker Compose

DEPRECATED!

As of 2017-7-31, this no longer suits my purpose. Here’s my updated docker-compose version 3 approach.

Introduction

I’ve used StartSSL for years. Then I discovered Let’s Encrypt. All my old StartSSL certificates are expiring, so I needed to work out a process by which I could swap them out for Let’s Encrypt certs.

jwilder's excellent nginx-proxy has been my go-to for easy certificate configuration for some time now. I was relieved to learn that I am still able to leverage this tool with the help of the docker-letsencrypt-nginx-proxy-companion container.

This document outlines the process by which Let’s Encrypt certificates are managed for a single nginx container behind an nginx-proxy accompanied by the docker-letsencrypt-nginx-proxy-companion. docker-compose is used to manage the overall configuration. It was proven on Ubuntu 16.04. Naturally, it is assumed that Docker and Compose are already installed. Copying and pasting the commands provided should lead to a successful deployment.

My Site

nginx-proxy proxies multiple sites. I'm only serving one with nginx. I like to put all my individual Docker compositions in their own directories:

mkdir mysite && cd mysite

Optional

The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Hello, world!</title>
</head>
<body>
Hello, world!
</body>
</html>

docker-compose

This configures docker-compose to serve up your site with nginx. Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
nginx:
  image: nginx
  restart: always
  environment:
    - VIRTUAL_HOST=example.com
    - LETSENCRYPT_HOST=site.example.com
    - LETSENCRYPT_EMAIL=email@example.com
  volumes:
    - ./:/usr/share/nginx/html

This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).

Start docker-compose:

docker-compose up -d

The site won’t be accessible yet. That comes next.

nginx-proxy

As before, put the nginx-proxy Docker composition in its own directory:

cd ..
mkdir nginx-proxy && cd nginx-proxy

Create a directory in which to store the Let’s Encrypt certificates:

mkdir certs

Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
nginx-proxy:
  image: jwilder/nginx-proxy
  restart: always
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./current/public:/usr/share/nginx/html
    - ./certs:/etc/nginx/certs:ro
    - /etc/nginx/vhost.d
    - /usr/share/nginx/html
    - /var/run/docker.sock:/tmp/docker.sock:ro
letsencrypt:
  image: jrcs/letsencrypt-nginx-proxy-companion
  restart: always
  volumes:
    - ./certs:/etc/nginx/certs:rw
    - /var/run/docker.sock:/var/run/docker.sock:ro
  volumes_from:
    - nginx-proxy

This allows nginx-proxy to combine forces with docker-letsencrypt-nginx-proxy-companion, all in one docker-compose file.

Start docker-compose:

docker-compose up -d

If all is well, you should be able to access your site at the address configured.