devops


A Home-Based Ubuntu 16.04 Production Server with Salvaged Equipment

Preface

As I’ve often griped before, my cloud service provider (cloudatcost.com) is not exactly reliable. I’m currently on Day 3 of waiting for their tech support to address several downed servers. Three days isn’t even that bad, considering I’ve waited up to two weeks in the past. In any case, though I wish them success, I’m sick of their nonsense and am starting to migrate my servers out of the cloud and into my house. Given that this is a cloud company that routinely loses its customers’ data, it’s prudent to prepare for their likely bankruptcy and closure.

I have an old smashed-up AMD Quad Core laptop I’m going to use as a server. The screen was totally broken, so as a laptop it’s kind of useless anyway. It’s a little lightweight on resources (only 4 GB of RAM), but this is much more than I’m used to. I used unetbootin to create an Ubuntu 16.04 bootable USB and installed the base system.

What follows are the common steps I take when setting up a production server. This bare minimum approach is a process I repeat frequently, so it’s worth documenting here. Once the OS is installed…

Change the root password

The install should put the created user into the sudo group. Change the root password with that user:

sudo su
passwd
exit

Update OS

An interesting thing happened during install… I couldn’t install additional software (i.e., open-ssh), so I skipped it. When it came time to install vim, I discovered I didn’t have access to any of the repositories. The answer here shed some light on the situation, but didn’t really resolve anything.

I ended up copying the example sources.list from the documentation to fix the problem:

sudo cp /usr/share/doc/apt/examples/sources.list /etc/apt/sources.list

I found out later that this contained the repositories for Ubuntu 14.04. So I ended up manually pasting this into /etc/apt/sources.list:

# deb cdrom:[Ubuntu 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.1)]/ xenial main restricted
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
# deb http://archive.ubuntu.com/ubuntu xenial-proposed main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner
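
The archive entries above follow a regular pattern (release pocket crossed with the same component set), so most of the list can be generated rather than typed. A throwaway sketch covering only the archive.ubuntu.com lines (the cdrom, proposed, and partner entries are left out):

```shell
# Emit deb/deb-src lines for each pocket of a given release.
# Pockets and components mirror the sources.list above.
codename=xenial
for pocket in "" -updates -backports -security; do
  echo "deb http://archive.ubuntu.com/ubuntu ${codename}${pocket} main restricted universe multiverse"
  echo "deb-src http://archive.ubuntu.com/ubuntu ${codename}${pocket} main restricted universe multiverse"
done
```

Redirect the output into /etc/apt/sources.list, or swap the codename to target another release.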

After that, update/upgrade worked (before messing around with sources.list, confirm that update/upgrade actually fails first):

sudo apt update
sudo apt upgrade

Install open-ssh

The first thing I do is configure my machine for remote access. As above, I couldn’t install open-ssh during the OS installation, for some reason. After sources.list was sorted out, it all worked:

sudo apt-get install openssh-server

Check the official Ubuntu docs for configuration tips.

While I’m here, though, I need to set a static IP…

sudo vi /etc/network/interfaces

Paste this (or similar) under # The primary network interface, as per lewis4u.

auto enp0s25
iface enp0s25 inet static
    address 192.168.0.150
    netmask 255.255.255.0
    gateway 192.168.0.1
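
As an aside: ifupdown takes a dotted-quad netmask, while the ip tooling used below speaks CIDR (255.255.255.0 is /24). A throwaway converter, assuming a valid contiguous mask, just to make the correspondence concrete:

```shell
# Convert a dotted-quad netmask to its CIDR prefix length.
# Assumes a valid, contiguous mask (e.g., 255.255.255.0).
mask_to_cidr() {
  bits=0
  old_ifs=$IFS
  IFS=.
  for octet in $1; do
    case $octet in
      255) bits=$((bits + 8)) ;;
      254) bits=$((bits + 7)) ;;
      252) bits=$((bits + 6)) ;;
      248) bits=$((bits + 5)) ;;
      240) bits=$((bits + 4)) ;;
      224) bits=$((bits + 3)) ;;
      192) bits=$((bits + 2)) ;;
      128) bits=$((bits + 1)) ;;
    esac
  done
  IFS=$old_ifs
  echo "$bits"
}

mask_to_cidr 255.255.255.0   # 24
```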

Then flush, restart, and verify that the settings are correct:

sudo ip addr flush enp0s25
sudo systemctl restart networking.service
ip addr

Start ssh

The ssh service didn’t start automatically after install. I did this to make it run on startup:

sudo systemctl enable ssh

And then I did this, which actually starts the service:

sudo service ssh start

Open a port on the router

This step, of course, depends entirely on the make and model of router behind which the server is operating. For me, I access the administrative control panel by logging in at 192.168.0.1 on my LAN.

I found the settings I needed to configure on my Belkin router under Firewall -> Virtual Servers. I want to serve up web apps (both HTTP and HTTPS) and allow SSH access. As such, I configured three access points by providing the following information for each:

  1. Description
  2. Inbound ports (i.e., 22, 80, and 443)
  3. TCP traffic type (no UDP)
  4. The private/static address I just set on my server
  5. Inbound private ports (22, 80, and 443 respectively)
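
The same three rules, written down as data; the loop is just a sketch of how I keep track of them, and the private address is the static one set earlier:

```shell
# The three virtual-server rules: service name, then the port forwarded
# from the router to the server (inbound and private ports match here).
server_ip=192.168.0.150
printf '%s\n' "ssh 22" "http 80" "https 443" |
while read -r service port; do
  echo "forward TCP $port -> $server_ip:$port ($service)"
done
```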

Set up DNS

Again, this depends on where you registered your domain. I pointed a domain I have registered with GoDaddy to my modem’s IP address, which now receives requests and forwards them to my server.

Login via SSH

With my server, router, and DNS all properly configured, I don’t need to be physically sitting in front of my machine anymore. As such, I complete the following steps logged in remotely.

Set up app user

I like to have one user account control app deployment. Toward that end, I create an app user and add him to the sudo group:

sudo adduser app
sudo adduser app sudo

Install the essentials

git

Won’t get far without git:

sudo apt install git

vim

My favourite editor is vim, which is not installed by default.

sudo apt install vim

NERDTree

My favourite vim plugin:

mkdir -p ~/.vim/autoload ~/.vim/bundle
cd ~/.vim/autoload
wget https://raw.github.com/tpope/vim-pathogen/HEAD/autoload/pathogen.vim
vim ~/.vimrc

Add this to the .vimrc file:

call pathogen#infect()
map <C-n> :NERDTreeToggle<CR>
set softtabstop=2
set expandtab

Save and exit. Then clone NERDTree into the bundle directory:

cd ~/.vim/bundle
git clone https://github.com/scrooloose/nerdtree.git

Now, when running vim, hit ctrl-n to toggle the file tree view.

Docker

Current installation instructions can be found here. The distilled process is as follows:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

You can verify the key fingerprint:

sudo apt-key fingerprint 0EBFCD88

Which should return something like this:

pub 4096R/0EBFCD88 2017-02-22
Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <docker@docker.com>
sub 4096R/F273FCD8 2017-02-22

Add repository and update:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update

Install:

sudo apt install docker-ce

Create a docker user group:

sudo groupadd docker

Add yourself to the group:

sudo usermod -aG docker $USER

Add the app user to the group as well:

sudo usermod -aG docker app

Log out, log back in, and test docker without sudo:

docker run hello-world

If everything works, you should see the usual “Hello from Docker!” welcome message.
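
If docker still demands sudo at this point, the group change probably hasn’t taken effect in the current session. A quick membership check (a little helper of my own, not part of Docker):

```shell
# Return success if the given user is a member of the given group.
in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if in_group "$USER" docker; then
  echo "docker group membership OK"
else
  echo "not in docker group yet -- log out and back in"
fi
```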

Configure docker to start on boot:

sudo systemctl enable docker

docker-compose

This downloads the current stable version. Cross-reference it with the latest release offered here.

su
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Install command completion while still root:

curl -L https://raw.githubusercontent.com/docker/compose/master/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
exit

Test:

docker-compose --version
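
Because the download URL pins 1.14.0, it’s worth confirming the binary reports the same version. A sketch of extracting the number from the --version line (the sample output string here is illustrative, standing in for the real command’s output):

```shell
# Pull the version number out of `docker-compose --version` output and
# compare it against the pinned release.
pinned=1.14.0
version_line="docker-compose version 1.14.0, build c7bdf9e"   # stand-in for $(docker-compose --version)
installed=$(echo "$version_line" | awk '{print $3}' | tr -d ',')

if [ "$installed" = "$pinned" ]; then
  echo "docker-compose $installed matches the pin"
fi
```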

node

These steps are distilled from here.

cd ~
curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh

Now install:

sudo apt-get install nodejs build-essential

Ruby

The steps I followed conclude with installing Rails; I only install Ruby:

sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev nodejs

Using rbenv:

cd
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL
rbenv install 2.4.0

That last step can take a while…

rbenv global 2.4.0
ruby -v

Install Bundler:

gem install bundler

Done!

There it is… all my usual favourites on some busted up piece of junk laptop. I expect it to be exactly 1000% more reliable than cloudatcost.com.


Brute force Docker WordPress Nginx proxy demo

Dockerizing a dynamic Nginx-WordPress proxy is tricky business. I plan to bundle this all up in bash scripts, but for now I am simply documenting the steps I took to configure the following system in my local environment:

[System Topology]

What follows is not a production-ready path to deployment. Rather, it is a brute-force proof of concept.

MySQL

Start a detached MySQL container.

docker run -d -e MYSQL_ROOT_PASSWORD=secretp@ssword --name consolidated_blog_mysql_image mysql:5.7.8

This one probably won’t cause any trouble, so I don’t need to see any output.

Main WordPress

This is the WordPress instance you encounter when you land on the domain’s root.

docker run --rm --link consolidated_blog_mysql_image:mysql -e WORDPRESS_DB_NAME=main_blog -e WORDPRESS_DB_PASSWORD=secretp@ssword -p 8081:80 --name main_blog_wordpress_image wordpress:4

Secondary WordPress blog

This is the WordPress instance you encounter when you land on the domain’s /blog path.

docker run --rm --link consolidated_blog_mysql_image:mysql -e WORDPRESS_DB_NAME=blog2 -e WORDPRESS_DB_PASSWORD=secretp@ssword -p 8083:80 --name blog2_wordpress_image wordpress:4

Notice the port. If I set it to 8082:80 instead of 8083:80, requests redirect back to 8081, and I don’t know why yet.

Nginx proxy

This is the tricky part. I need to obtain the IPs assigned to my WordPress containers and set them in my Nginx default.conf.

First, get the IP address of the running main_blog_wordpress_image container:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' main_blog_wordpress_image

This will output the IP. Make note of it, because I need to copy it into the Nginx default.conf file.

172.17.0.181

Get the IP address of the running blog2_wordpress_image container:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' blog2_wordpress_image

There’s a good chance it will be the next IP in line:

172.17.0.182

Now, create a default.conf file:

vim default.conf

Copy and save the following:

server {
    listen 80;
    server_name localhost;

    # Main blog
    location / {
        proxy_pass http://172.17.0.181/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Secondary blog
    location /blog/ {
        proxy_pass http://172.17.0.182/;
    }
}

Change the proxy_pass IPs accordingly.
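
Since I plan to script all of this eventually, here’s a first sketch of generating default.conf from the two addresses (which would normally come straight from the docker inspect commands above; make_conf is a hypothetical helper of my own):

```shell
# Render the Nginx proxy config for the two WordPress containers.
# Arguments: main blog IP, secondary blog IP.
make_conf() {
  main_ip=$1
  blog_ip=$2
  cat <<EOF
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://${main_ip}/;
    }

    location /blog/ {
        proxy_pass http://${blog_ip}/;
    }
}
EOF
}

make_conf 172.17.0.181 172.17.0.182 > default.conf
```

The error_page block from the hand-written version is omitted here for brevity.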

Execute:

docker run --rm --name nginx-wordpress-proxy -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 nginx

The main blog should now be accessible at http://localhost, and the secondary blog at http://localhost/blog. Set up a different blog on each WordPress instance to confirm the system is working as designed.


Set up a private Docker registry

Having mastered deploying WordPress sites with Docker and Compose, I set up a blog for my lovely wife on one of our many demonstration/prototyping domains. Once her site was configured to her liking, I purchased a dedicated domain with the intent of moving her site over. This is simple enough, but I wanted a more comprehensive solution. One that would allow me to backup the changes she makes to her site (and database) periodically. As well, she wanted to establish a basic WordPress image from which she could launch new projects without having to go through the whole set up and configuration rigamarole over and over again.

With all that in mind, the following outlines how I set up our private Docker registry. The procedure was adapted and condensed from here.

Get a certificate for your domain

Always with the certificates!

I like to use startssl.com because they’re free. startssl.com provides an intermediate certificate, so remember to chain them. Alternatively, you can sign your own certificates.

However you decide to obtain your certificates, install them somewhere on your domain registry’s server. E.g.,

cd ~
mkdir certs

You should have two certificates named something like this:

  • ~/certs/myregistrydomain.com.crt
  • ~/certs/myregistrydomain.com.key

Restrict access with password

Make an auth/ directory:

cd ~
mkdir auth

Then set a user named someguy whose password is someAlphaNum3r1cPassword (or whatever):

docker run --entrypoint htpasswd registry:2 -Bbn someguy someAlphaNum3r1cPassword > auth/htpasswd

Note: at the time of writing, the password must be alphanumeric. Special symbols do not work. Assuming all else is configured correctly, using non-alphanumerics will result in this error:

basic auth attempt to https://myregistrydomain.com:5000/v2/ realm "Registry Realm" failed with status: 401 Unauthorized

I’m not sure if this is by oversight or by design. Either way, all this stuff is pretty wild and woolly and will likely change as the Docker product continues to evolve.
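
Given that restriction, a pre-flight check saves a confusing 401 later. A small guard (plain shell, nothing registry-specific; is_alnum is my own helper):

```shell
# Succeed only if the candidate password is strictly alphanumeric,
# per the htpasswd/registry quirk described above.
is_alnum() {
  case "$1" in
    '' | *[!a-zA-Z0-9]*) return 1 ;;
    *) return 0 ;;
  esac
}

if is_alnum "someAlphaNum3r1cPassword"; then
  echo "password is safe to use with the registry"
fi
```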

Set up local file system storage

cd ~
docker run -d -p 5000:5000 --restart=always --name registry -v `pwd`/data:/var/lib/registry registry:2

Configure Compose

Create a directory in which to write your docker-compose.yml file:

cd ~
mkdir registry
cd registry
vim docker-compose.yml

Copy and save the following:

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_HTTP_SECRET: SomePseudoRandomString
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/myregistrydomain.com.crt
    REGISTRY_HTTP_TLS_KEY: /certs/myregistrydomain.com.key
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
    REGISTRY_AUTH: htpasswd
    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
  volumes:
    - /path/to/data:/var/lib/registry
    - /path/to/certs:/certs
    - /path/to/auth:/auth

Change the /path/to/ placeholders to point to your data, certs, and auth directories. These will be under your account’s home directory, if following the steps above to the letter (cf., cd ~).

Also, execute the following to generate a pseudo-random string for the REGISTRY_HTTP_SECRET option:

cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32

Start up the registry

From the ~/registry directory:

docker-compose up -d

Commit the images

I have two Docker images that need committing. These are hosted on a server different than my Docker registry server.

Obtain their container IDs:

docker ps

Supposing output similar to this:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8cb170d95bb wordpress "/entrypoint.sh apach" 15 minutes ago Up 10 minutes 80/tcp examplecom_wordpress
69b59e19aadc mysql:5.7 "/entrypoint.sh mysql" 53 minutes ago Up 10 minutes 3306/tcp examplecom_mysql

First, I commit the WordPress image:

docker commit -p c8cb170d95bb somenewdomaincom_wordpress

Now I have a snapshot of the WordPress container saved as somenewdomaincom_wordpress.

Now commit the associated MySQL container:

docker commit -p 69b59e19aadc somenewdomaincom_mysql

Tag the images

Having committed the images, I now have two snapshots that need tagging with the registry’s address. First, I tag the WordPress snapshot:

docker tag somenewdomaincom_wordpress myregistrydomain.com:5000/somenewdomaincom_wordpress

Now I tag the associated MySQL snapshot:

docker tag somenewdomaincom_mysql myregistrydomain.com:5000/somenewdomaincom_mysql

Push the images

Authentication has been set up, so log in first:

docker login myregistrydomain.com:5000

Then push:

docker push myregistrydomain.com:5000/somenewdomaincom_wordpress
docker push myregistrydomain.com:5000/somenewdomaincom_mysql

Redeploy (with Compose)

The whole purpose of this exercise was to move my wife’s site from one domain to another. We use an Nginx proxy to let us host a bunch of different WordPress sites on a single machine. Supposing that configuration with domain-appropriate security certificates pre-installed, I can use Compose to pull images from my new Docker registry.

First, create a directory on the host machine:

cd ~
mkdir somenewdomain.com
cd somenewdomain.com
vim docker-compose.yml

Copy and save the following:

wordpress:
  image: myregistrydomain.com:5000/somenewdomaincom_wordpress
  links:
    - mysql
  environment:
    - WORDPRESS_DB_PASSWORD=secretp@ssword
    - VIRTUAL_HOST=somenewdomain.com
  expose:
    - 80
mysql:
  image: myregistrydomain.com:5000/somenewdomaincom_mysql
  environment:
    - MYSQL_ROOT_PASSWORD=secretp@ssword
    - MYSQL_DATABASE=wordpress

Fire ‘er up!

docker-compose up -d

Deploy a Rails app to Docker with Capistrano

These instructions follow my previous post on deploying multiple Rails apps with Passenger, Nginx, and Docker. Go read (and do) all that first.

Assumptions

As always, this guide assumes the production server is running Ubuntu 14.04 and has all the requisite software already installed (e.g.: Docker, Rails, Capistrano, etc.). Further, it is assumed that you have a system similar to the one described here, and that by following the instruction provided, you have a Rails application deployed in a Docker container. I will be setting up Capistrano for the Rails app in that container.

Set up project in local development environment

Update the previous Docker configuration files

Nginx configuration

Change the existing docker/my-app.conf to look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name example.com;

    # This used to be /home/app/my-app/public;
    root /home/app/my-app/current/public;

    # Passenger
    passenger_enabled on;
    passenger_user app;
    passenger_ruby /usr/bin/ruby2.2;
}

Change Dockerfile

Since Capistrano will be building the app, all those steps can be removed from the Dockerfile. It should now look like this:

FROM phusion/passenger-ruby22:latest
MAINTAINER Some Groovy Cat "hepcat@example.com"
# Set correct environment variables.
ENV HOME /root
ENV RAILS_ENV production
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# Start Nginx and Passenger
EXPOSE 80
RUN rm -f /etc/service/nginx/down
# Configure Nginx
RUN rm /etc/nginx/sites-enabled/default
ADD docker/my-app.conf /etc/nginx/sites-enabled/my-app.conf
ADD docker/postgres-env.conf /etc/nginx/main.d/postgres-env.conf
# Install the app
ADD . /home/app/my-app
WORKDIR /home/app/my-app
RUN chown -R app:app /home/app/my-app
RUN sudo -u app bundle install --deployment
RUN sudo -u app RAILS_ENV=production rake assets:precompile
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Set up Capistrano

You work on your Rails app locally and you deploy to production. From your local development environment, go to your app’s root directory and run:

cd my-app
cap install

If Capistrano is installed (gem install capistrano), you will see something similar to this:

mkdir -p config/deploy
create config/deploy.rb
create config/deploy/staging.rb
create config/deploy/production.rb
mkdir -p lib/capistrano/tasks
create Capfile
Capified

This produces a pre-cooked config/deploy.rb file. For the app deployed in the previous post, change it to look like this:

lock '3.4.0'

set :application, 'my-app'
set :repo_url, 'https://github.com/myprofile/my-app.git'
set :branch, 'master'
set :scm, :git
set :deploy_to, "/home/app/#{fetch(:application)}"

namespace :deploy do
  desc 'Install node modules'
  task :npm_install do
    on roles(:app) do
      execute "cd #{release_path} && npm install"
    end
  end

  desc 'Build Docker images'
  task :build do
    on roles(:app) do
      execute "cd #{release_path} && docker build -t #{fetch(:application)}-image ."
    end
  end

  desc 'Restart application'
  task :restart do
    on roles(:app) do
      execute "docker stop #{fetch(:application)} ; true"
      execute "docker rm #{fetch(:application)} ; true"
      execute "docker run --restart=always --name #{fetch(:application)} --expose 80 -e VIRTUAL_HOST=example.com --link postgres:postgres -d #{fetch(:application)}-image"
    end
  end

  before :updated, 'deploy:npm_install'
  after :publishing, 'deploy:build'
  after :publishing, 'deploy:restart'
end
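
A note on the `; true` appended to the docker stop/rm commands in the restart task: it keeps a missing container from aborting the deploy, because the compound command always exits zero. In isolation:

```shell
# `command ; true` always reports success, even when command fails --
# here `false` stands in for stopping a container that doesn't exist.
sh -c 'false ; true'
echo "exit status: $?"   # exit status: 0
```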

Then, in config/deploy/production.rb, modify as appropriate (it’s probably sufficient to tack this on to the end of the file):

server "example.com", user: "app", roles: %w{app web}

Commit and push your changes to your repository.

Set up project in production

Configuration

Before deploying with Capistrano, you need to do some configuration. Assuming that the app has already been cloned to the production machine, these are the files that need adjusting:

  • my-app/config/database.yml
  • my-app/config/secrets.yml

The settings in here are not typically committed to the repository for security reasons. Assuming the Postgres configuration in the previous post, database.yml should look like this:

production:
  <<: *default
  database: my-app_production
  username: postgres
  password: secretp@ssword
  host: <%= ENV['POSTGRES_PORT_5432_TCP_ADDR'] %>
  port: <%= ENV['POSTGRES_PORT_5432_TCP_PORT'] %>

secrets.yml needs to have a secret key set for production. From your app’s home directory, run:

rake secret

Copy the key it produces and set it in secrets.yml:

production:
  secret_key_base: PasteGeneratedKeyHere
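
rake secret emits a long random hex string; if you just need an equivalent value without loading the Rails environment, /dev/urandom can produce one (an alternative sketch, not the Rails-blessed route):

```shell
# Generate a 128-character hex secret, the same shape `rake secret` produces.
secret=$(od -An -N64 -tx1 /dev/urandom | tr -d ' \n')
echo "$secret"
```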

Back in the local development environment…

From your app’s home directory:

cap production deploy

And now back to production…

If working with the app from the previous post, everything should be ready to go. If the site reports an error, however, you may need to set up the database in production. First, stop the Docker container:

docker stop my-app

Then, create and seed the database:

docker run --rm --link postgres:postgres my-app-image rake db:create
docker run --rm --link postgres:postgres my-app-image rake db:migrate
docker run --rm --link postgres:postgres my-app-image rake db:seed

And restart:

docker start my-app

Deploy multiple Rails apps with Passenger, Nginx, and Docker

Here’s the problem:

I’ve got a bunch of Rails apps, but only a handful of cloud servers. I need some of them to live on a single machine without them stepping all over each other.

Assumptions

This guide assumes the server is running Ubuntu 14.04 and has all the requisite software already installed (e.g.: Docker, Rails, etc.).

Enter Docker

Docker makes the following configuration easy to maintain:

[System Topology]

Docker is also nice because all the required containers come pre-packaged:

Nginx

First, get some SSL certificates

You’ll need one for each Rails app you wish to deploy. These can be self-signed or obtained from a Certificate Authority. To self-sign a certificate, execute the following:

mkdir certs
cd certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sub.example.com.key -out sub.example.com.crt
cd ..
sudo chown -R root:root certs
sudo chmod -R 600 certs

Note the keyout and out options. The jwilder/nginx-proxy Docker image won’t pick up the certificates unless they are named in accordance with the production site’s URL and subdomain (if any). For example, if you have a certificate for example.com, the keyout and out options must be named example.com.key and example.com.crt respectively.

Obtain a certificate for each app you wish to deploy (or just get one for the purposes of this tutorial).
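
Because a misnamed certificate just silently fails to load, a pre-flight check is cheap insurance. A sketch (cert_pair_ok is my own helper; the directory and hostname are the ones used above):

```shell
# Verify that both the .crt and .key for a given virtual host exist,
# named exactly as jwilder/nginx-proxy expects.
cert_pair_ok() {
  dir=$1
  host=$2
  [ -f "$dir/$host.crt" ] && [ -f "$dir/$host.key" ]
}

cert_pair_ok certs sub.example.com || echo "missing cert or key for sub.example.com"
```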

Then, run the Nginx docker image

Note the app username. Adjust as appropriate.

docker run --restart=always --name nginx-proxy -d -p 80:80 -p 443:443 -v /home/app/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

PostgreSQL

docker run --restart=always --name postgres -e POSTGRES_PASSWORD=secretp@ssword -d postgres

Rails apps

Now for the tricky part…

This configuration is meant to make deployment easy. The easiest way I’ve discovered so far involves writing a Dockerfile for the Rails app and providing Nginx some configuration files.

Save this sample Dockerfile in your app’s root directory on the server (next to the Gemfile):

# Adapted from https://intercityup.com/blog/deploy-rails-app-including-database-configuration-env-vars-assets-using-docker.html
FROM phusion/passenger-ruby22:latest
MAINTAINER Some Groovy Cat "hepcat@example.com"
# Set correct environment variables.
ENV HOME /root
ENV RAILS_ENV production
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# Start Nginx and Passenger
EXPOSE 80
RUN rm -f /etc/service/nginx/down
# Configure Nginx
RUN rm /etc/nginx/sites-enabled/default
ADD docker/my-app.conf /etc/nginx/sites-enabled/my-app.conf
ADD docker/postgres-env.conf /etc/nginx/main.d/postgres-env.conf
# Install the app
ADD . /home/app/my-app
WORKDIR /home/app/my-app
RUN chown -R app:app /home/app/my-app
RUN sudo -u app bundle install --deployment
# TODO: figure out how to install `node` modules without `sudo`
RUN sudo npm install
RUN sudo -u app RAILS_ENV=production rake assets:precompile
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Note the ADD commands under the Configure Nginx header. These are copying configurations into the Docker image. Here I put them in the docker directory to keep them organized. From your app’s root directory:

mkdir docker

Now, save the following to docker/my-app.conf:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name example.com;
    root /home/app/my-app/public;

    # Passenger
    passenger_enabled on;
    passenger_user app;
    passenger_ruby /usr/bin/ruby2.2;
}

Of course, change the server name as appropriate. Also note the /home/app directory. app is the username set up by the phusion/passenger-ruby22 image.

Next, save the following to docker/postgres-env.conf

env POSTGRES_PORT_5432_TCP_ADDR;
env POSTGRES_PORT_5432_TCP_PORT;

This is some Docker magic that preserves these Postgres environment variables.
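
For the curious, these are the variables Docker’s legacy --link mechanism injects into the app container (the values below are illustrative; Docker assigns the real ones at run time). The env directives simply stop Nginx from scrubbing them before Passenger boots the app:

```shell
# What `--link postgres:postgres` provides inside the container
# (addresses are examples only).
export POSTGRES_PORT_5432_TCP_ADDR=172.17.0.2
export POSTGRES_PORT_5432_TCP_PORT=5432

echo "host=$POSTGRES_PORT_5432_TCP_ADDR port=$POSTGRES_PORT_5432_TCP_PORT"
```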

Now, build the app’s image from the project’s root directory:

docker build -t my-app-image .

This command reads the Dockerfile just created and executes the instructions contained therein.

Setup, migrate, and seed the database:

docker run --rm --link postgres:postgres my-app-image rake db:create
docker run --rm --link postgres:postgres my-app-image rake db:migrate
docker run --rm --link postgres:postgres my-app-image rake db:seed

Finally, execute the image:

docker run --restart=always --name my-app --expose 80 -e VIRTUAL_HOST=example.com --link postgres:postgres -d my-app-image

If everything goes well, you will be able to see your app at example.com (or wherever).

Next

Deploy a Rails app to Docker with Capistrano