I was just doing a major redeployment when I realized I’ve never documented my approach to nginx-proxy and Let’s Encrypt with version 3 of docker-compose.
I like to deploy a bunch of web applications and static web sites behind a single proxy. What follows is meant to be copy-paste workable on an Ubuntu 16.04 server.
Organization
Set up your server’s directory structure:
mkdir -p ~/sites/nginx-proxy && cd ~/sites/nginx-proxy
# Do not forget to 'docker network create nginx-proxy' before launch
# and to add '--network nginx-proxy' to proxied containers.
networks:
  default:
    external:
      name: nginx-proxy
Configuring nginx inside nginx-proxy
Sometimes you need to override the default nginx configuration contained in the nginx-proxy Docker image. To do this, you must build a new image using nginx-proxy as its base.
For example, an app might need to accept large file uploads. You would paste this into your Dockerfile:
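The exact directive depends on the app, but a minimal sketch looks like this. It assumes the jwilder/nginx-proxy base image, which includes any *.conf file dropped into /etc/nginx/conf.d/, and uses client_max_body_size to raise the upload limit:
# Dockerfile
FROM jwilder/nginx-proxy

# Allow uploads up to 100 MB (the limit is an arbitrary example).
RUN echo 'client_max_body_size 100m;' > /etc/nginx/conf.d/uploads.conf
Build and tag it (e.g., docker build -t my-nginx-proxy .), then reference the new tag wherever you would have used jwilder/nginx-proxy.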
I recently discovered that I don’t need to manually create data-only containers with docker-compose anymore. A welcome feature, but one that comes with all the usual migration overhead. I rely heavily on nginx-proxy and letsencrypt-nginx-proxy-companion. Getting it all to work in the style of docker-compose version 3 took a bit of doing.
My previous tried and true approach is getting pretty stale. It is time to up my Docker game…
My Site
nginx-proxy proxies multiple sites, but for demonstration purposes, I’m only serving up one with nginx. I like to put all my individual Docker compositions in their own directories:
mkdir mysite && cd mysite
Optional
The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Hello, world!</title>
  </head>
  <body>
    Hello, world!
  </body>
</html>
docker-compose
It’s awesome that I can create data-only containers in my docker-compose.yml, but now I’ve got to manually create a network bridge:
docker network create nginx-proxy
Proxied containers also need to know about this network in their own docker-compose.yml files…
Copy and paste the code below:
# docker-compose.yml
version: '3'

services:
  nginx:
    image: nginx
    restart: always
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=email@example.com
    volumes:
      - ./:/usr/share/nginx/html

networks:
  default:
    external:
      name: nginx-proxy
This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).
Start docker-compose:
docker-compose up -d
The site won’t be accessible yet. That comes next.
nginx-proxy
As before, put the nginx-proxy Docker composition in its own directory:
cd ..
mkdir nginx-proxy && cd nginx-proxy
Create a directory in which to store the Let’s Encrypt certificates:
mkdir certs
Copy and paste the following to a file called docker-compose.yml:
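Something along these lines should work. This is a sketch assembled from the jwilder/nginx-proxy and jrcs/letsencrypt-nginx-proxy-companion documentation, wired to the certs/ directory and the nginx-proxy network created earlier:
# docker-compose.yml
version: '3'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    labels:
      # Tells the companion which container is the proxy.
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  vhost:
  html:

networks:
  default:
    external:
      name: nginx-proxy
Bring it up with docker-compose up -d; the companion will request and renew certificates for any proxied container that sets the LETSENCRYPT_* variables.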
Dockerizing a dynamic Nginx-WordPress proxy is tricky business. I plan to bundle this all up in bash scripts, but for now I am simply documenting the steps I took to configure the following system in my local environment:
What follows is not a production-ready path to deployment. Rather, it is a brute-force proof of concept.
MySQL
Start a detached MySQL container:
docker run -d -e MYSQL_ROOT_PASSWORD=secretp@ssword --name consolidated_blog_mysql_image mysql:5.7.8
This one probably won’t cause any trouble, so I don’t need to see any output.
Main WordPress
This is the WordPress instance you encounter when you land on the domain’s root.
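A sketch of how the container might be started, assuming the MySQL container from above and a hypothetical main_wordpress name. docker inspect then reveals the IP it was assigned (e.g., 172.17.0.181):
# Link the official wordpress image to the MySQL container.
docker run -d --link consolidated_blog_mysql_image:mysql --name main_wordpress wordpress

# Find the container's IP on the default bridge.
docker inspect -f '{{ .NetworkSettings.IPAddress }}' main_wordpress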
Start a second WordPress container the same way for the secondary blog. There’s a good chance it will be assigned the next IP in line:
172.17.0.182
Now, create a default.conf file:
vim default.conf
Copy and save the following:
server {
    listen 80;
    server_name localhost;

    # Main blog
    location / {
        proxy_pass http://172.17.0.181/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Secondary blog
    location /blog/ {
        proxy_pass http://172.17.0.182/;
    }
}
Change the proxy_pass IPs accordingly.
Execute:
docker run --rm --name nginx-wordpress-proxy \
  -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro \
  -p 80:80 nginx
The main blog should now be accessible at http://localhost and the secondary blog at http://localhost/blog. Set up different blogs on each WordPress instance to confirm the system is working as designed.
Obtaining or self-signing security certificates is a frequent step in my notes. The intent of this post is to DRY out my blog.
To self-sign a certificate, first create a certs/ directory:
mkdir certs
cd certs
In the following command, note the keyout and out options. I like to name my certificates in accordance with my production site’s URL and subdomain (if any). For example, suppose I need a certificate for example.com. I set the keyout and out options to example.com.key and example.com.crt respectively.
If you’re like me and you use the jwilder/nginx-proxy Docker image, it won’t find your certificates unless you follow the naming convention above.
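The command itself looks something like this (a sketch; adjust the key size and lifetime to taste):
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout example.com.key -out example.com.crt
openssl will prompt for the certificate’s subject fields; the Common Name should match your domain.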
Now, make sure that no one but root can look at your private key:
cd ..
sudo chown -R root:root certs
sudo chmod -R 600 certs
Alternatively, if you need validation from a third-party Certificate Authority, I like to use startssl.com. Their site is a little clunky, but they offer certificates for free, so they’re alright in my books. When it comes time to configure Nginx to use the new certificates, I always forget what to do, so the steps are documented below.
Having successfully followed the instructions at startssl.com, you’ll wind up with these four files:
ca.pem
ssl.crt
ssl.key
sub.class1.server.ca.pem
I like to put these all in a directory and tar ’em up for transport to the production server. Assuming that they’ve all been saved to a directory named for your URL (e.g., example.com/):
tar -zcvf example.com.tar.gz example.com
scp example.com.tar.gz you@example.com:~
Then, from the production machine, untar the file:
ssh you@example.com
tar -zxvf example.com.tar.gz
cd example.com/
Decrypt the private key with the password you entered at startssl.com.
openssl rsa -in ssl.key -out example.com.key
The unencrypted private key is not something you want to show off. Make it so only root can read it:
chmod 400 example.com.key
sudo chown root:root example.com.key
Nginx needs the startssl.com intermediate certificate concatenated to the public certificate:
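Assuming the file names above, that’s a simple concatenation:
cat ssl.crt sub.class1.server.ca.pem > example.com.crt
Point Nginx’s ssl_certificate directive at example.com.crt and ssl_certificate_key at example.com.key.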
I’ve got a bunch of Rails apps, but only a handful of cloud servers. I need some of them to live on a single machine without them stepping all over each other.
Assumptions
This guide assumes the server is running Ubuntu 14.04 and has all the requisite software already installed (e.g., Docker and Rails).
Certificates
You’ll need a certificate for each Rails app you wish to deploy. These can be self-signed or obtained from a Certificate Authority. To self-sign one, execute the following:
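As in the self-signing section above, the invocation is something like:
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout example.com.key -out example.com.crt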
Note the keyout and out options. The jwilder/nginx-proxy Docker image won’t pick up the certificates unless they are named in accordance with the production site’s URL and subdomain (if any). For example, if you have a certificate for example.com, the keyout and out options must be named example.com.key and example.com.crt respectively.
Obtain a certificate for each app you wish to deploy (or just get one for the purposes of this tutorial).
Postgres
The Rails apps will share a single detached Postgres container:
docker run --restart=always --name postgres -e POSTGRES_PASSWORD=secretp@ssword -d postgres
Rails apps
Now for the tricky part…
This configuration is meant to make deployment easy. The easiest way I’ve discovered so far involves writing a Dockerfile for the Rails app and providing Nginx some configuration files.
Save this sample Dockerfile in your app’s root directory on the server (next to the Gemfile). It is a sketch built on the phusion/passenger-ruby22 image, so adjust the my-app names and paths for your own app:
# Adapted from https://intercityup.com/blog/deploy-rails-app-including-database-configuration-env-vars-assets-using-docker.html
FROM phusion/passenger-ruby22

# Enable Nginx/Passenger (it ships disabled in the base image).
RUN rm -f /etc/service/nginx/down

# Configure Nginx (paths follow phusion/passenger-docker conventions).
ADD docker/my-app.conf /etc/nginx/sites-enabled/my-app.conf
ADD docker/postgres-env.conf /etc/nginx/main.d/postgres-env.conf

# Install the app.
ADD . /home/app/my-app
WORKDIR /home/app/my-app
RUN chown -R app:app /home/app/my-app
RUN sudo -u app bundle install --deployment

# TODO: figure out how to install `node` modules without `sudo`
RUN sudo npm install
RUN sudo -u app RAILS_ENV=production rake assets:precompile

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Note the ADD commands under the Configure Nginx header. These are copying configurations into the Docker image. Here I put them in the docker directory to keep them organized. From your app’s root directory:
mkdir docker
Now, save the following to docker/my-app.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name example.com;
    root /home/app/my-app/public;

    # Passenger
    passenger_enabled on;
    passenger_user app;
    passenger_ruby /usr/bin/ruby2.2;
}
Of course, change the server name as appropriate. Also note the /home/app directory. app is the username set up by the phusion/passenger-ruby22 image.
Next, save the following to docker/postgres-env.conf:
env POSTGRES_PORT_5432_TCP_ADDR;
env POSTGRES_PORT_5432_TCP_PORT;
Nginx wipes its environment by default; these env directives whitelist the variables that Docker’s --link injects, so Passenger (and therefore the Rails app) can still read the Postgres address and port.
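On the Rails side, config/database.yml can then read the linked container’s address and port from the environment. A sketch, with an illustrative database name and the password from the postgres container above:
# config/database.yml
production:
  adapter: postgresql
  database: my_app_production
  username: postgres
  password: secretp@ssword
  host: <%= ENV['POSTGRES_PORT_5432_TCP_ADDR'] %>
  port: <%= ENV['POSTGRES_PORT_5432_TCP_PORT'] %>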
Now, build the app’s image from the project’s root directory:
docker build -t my-app-image .
This command reads the Dockerfile just created and executes the instructions contained therein.
Set up, migrate, and seed the database:
docker run --rm --link postgres:postgres my-app-image rake db:create
docker run --rm --link postgres:postgres my-app-image rake db:migrate
docker run --rm --link postgres:postgres my-app-image rake db:seed
Hexo has become a little flaky of late, but it’s still my go-to software when I need to set up a new blog. It boasts One-Command Deployment, which would be great if I could figure out how to deploy it to anything other than GitHub or Heroku. There may be a way, but I’ve tried nothing and I’m all out of ideas. So instead I’ll deploy with Capistrano, because I want to try it with something other than Rails for a change.
Assumptions
You’re working on Ubuntu with the following installed:
git
node (with npm)
ruby (for Capistrano)
Hit me up in the comments if I’ve missed any basic dependencies. The software immediately pertinent to this post (e.g., Hexo and Capistrano) will be installed as required.
I’m also assuming that you have a remote machine or cloud server on which to host a git repository and Hexo blog site. Your blog will be modified on a local machine and deployed to a production machine with Capistrano. As such, to make things easy, all the software named above needs to be installed locally and remotely.
Install Hexo on your local machine
Detailed instructions are found in the Hexo documentation, but this is how you do it in a nutshell:
npm install hexo-cli -g
npm should have been installed as part of the node installation.
Initialize a Hexo blog
This, of course, is not necessary if you already have a Hexo blog to work with. But if you don’t:
hexo init blog
cd blog
npm install
Set up a remote git repository
Capistrano talks to your blog’s remote repository when it comes time to deploy. See git remote repository SSH setup for help on how to set this up.
When the blank repository has been initialized on the remote machine, you will need to initialize git in your local Hexo blog directory (i.e., blog/ if you’re following from the previous step). This step is covered in the link provided and repeated here. Assuming you’re in the blog/ directory:
git init
git add .
git commit -m "Hello, my new Hexo blog"
git remote add origin git@example.com:/opt/git/my-hexo-blog.git # Change domain and project name as appropriate
git push origin master
If everything is set up correctly, you won’t even need to enter a password to push your first commit.
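If Capistrano isn’t set up in the blog directory yet, the usual bootstrap looks something like this (a sketch, assuming Capistrano 3 and a working local Ruby):
gem install capistrano
cd ~/blog     # your local Hexo blog
cap install   # generates Capfile, config/deploy.rb, and the stage files
Then set :application, :repo_url (the remote repository above), and :deploy_to in config/deploy.rb.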
Then, in config/deploy/production.rb, modify as appropriate once again (out of the box, it should be sufficient to tack this on to the end of the file):
server "example.com", user: "deploy", roles: %w{web}
Note: the above assumes that my remote production server has a user named deploy and that this user can write to the /home/deploy/my-hexo-blog directory. Ultimately, it is up to you to determine which user deploys and where your blog is located on the file system.
I recently worked through Michael Hartl’s wonderful Ruby on Rails Tutorial as a refresher. The software implemented under his direction offers functionality that basically every modern website requires (e.g., user sign up, password retrieval, etc.). What follows documents the steps I took to deploy all the best parts of that tutorial in a production environment.
Get a server
Much of this post was ripped off from another article, which recommends Digital Ocean. I like cloudatcost.com for no other reason than that they’re cheap. For the purposes of this post, the provider doesn’t really matter, as long as the server is installed with Ubuntu 14.04.
Add a user account
The templated Rails application is executed under this account:
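A sketch of the usual setup, assuming the deploy user name referenced later in this post; the echo line is the command described next:
sudo adduser deploy
su deploy
echo "gem: --no-ri --no-rdoc" > ~/.gemrc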
The echo command prevents documentation from being generated locally for each gem installed.
Install NodeJS
Since it is my intention to deploy this system to a production environment, I need to use the Asset Pipeline to prep my content for distribution across the web. All that requires node.
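On Ubuntu 14.04, the stock package should suffice (a sketch):
sudo apt-get update
sudo apt-get install -y nodejs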
You’ll need to set the database password in config/application.yml.
Configure the environment
Before deploying with capistrano, a few files have to be in place. As the deploy user:
cd
mkdir -p rails-tutorial-template/shared/config
Get a secret key
If you have a rails project nearby, you can just type in
rake secret
Or, you can generate one by running irb
irb
and executing the following instructions:
require 'securerandom'
SecureRandom.hex(64)
exit
Copy the string generated by the SecureRandom.hex(64) command.
application.yml
This template uses figaro to manage all the sensitive stuff that sometimes goes into environment variables. The config/application.yml file it looks for isn’t committed to the repository, so you have to create it yourself:
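Something along these lines, saved to the shared/config directory created above so the deploy can pick it up. A sketch; the variable names are illustrative figaro-style keys, so match them to whatever the template actually reads:
# ~/rails-tutorial-template/shared/config/application.yml
SECRET_KEY_BASE: "paste the string generated by rake secret here"
DATABASE_PASSWORD: "secretp@ssword"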
This is meant to be completed on the development machine (not the server). It is assumed that postgresql and all the other dependencies are already installed (if not, do so as above).