

Torifying a Static Web Site with Docker and Ubuntu 20

This documents the easiest way to Tor-ify a static website with Docker. These instructions were executed on a default Ubuntu 20 Server installation.

The Website

Navigate to your preferred working directory and execute the following:

mkdir -p tor-site/www && cd tor-site

docker-compose.yml

Create a file called docker-compose.yml and paste:

version: "3.9"
services:
  nginx:
    image: nginx
    restart: unless-stopped
    expose:
      - "80"
    volumes:
      - ./www:/usr/share/nginx/html
networks:
  default:
    external:
      name: tor-proxy_default

This sets up the nginx container, which will serve the static website. The website itself will go into the www/ directory. Create a new file called index.html inside the www directory:

touch www/index.html

Paste this into www/index.html:

<html>
  <head>
    <title>Bonjour</title>
  </head>
  <body>
    <h1>Bienvenue sur le darkweb</h1>
  </body>
</html>

Deploy the Server

If you’re familiar with Docker, you will have noticed that docker-compose.yml declares an external network called tor-proxy_default. This network doesn’t exist yet. You only need to create it once:

docker network create tor-proxy_default

Now, your website should be ready to go.

docker compose up -d

The Tor Proxy

Navigate back to your preferred working directory:

cd ..

And create a new directory alongside your tor-site/:

mkdir -p tor-proxy/config && cd tor-proxy

This is where you set up the Tor proxy itself. It is in a separate directory from where you’ve set up your static nginx website.
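At this point, the working directory should look like this:

.
├── tor-proxy
│   └── config
└── tor-site
    ├── docker-compose.yml
    └── www
        └── index.html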

Dockerfile

Create a file called Dockerfile and paste:

FROM debian
ENV NPM_CONFIG_LOGLEVEL warn

ENV DEBIAN_FRONTEND noninteractive

EXPOSE 9050

RUN apt-get update

# `apt-utils` squelches a configuration warning
# `gnupg` is required for adding the `apt` key
RUN apt-get -y install apt-utils wget gnupg apt-transport-https

#
# Info from: https://packages.debian.org/stretch/amd64/libevent-2.0-5/download
#
RUN echo "deb http://ftp.ca.debian.org/debian stretch main" | tee -a /etc/apt/sources.list.d/torproject.list

#
# From https://support.torproject.org/apt/
#
RUN echo "deb [signed-by=/usr/share/keyrings/tor-archive-keyring.gpg] https://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src [signed-by=/usr/share/keyrings/tor-archive-keyring.gpg] https://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN wget -qO- https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --dearmor | tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/null
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring

# The debian image does not create a default user
RUN useradd -m user
USER user

# Run the Tor service
CMD /usr/bin/tor -f /etc/tor/torrc

This defines the Docker image; docker-compose will build it in the next step.
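If you want to confirm the image builds cleanly before wiring it into docker-compose, you can build it directly (the tag is arbitrary and the step is optional):

docker build -t tor-proxy-test .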

docker-compose.yml

I think it’s best to manage the created image with docker-compose. Create a new file called docker-compose.yml and paste:

version: "3.9"
services:
  tor:
    build: .
    restart: unless-stopped
    volumes:
      - ./config/torrc:/etc/tor/torrc

Configure the Tor Proxy

You can host several websites or web applications through a Tor proxy. Notice the shared volume declared in the docker-compose.yml file. You need to create a file called ./config/torrc:

touch config/torrc

This is read from inside the Tor Docker container. Paste the following into ./config/torrc:

HiddenServiceDir /home/user/.tor/hidden_app_1/
HiddenServicePort 80 tor-site-nginx-1:80
HiddenServiceVersion 3
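The tor-site-nginx-1 hostname resolves because Docker Compose names containers <project>-<service>-<index>, and the nginx container sits on the same tor-proxy_default network as the Tor proxy. To host a second site or app, append another block pointing at its container; the my-app-nginx-1 name here is hypothetical:

HiddenServiceDir /home/user/.tor/hidden_app_2/
HiddenServicePort 80 my-app-nginx-1:80
HiddenServiceVersion 3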

Deploy the Tor Proxy

docker compose up -d

You can find your .onion hostnames in the .tor directory in the container:

docker compose exec tor cat /home/user/.tor/hidden_app_1/hostname

You’ll see an address that looks like this:

257472yzjwmkcts7mmp2nl4v32brv5gq7ybx2ac6luv6ddjnipwv2xyd.onion

Navigate to your Torified website by entering it into your Tor-enabled browser.

Network Troubleshooting

Whoa! I had never run into this issue before…

I discovered a frustrating error that took some effort to solve. Docker networks default to an MTU of 1500, but the network adapter on my server has an MTU of 1280. The symptom of this mismatch was that the Tor Docker container couldn’t run apt-get because the connection would always time out.

This is the resource that solved the problem: https://www.civo.com/learn/fixing-networking-for-docker

To determine if you have this issue, execute the following on the Docker host:

ip a

You will see something like this:

# ...

2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1280 qdisc mq state UP group default qlen 1000
    link/ether 74:d4:35:e9:5b:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.8/24 brd 192.168.2.255 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::76d4:35ff:fee9:5b7e/64 scope link
       valid_lft forever preferred_lft forever

# ...

5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a3:b9:76:b0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a3ff:feb9:76b0/64 scope link
       valid_lft forever preferred_lft forever

The values associated with mtu need to match. The resource linked above explains how to fix the problem.
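For reference, the fix in that article boils down to telling the Docker daemon to use the host adapter’s smaller MTU. A sketch, assuming an MTU of 1280 as above and that /etc/docker/daemon.json does not already exist (check the linked article for specifics):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "mtu": 1280
}
EOF
sudo systemctl restart docker

User-defined networks do not inherit this daemon setting, so the shared network would also need to be recreated with an explicit MTU:

docker network rm tor-proxy_default
docker network create --opt com.docker.network.driver.mtu=1280 tor-proxy_default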


An nginx-proxy/lets-encrypt Docker Composition

I was just doing a major redeployment when I realized I’ve never documented my approach to nginx-proxy and lets-encrypt with Version 3 of docker-compose.

I like to deploy a bunch of web applications and static web sites behind a single proxy. What follows is meant to be copy-paste workable on an Ubuntu 16.04 server.

Organization

Set up your server’s directory structure:

mkdir -p ~/sites/nginx-proxy && cd ~/sites/nginx-proxy

Docker Compose

Paste the following into docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # Can anyone explain this sorcery?
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    logging:
      options:
        max-size: "4m"
        max-file: "10"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
    depends_on:
      - nginx-proxy
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
volumes:
  vhost:

# Do not forget to 'docker network create nginx-proxy' before launch
# and to add '--network nginx-proxy' to proxied containers.
networks:
  default:
    external:
      name: nginx-proxy

Configuring the nginx in nginx-proxy

Sometimes you need to override the default nginx configuration contained in the nginx-proxy Docker image. To do this, you must build a new image using nginx-proxy as its base.

For example, an app might need to accept large file uploads. You would paste this into your Dockerfile:

# Cf., https://github.com/schmunk42/nginx-proxy#proxy-wide
FROM jwilder/nginx-proxy
RUN { \
      echo 'server_tokens off;'; \
      echo 'client_max_body_size 5m;'; \
    } > /etc/nginx/conf.d/my_proxy.conf

This sets the required configurations within the nginx-proxy container.
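If the override only applies to a single site, nginx-proxy can also pick up per-host configuration from /etc/nginx/vhost.d/: a file named after the VIRTUAL_HOST is included in that host’s server block. Since the composition above already mounts the vhost volume, something like the following should work (example.com standing in for your domain):

docker-compose exec nginx-proxy sh -c 'echo "client_max_body_size 5m;" > /etc/nginx/vhost.d/example.com'
docker-compose exec nginx-proxy nginx -s reload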

If you take the custom-image route, you also need to modify the docker-compose.yml file to build the local Dockerfile. The first few lines will now look like this:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:

    # Change this:
    #image: jwilder/nginx-proxy

    # To this:
    build: .

    # as above...

Deploying sites and apps

With the proxy configured and deployed (docker-compose up -d), you can wire up all your sites and apps.

Static Site

A static site deployed with nginx:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=you@example.com
    expose:
      - 80
    volumes:
      - ./_site:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
networks:
  default:
    external:
      name: nginx-proxy

Deploy App

Requirements are going to vary app-by-app, but for a simple node application, use the following as a starting point:

# docker-compose.yml
version: '3'
services:
  node:
    build: .
    restart: unless-stopped
    ports:
      - 3000
    environment:
      - NODE_ENV=production
      - VIRTUAL_HOST=app.example.com
      - LETSENCRYPT_HOST=app.example.com
      - LETSENCRYPT_EMAIL=you@example.com
    volumes:
      - .:/home/node
      - /home/node/node_modules
    logging:
      options:
        max-size: "4m"
        max-file: "10"
networks:
  default:
    external:
      name: nginx-proxy
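The build: . line expects a Dockerfile beside this docker-compose.yml. A minimal sketch for a node app, assuming package.json defines an npm start script:

# The bind mount in docker-compose.yml overlays /home/node at runtime;
# the anonymous volume preserves the node_modules installed here.
FROM node
WORKDIR /home/node
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]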

Nginx Proxy, Let's Encrypt Companion, and Docker Compose Version 3

I recently discovered that I don’t need to manually create data-only containers with docker-compose anymore. A welcome feature, but one that comes with all the usual migration overhead. I rely heavily on nginx-proxy and letsencrypt-nginx-proxy-companion. Getting it all to work in the style of docker-compose version 3 took a bit of doing.

My previous tried and true approach is getting pretty stale. It is time to up my Docker game…

My Site

nginx-proxy proxies multiple sites, but for demonstration purposes, I’m only serving up one with nginx. I like to put all my individual Docker compositions in their own directories:

mkdir mysite && cd mysite

Optional

The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Hello, world!</title>
  </head>
  <body>
    Hello, world!
  </body>
</html>

docker-compose

It’s awesome that I can create data-only containers in my docker-compose.yml, but now I’ve got to manually create a network bridge:

docker network create nginx-proxy

Proxied containers also need to know about this network in their own docker-compose.yml files…

Copy and paste the code below:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: always
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=email@example.com
    volumes:
      - ./:/usr/share/nginx/html
networks:
  default:
    external:
      name: nginx-proxy

This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).

Start docker-compose:

docker-compose up -d

The site won’t be accessible yet. That comes next.

nginx-proxy

As before, put the nginx-proxy Docker composition in its own directory:

cd ..
mkdir nginx-proxy && cd nginx-proxy

Create a directory in which to store the Let’s Encrypt certificates:

mkdir certs

Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
volumes:
  vhost:
networks:
  default:
    external:
      name: nginx-proxy

This allows nginx-proxy to combine forces with letsencrypt-nginx-proxy-companion, all in one docker-compose file.

Start docker-compose:

docker-compose up -d

If all is well, you should be able to access your site at the address configured.
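A quick smoke test, assuming DNS for the configured domain already points at this server (Let’s Encrypt requires that in any case):

curl -I https://example.com

Certificate issuance can take a minute or two after the containers start, so an immediate SSL error doesn’t necessarily mean anything is broken.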


Brute force Docker WordPress Nginx proxy demo

Dockerizing a dynamic Nginx-WordPress proxy is tricky business. I plan to bundle this all up in bash scripts, but for now I am simply documenting the steps I took to configure the system in my local environment.

What follows is not a production-ready path to deployment. Rather, it is a brute-force proof of concept.

MySQL

Start a detached MySQL container:

docker run -d -e MYSQL_ROOT_PASSWORD=secretp@ssword --name consolidated_blog_mysql_image mysql:5.7.8

This one probably won’t cause any trouble, so I don’t need to see any output.

Main WordPress

This is the WordPress instance you encounter when you land on the domain’s root.

docker run --rm --link consolidated_blog_mysql_image:mysql -e WORDPRESS_DB_NAME=main_blog -e WORDPRESS_DB_PASSWORD=secretp@ssword -p 8081:80 --name main_blog_wordpress_image wordpress:4

Secondary WordPress blog

This is the WordPress instance you encounter when you land on the domain’s /blog path.

docker run --rm --link consolidated_blog_mysql_image:mysql -e WORDPRESS_DB_NAME=blog2 -e WORDPRESS_DB_PASSWORD=secretp@ssword -p 8083:80 --name blog2_wordpress_image wordpress:4

Notice the port. If I were to change it from 8083:80 to 8082:80, it would redirect back to 8081, and I don’t know why yet.

Nginx proxy

This is the tricky part. I need to obtain the IPs assigned to my WordPress containers and set them in my Nginx default.conf.

First, get the IP address of the running main_blog_wordpress_image container:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' main_blog_wordpress_image

This will output the IP. Make note of it, because I need to copy it into Nginx’s default.conf file.

172.17.0.181

Get the IP address of the running blog2_wordpress_image container:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' blog2_wordpress_image

There’s a good chance it will be the next IP in line:

172.17.0.182

Now, create a default.conf file:

vim default.conf

Copy and save the following:

server {
    listen 80;
    server_name localhost;

    # Main blog
    location / {
        proxy_pass http://172.17.0.181/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Secondary blog
    location /blog/ {
        proxy_pass http://172.17.0.182/;
    }
}

Change the proxy_pass IPs accordingly.

Execute:

docker run --rm --name nginx-wordpress-proxy -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 nginx

The main blog should now be accessible at http://localhost. The secondary blog at http://localhost/blog. Set up different blogs on each WordPress instance to confirm the system is working as designed.
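A quick check from the host (a fresh WordPress answers with a redirect to its install page, which is fine):

curl -I http://localhost/
curl -I http://localhost/blog/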


Self-signing security certificates

Obtaining or self-signing security certificates is a frequent step in my notes. The intent of this post is to DRY out my blog.

To self-sign a certificate, first create a certs/ directory:

mkdir certs
cd certs

In the following command, note the keyout and out options. I like to name my certificates in accordance with my production site’s URL and subdomain (if any). For example, suppose I need a certificate for example.com. I set the keyout and out options to example.com.key and example.com.crt respectively.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout example.com.key -out example.com.crt

If you’re like me and you use the jwilder/nginx-proxy Docker image, it won’t find your certificates unless you follow the naming convention above.
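For example, a proxy serving example.com and app.example.com (the subdomain is just for illustration) expects the certs/ directory to contain:

certs/
├── example.com.crt
├── example.com.key
├── app.example.com.crt
└── app.example.com.key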

Now, make sure that no one but root can look at your private key:

cd ..
sudo chown -R root:root certs
sudo chmod -R 600 certs

Alternatively, if you need validation from a third-party Certificate Authority, I like to use startssl.com. Their site is a little clunky, but they offer certificates for free, so they’re alright in my books.

See also: Chaining intermediate certificates for Nginx


Chaining intermediate certificates for Nginx

I always use startssl.com to get free authentication certificates. It’s a little clunky to use, but it’s free and that makes it awesome. When it comes time to configure Nginx to use my new certificates, I always forget what to do. These instructions are adapted from here.

Having successfully followed the instructions at startssl.com, you’ll wind up with these four files:

  1. ca.pem
  2. ssl.crt
  3. ssl.key
  4. sub.class1.server.ca.pem

I like to put these all in a directory and zip ‘em up for transport to the production server. Assuming that they’ve all been saved to a directory named for your URL (e.g., example.com/):

tar -zcvf example.com.tar.gz example.com
scp example.com.tar.gz you@example.com:~

Then, from the production machine, untar the file:

ssh you@example.com
tar -zxvf example.com.tar.gz
cd example.com/

Decrypt the private key with the password you entered at startssl.com.

openssl rsa -in ssl.key -out example.com.key

The unencrypted private key is not something you want to show off. Make it so only root can read it:

chmod 400 example.com.key
sudo chown root:root example.com.key

Nginx needs the startssl.com intermediate certificate concatenated to the public certificate:

cat ssl.crt sub.class1.server.ca.pem > example.com.crt

The private key has been decrypted and the certificates concatenated. Supposing you have an Nginx server directive that looks like this:

server {
    listen 443 default_server ssl;
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    # ...
}

We need to move the certificate and private key into the directory specified (/etc/nginx/ssl/).

sudo mv example.com.crt /etc/nginx/ssl/
sudo mv example.com.key /etc/nginx/ssl/

Restart your Nginx server and your certificates should be ready to go.
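On Ubuntu, that is typically:

sudo service nginx restart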


Deploy multiple Rails apps with Passenger, Nginx, and Docker

Here’s the problem:

I’ve got a bunch of Rails apps, but only a handful of cloud servers. I need some of them to live on a single machine without them stepping all over each other.

Assumptions

This guide assumes the server is running Ubuntu 14.04 and has all the requisite software already installed (e.g., Docker, Rails, etc.).

Enter Docker

Docker makes this kind of configuration easy to maintain, and all the required containers (Nginx, PostgreSQL, Passenger) come pre-packaged.

Nginx

First, get some SSL certificates

You’ll need one for each Rails app you wish to deploy. These can be self-signed or obtained from a Certificate Authority. To self-sign a certificate, execute the following:

mkdir certs
cd certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sub.example.com.key -out sub.example.com.crt
cd ..
sudo chown -R root:root certs
sudo chmod -R 600 certs

Note the keyout and out options. The jwilder/nginx-proxy Docker image won’t pick up the certificates unless they are named in accordance with the production site’s URL and subdomain (if any). For example, if you have a certificate for example.com, the keyout and out options must be named example.com.key and example.com.crt respectively.

Obtain a certificate for each app you wish to deploy (or just get one for the purposes of this tutorial).

Then, run the Nginx docker image

Note the app username. Adjust as appropriate.

docker run --restart=always --name nginx-proxy -d -p 80:80 -p 443:443 -v /home/app/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

PostgreSQL

docker run --restart=always --name postgres -e POSTGRES_PASSWORD=secretp@ssword -d postgres

Rails apps

Now for the tricky part…

This configuration is meant to make deployment easy. The easiest way I’ve discovered so far involves writing a Dockerfile for the Rails app and providing Nginx some configuration files.

Save this sample Dockerfile in your app’s root directory on the server (next to the Gemfile):

# Adapted from https://intercityup.com/blog/deploy-rails-app-including-database-configuration-env-vars-assets-using-docker.html

FROM phusion/passenger-ruby22:latest
MAINTAINER Some Groovy Cat "hepcat@example.com"

# Set correct environment variables.
ENV HOME /root
ENV RAILS_ENV production

# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]

# Start Nginx and Passenger
EXPOSE 80
RUN rm -f /etc/service/nginx/down

# Configure Nginx
RUN rm /etc/nginx/sites-enabled/default
ADD docker/my-app.conf /etc/nginx/sites-enabled/my-app.conf
ADD docker/postgres-env.conf /etc/nginx/main.d/postgres-env.conf

# Install the app
ADD . /home/app/my-app
WORKDIR /home/app/my-app
RUN chown -R app:app /home/app/my-app
RUN sudo -u app bundle install --deployment

# TODO: figure out how to install `node` modules without `sudo`
RUN sudo npm install

RUN sudo -u app RAILS_ENV=production rake assets:precompile

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Note the ADD commands under the Configure Nginx header. These are copying configurations into the Docker image. Here I put them in the docker directory to keep them organized. From your app’s root directory:

mkdir docker

Now, save the following to docker/my-app.conf:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name example.com;

    root /home/app/my-app/public;

    # Passenger
    passenger_enabled on;
    passenger_user app;
    passenger_ruby /usr/bin/ruby2.2;
}

Of course, change the server name as appropriate. Also note the /home/app directory. app is the username set up by the phusion/passenger-ruby22 image.

Next, save the following to docker/postgres-env.conf

env POSTGRES_PORT_5432_TCP_ADDR;
env POSTGRES_PORT_5432_TCP_PORT;

This is some Docker magic that preserves these Postgres environment variables.
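For those variables to do anything, the app’s config/database.yml has to reference them. A sketch, assuming PostgreSQL and placeholder database and username values (the *_TCP_* variables are the ones injected by --link postgres:postgres, and the password matches the docker run above):

production:
  adapter: postgresql
  database: my_app_production
  username: postgres
  password: secretp@ssword
  host: <%= ENV['POSTGRES_PORT_5432_TCP_ADDR'] %>
  port: <%= ENV['POSTGRES_PORT_5432_TCP_PORT'] %>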

Now, build the app’s image from the project’s root directory:

docker build -t my-app-image .

This command reads the Dockerfile just created and executes the instructions contained therein.

Setup, migrate, and seed the database:

docker run --rm --link postgres:postgres my-app-image rake db:create
docker run --rm --link postgres:postgres my-app-image rake db:migrate
docker run --rm --link postgres:postgres my-app-image rake db:seed

Finally, execute the image:

docker run --restart=always --name my-app --expose 80 -e VIRTUAL_HOST=example.com --link postgres:postgres -d my-app-image

If everything goes well, you will be able to see your app at example.com (or wherever).

Next

Deploy a Rails app to Docker with Capistrano


Deploy a Hexo blog with Capistrano

Hexo has become a little flaky of late, but it’s still my go-to software when I need to set up a new blog. It boasts One-Command Deployment, which would be great if I could figure out how to deploy it to anything other than GitHub or Heroku. There may be a way, but I’ve tried nothing and I’m all out of ideas. So instead I’ll deploy with Capistrano, because I want to try it with something other than Rails for a change.

Assumptions

You’re working on Ubuntu, with the requisite software installed on a remote machine on which to host a git repository and blog site (git, node/npm, and nginx all come up below).

Hit me up in the comments if I’ve missed any basic dependencies. The software immediately pertinent to this post (e.g., Hexo and Capistrano) will be installed as required.

I’m also assuming that you have a remote machine or cloud server on which to host a git repository and Hexo blog site. Your blog will be modified on a local machine and deployed to a production machine with Capistrano. As such, to make things easy, all the software named above needs to be installed locally and remotely.

Install Hexo on your local machine

Detailed instructions are found here, but this is how you do it in a nutshell:

npm install hexo-cli -g

npm should have been installed as part of the node installation.

Initialize a Hexo blog

This, of course, is not necessary if you already have a Hexo blog to work with. But if you don’t,

hexo init blog
cd blog
npm install

Set up a remote git repository

Capistrano talks to your blog’s remote repository when it comes time to deploy. See git remote repository SSH setup for help on how to set this up.

When the blank repository has been initialized on the remote machine, you will need to initialize git in your local Hexo blog directory (i.e., blog/ if you’re following from the previous step). This step is covered in the link provided and repeated here. Assuming you’re in the blog/ directory:

git init
git add .
git commit -m "Hello, my new Hexo blog"
git remote add origin git@example.com:/opt/git/my-hexo-blog.git # Change domain and project name as appropriate
git push origin master

If everything is set up correctly, you won’t even need to enter a password to push your first commit.

nginx

Add host:

sudo touch /etc/nginx/sites-available/my-hexo-blog.conf
sudo ln -s /etc/nginx/sites-available/my-hexo-blog.conf /etc/nginx/sites-enabled/my-hexo-blog.conf
sudo vim /etc/nginx/sites-available/my-hexo-blog.conf

Write the following to the file:

server {
    listen 80;
    server_name example.com www.example.com;
    access_log /var/log/nginx/example.access.log;
    error_log /var/log/nginx/example.error.log;

    location / {
        alias /home/deploy/example/current/public/;
        try_files $uri $uri/ /index.html;
    }
}

Restart:

sudo service nginx restart

Install Capistrano

gem install capistrano

Set up Capistrano

Just like hexo and git, Capistrano needs to be initialized in your project directory:

cap install

If successful, you will see something like this:

mkdir -p config/deploy
create config/deploy.rb
create config/deploy/staging.rb
create config/deploy/production.rb
mkdir -p lib/capistrano/tasks
create Capfile
Capified

With regard to the steps previously taken, modify the pre-cooked config/deploy.rb as appropriate. For example:

set :application, "my-hexo-blog"
set :repo_url, "git@example.com:/opt/git/my-hexo-blog.git"

# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, "/home/deploy/my-hexo-blog"

# ...

namespace :deploy do

  # 2015-4-14 https://gist.github.com/ryanray/7579912
  desc 'Install node modules'
  task :npm_install do
    on roles(:web) do
      execute "cd #{release_path} && npm install"
    end
  end

  desc 'Compile markdown'
  task :hexo_generate do
    on roles(:web) do
      execute "cd #{release_path} && hexo generate"
    end
  end

  before :updated, 'deploy:npm_install'
  after :deploy, 'deploy:hexo_generate'
  after :finishing, 'deploy:cleanup'
end

Then, in config/deploy/production.rb, modify as appropriate once again (out of the box, it should be sufficient to tack this on to the end of the file):

server "example.com", user: "deploy", roles: %w{web}

Note: the above assumes that my remote production server has a user named deploy and that this user can write to the /home/deploy/my-hexo-blog directory. Ultimately, it is up to you to determine which user deploys and where your blog is located on the file system.

Deploy

cap production deploy

That should do it. If something goes wrong,

cap production deploy --trace

will give more details.


Deploying the Rails Tutorial Sample App

I recently worked through Michael Hartl’s wonderful Ruby on Rails Tutorial as a refresher. The software implemented under his direction offers functionality that basically every modern website requires (e.g., user sign up, password retrieval, etc.). What follows documents the steps I took to deploy all the best parts of that tutorial in a production environment.

Get a server

Much of this post was ripped off from this article. They recommend Digital Ocean. I like cloudatcost.com for no other reason than because they’re cheap. For the purposes of this post, it doesn’t really matter as long as it’s installed with Ubuntu 14.04.

Add a user account

The templated Rails application is executed under this account:

sudo adduser deploy
sudo adduser deploy sudo
su deploy

Install Ruby

Some dependencies

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev

rbenv

cd
git clone git://github.com/sstephenson/rbenv.git .rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL

ruby-build plugin

git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL

rbenv-gem-rehash plugins

git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash

Ruby

rbenv install 2.2.1
rbenv global 2.2.1
ruby -v

bundler

echo "gem: --no-ri --no-rdoc" > ~/.gemrc
gem install bundler

The echo command prevents documentation for each gem from being installed locally.

Install NodeJS

Since it is my intention to deploy this system to a production environment, I need to use the Asset Pipeline to prep my content for distribution across the web. All that requires node.

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

Install Rails

gem install rails -v 4.2.0
rails -v

Nginx and Passenger

Install Phusion’s PGP key to verify packages

gpg --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
gpg --armor --export 561F9B9CAC40B2F7 | sudo apt-key add -

Add HTTPS support to APT

sudo apt-get install apt-transport-https

Add the passenger repository

sudo sh -c "echo 'deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main' >> /etc/apt/sources.list.d/passenger.list"
sudo chown root: /etc/apt/sources.list.d/passenger.list
sudo chmod 600 /etc/apt/sources.list.d/passenger.list
sudo apt-get update

nginx and passenger

sudo apt-get install nginx-full nginx-extras passenger

Configure

sudo vim /etc/nginx/nginx.conf

Uncomment the rbenv Phusion Passenger stuff. There should be some helpful hints in the file itself:

##
# Phusion Passenger
##
# Uncomment it if you installed ruby-passenger or ruby-passenger-enterprise
##

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;

passenger_ruby /home/deploy/.rbenv/shims/ruby; # If you use rbenv
# passenger_ruby /home/deploy/.rvm/wrappers/ruby-2.1.2/ruby; # If you use rvm, be sure to change the version number
# passenger_ruby /usr/bin/ruby; # If you use ruby from source

Get an SSL certificate

These instructions will produce a self-signed certificate:

sudo mkdir /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

Alternatively, validate with startssl.com for free. This document provides some excellent additional information.

Add nginx host

sudo touch /etc/nginx/sites-available/mydomain.conf
sudo ln -s /etc/nginx/sites-available/mydomain.conf /etc/nginx/sites-enabled/mydomain.conf
sudo vim /etc/nginx/sites-available/mydomain.conf

Write the following to the file:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    listen 443 ssl;

    server_name gofish.mobi;

    # SSL
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    # Error logs
    access_log /var/log/nginx/gofish.access.log;
    error_log /var/log/nginx/gofish.error.log;

    # Passenger
    passenger_enabled on;
    rails_env production;
    root /home/deploy/rails-tutorial-template/current/public;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    # Static assets
    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }
}

Start or restart nginx:

sudo service nginx restart

PostgreSQL

Install:

sudo apt-get install postgresql postgresql-contrib libpq-dev

Create the deploy postgres user:

sudo su - postgres
createuser -U postgres -d -e -E -I -P -r -s deploy
exit

You’ll need to set the database password in config/application.yml.

Configure the environment

Before deploying with capistrano, a few files have to be in place. As the deploy user:

cd
mkdir -p rails-tutorial-template/shared/config

Get a secret key

If you have a rails project nearby, you can just type in

rake secret

Or, you can generate one by running irb

irb

and executing the following instructions:

require 'securerandom'
SecureRandom.hex(64)
exit

Copy the string generated by the SecureRandom.hex(64) command.

application.yml

This template uses figaro to manage all the sensitive stuff that sometimes goes into environment variables. The config/application.yml file it looks for isn’t committed to the repository, so you have to create it yourself:

cd rails-tutorial-template/shared/config
vim application.yml

Copy, paste, modify, and save the following:

# General
app_name: "rails_tutorial_template"

# Email
default_from: "noreply@gofish.mobi"
gmail_username: "noreply@gofish.mobi"
gmail_password: "secretnoreplypassword"

# Production
secret_key_base: "PasteTheSecretKeyFromThePreviousStepHere"
host: "gofish.mobi"
provider_database_password: "databasepassword"

I set up an account in Gmail to handle signup verifications and password resets.

database.yml and secrets.yml

There’s no sensitive information contained in the database.yml or secrets.yml files, so these can be copied directly from GitHub.

wget https://raw.githubusercontent.com/RaphaelDeLaGhetto/rails-tutorial-template/master/config/database.yml
wget https://raw.githubusercontent.com/RaphaelDeLaGhetto/rails-tutorial-template/master/config/secrets.yml

Clone the template

This is meant to be completed on the development machine (not the server). It is assumed that postgresql and all the other dependencies are already installed (if not, do so as above).

1
2
3
4
5
6
7
git clone https://github.com/RaphaelDeLaGhetto/rails-tutorial-template.git
cd rails-tutorial-template
bundle install
sudo npm install
rake db:setup
rake db:seed
vim config/application.yml

Then copy, paste, and save the following in the file:

default_from: 'noreply@example.com'

Tests should all pass

rake

capistrano deployment

I’m still working on making this easier. From the project’s directory on the development machine, set the following in config/deploy/production.rb:

# Replace gofish.mobi with your server's domain or IP address!
server 'gofish.mobi', user: 'deploy', roles: %w{web app}

Then run

bundle exec cap production deploy --trace

The deployment should succeed, but the site will not be accessible until the database is set up. Log in to the production server as deploy:

ssh deploy@gofish.mobi
cd rails-tutorial-template/current
RAILS_ENV=production rake db:setup

Now, enable the deploy user to restart passenger without providing a sudo password:

sudo visudo

Add this to the end of the file and save:

deploy ALL=(root) NOPASSWD: /usr/bin/passenger-config

Back on the local machine, the deployment should now succeed:

bundle exec cap production deploy --trace

If everything worked out right, then the app should be accessible at the configured domain name (gofish.mobi in my case).