A Dockerized, Torified, Express Application

Dark Web chatter is picking up. I’m interested in providing cool web services anonymously. This is my first attempt at using Docker Compose to stay ahead of this trend.

Assumption: all the software goodies are set up and ready to go on an Ubuntu 16.04 server (node, docker, docker-compose, et al.).

Set up an Express App

The Express Application Generator strikes me as a little bloated, but I use it anyway because I’m super lazy.

sudo npm install express-generator -g

Once installed, set up a vanilla express project:

express --view=ejs tor-app
cd tor-app && npm install

The express-generator will tell you to run the app like this:

DEBUG=tor-app:* npm start

This, of course, is only useful for development. From here, we’ll Dockerize for deployment and Torify for anonymity.

Tor pre-configuration

In anticipation of setting up the actual Torified app container, create a new file called config/torrc. This file will be used by Tor inside the Docker container to serve up our app. Paste the following into config/torrc:

HiddenServiceDir /home/node/.tor/hidden_service/
HiddenServicePort 80 127.0.0.1:3000

Docker

Copy and paste the following into a new file called Dockerfile:

FROM node:stretch
ENV NPM_CONFIG_LOGLEVEL warn
ENV DEBIAN_FRONTEND noninteractive
EXPOSE 9050
# `apt-utils` squelches a configuration warning
RUN apt-get update
RUN apt-get -y install apt-utils
#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-20
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring
#
# Tor raises some tricky directory permissions issues. Once started, Tor will
# write the hostname and private key into a directory on the host system. If
# the `node` user in the container does not have the same UID as the user on
# the host system, Tor will not be able to create and write to these
# directories. Execute `id -u` on the host to determine your UID.
#
# RUN usermod -u 1001 node
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
# Run the Tor service alongside the app itself
CMD /usr/bin/tor -f /etc/tor/torrc & npm start

Container/Host Permissions

Take special note of the comment posted above the RUN usermod -u 1001 node instruction in Dockerfile. If you get any errors on the container build/execute step described below, you’ll need to make sure your host user’s UID is the same as your container user’s UID (i.e., that of the node user).

Usually the user in the container has a UID of 1000. To determine the host user’s UID, execute id -u. If it’s not 1000, uncomment the usermod instruction in Dockerfile and make sure the numbers match.
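
For example, if id -u on the host prints 1001 (an illustrative value; substitute your own), the uncommented instruction in Dockerfile would read:

RUN usermod -u 1001 node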

Docker Compose

docker-compose does all of the heavy lifting: it builds the image from the Dockerfile and handles start-up/shut-down operations. Paste the following into a file called docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
      - ./config/torrc:/etc/tor/torrc

Bring the whole thing online by running

docker-compose up -d

Every now and then I get an error trying to obtain the GPG key:

gpg: keyserver receive failed: Cannot assign requested address

This usually solves itself on subsequent calls to docker-compose up.
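
If it keeps failing, one possible workaround (an assumption on my part, since keys.gnupg.net just seems flaky) is to point the Dockerfile at an alternate keyserver:

RUN gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89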

Assuming the build and execution were successful, you can determine your .onion address like this:

docker-compose exec node cat /home/node/.tor/hidden_service/hostname

You should now be able to access your app from your favourite Tor web browser.

If you’re interested in poking around inside the container, access the bash prompt like this:

docker-compose exec node bash

Notes

This is the first step in configuring and deploying a hidden service on the Tor network. Since working out the initial details, I’ve already thought of potential improvements to this approach. As it stands, only one hidden service can be deployed. It would be far better to create a Tor container able to proxy multiple apps. I will also be looking into setting up .onion vanity URLs and HTTPS.
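
To sketch the multi-app idea: torrc happily accepts multiple hidden service blocks, so a dedicated Tor container could forward onion traffic to several app containers. The app1/app2 names and ports below are hypothetical, and whether Tor will resolve compose service names at startup is something I still need to verify (pinning container IPs would be the fallback):

HiddenServiceDir /home/node/.tor/hidden_service_app1/
HiddenServicePort 80 app1:3000
HiddenServiceDir /home/node/.tor/hidden_service_app2/
HiddenServicePort 80 app2:3000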


mongodump and mongorestore Between Docker-composed Containers

I’m trying to refine the process by which I back up and restore Dockerized MongoDB containers. My previous effort is basically a brute-force copy-and-paste job on the container’s data directory. It works, but I’m concerned about restoring data between containers installed with different versions of MongoDB. Apparently this is tricky enough even with the benefit of recovery tools like mongodump and mongorestore, which are what I use below.
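
Before dumping or restoring, it is worth confirming which server version each container is actually running (the container name here is an assumption):

docker exec myapp_mongo_1 mongod --version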

In short, I need to dump my data from a data-only MongoDB container, bundle the files uploaded to my Express application, and restore it all on another server. Here’s how I did it…

Dump the data

I’m a big fan of docker-compose. I use it to manage all my containers. The following method requires that the composition be running so that mongodump can be run against the running Mongo container (which, in turn, accesses the data-only container). Assuming the name of the container is myapp_mongo_1:

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongodump --host $MONGO_PORT_27017_TCP_ADDR'

This will create a root-owned directory called myapp-mongo-dump in your current directory. It contains all the BSON data and JSON metadata for this database. For convenience, I change ownership of this resource:

sudo chown -R user:user myapp-mongo-dump

Then, for transport, I archive the directory:

tar zcvf myapp-mongo-dump.tar.gz myapp-mongo-dump

Archive the uploaded files

My app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar zcvf uploads.tar.gz uploads

Now I have two archived files: myapp-mongo-dump.tar.gz and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp myapp-mongo-dump.tar.gz uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I unpack both archives:

tar zxvf uploads.tar.gz
tar zxvf myapp-mongo-dump.tar.gz

Then I restore the data to the data-only container through the running Mongo instance (assumed to be called myapp_mongo_1):

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongorestore --host $MONGO_PORT_27017_TCP_ADDR'

With that, all data is restored. I didn’t even have to restart my containers to begin using the app on its new server.
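
As a quick sanity check, the same linked-container trick can list the restored databases (again assuming the running container is called myapp_mongo_1):

docker run --rm --link myapp_mongo_1:mongo mongo bash -c 'mongo $MONGO_PORT_27017_TCP_ADDR --eval "db.getMongo().getDBNames()"'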


MongoDB backup and restore between Dockerized Node apps

My bargain-basement cloud service provider, CloudAtCost, recently lost one of my servers and all the data on it. This loss was exacerbated by the fact that I didn’t back up my MongoDB data somewhere else. Now I’m working out the exact process after the fact so that I don’t suffer this loss again (it’s happened twice now with CloudAtCost, but hey, the price is right).

The following is a brute-force backup and recovery process. I suspect this approach has its weaknesses in that it may depend upon version consistency between the MongoDB containers. This is not ideal for someone like myself who always installs the latest version when creating new containers. I aim to develop a more flexible process soon.

Context

I have a server running Ubuntu 16.04, which, in turn, is serving up a Dockerized Express application (Nginx, MongoDB, and the app itself). The MongoDB data is backed up in a data-only container. To complicate matters, the application allows file uploads, which are stored on the file system in the project’s root.
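
For reference, the relevant services in docker-compose.yml look something like this (a minimal sketch with hypothetical names; the data-only container simply holds the mongo image's built-in /data/db volume):

version: '2'
services:
  mongo:
    image: mongo
    volumes_from:
      - mongo_data
  mongo_data:
    image: mongo
    command: 'true'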

I need to dump the data from the data-only container, bundle the uploaded files, and restore it all on another server. Here’s how I did it…

Dump the data

I use docker-compose to manage my containers. To obtain the name of the MongoDB data-only container, I simply run docker ps -a. Assuming the name of the container is myapp_mongo_data:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data/db

This will put a file called backup.tar in the app’s root directory. It may belong to the root user. If so, run sudo chown user:user backup.tar.

Archive the uploaded files

The app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar -zcvf uploads.tar.gz uploads

Now I have two archived files: backup.tar and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp backup.tar uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar -zxvf uploads.tar.gz

Then I restore the data to the data container:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar xvf /backup/backup.tar

Remove and restart containers

The project containers don’t need to be running when you restore the data in the previous step. If they are running, however, once the data is restored, remove the running containers and start again with docker-compose:

docker-compose stop
docker-compose rm
docker-compose up -d

I’m sure there is a reasonable explanation as to why removing the containers is necessary; my best guess is that a running mongod won’t notice data files changed out from under it, so the containers have to be recreated to pick up the restored data. In any case, removing the containers isn’t harmful, because all the data lives in the data-only container anyway.

Warning

As per the introduction, this process probably depends on version consistency between MongoDB containers.