Basic Android-React Native environment setup in Ubuntu 18.04

I am a test-driven developer who avoids fancy IDEs. I attempted to work through the details of a headless Android-React Native development environment, but quickly realized I was in over my head. This document outlines what may be the more typical workspace arrangement. It also demonstrates how I got everything working with Detox.

The following steps were executed on an Ubuntu 18.04 Desktop machine. What follows is heavily adapted from the Facebook and Detox documentation.

Dependencies

Node

You need node 8.3 or newer. I’m using 10.15.3.
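`node --version` will tell you what you have. The comparison itself can be scripted with `sort -V`; here is a minimal sketch (the version strings are stand-ins for your own):

```shell
# sort -V orders version strings numerically; the older one comes out first
required="8.3.0"
installed="10.15.3"   # substitute the output of: node --version | tr -d v
oldest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
  echo "ok: $installed satisfies the $required minimum"
else
  echo "too old: $installed is below $required"
fi
```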

React Native CLI

npm install -g react-native-cli

Java JDK

This is the version recommended by Facebook. Installation instructions are adapted from those provided by DigitalOcean.

sudo apt install openjdk-8-jdk
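You can confirm the install with `java -version`, which reports to stderr; for this package the first line reads something like `openjdk version "1.8.0_191"`. A small sketch of pulling the major version out of such a line (the sample string is an assumption):

```shell
# Sample first line of `java -version 2>&1` for OpenJDK 8
sample='openjdk version "1.8.0_191"'
# Pre-Java 9 releases report as 1.<major>.<minor>; capture the major digit
major="$(printf '%s\n' "$sample" | sed 's/.*"1\.\([0-9]*\)\..*/\1/')"
echo "JDK major version: $major"
```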

Android Studio

You can download the IDE here. I simply installed via the Ubuntu Software manager.

On first execution, select Do not import settings and press OK. Navigate the Setup Wizard screens and, when prompted to select an installation type, choose a Custom setup. Check the following boxes:

  • Android SDK
  • Android SDK Platform
  • Android Virtual Device

Click Next to install all of these components.

Configure SDK

A React Native app requires the Android 9 (Pie) SDK. Install it through the SDK Manager in Android Studio. Expand the Pie selection by clicking the Show Package Details box. Make sure the following options are checked:

  • Android SDK Platform 28
  • Intel x86 Atom_64 System Image or Google APIs Intel x86 Atom System Image (I chose the first option)

Add the following lines to your $HOME/.bashrc config file:

export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/platform-tools

Load the config into the current shell:

source $HOME/.bashrc

Compile Watchman

sudo apt install libssl-dev autoconf automake libtool pkg-config python-dev
git clone https://github.com/facebook/watchman.git
cd watchman
git checkout v4.9.0 # the latest stable release
./autogen.sh
./configure
make
sudo make install

Install KVM

Adapted from here.

Check if your CPU supports hardware virtualization, by typing:

egrep -c '(vmx|svm)' /proc/cpuinfo
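A nonzero count means the CPU advertises the vmx (Intel) or svm (AMD) flag. As a sketch of what's being matched, run against a hypothetical cpuinfo flags line:

```shell
# A hypothetical /proc/cpuinfo flags line with the Intel vmx flag present
cpuinfo="flags : fpu vme de pse tsc msr pae vmx smx est"
count="$(printf '%s\n' "$cpuinfo" | grep -Ec '(vmx|svm)')"
if [ "$count" -gt 0 ]; then
  echo "hardware virtualization supported"
else
  echo "no vmx/svm flag; KVM acceleration will not work"
fi
```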

Install dependencies:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

Add your user to the kvm group, then reload the session so the new membership takes effect:

sudo adduser $USER kvm
su - $USER

Check if everything is ok:

sudo virsh -c qemu:///system list

Create a React Native project

react-native init AwesomeProject

Use Android Studio to open ./AwesomeProject/android. Open AVD Manager to see a list of Android Virtual Devices (AVDs).

Click Create Virtual Device, pick a phone (I picked Nexus 5), press Next, and select the Pie API Level 28 image (I had to download it first).

I run the emulator apart from the Android Studio environment:

~/Android/Sdk/emulator/emulator -avd Nexus_5_API_28

Execute the AwesomeProject app:

cd AwesomeProject
react-native run-android

Add Detox to Android project

Here, I simply consolidated all the setup steps described over several pages of Detox docs.

npm install -g detox-cli
npm install --save-dev detox

Paste this into package.json:

"detox": {
  "configurations": {
    "android.emu.debug": {
      "binaryPath": "android/app/build/outputs/apk/debug/app-debug.apk",
      "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..",
      "type": "android.emulator",
      "name": "Nexus_5_API_28"
    },
    "android.emu.release": {
      "binaryPath": "android/app/build/outputs/apk/release/app-release.apk",
      "build": "cd android && ./gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..",
      "type": "android.emulator",
      "name": "Nexus_5_API_28"
    }
  }
}

Configure Gradle

In android/build.gradle you need to add this under allprojects > repositories. The default init will look much like this already. Note the two separate maven blocks:

allprojects {
    repositories {
        // ...
        google()
        maven {
            // All of Detox' artifacts are provided via the npm module
            url "$rootDir/../node_modules/detox/Detox-android"
        }
        maven {
            url "$rootDir/../node_modules/react-native/android"
        }
    }
}

Set minSdkVersion in android/build.gradle:

buildscript {
    ext {
        // ...
        minSdkVersion = 18
        // ...
    }
}

Add to dependencies in android/app/build.gradle:

dependencies {
    // ...
    androidTestImplementation('com.wix:detox:+') { transitive = true }
    androidTestImplementation 'junit:junit:4.12'
}

Also in android/app/build.gradle, update defaultConfig:

android {
    // ...
    defaultConfig {
        // ...
        testBuildType System.getProperty('testBuildType', 'debug') // This will later be used to control the test apk build type
        testInstrumentationRunner 'androidx.test.runner.AndroidJUnitRunner'
    }
}

Add Kotlin

In android/build.gradle, update dependencies:

buildscript {
    // ...
    ext {
        // ...
        kotlinVersion = '1.3.10' // Your app's Kotlin version
    }
    dependencies {
        // ...
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlinVersion"
    }
}

Create Android Test Class

Execute:

mkdir -p android/app/src/androidTest/java/com/awesomeproject/
wget https://raw.githubusercontent.com/wix/Detox/master/examples/demo-react-native/android/app/src/androidTest/java/com/example/DetoxTest.java
mv DetoxTest.java android/app/src/androidTest/java/com/awesomeproject/

At the top of the DetoxTest.java file, change com.example to com.awesomeproject.
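If you prefer not to open an editor, the same substitution can be scripted with sed. The first line below demonstrates the transformation on the package declaration itself; the commented line applies it to the downloaded file in place:

```shell
# Demonstrate the substitution on the package declaration
echo 'package com.example;' | sed 's/com\.example/com.awesomeproject/'
# Apply it to the file itself:
# sed -i 's/com\.example/com.awesomeproject/' android/app/src/androidTest/java/com/awesomeproject/DetoxTest.java
```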

Add testing frameworks

npm install mocha --save-dev

Create template example tests:

detox init -r mocha

Build app

detox build --configuration android.emu.debug

Run tests

Make sure the emulator is running:

~/Android/Sdk/emulator/emulator -avd Nexus_5_API_28

Start the react-native server:

react-native start

Run tests:

detox test -c android.emu.debug

Notes

Switching between projects, I had difficulty with watchman. The instructions found here cleared the error:

watchman watch-del-all
watchman shutdown-server

Peace



Dockerized Matomo on Ubuntu 16.04

I’ve been hard on CloudAtCost before… they’re still terrible, but I’ve gotten a lot of use out of my one-time purchase. I still use the resources I own to run non-critical applications. Matomo falls into that category.

Anyhoo, my server crashed and had to be deleted. This is how I set up Matomo on Ubuntu 16.04, behind an nginx-proxy/lets-encrypt Docker composition. This process is very manual and may one day be turned into a proper Docker build. As it stands, there is a lot of manual manipulation within the container.

First, create a project directory:

mkdir matomo && cd matomo

Copy and paste this into a file called docker-compose.yml.

version: '3'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    restart: unless-stopped
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=bn_matomo
      - MARIADB_DATABASE=bitnami_matomo
    volumes:
      - 'mariadb_data:/bitnami'
  matomo:
    image: 'bitnami/matomo:latest'
    restart: unless-stopped
    environment:
      - MATOMO_DATABASE_USER=bn_matomo
      - MATOMO_DATABASE_NAME=bitnami_matomo
      - ALLOW_EMPTY_PASSWORD=yes
      - MATOMO_USERNAME=Dan
      - MATOMO_EMAIL=someguy@example.com
      - VIRTUAL_HOST=matomo.example.com
      - LETSENCRYPT_HOST=matomo.example.com
      - LETSENCRYPT_EMAIL=someguy@example.com
    depends_on:
      - mariadb
    volumes:
      - 'matomo_data:/bitnami'
      - './misc:/opt/bitnami/matomo/misc/'
volumes:
  mariadb_data:
    driver: local
  matomo_data:
    driver: local
networks:
  default:
    external:
      name: nginx-proxy

Create and execute the container with:

docker-compose up -d

This is the time to start (or restart) the nginx-proxy/lets-encrypt composition. Once it is running, log in with the username set in the docker-compose.yml file described above. In this case, the default credentials are:

  • Username: Dan
  • Password: bitnami

You should now be able to log in at the domain specified.

App-level Configuration

Matomo works out of the box, but there are a bunch of things you’ll want to set up at the application level.

Upgrade

Before all that, there’s a weird permissions issue in the container. You’ll want to upgrade Matomo, but won’t be able to do so until you fix this. It’s super hacky having to do this from within the container, but that’s what I’m working with at the moment.

From your project directory:

docker-compose exec matomo bash

Then, from within the container:

chown -R daemon:daemon /opt/bitnami/matomo
chmod -R 0755 /opt/bitnami/matomo

Dependencies and Headers

Again, this is super hacky, because now you need to install an editor and a bunch of other dependencies within the container:

apt update
apt install vim git wget autoconf gettext libtool build-essential
vim /opt/bitnami/matomo/config/config.ini.php

Add this to the [General] section:

force_ssl = 1
; Standard proxy
proxy_client_headers[] = HTTP_X_FORWARDED_FOR
proxy_host_headers[] = HTTP_X_FORWARDED_HOST

Exit the container and restart.

docker-compose restart

Config checklist

At this point, everything should be operational on a basic level. Address the following points, and get a lot more use out of Matomo.

Personal > Settings

  • Change password
  • Exclude your own visits using a cookie

System > Geolocation

I set up the GeoIP2 (Php) extension, which is supposed to make things faster somehow.

docker-compose exec matomo bash

From within the container, clone libmaxminddb:

git clone --recursive https://github.com/maxmind/libmaxminddb

Install from inside the cloned directory:

cd libmaxminddb
./bootstrap
./configure
make
make install
ldconfig

Install the extension:

cd ..
git clone https://github.com/maxmind/MaxMind-DB-Reader-php.git
cd MaxMind-DB-Reader-php/ext
phpize
./configure
make
make install

Edit php.ini:

vim /opt/bitnami/php/lib/php.ini

Add this to the end and save:

extension=maxminddb.so

Get the database:

cd /opt/bitnami/matomo/misc
wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar -xzvf GeoLite2-City.tar.gz
mv GeoLite2-City_20190115/* .

Exit the container and restart from the host:

docker-compose restart

If you refresh the System > Geolocation page, GeoIp 2 (Php) will be operational. Select this option and save.

Websites > Manage

Add all the websites you want to track.

Conclusion

I needed to bang this out for my own purposes. I will likely be forced to revisit this when CloudAtCost fails me once again.


An nginx-proxy/lets-encrypt Docker Composition

I was just doing a major redeployment when I realized I’ve never documented my approach to nginx-proxy and lets-encrypt with Version 3 of docker-compose.

I like to deploy a bunch of web applications and static web sites behind a single proxy. What follows is meant to be copy-paste workable on an Ubuntu 16.04 server.

Organization

Set up your server’s directory structure:

mkdir -p ~/sites/nginx-proxy && cd ~/sites/nginx-proxy

Docker Compose

Paste the following into docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # Can anyone explain this sorcery?
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    logging:
      options:
        max-size: "4m"
        max-file: "10"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
    depends_on:
      - nginx-proxy
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
volumes:
  vhost:
# Do not forget to 'docker network create nginx-proxy' before launch
# and to add '--network nginx-proxy' to proxied containers.
networks:
  default:
    external:
      name: nginx-proxy

Configuring the nginx in nginx-proxy

Sometimes you need to override the default nginx configuration contained in the nginx-proxy Docker image. To do this, you must build a new image using nginx-proxy as its base.

For example, an app might need to accept large file uploads. You would paste this into your Dockerfile:

# Cf., https://github.com/schmunk42/nginx-proxy#proxy-wide
FROM jwilder/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 5m;'; \
} > /etc/nginx/conf.d/my_proxy.conf

This sets the required configurations within the nginx-proxy container.

In this case you also need to modify the docker-compose.yml file to build the local Dockerfile. The first few lines will now look like this:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    # Change this:
    #image: jwilder/nginx-proxy
    # To this:
    build: .
    # as above...

Deploying sites and apps

With the proxy configured and deployed (docker-compose up -d), you can wire up all your sites and apps.

Static Site

A static site deployed with nginx:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=you@example.com
    expose:
      - 80
    volumes:
      - ./_site:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
networks:
  default:
    external:
      name: nginx-proxy

Deploy App

Requirements are going to vary app-by-app, but for a simple node application, use the following as a starting point:

# docker-compose.yml
version: '3'
services:
  node:
    build: .
    restart: unless-stopped
    ports:
      - 3000
    environment:
      - NODE_ENV=production
      - VIRTUAL_HOST=app.example.com
      - LETSENCRYPT_HOST=app.example.com
      - LETSENCRYPT_EMAIL=you@example.com
    volumes:
      - .:/home/node
      - /home/node/node_modules
    logging:
      options:
        max-size: "4m"
        max-file: "10"
networks:
  default:
    external:
      name: nginx-proxy

Dockerizing Tor to serve up multiple hidden web services

This post documents an improvement made to the method demonstrated in A Dockerized Torified Express Application Served with Nginx. The previous configuration only deploys one hidden Tor service. I want to be able to deploy a bunch of hidden services behind a general Tor proxy.

Here I use Docker and Compose to build a Tor container behind which multiple Express applications are served.

Express Apps

Let’s suppose there are two express apps. Each will have its own Dockerfile and docker-compose.yml configuration.

Dockerfile

Assuming that each app is set up with all dependencies installed, a simple express Dockerfile might look like this:

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
EXPOSE 3000
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
CMD ["node", "./app.js"]

This defines the container in which the express app runs. Here, port 3000 will be open to apps on the network bridge (see below). Each app will need its own port. For example, the second app may EXPOSE 3001.
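Before deploying, it’s worth confirming the apps don’t collide on a port. A throwaway sketch of the check, run against hypothetical EXPOSE lines rather than real Dockerfiles:

```shell
# Collect the EXPOSE lines (hard-coded here; normally: grep -h EXPOSE */Dockerfile)
# and print any duplicated port; no output means every app has a unique port
printf 'EXPOSE 3000\nEXPOSE 3001\n' | sort | uniq -d
```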

docker-compose.yml

docker-compose will build the express app image and serve it up on localhost. It will be connected to the same Docker network as the Tor container. A docker-compose.yml for a simple express app might look like this:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
networks:
  default:
    external:
      name: torproxy_default

Deploy Apps

Once the apps have been Dockerized, each may be brought online with this:

docker-compose up -d

Tor

Tor will use the same Dockerfile/docker-compose.yml approach to deploying the service. This will provide the public (hidden) access point.

The Tor proxy container should be set up in its own directory apart from the apps. E.g.,

mkdir tor-proxy && cd tor-proxy

Docker

Paste the following into Dockerfile:

FROM debian
ENV NPM_CONFIG_LOGLEVEL warn
ENV DEBIAN_FRONTEND noninteractive
EXPOSE 9050
# `apt-utils` squelches a configuration warning
# `gnupg2` is required for adding the `apt` key
RUN apt-get update
RUN apt-get -y install apt-utils gnupg2
#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-25
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring
# The debian image does not create a default user
RUN useradd -m user
USER user
# Run the Tor service
CMD /usr/bin/tor -f /etc/tor/torrc

docker-compose.yml

This builds and deploys the Tor container. Paste into docker-compose.yml:

version: '3'
services:
  tor:
    build: .
    restart: always
    volumes:
      - ./config/torrc:/etc/tor/torrc

Configuration

As declared above (in docker-compose.yml), the container mounts the host file ./config/torrc and connects to the torproxy_default network. It’s in the torrc file that you set the ports for your hidden services. The network allows the external hidden apps to connect to the tor-proxy container. To find the hostnames for each hidden service, simply execute:

docker ps

You should see something like this:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94816844b40b torapp2_node "npm start" 11 minutes ago Up 11 minutes 3001/tcp torapp2_node_1
8c11fb2c9167 torapp1_node "npm start" 12 minutes ago Up 12 minutes 3000/tcp torapp1_node_1

The items listed under the NAMES column serve as your hostnames. So, in this two app configuration, ./config/torrc looks like this:

HiddenServiceDir /home/user/.tor/hidden_app_1/
HiddenServicePort 80 torapp1_node_1:3000
HiddenServiceDir /home/user/.tor/hidden_app_2/
HiddenServicePort 80 torapp2_node_1:3001

Note the different ports on each of the hidden services. These correspond to the ports exposed in each app’s Dockerfile.

Deploy Tor Container

Bring Tor online with this:

docker-compose up -d

If the container reports any sort of directory permissions issues, refer to the notes pertaining to the RUN usermod -u 1001 user command in the tor-proxy Dockerfile.

Assuming everything is built and deployed correctly, you can find your .onion hostnames in the .tor directory in the container:

docker-compose exec tor cat /home/user/.tor/hidden_app_1/hostname
docker-compose exec tor cat /home/user/.tor/hidden_app_2/hostname

Assuming all goes well, welcome to the darkweb.


A better open-source extension for Silhouette Cameo, Inkscape, and Ubuntu

I would have updated my previous attempt at configuring Inkscape to work with the Silhouette Cameo, but got so swept up in the excitement of cutting vinyl stickers, I forgot to do it until now. Unless something has changed since my last relevant post, InkCut doesn’t really work.

This post demonstrates how to configure the open-source inkscape-silhouette extension on Ubuntu 16.04.

System and dependencies

Do the usual system prep before adding the software upon which Inkscape and the Silhouette extension depend:

sudo apt update
sudo apt upgrade

Ubuntu 16.04

Just as with a conventional printer, the Silhouette Cameo requires some drivers be installed before it can work with Ubuntu.

Open your System Settings:

[Open System Settings]

Open the Printers option:

[Click Printers]

Add a printer:

[Add Printer]

Hopefully you see your device in the list:

[Find device in list]

The drivers for generic printing devices will suffice in this situation:

[Select Generic] [Text-only driver]

Change your cutter’s name, if you like. I left these settings untouched:

[Printer description]

Not sure what would happen if you attempted to print a test page. I cancelled:

[Cancel test page]

If all is well, you should see the device you just added:

[Silhouette device added]

Inkscape

The Inkscape vector graphics tool has an extension that enables you to send your own SVG files to the Cameo.

Add the Inkscape repository and install:

sudo add-apt-repository ppa:inkscape.dev/stable
sudo apt update
sudo apt install inkscape

Run it from the command line to make sure it works:

inkscape

inkscape-silhouette extension

These steps are adapted from the inkscape-silhouette wiki.

This extension depends upon python-usb:

sudo apt install python-usb

Next, you’ll need to download a copy of the extension’s latest release. At the time of writing, you could obtain it from the command line like this:

cd ~
wget https://github.com/fablabnbg/inkscape-silhouette/releases/download/v1.19/inkscape-silhouette_1.19-1_all.deb
sudo dpkg -i inkscape-silhouette_1.19-1_all.deb

Try it out

Execute inkscape (from the command line, if you wish):

inkscape

Load the SVG file you want to cut and navigate to Extensions > Export > Send to Silhouette:

[Extensions > Export > Send to Silhouette]

I leave the settings for you to play with. I only cut vinyl, so I go with the extension-provided defaults:

[Vinyl defaults]

When ready, press Apply and watch your Silhouette Cameo spring to life.


A Dockerized, Torified, Express Application

Dark Web chatter is picking up. I’m interested in providing cool web services anonymously. This is my first attempt at using Docker Compose to stay ahead of this trend.

Assumption: all the software goodies are setup and ready to go on an Ubuntu 16.04 server (node, docker, docker-compose, et al).

Set up an Express App

The Express Application Generator strikes me as a little bloated, but I use it anyway because I’m super lazy.

sudo npm install express-generator -g

Once installed, set up a vanilla express project:

express --view=ejs tor-app
cd tor-app && npm install

The express-generator will tell you to run the app like this:

DEBUG=tor-app:* npm start

This, of course, is only useful for development. From here, we’ll Dockerize for deployment and Torify for anonymity.

Tor pre-configuration

In anticipation of setting up the actual Torified app container, create a new file called config/torrc. This file will be used by Tor inside the Docker container to serve up our app. Paste the following into config/torrc:

HiddenServiceDir /home/node/.tor/hidden_service/
HiddenServicePort 80 127.0.0.1:3000

Docker

Copy and paste the following into a new file called Dockerfile:

FROM node:stretch
ENV NPM_CONFIG_LOGLEVEL warn
ENV DEBIAN_FRONTEND noninteractive
EXPOSE 9050
# `apt-utils` squelches a configuration warning
RUN apt-get update
RUN apt-get -y install apt-utils
#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-20
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring
#
# Tor raises some tricky directory permissions issues. Once started, Tor will
# write the hostname and private key into a directory on the host system. If
# the `node` user in the container does not have the same UID as the user on
# the host system, Tor will not be able to create and write to these
# directories. Execute `id -u` on the host to determine your UID.
#
# RUN usermod -u 1001 node
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
# Run the Tor service alongside the app itself
CMD /usr/bin/tor -f /etc/tor/torrc & npm start

Container/Host Permissions

Take special note of the comment posted above the RUN usermod -u 1001 node instruction in Dockerfile. If you get any errors on the container build/execute step described below, you’ll need to make sure your host user’s UID is the same as your container user’s UID (i.e., the node user).

Usually the user in the container has a UID of 1000. To determine the host user’s UID, execute id -u. If it’s not 1000, uncomment the usermod instruction in Dockerfile and make sure the numbers match.
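The comparison boils down to this (the 1001 is a stand-in for whatever `id -u` reports on your host):

```shell
host_uid=1001        # substitute the output of: id -u
container_uid=1000   # the node user's usual default UID in the image
if [ "$host_uid" -ne "$container_uid" ]; then
  echo "uncomment: RUN usermod -u $host_uid node"
else
  echo "UIDs already match; no change needed"
fi
```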

Docker Compose

docker-compose does all of the heavy lifting for building the Dockerfile and start-up/shut-down operations. Paste the following into a file called docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
      - ./config/torrc:/etc/tor/torrc

Bring the whole thing online by running

docker-compose up -d

Every now and then I get an error trying to obtain the GPG key:

gpg: keyserver receive failed: Cannot assign requested address

This usually solves itself on subsequent calls to docker-compose up.

Assuming the build and execution was successful, you can determine your .onion address like this:

docker-compose exec node cat /home/node/.tor/hidden_service/hostname

You should now be able to access your app from your favourite Tor web browser.

If you’re interested in poking around inside the container, access the bash prompt like this:

docker-compose exec node bash

Notes

This is the first step in configuring and deploying a hidden service on the Tor network. Since working out the initial details, I’ve already thought of potential improvements to this approach. As it stands, only one hidden service can be deployed. It would be far better to create a Tor container able to proxy multiple apps. I will also be looking into setting up .onion vanity URLs and HTTPS.


cors-anywhere deployment with Docker Compose

My ad-tracker Express app serves up queued advertisements with a client-side call to Javascript’s fetch function. This, of course, raises the issue of Cross-Origin Resource Sharing.

I use the cors-anywhere node module to allow sites with ad-tracker advertisements to access the server. Naturally, docker-compose is my preferred deployment tool.

Set up the project

Create a project directory and initialize the application with npm:

mkdir -p sites/cors-anywhere-server && cd sites/cors-anywhere-server
npm init

Follow the npm init prompts.

Once initialized, add the cors-anywhere module to the project:

npm install cors-anywhere --save

Copy and paste the following into index.js (or whatever entry-point you specified in the initialization step):

// Listen on a specific host via the HOST environment variable
var host = process.env.HOST || '0.0.0.0';
// Listen on a specific port via the PORT environment variable
var port = process.env.PORT || 8080;

var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
  originWhitelist: [], // Allow all origins
  requireHeader: ['origin', 'x-requested-with'],
  removeHeaders: ['cookie', 'cookie2']
}).listen(port, host, function() {
  console.log('Running CORS Anywhere on ' + host + ':' + port);
});

This code is taken verbatim from the cors-anywhere documentation.

To execute the application:

node index.js

If it executes successfully, you should see:

Running CORS Anywhere on 0.0.0.0:8080

Exit the app.

Docker

To create the Dockerized application image, paste the following into Dockerfile:

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
EXPOSE 8080
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
CMD ["node", "./index.js"]

This will build the cors-anywhere app into a Docker node container.

Docker Compose

Paste the following into docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    ports:
      - "8080"
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules

Build the image and deploy in one step:

docker-compose up

The last line of the console output should read:

node_1 | Running CORS Anywhere on 0.0.0.0:8080

At this point, any request proxied through the cors-anywhere-server will be allowed access to cross-domain resources. Your client-side fetch calls can now leverage this functionality by prefixing the destination URL with the cors-anywhere-server URL. It may look something like this:

(function() {
  var CORS_SERVER = 'https://cors-server.example.com:8080';
  var AD_SERVER = 'https://ads.example.com';

  fetch(CORS_SERVER + '/' + AD_SERVER).then(function(response) {
    return response.json();    
  }).then(function(json) {     
    console.log('CORS request successful!');
    console.log(json); 
  });
})();

Done!


PostgreSQL Backup and Restore Between Docker-composed Containers

The importance of backup and recovery really only becomes clear in the face of catastrophic data loss. I’ve got a slick little Padrino app that’s starting to generate traffic (and ad revenue). As such, it would be a real shame if my data got lost and I had to start from scratch.

docker-compose

This is what I’m working with:

# docker-compose.yml
nginx:
  restart: always
  build: ./
  volumes:
    # Page content
    - ./:/home/app/webapp
  links:
    - postgres
  environment:
    - PASSENGER_APP_ENV=production
    - RACK_ENV=production
    - VIRTUAL_HOST=example.com
    - LETSENCRYPT_HOST=example.com
    - LETSENCRYPT_EMAIL=daniel@example.com
postgres:
  restart: always
  image: postgres
  environment:
    - POSTGRES_USER=root
    - POSTGRES_PASSWORD=secretpassword
  volumes_from:
    - myapp_data

It’s the old Compose Version 1 syntax, but what follows should still apply. As with all such compositions, I write database data to a data-only container. Though the data persists apart from the Dockerized Postgres container, it still needs to be running (e.g., docker-compose up -d).

Dump the data

Assuming the containers are up and running, the appropriate command looks like this:

docker-compose exec -u <your_postgres_user> <postgres_service_name> pg_dump -Fc <database_name_here> > db.dump

Given the composition above, the command I actually execute is this:

docker-compose exec --user root postgres pg_dump -Fc myapp_production > db.dump

At this point, the db.dump file can be transferred to a remote server through whatever means are appropriate (I set this all up in capistrano to make it super easy).

Restore the data

Another assumption: a new database is up and running on the remote backup machine (ideally using the same docker-compose.yml file above).

The restore command looks like this:

docker-compose exec -i -u <your_postgres_user> <postgres_service_name> pg_restore -C -d postgres < db.dump

The command I execute is this:

docker-compose exec -i -u root postgres pg_restore -C -d postgres < db.dump

Done!


Nginx Proxy, Let's Encrypt Companion, and Docker Compose Version 3

I recently discovered that I don’t need to manually create data-only containers with docker-compose anymore. A welcome feature, but one that comes with all the usual migration overhead. I rely heavily on nginx-proxy and letsencrypt-nginx-proxy-companion. Getting it all to work in the style of docker-compose version 3 took a bit of doing.

My previous tried and true approach is getting pretty stale. It is time to up my Docker game…

My Site

nginx-proxy proxies multiple sites, but for demonstration purposes, I’m only serving up one with nginx. I like to put all my individual Docker compositions in their own directories:

mkdir mysite && cd mysite

Optional

The following assumes you have some sort of site you want to serve up from the mysite/ directory. If not, just create a simple Hello, world! HTML page. Copy and paste the following to index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <title>Hello, world!</title>
  </head>
  <body>
    Hello, world!
  </body>
</html>

docker-compose

It’s awesome that I can create data-only containers in my docker-compose.yml, but now I’ve got to manually create a network bridge:

docker network create nginx-proxy

Proxied containers also need to know about this network in their own docker-compose.yml files…

Copy and paste the code below:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: always
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=site.example.com
      - LETSENCRYPT_EMAIL=email@example.com
    volumes:
      - ./:/usr/share/nginx/html
networks:
  default:
    external:
      name: nginx-proxy

This will serve up files from the current directory (i.e., the same one that contains the new index.html page, if created).

Start docker-compose:

docker-compose up -d

The site won’t be accessible yet. That comes next.

nginx-proxy

As before, put the nginx-proxy Docker composition in its own directory:

cd ..
mkdir nginx-proxy && cd nginx-proxy

Create a directory in which to store the Let’s Encrypt certificates:

mkdir certs

Copy and paste the following to a file called docker-compose.yml:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
volumes:
  vhost:
networks:
  default:
    external:
      name: nginx-proxy

This allows nginx-proxy to combine forces with letsencrypt-nginx-proxy-companion, all in one docker-compose file.

Start docker-compose:

docker-compose up -d

If all is well, you should be able to access your site at the address configured.