How to get started testing with Express, Jasmine, and Zombie

Okay, so I’m a little old-fashioned. I’m also lazy and reluctant to learn new things. My buddy Dawson asked, “How do I get started with testing my web software?” I said, “Start with Node/Express.”

Express

Express is a web framework. You build server-side software with Express. Let’s bootstrap a project with express-generator:

npx express-generator --view=ejs myapp

This creates a skeleton application from which to launch development. The ejs stands for Embedded JavaScript. To install the dependencies:

cd myapp
npm install

Execute the server:

npm start

Assuming all is well, you can navigate to http://localhost:3000 to see your new app in action. Halt server execution by pressing Ctrl-C.
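If you prefer to check from the command line (assuming curl is installed), a quick request should come back with a 200 OK and Express’s default X-Powered-By header:

curl -I http://localhost:3000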

That’s great and all, but let’s get to testing…

Jasmine

I’m not sure Jasmine is trendy, but I’ve been using it for years. It takes minimal setup and can be neatly structured. It’s a good tool and the tests you write are easily adapted for execution by other test frameworks (if you decide you don’t like jasmine).

Add jasmine to your web application:

npm install --save-dev jasmine

jasmine is now a development dependency. Execute cat package.json to get a peek at how node manages its dependencies.

Initialize jasmine like this:

npx jasmine init

Most people like to script test execution with npm. Open the package.json file just mentioned. Configure "scripts": { "test": "jasmine" }… that is, make package.json look like this:

{
  "name": "myapp",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www",
    "test": "jasmine"
  },
  "dependencies": {
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "ejs": "~2.6.1",
    "express": "~4.16.1",
    "http-errors": "~1.6.3",
    "morgan": "~1.9.1"
  },
  "devDependencies": {
    "jasmine": "^3.5.0"
  }
}

If you do this correctly, you can run the following:

npm test

And expect to see something like this:

> myapp@0.0.0 test /home/daniel/workspace/myapp
> jasmine
Randomized with seed 44076
Started
No specs found
Finished in 0.003 seconds
Incomplete: No specs found
Randomized with seed 44076 (jasmine --random=true --seed=44076)
npm ERR! Test failed. See above for more details.

jasmine is telling us that the tests failed because we have yet to write any. Being as lazy as I am, I try to find opportunities to skip unit tests and go directly to testing user behaviour. For this, I use a headless browser called Zombie.

Zombie

Like jasmine, I’m not sure zombie is the trendiest option out there, but I’ve always managed to get the two to play nicely together and have yet to find any serious shortcoming. Add zombie to your project like this:

npm install --save-dev zombie

Now we’re ready to write some tests…

Testing!

Oh wait, what’s this app supposed to do? Ummmmm…

For now, I’ll keep it really simple until my buddy Dawson comes up with tougher testing questions.

Purpose

I want to be able to load my app in a browser, enter my name in an input field, press Submit, and receive a friendly greeting in return.

When starting a new web application, the first test I write ensures my page actually loads in the zombie browser. Create a spec file:

touch spec/indexSpec.js

I use the word index in the sense that it’s the default first page you land on at any website. Test files end with the *Spec.js suffix by default. Execute cat spec/support/jasmine.json to see how jasmine decides which files to execute.
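If you haven’t touched it, the spec/support/jasmine.json generated by jasmine init should look something like this (exact defaults may vary slightly between jasmine versions):

{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": true
}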

Open spec/indexSpec.js in your favourite editor and paste this:

const Browser = require('zombie');

Browser.localhost('example.com', 3000);

describe('the landing page', () => {

  let browser;

  /**
   * This loads the running web application
   * with a new `zombie` browser before each test.
   */
  beforeEach(done => {
    browser = new Browser();
    browser.visit('/', err => {
      if (err) return done.fail(err);
      done();
    });
  });

  /**
   * Your first test!
   *
   * `zombie` has loaded and rendered the page
   * returned by your application. Use `jasmine`
   * and `zombie` to ensure it's doing what you
   * expect.
   *
   * In this case, I just want to make sure a
   * page title is displayed.
   */
  it('displays the page title', () => {
    browser.assert.text('h1', 'The Friendly Greeting Generator');
  });

  /**
   * Put future tests here...
   */
  // ...
});

Simple enough. At this point you might be tempted to go make the test pass. Instead, execute the following to make sure it fails:

npm test

Whoa! What happened? You probably see something like this:

Randomized with seed 73862
Started
F
Failures:
1) the landing page displays the page title
Message:
Failed: connect ECONNREFUSED 127.0.0.1:3000
Stack:
...

It’s good that it failed, because that’s an important step, but if you look closely at the error, connect ECONNREFUSED 127.0.0.1:3000 tells you your app isn’t even running. You’ll need to open another shell or process and execute:

npm start

Your app is now running and zombie can now send a request and expect to receive your landing page. In another shell (so that your app can keep running), execute the tests again:

npm test

If it fails (as expected), you will see something like this:

Randomized with seed 38606
Started
F
Failures:
1) the landing page displays the page title
Message:
AssertionError [ERR_ASSERTION]: 'Express' deepEqual 'The Friendly Greeting Generator'
Stack:
error properties: Object({ generatedMessage: true, code: 'ERR_ASSERTION', actual: 'Express', expected: 'The Friendly Greeting Generator', operator: 'deepEqual' })

That’s much better. Now, having ensured the test fails, make the test pass. Open routes/index.js in your project folder and make it look like this:

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res, next) {
  // The old command
  //res.render('index', { title: 'Express' });

  // The new test-friendly command
  res.render('index', { title: 'The Friendly Greeting Generator' });
});

module.exports = router;

Execute the tests again:

npm test

And you will see:

Randomized with seed 29903
Started
F
Failures:
1) the landing page displays the page title
Message:
AssertionError [ERR_ASSERTION]: 'Express' deepEqual 'The Friendly Greeting Generator'
Stack:

Oh no! Not again! Go back and check… yup, you definitely changed the name of the app. What could be wrong?

You need to restart your server in your other shell. Exit with Ctrl-C and restart with npm start. (Yes, there is a much better way of doing this).
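One common better way (assuming you don’t mind another development dependency) is nodemon, which restarts the server whenever a file changes:

npm install --save-dev nodemon

Then add a dev script alongside the others in package.json:

"scripts": {
  "start": "node ./bin/www",
  "dev": "nodemon ./bin/www",
  "test": "jasmine"
},

From then on, start the app with npm run dev instead of npm start and skip the manual restarts.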

Having restarted your application, execute the tests again with npm test. You will see this:

Randomized with seed 46658
Started
.
1 spec, 0 failures
Finished in 0.071 seconds
Randomized with seed 46658 (jasmine --random=true --seed=46658)

Awesome. Your first test passes. Recall the stated purpose of this app:

I want to be able to load my app in a browser, enter my name in an input field, press Submit, and receive a friendly greeting in return.

Using this user story as a guide, you can proceed writing your tests. So far, the first part of the story has been covered (i.e., I want to be able to load my app in a browser). Now to test the rest…

// ...
// Add these below our first test in `indexSpec.js`

  it('renders an input form', () => {
    browser.assert.element('input[type=text]');
    browser.assert.element('input[type=submit]');
  });

  it('returns a friendly greeting if you enter your name and press Submit', done => {
    browser.fill('name', 'Dan');
    browser.pressButton('Submit', () => {
      browser.assert.text('h3', 'What up, Dan?');
      done();
    });
  });

  it('trims excess whitespace from the name submitted', done => {
    browser.fill('name', ' Dawson ');
    browser.pressButton('Submit', () => {
      browser.assert.text('h3', 'What up, Dawson?');
      done();
    });
  });

  it('gets snarky if you forget to enter your name before pressing Submit', done => {
    browser.fill('name', '');
    browser.pressButton('Submit', () => {
      browser.assert.text('h3', 'Whatevs...');
      done();
    });
  });

  it('gets snarky if you enter a blank name before pressing Submit', done => {
    browser.fill('name', ' ');
    browser.pressButton('Submit', () => {
      browser.assert.text('h3', 'Please don\'t waste my time');
      done();
    });
  });
});

You can push this as far as you want. For example, you might want to ensure your audience doesn’t enter a number or special characters for a name. The ones above define the minimal test-coverage requirement in this case.
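As a sketch of what one of those extra specs might look like (the snarky response here is my own invention; nothing later in this post implements it), you could add something like this inside the describe block:

it('gets snarky if you enter a numeric name', done => {
  browser.fill('name', '12345');
  browser.pressButton('Submit', () => {
    browser.assert.text('h3', 'Numbers are not names...');
    done();
  });
});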

Make sure these new tests fail by executing npm test. You won’t need to restart the server until you make changes to your app (yes, you should find a better way to manage this).

Make the tests pass

You should try doing this yourself before you skip ahead. I’ll give you a couple of clues and then provide the spoiler. In order to get these tests to pass, you’ll need to add a route to routes/index.js and you’ll need to modify the document structure in views/index.ejs.

Did you try it yourself?

Here’s one possible answer:

Make routes/index.js look like this:

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res, next) {
  res.render('index', { title: 'The Friendly Greeting Generator', message: '' });
});

router.post('/', function(req, res, next) {
  let message = 'Whatevs...';
  if (req.body.name.length) {
    let name = req.body.name.trim();
    if (!name.length) {
      message = 'Please don\'t waste my time';
    }
    else {
      message = `What up, ${name}?`;
    }
  }
  res.render('index', { title: 'The Friendly Greeting Generator', message: message });
});

module.exports = router;

Make views/index.ejs look like this:

<!DOCTYPE html>
<html>
  <head>
    <title><%= title %></title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1><%= title %></h1>
    <p>Welcome to <%= title %></p>
    <form action="/" method="post">
      <label for="name">Name:</label>
      <input id="name" name="name" type="text" />
      <input type="submit" value="Submit" />
    </form>
    <% if (message) { %>
      <h3><%= message %></h3>
    <% } %>
  </body>
</html>

Note the EJS alligator tags (<% %> and <%= %>): <% %> executes JavaScript without producing output, while <%= %> outputs the escaped value of an expression.

When all expectations are satisfied, you will see something like this:

Randomized with seed 17093
Started
.....
5 specs, 0 failures
Finished in 0.106 seconds
Randomized with seed 17093 (jasmine --random=true --seed=17093)

What’s next?

Got a test question? What kind of app should I build and test?

If you learned something, support my work through Wycliffe or Patreon.


A better open-source extension for Silhouette Cameo, Inkscape, and Ubuntu

This post demonstrates how to configure the open-source inkscape-silhouette extension on Ubuntu 18.04.

My previous method is documented here.

Even now this post is out of date, as Ubuntu 20.04 has almost certainly been released at the time of writing (though I am far too lazy to go check). It may also be out of date because I don’t see a lot of Silhouette merchandise at the local craft store anymore. Is there a similar way to interface with the Cricut?

System and dependencies

Do the usual system prep before adding the software upon which Inkscape and the Silhouette extension depend:

sudo apt update
sudo apt upgrade

Ubuntu 18.04

Just as with a conventional printer, the Silhouette Cameo requires some drivers be installed before it can work with Ubuntu.

Open your System Settings:

[Open System Settings]

Select Printers and click Add:

[Add printer]

Select your device and press Add:

[Add your device]

You may see this:

[Searching for drivers]

The generic text-only driver is installed automatically:

[Text-only driver]

Inkscape

The Inkscape vector graphics tool has an extension that enables you to send your own SVG files to the Cameo.

Add the Inkscape repository and install:

sudo add-apt-repository ppa:inkscape.dev/stable
sudo apt update
sudo apt install inkscape

Run it from the command line to make sure it works:

inkscape

inkscape-silhouette extension

These steps are adapted from the inkscape-silhouette wiki.

This extension depends upon python-usb:

sudo apt install python-usb

Next, you’ll need to download a copy of the extension’s latest release. At the time of writing, you could obtain it from the command line like this:

cd ~
wget https://github.com/fablabnbg/inkscape-silhouette/releases/download/v1.22/inkscape-silhouette_1.22-1_all.deb
sudo dpkg -i inkscape-silhouette_1.22-1_all.deb

Try it out

Execute inkscape (from the command line, if you wish):

inkscape

Load the SVG file you want to cut and navigate to Extensions > Export > Send to Silhouette:

I leave the settings for you to play with. I only cut vinyl, so I go with the extension-provided defaults:

When ready, press Apply and watch your Silhouette Cameo spring to life.


A non-intrusive behavioural testing approach to bootstrapped React in Typescript

I’m on a team that loves Typescript and React. Their code is manually tested. I prefer to write my tests first.

The following process addresses the problem of accommodating an existing approach to app development while upholding my own standard of professional practice.

This is how I introduced my behavioural testing approach to a project bootstrapped with create-react-app. It equips me to write automated tests while allowing my teammates their own approach.

Generate a React Typescript app

As per the docs, the following produces the base application:

npx create-react-app my-app --typescript
cd my-app

create-react-app comes with a few baked-in npm scripts. There are two existing build scripts: npm run build and npm start. The former only produces production builds; the latter only produces transient development builds (i.e., you can’t save them for later).

I need to produce test builds, so I add this to the scripts section in package.json:

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "build:test": "npx env-path -p .env.test react-scripts build"
  ...
}

create-react-app does a lot of crazy-cool stuff with .env files. I don’t think what follows will respect the dotenv-flow and dotenv-expand features that allow you to cascade build configurations according to environment.

With that in mind, install env-path:

npm install --save-dev env-path

I want a test build, so I need a .env.test file. Configurations, of course, are app dependent. The following is one likely example:

REACT_APP_REDIRECT_URI=http://localhost:3000
REACT_APP_CLIENT_ID=SomeExampleToken123

To create your test build, execute:

npm run build:test

You’ll see a message that says:

Creating an optimized production build...

Uh oh! react-scripts build only produces a production build. Is it still creating a production build, or is it reading my .env.test file?

This is the perfect opportunity to test if this test build configuration is working…

jasmine and zombie.js

None of this requires testing with jasmine and zombie, but I like testing with this pair because I’m old-fashioned and lazy. Install and initialize:

npm install --save-dev jasmine zombie
npx jasmine init

NOTE: I’m purposefully skipping adding typescript support to jasmine for the moment. I may revisit this in the future.

Testing the build

I need to determine if the values I set in .env.test are being baked into the test build configured above.

Here’s my first test, which I place in spec/indexSpec.js:

const path = require('path');
require('dotenv').config({ path: path.resolve(process.cwd(), '.env.test') });

require('./support/server');

const Browser = require('zombie');

const PORT = 3000;
Browser.localhost('example.com', PORT);

describe('landing page', () => {

  let browser, document;

  beforeEach(done => {
    browser = new Browser({ waitDuration: '30s', loadCss: false });

    // Wait for React to execute and render
    browser.on('loading', (doc) => {
      document = doc;
      document.addEventListener("DOMContentLoaded", (event) => {
        done();
      });
    });

    browser.visit('/', (err) => {
      if (err) return done.fail(err);
    });
  });

  it('displays the .env.test config variables', () => {
    browser.assert.link('a', 'Redirect', 'http://localhost:3000/&id=SomeExampleToken123');
  });
});

If you look closely at the above file, you’ll see this test needs its own server to run (npm start only does a development build). Paste the following into spec/support/server.js:

const express = require('express');
const path = require('path');

const app = express();

const logger = require('morgan');
app.use(logger('dev'));

app.use(express.static(path.join(__dirname, '../../build')));

app.get('/*', function(req, res) {
  res.sendFile(path.join(__dirname, '../../build', 'index.html'));
});

const port = 3000;
app.listen(port, '0.0.0.0', function() {
  console.log(`auth-account listening on port ${port}!`);
});

You need morgan to see the server output:

npm install --save-dev morgan

Configure jasmine script

Add another line to the scripts section of package.json (i.e., the e2e line right below the build:test line already configured):

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "build:test": "npx env-path -p .env.test react-scripts build",
  "e2e": "npm run build:test && npx jasmine"
  ...
}

The tests should now execute with one failing test:

npm run e2e

Make the test pass

The test defined above is simply checking to see if a link is created from the values stored in the .env.test file. Add that link to src/App.tsx:

// ...

const App: React.FC = () => {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.tsx</code> and save to reload.
        </p>
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
      {/* Add this link! Right here!!! */}
      <a href={`${process.env.REACT_APP_REDIRECT_URI}/&id=${process.env.REACT_APP_CLIENT_ID}`}>Redirect</a>
    </div>
  );
}

Execute the test:

npm run e2e

It passes!

More work…

This configuration doesn’t currently respect the neat dotenv-flow and dotenv-expand features that come baked into create-react-app.

As noted above, jasmine is not currently configured to support typescript.


Basic Android-React Native environment setup in Ubuntu 18.04

I am a test-driven developer who avoids fancy IDEs. I attempted to work through the details of a headless Android-React Native development environment, but quickly realized I was in over my head. This document outlines what may be the more typical workspace arrangement. It also demonstrates how I got everything working with Detox.

The following steps were executed on an Ubuntu 18.04 Desktop machine. What follows is heavily adapted from the Facebook and Detox documentation.

Dependencies

Node

You need node 8.3 or newer. I’m using 10.15.3.
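If you happen to manage node versions with nvm (an assumption on my part; any install method is fine), something like this gets you a compatible version:

nvm install 10.15.3
nvm use 10.15.3
node --version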

React Native CLI

npm install -g react-native-cli

Java JDK

This is the version recommended by Facebook. Installation instructions are adapted from those provided by DigitalOcean.

sudo apt install openjdk-8-jdk

Android Studio

You can download the IDE here. I simply installed via the Ubuntu Software manager.

On first execution, select Do not import settings and press OK. Work through the Setup Wizard screens. When prompted to select an installation type, choose a Custom setup. Check the following boxes:

  • Android SDK
  • Android SDK Platform
  • Android Virtual Device

Click Next to install all of these components.

Configure SDK

A React Native app requires the Android 9 (Pie) SDK. Install it through the SDK Manager in Android Studio. Expand the Pie selection by clicking the Show Package Details box. Make sure the following options are checked:

  • Android SDK Platform 28
  • Intel x86 Atom_64 System Image or Google APIs Intel x86 Atom System Image (I chose the first option)

Add the following lines to your $HOME/.bashrc config file:

export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/platform-tools

Load the config into the current shell:

source $HOME/.bashrc

Compile Watchman

sudo apt install libssl-dev autoconf automake libtool pkg-config python-dev
git clone https://github.com/facebook/watchman.git
cd watchman
git checkout v4.9.0 # the latest stable release
./autogen.sh
./configure
make
sudo make install

Install KVM

Adapted from here.

Check whether your CPU supports hardware virtualization by typing the following (an output of 1 or more means it does; 0 means it does not):

egrep -c '(vmx|svm)' /proc/cpuinfo

Install dependencies:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

Add your user to the kvm group, then start a new login shell so the group membership takes effect:

sudo adduser $USER kvm
su - $USER

Check if everything is ok:

sudo virsh -c qemu:///system list


Create a React Native project

react-native init AwesomeProject

Use Android Studio to open ./AwesomeProject/android. Open AVD Manager to see a list of Android Virtual Devices (AVDs).

Click Create Virtual Device, pick a phone (I picked Nexus 5), press Next, and select the Pie API Level 28 image (I had to download it first).

I run the emulator apart from the Android Studio environment:

~/Android/Sdk/emulator/emulator -avd Nexus_5_API_28

Execute the AwesomeProject app:

cd AwesomeProject
react-native run-android

Add Detox to Android project

Here, I simply consolidated all the setup steps described over several pages of Detox docs.

npm install -g detox-cli
npm install --save-dev detox

Paste this into package.json:

"detox" : {
  "configurations": {
    "android.emu.debug": {
      "binaryPath": "android/app/build/outputs/apk/debug/app-debug.apk",
      "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..",
      "type": "android.emulator",
      "name": "Nexus_5_API_28"
    },
    "android.emu.release": {
      "binaryPath": "android/app/build/outputs/apk/release/app-release.apk",
      "build": "cd android && ./gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..",
      "type": "android.emulator",
      "name": "Nexus_5_API_28"
    }
  }
}

Configure Gradle

In android/build.gradle you need to add this under allprojects > repositories. The default init will look much like this already. Note the two separate maven blocks:

allprojects {
  repositories {
    // ...
    google()
    maven {
      // All of Detox' artifacts are provided via the npm module
      url "$rootDir/../node_modules/detox/Detox-android"
    }
    maven {
      url "$rootDir/../node_modules/react-native/android"
    }
  }
}

Set minSdkVersion in android/build.gradle:

buildscript {
  ext {
    // ...
    minSdkVersion = 18
    // ...

Add to dependencies in android/app/build.gradle:

dependencies {
  // ...
  androidTestImplementation('com.wix:detox:+') { transitive = true }
  androidTestImplementation 'junit:junit:4.12'
}

Also in android/app/build.gradle, update defaultConfig:

android {
  // ...
  defaultConfig {
    // ...
    testBuildType System.getProperty('testBuildType', 'debug') // This will later be used to control the test apk build type
    testInstrumentationRunner 'androidx.test.runner.AndroidJUnitRunner'
  }
}

Add Kotlin

In android/build.gradle, update the buildscript ext and dependencies blocks:

buildscript {
  // ...
  ext {
    // ...
    kotlinVersion = '1.3.10' // Your app's version
  }
  dependencies {
    // ...
    classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlinVersion"
  }
}

Create Android Test Class

Execute:

mkdir -p android/app/src/androidTest/java/com/awesomeproject/
wget https://raw.githubusercontent.com/wix/Detox/master/examples/demo-react-native/android/app/src/androidTest/java/com/example/DetoxTest.java
mv DetoxTest.java android/app/src/androidTest/java/com/awesomeproject/

At the top of the DetoxTest.java file, change com.example to com.awesomeproject.
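If you would rather not open an editor, a one-liner with sed (assuming GNU sed) makes the same change:

sed -i 's/com.example/com.awesomeproject/' android/app/src/androidTest/java/com/awesomeproject/DetoxTest.java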

Add testing frameworks

npm install mocha --save-dev

Create template example tests:

detox init -r mocha

Build app

detox build --configuration android.emu.debug

Run tests

Make sure the emulator is running:

~/Android/Sdk/emulator/emulator -avd Nexus_5_API_28

Start the react-native server:

react-native start

Run tests:

detox test -c android.emu.debug

Notes

Switching between projects, I had difficulty with watchman. The instructions found here cleared the error:

watchman watch-del-all
watchman shutdown-server

Peace



Dockerized Matomo on Ubuntu 16.04

I’ve been hard on CloudAtCost before… they’re still terrible, but I’ve gotten a lot of use out of my one-time purchase. I still use the resources I own to run non-critical applications. Matomo falls into that category.

Anyhoo, my server crashed and had to be deleted. This is how I set up Matomo on Ubuntu 16.04, behind an nginx-proxy/lets-encrypt Docker Composition. This process is very manual and may one day be set up as a proper Docker build. As it stands, there is a lot of manual manipulation within the container.

First, create a project directory:

mkdir matomo && cd matomo

Copy and paste this into a file called docker-compose.yml.

version: '3'

services:

  mariadb:
    image: 'bitnami/mariadb:latest'
    restart: unless-stopped
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=bn_matomo
      - MARIADB_DATABASE=bitnami_matomo
    volumes:
      - 'mariadb_data:/bitnami'

  matomo:
    image: 'bitnami/matomo:latest'
    restart: unless-stopped
    environment:
      - MATOMO_DATABASE_USER=bn_matomo
      - MATOMO_DATABASE_NAME=bitnami_matomo
      - ALLOW_EMPTY_PASSWORD=yes
      - MATOMO_USERNAME=Dan
      - MATOMO_EMAIL=someguy@example.com
      - VIRTUAL_HOST=matomo.example.com
      - LETSENCRYPT_HOST=matomo.example.com
      - LETSENCRYPT_EMAIL=someguy@example.com
    depends_on:
      - mariadb
    volumes:
      - 'matomo_data:/bitnami'
      - './misc:/opt/bitnami/matomo/misc/'

volumes:
  mariadb_data:
    driver: local
  matomo_data:
    driver: local

networks:
  default:
    external:
      name: nginx-proxy

Create and execute the container with:

docker-compose up -d

This is the time to start (or restart) the nginx-proxy/lets-encrypt composition. Once this is running, your username will be what was set in the docker-compose.yml file described above. In this case, the default credentials are:

  • Username: Dan
  • Password: bitnami

You should now be able to log in at the domain specified.

App-level Configuration

matomo works out of the box, but there will be a bunch of things you’ll want to set up at the application level.

Upgrade

Before all that, there’s a weird permissions issue in the container. You’ll want to upgrade matomo, but won’t be able to do so until you fix this. It’s super hacky having to do this from within the container, but that’s what I’m working with at the moment.

From your project directory:

docker-compose exec matomo bash

Then, from within the container:

chown -R daemon:daemon /opt/bitnami/matomo
chmod -R 0755 /opt/bitnami/matomo

Dependencies and Headers

Again, this is super hacky, because now you need to install an editor and a bunch of other dependencies within the container:

apt update
apt install vim git wget autoconf gettext libtool build-essential
vim /opt/bitnami/matomo/config/config.ini.php

Add this to the [General] section:

force_ssl = 1
; Standard proxy
proxy_client_headers[] = HTTP_X_FORWARDED_FOR
proxy_host_headers[] = HTTP_X_FORWARDED_HOST

Exit the container and restart.

docker-compose restart

Config checklist

At this point, everything should be operational on a basic level. Address the following points, and get a lot more use out of matomo.

Personal > Settings

  • Change password
  • Exclude your own visits using a cookie

System > Geolocation

I set up the GeoIP2 (Php) extension, which is supposed to make things faster somehow…

docker-compose exec matomo bash

From within the container, clone libmaxminddb:

git clone --recursive https://github.com/maxmind/libmaxminddb

Install from inside the cloned directory:

cd libmaxminddb
./bootstrap
./configure
make
make install
ldconfig

Install the extension:

cd ..
git clone https://github.com/maxmind/MaxMind-DB-Reader-php.git
cd MaxMind-DB-Reader-php/ext
phpize
./configure
make
make install

Edit php.ini:

vim /opt/bitnami/php/lib/php.ini

Add this to the end and save:

extension=maxminddb.so

Get the database:

cd /opt/bitnami/matomo/misc
wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar -xvzf GeoLite2-City.tar.gz
mv GeoLite2-City_20190115/* .

Exit the container and restart from the host:

docker-compose restart

If you refresh the System > Geolocation page, GeoIP 2 (Php) will be operational. Select this option and save.

Websites > Manage

Add all the websites you want to track.

Conclusion

I needed to bang this out for my own purposes. I will likely be forced to revisit this when CloudAtCost fails me once again.


An nginx-proxy/lets-encrypt Docker Composition

I was just doing a major redeployment when I realized I’ve never documented my approach to nginx-proxy and lets-encrypt with Version 3 of docker-compose.

I like to deploy a bunch of web applications and static web sites behind a single proxy. What follows is meant to be copy-paste workable on an Ubuntu 16.04 server.

Organization

Set up your server’s directory structure:

mkdir -p ~/sites/nginx-proxy && cd ~/sites/nginx-proxy

Docker Compose

Paste the following into docker-compose.yml:

# docker-compose.yml
version: '3'

services:

  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./current/public:/usr/share/nginx/html
      - ./certs:/etc/nginx/certs:ro
      - vhost:/etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # Can anyone explain this sorcery?
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    logging:
      options:
        max-size: "4m"
        max-file: "10"

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: unless-stopped
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - vhost:/etc/nginx/vhost.d
      - ./current/public:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
    depends_on:
      - nginx-proxy
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy

volumes:
  vhost:

# Do not forget to 'docker network create nginx-proxy' before launch
# and to add '--network nginx-proxy' to proxyed containers.
networks:
  default:
    external:
      name: nginx-proxy

Configuring the nginx in nginx-proxy

Sometimes you need to override the default nginx configuration contained in the nginx-proxy Docker image. To do this, you must build a new image using nginx-proxy as its base.

For example, an app might need to accept large file uploads. You would paste this into your Dockerfile:

# Cf., https://github.com/schmunk42/nginx-proxy#proxy-wide
FROM jwilder/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 5m;'; \
} > /etc/nginx/conf.d/my_proxy.conf

This sets the required configurations within the nginx-proxy container.

In this case you also need to modify the docker-compose.yml file to build the local Dockerfile. The first few lines will now look like this:

# docker-compose.yml
version: '3'
services:
  nginx-proxy:
    # Change this:
    #image: jwilder/nginx-proxy
    # To this:
    build: .
    # as above...

Deploying sites and apps

With the proxy configured and deployed (docker-compose up -d), you can wire up all your sites and apps.
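As the comment in the proxy’s docker-compose.yml warns, the nginx-proxy network must exist before anything can attach to it. To recap, bringing the proxy online looks like this:

cd ~/sites/nginx-proxy
docker network create nginx-proxy
docker-compose up -d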

Static Site

A static site deployed with nginx:

# docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=example.com
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=you@example.com
    expose:
      - 80
    volumes:
      - ./_site:/usr/share/nginx/html
    logging:
      options:
        max-size: "4m"
        max-file: "10"
networks:
  default:
    external:
      name: nginx-proxy

Deploy App

Requirements are going to vary app-by-app, but for a simple node application, use the following as a starting point:

# docker-compose.yml
version: '3'
services:
  node:
    build: .
    restart: unless-stopped
    ports:
      - 3000
    environment:
      - NODE_ENV=production
      - VIRTUAL_HOST=app.example.com
      - LETSENCRYPT_HOST=app.example.com
      - LETSENCRYPT_EMAIL=you@example.com
    volumes:
      - .:/home/node
      - /home/node/node_modules
    logging:
      options:
        max-size: "4m"
        max-file: "10"
networks:
  default:
    external:
      name: nginx-proxy

Dockerizing Tor to serve up multiple hidden web services

This post documents an improvement made to the method demonstrated in A Dockerized Torified Express Application Served with Nginx. The previous configuration only deploys one hidden Tor service. I want to be able to deploy a bunch of hidden services behind a general Tor proxy.

Here I use Docker and Compose to build a Tor container behind which multiple Express applications are served.

Express Apps

Let’s suppose there are two express apps. Each will have its own Dockerfile and docker-compose.yml configuration.

Dockerfile

Assuming that each app is set up with all dependencies installed, a simple express Dockerfile might look like this:

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
EXPOSE 3000
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
CMD ["node", "./app.js"]

This defines the container in which the express app runs. Here, port 3000 will be open to apps on the network bridge (see below). Each app will need its own port. For example, the second app may EXPOSE 3001.
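For illustration, a minimal app.js for that second app might look like this (a sketch; the route and greeting are placeholders), listening on the same port its Dockerfile EXPOSEs:

const express = require('express');
const app = express();

// This must match the EXPOSE instruction in this app's Dockerfile (3001 for the second app)
const port = 3001;

app.get('/', (req, res) => {
  res.send('Hello from hidden app 2');
});

app.listen(port, () => {
  console.log(`Hidden app 2 listening on port ${port}`);
});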

docker-compose.yml

docker-compose will build the express app image and serve it up on localhost. It will be connected to the same Docker network as the Tor container. A docker-compose.yml for a simple express app might look like this:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
networks:
  default:
    external:
      name: torproxy_default

Deploy Apps

Once the apps have been Dockerized, each may be brought online with this:

docker-compose up -d

Tor

Tor will use the same Dockerfile/docker-compose.yml approach to deploying the service. This will provide the public (hidden) access point.

The Tor proxy container should be setup in its own directory apart from the apps. E.g.,

mkdir tor-proxy && cd tor-proxy

Docker

Paste the following to Dockerfile:

FROM debian
ENV NPM_CONFIG_LOGLEVEL warn
ENV DEBIAN_FRONTEND noninteractive
EXPOSE 9050
# `apt-utils` squelches a configuration warning
# `gnupg2` is required for adding the `apt` key
RUN apt-get update
RUN apt-get -y install apt-utils gnupg2
#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-25
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring
# The debian image does not create a default user
RUN useradd -m user
USER user
# Run the Tor service
CMD /usr/bin/tor -f /etc/tor/torrc

docker-compose.yml

This builds and deploys the Tor container. Paste into docker-compose.yml:

version: '3'
services:
  tor:
    build: .
    restart: always
    volumes:
      - ./config/torrc:/etc/tor/torrc

Configuration

As declared above (in docker-compose.yml), the container mounts the ./config/torrc file from the host and connects to the torproxy_default network. It’s in the torrc file that you set the ports for your hidden services. The network allows the external hidden apps to connect to the tor-proxy container. To find the hosts for each hidden service, simply execute:

docker ps

You should see something like this:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94816844b40b torapp2_node "npm start" 11 minutes ago Up 11 minutes 3001/tcp torapp2_node_1
8c11fb2c9167 torapp1_node "npm start" 12 minutes ago Up 12 minutes 3000/tcp torapp1_node_1

The items listed under the NAMES column serve as your hostnames. So, in this two app configuration, ./config/torrc looks like this:

HiddenServiceDir /home/user/.tor/hidden_app_1/
HiddenServicePort 80 torapp1_node_1:3000
HiddenServiceDir /home/user/.tor/hidden_app_2/
HiddenServicePort 80 torapp2_node_1:3001

Note the different ports on each of the hidden services. These correspond to the ports EXPOSEd in each app’s Dockerfile.

Deploy Tor Container

Bring Tor online with this:

docker-compose up -d

If the container reports any sort of directory permissions issues, refer to the notes pertaining to the RUN usermod -u 1001 user command in the tor-proxy Dockerfile.

Assuming everything is built and deployed correctly, you can find your .onion hostnames in the .tor directory in the container:

docker-compose exec tor cat /home/user/.tor/hidden_app_1/hostname
docker-compose exec tor cat /home/user/.tor/hidden_app_2/hostname

Assuming all goes well, welcome to the darkweb.


A better open-source extension for Silhouette Cameo, Inkscape, and Ubuntu 16.04

See the new updated version for Ubuntu 18.04

I would have updated my previous attempt at configuring Inkscape to work with the Silhouette Cameo, but got so swept up in the excitement of cutting vinyl stickers, I forgot to do it until now. Unless something has changed since my last relevant post, InkCut doesn’t really work.

This post demonstrates how to configure the open-source inkscape-silhouette extension on Ubuntu 16.04.

System and dependencies

Do the usual system prep before adding the software upon which Inkscape and the Silhouette extension depend:

sudo apt update
sudo apt upgrade

Ubuntu 16.04

Just as with a conventional printer, the Silhouette Cameo requires some drivers be installed before it can work with Ubuntu.

Open your System Settings:

[Open System Settings]

Open the Printers option:

[Click Printers]

Add a printer:

[Add Printer]

Hopefully you see your device in the list:

[Find device in list]

The drivers for generic printing devices will suffice in this situation:

[Select Generic] [Text-only driver]

Change your cutter’s name, if you like. I left these settings untouched:

[Printer description]

Not sure what would happen if you attempted to print a test page. I cancelled:

[Cancel test page]

If all is well, you should see the device you just added:

[Silhouette device added]

Inkscape

The Inkscape vector graphics tool has an extension that enables you to send your own SVG files to the Cameo.

Add the Inkscape repository and install:

sudo add-apt-repository ppa:inkscape.dev/stable
sudo apt update
sudo apt install inkscape

Run it from the command line to make sure it works:

inkscape

inkscape-silhouette extension

These steps are adapted from the inkscape-silhouette wiki.

This extension depends upon python-usb:

sudo apt install python-usb

Next, you’ll need to download a copy of the extension’s latest release. At the time of writing, you could obtain it from the command line like this:

cd ~
wget https://github.com/fablabnbg/inkscape-silhouette/releases/download/v1.19/inkscape-silhouette_1.19-1_all.deb
sudo dpkg -i inkscape-silhouette_1.19-1_all.deb

Try it out

Execute inkscape (from the command line, if you wish):

inkscape

Load the SVG file you want to cut and navigate to Extensions > Export > Send to Silhouette:

[Extensions > Export > Send to Silhouette]

I leave the settings for you to play with. I only cut vinyl, so I go with the extension-provided defaults:

[Vinyl defaults]

When ready, press Apply and watch your Silhouette Cameo spring to life.


A Dockerized, Torified, Express Application

Dark Web chatter is picking up. I’m interested in providing cool web services anonymously. This is my first attempt at using Docker Compose to stay ahead of this trend.

Assumption: all the software goodies are set up and ready to go on an Ubuntu 16.04 server (node, docker, docker-compose, et al).

Set up an Express App

The Express Application Generator strikes me as a little bloated, but I use it anyway because I’m super lazy.

sudo npm install express-generator -g

Once installed, set up a vanilla express project:

express --view=ejs tor-app
cd tor-app && npm install

The express-generator will tell you to run the app like this:

DEBUG=tor-app:* npm start

This, of course, is only useful for development. From here, we’ll Dockerize for deployment and Torify for anonymity.

Tor pre-configuration

In anticipation of setting up the actual Torified app container, create a new file called config/torrc. This file will be used by Tor inside the Docker container to serve up our app. Paste the following into config/torrc:

HiddenServiceDir /home/node/.tor/hidden_service/
HiddenServicePort 80 127.0.0.1:3000

Docker

Copy and paste the following into a new file called Dockerfile:

FROM node:stretch
ENV NPM_CONFIG_LOGLEVEL warn
ENV DEBIAN_FRONTEND noninteractive
EXPOSE 9050
# `apt-utils` squelches a configuration warning
RUN apt-get update
RUN apt-get -y install apt-utils
#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-20
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring
#
# Tor raises some tricky directory permissions issues. Once started, Tor will
# write the hostname and private key into a directory on the host system. If
# the `node` user in the container does not have the same UID as the user on
# the host system, Tor will not be able to create and write to these
# directories. Execute `id -u` on the host to determine your UID.
#
# RUN usermod -u 1001 node
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
# Run the Tor service alongside the app itself
CMD /usr/bin/tor -f /etc/tor/torrc & npm start

Container/Host Permissions

Take special note of the comment posted above the RUN usermod -u 1001 node instruction in Dockerfile. If you get any errors on the container build/execute step described below, you’ll need to make sure your host user’s UID is the same as your container user’s UID (i.e., the node user).

Usually the user in the container has a UID of 1000. To determine the host user’s UID, execute id -u. If it’s not 1000, uncomment the usermod instruction in Dockerfile and make sure the numbers match.
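In short (the 1001 below is just an example UID):

# On the host
id -u
# If it prints anything other than 1000, uncomment and adjust the line in Dockerfile:
# RUN usermod -u 1001 node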

Docker Compose

docker-compose does all of the heavy lifting for building the Dockerfile and start-up/shut-down operations. Paste the following into a file called docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
      - ./config/torrc:/etc/tor/torrc

Bring the whole thing online by running

docker-compose up -d

Every now and then I get an error trying to obtain the GPG key:

gpg: keyserver receive failed: Cannot assign requested address

This usually solves itself on subsequent calls to docker-compose up.

Assuming the build and execution was successful, you can determine your .onion address like this:

docker-compose exec node cat /home/node/.tor/hidden_service/hostname

You should now be able to access your app from your favourite Tor web browser.

If you’re interested in poking around inside the container, access the bash prompt like this:

docker-compose exec node bash

Notes

This is the first step in configuring and deploying a hidden service on the Tor network. Since working out the initial details, I’ve already thought of potential improvements to this approach. As it stands, only one hidden service can be deployed. It would be far better to create a Tor container able to proxy multiple apps. I will also be looking into setting up .onion vanity URLs and HTTPS.