

How to get started testing with Express, Jasmine, and Zombie

Okay, so I’m a little old-fashioned. I’m also lazy and reluctant to learn new things. My buddy Dawson asked, “How do I get started with testing my web software?” I said, “Start with Node/Express.”

Express

Express is a web framework. You build server-side software with Express. Let’s bootstrap a project with express-generator:

npx express-generator --view=ejs myapp

This creates a skeleton application from which to launch development. The ejs stands for Embedded JavaScript. To install:

cd myapp
npm install

Start the server:

npm start

Assuming all is well, you can navigate to http://localhost:3000 to see your new app in action. Halt server execution by pressing Ctrl-C.

That’s great and all, but let’s get to testing…

Jasmine

I’m not sure Jasmine is trendy, but I’ve been using it for years. It takes minimal setup and can be neatly structured. It’s a good tool and the tests you write are easily adapted for execution by other test frameworks (if you decide you don’t like jasmine).

Add jasmine to your web application:

npm install --save-dev jasmine

jasmine is now a development dependency. Execute cat package.json to get a peek at how node manages its dependencies.

Initialize jasmine like this:

npx jasmine init

Most people like to script test execution with npm. Open the package.json file just mentioned. Configure "scripts": { "test": "jasmine" }… that is, make package.json look like this:

{
  "name": "myapp",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www",
    "test": "jasmine"
  },
  "dependencies": {
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "ejs": "~2.6.1",
    "express": "~4.16.1",
    "http-errors": "~1.6.3",
    "morgan": "~1.9.1"
  },
  "devDependencies": {
    "jasmine": "^3.5.0"
  }
}

If you do this correctly, you can run the following:

npm test

And expect to see something like this:

> myapp@0.0.0 test /home/daniel/workspace/myapp
> jasmine
Randomized with seed 44076
Started
No specs found
Finished in 0.003 seconds
Incomplete: No specs found
Randomized with seed 44076 (jasmine --random=true --seed=44076)
npm ERR! Test failed. See above for more details.

jasmine is telling us that the tests failed because we have yet to write any. Being as lazy as I am, I try to find opportunities to skip unit tests and go directly to testing user behaviour. For this, I use a headless browser called Zombie.

Zombie

Like jasmine, I’m not sure zombie is the trendiest option out there, but I’ve always managed to get the two to play nicely together and have yet to find any serious shortcoming. Add zombie to your project like this:

npm install --save-dev zombie

Now we’re ready to write some tests…

Testing!

Oh wait, what’s this app supposed to do? Ummmmm…

For now, I’ll keep it really simple until my buddy Dawson comes up with tougher testing questions.

Purpose

I want to be able to load my app in a browser, enter my name in an input field, press Submit, and receive a friendly greeting in return.

When starting a new web application, the first test I write ensures my page actually loads in the zombie browser. Create a spec file:

touch spec/indexSpec.js

I use the word index in the sense that it’s the default first page you land on at any website. Test files end with the *Spec.js suffix by default. Execute cat spec/support/jasmine.json to see how jasmine decides which files to execute.
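
Fresh from npx jasmine init, that file looks something like this (your version may differ slightly); the spec_files glob is what picks up anything ending in Spec.js:

{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": true
}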

Open spec/indexSpec.js in your favourite editor and paste this:

const Browser = require('zombie');

Browser.localhost('example.com', 3000);

describe('the landing page', () => {
  let browser;

  /**
   * This loads the running web application
   * with a new `zombie` browser before each test.
   */
  beforeEach(done => {
    browser = new Browser();
    browser.visit('/', err => {
      if (err) return done.fail(err);
      done();
    });
  });

  /**
   * Your first test!
   *
   * `zombie` has loaded and rendered the page
   * returned by your application. Use `jasmine`
   * and `zombie` to ensure it's doing what you
   * expect.
   *
   * In this case, I just want to make sure a
   * page title is displayed.
   */
  it('displays the page title', () => {
    browser.assert.text('h1', 'The Friendly Greeting Generator');
  });

  /**
   * Put future tests here...
   */
  // ...
});

Simple enough. At this point you might be tempted to go make the test pass. Instead, execute the following to make sure it fails:

npm test

Whoa! What happened? You probably see something like this:

Randomized with seed 73862
Started
F
Failures:
1) the landing page displays the page title
Message:
Failed: connect ECONNREFUSED 127.0.0.1:3000
Stack:
...

It’s good that it failed, because that’s an important step, but if you look closely at the error, connect ECONNREFUSED 127.0.0.1:3000 tells you your app isn’t even running. You’ll need to open another shell or process and execute:

npm start

Your app is now running and zombie can now send a request and expect to receive your landing page. In another shell (so that your app can keep running), execute the tests again:

npm test

If it fails (as expected), you will see something like this:

Randomized with seed 38606
Started
F
Failures:
1) the landing page displays the page title
Message:
AssertionError [ERR_ASSERTION]: 'Express' deepEqual 'The Friendly Greeting Generator'
Stack:
error properties: Object({ generatedMessage: true, code: 'ERR_ASSERTION', actual: 'Express', expected: 'The Friendly Greeting Generator', operator: 'deepEqual' })

That’s much better. Now, having ensured the test fails, make the test pass. Open routes/index.js in your project folder and make it look like this:

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res, next) {
  // The old command
  //res.render('index', { title: 'Express' });

  // The new test-friendly command
  res.render('index', { title: 'The Friendly Greeting Generator' });
});

module.exports = router;

Execute the tests again:

npm test

And you will see:

Randomized with seed 29903
Started
F
Failures:
1) the landing page displays the page title
Message:
AssertionError [ERR_ASSERTION]: 'Express' deepEqual 'The Friendly Greeting Generator'
Stack:

Oh no! Not again! Go back and check… yup, you definitely changed the name of the app. What could be wrong?

You need to restart your server in your other shell. Exit with Ctrl-C and restart with npm start. (Yes, there is a much better way of doing this).
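
If restarting by hand gets tedious, one common option (my suggestion, not something express-generator sets up for you) is nodemon, which watches your files and restarts the server whenever they change:

npm install --save-dev nodemon
npx nodemon ./bin/www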

Having restarted your application, execute the tests again with npm test. You will see this:

Randomized with seed 46658
Started
.
1 spec, 0 failures
Finished in 0.071 seconds
Randomized with seed 46658 (jasmine --random=true --seed=46658)

Awesome. Your first test passes. Recall the stated purpose of this app:

I want to be able to load my app in a browser, enter my name in an input field, press Submit, and receive a friendly greeting in return.

Using this user story as a guide, you can proceed with writing your tests. So far, the first part of the story has been covered (i.e., I want to be able to load my app in a browser). Now to test the rest…

// ...
// Add these below our first test in `indexSpec.js`

it('renders an input form', () => {
  browser.assert.element('input[type=text]');
  browser.assert.element('input[type=submit]');
});

it('returns a friendly greeting if you enter your name and press Submit', done => {
  browser.fill('name', 'Dan');
  browser.pressButton('Submit', () => {
    browser.assert.text('h3', 'What up, Dan?');
    done();
  });
});

it('trims excess whitespace from the name submitted', done => {
  browser.fill('name', ' Dawson ');
  browser.pressButton('Submit', () => {
    browser.assert.text('h3', 'What up, Dawson?');
    done();
  });
});

it('gets snarky if you forget to enter your name before pressing Submit', done => {
  browser.fill('name', '');
  browser.pressButton('Submit', () => {
    browser.assert.text('h3', 'Whatevs...');
    done();
  });
});

it('gets snarky if you enter a blank name before pressing Submit', done => {
  browser.fill('name', ' ');
  browser.pressButton('Submit', () => {
    browser.assert.text('h3', 'Please don\'t waste my time');
    done();
  });
});
});

You can push this as far as you want. For example, you might want to ensure your audience doesn’t enter a number or special characters for a name. The ones above define the minimal test-coverage requirement in this case.
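
For instance, a spec guarding against numeric input might look something like this (a sketch only: the expected message is made up and it isn’t implemented below, so don’t paste it in if you want all five specs to pass):

it('gets snarky if you enter a number instead of a name', done => {
  browser.fill('name', '12345');
  browser.pressButton('Submit', () => {
    browser.assert.text('h3', 'Nice try, robot');
    done();
  });
});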

Make sure these new tests fail by executing npm test. You won’t need to restart the server until you make changes to your app (yes, you should find a better way to manage this).
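
Another way to avoid the shell juggling entirely (besides nodemon, mentioned above, and not what I do in this walkthrough) is to boot the app from inside the spec itself. The express-generator layout exports the Express app from app.js, so something like this near the top of indexSpec.js would work, assuming nothing else is already listening on port 3000:

// Boot the app once for this spec file instead of running `npm start` separately
const app = require('../app');

let server;

beforeAll(done => {
  server = app.listen(3000, done);
});

afterAll(done => {
  server.close(done);
});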

Make the tests pass

You should try doing this yourself before you skip ahead. I’ll give you a couple of clues and then provide the spoiler. In order to get these tests to pass, you’ll need to add a route to routes/index.js and you’ll need to modify the document structure in views/index.ejs.

Did you try it yourself?

Here’s one possible answer:

Make routes/index.js look like this:

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res, next) {
  res.render('index', { title: 'The Friendly Greeting Generator', message: '' });
});

router.post('/', function(req, res, next) {
  let message = 'Whatevs...';
  if (req.body.name.length) {
    let name = req.body.name.trim();
    if (!name.length) {
      message = 'Please don\'t waste my time';
    }
    else {
      message = `What up, ${name}?`;
    }
  }
  res.render('index', { title: 'The Friendly Greeting Generator', message: message });
});

module.exports = router;

Make views/index.ejs look like this:

<!DOCTYPE html>
<html>
  <head>
    <title><%= title %></title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1><%= title %></h1>
    <p>Welcome to <%= title %></p>

    <form action="/" method="post">
      <label for="name">Name:</label>
      <input name="name" type="text" />
      <input type="submit" value="Submit" />
    </form>

    <% if (message) { %>
      <h3><%= message %></h3>
    <% } %>
  </body>
</html>

Note the EJS alligator tags (<% %> and <%= %>).
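
If the distinction is new to you: <% %> runs JavaScript without printing anything, while <%= %> prints the (HTML-escaped) value of an expression. A quick standalone illustration (not part of this app):

<% var names = ['Dan', 'Dawson']; %>
<ul>
  <% names.forEach(function(name) { %>
    <li>Hello, <%= name %>!</li>
  <% }); %>
</ul>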

When all expectations are satisfied, you will see something like this:

Randomized with seed 17093
Started
.....
5 specs, 0 failures
Finished in 0.106 seconds
Randomized with seed 17093 (jasmine --random=true --seed=17093)

What’s next?

Got a test question? What kind of app should I build and test?

If you learned something, support my work through Wycliffe or Patreon.


A Dockerized, Torified Express Application

Dark Web chatter is picking up. I’m interested in providing cool web services anonymously. This is my first attempt at using Docker Compose to stay ahead of this trend.

Assumption: all the software goodies are set up and ready to go on an Ubuntu 16.04 server (node, docker, docker-compose, et al.).

Set up an Express App

The Express Application Generator strikes me as a little bloated, but I use it anyway because I’m super lazy.

sudo npm install express-generator -g

Once installed, set up a vanilla express project:

express --view=ejs tor-app
cd tor-app && npm install

The express-generator will tell you to run the app like this:

DEBUG=tor-app:* npm start

This, of course, is only useful for development. From here, we’ll Dockerize for deployment and Torify for anonymity.

Tor pre-configuration

In anticipation of setting up the actual Torified app container, create a new file called config/torrc. This file will be used by Tor inside the Docker container to serve up our app. Paste the following into config/torrc:

HiddenServiceDir /home/node/.tor/hidden_service/
HiddenServicePort 80 127.0.0.1:3000

Docker

Copy and paste the following into a new file called Dockerfile:

FROM node:stretch
ENV NPM_CONFIG_LOGLEVEL warn
ENV DEBIAN_FRONTEND noninteractive
EXPOSE 9050
# `apt-utils` squelches a configuration warning
RUN apt-get update
RUN apt-get -y install apt-utils
#
# Here's where the `tor` stuff gets baked into the container
#
# Keys and repository stuff accurate as of 2017-10-20
# See: https://www.torproject.org/docs/debian.html.en#ubuntu
RUN echo "deb http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN echo "deb-src http://deb.torproject.org/torproject.org stretch main" | tee -a /etc/apt/sources.list.d/torproject.list
RUN gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
RUN gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install tor deb.torproject.org-keyring
#
# Tor raises some tricky directory permissions issues. Once started, Tor will
# write the hostname and private key into a directory on the host system. If
# the `node` user in the container does not have the same UID as the user on
# the host system, Tor will not be able to create and write to these
# directories. Execute `id -u` on the host to determine your UID.
#
# RUN usermod -u 1001 node
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
# Run the Tor service alongside the app itself
CMD /usr/bin/tor -f /etc/tor/torrc & npm start

Container/Host Permissions

Take special note of the comment posted above the RUN usermod -u 1001 node instruction in Dockerfile. If you get any errors on the container build/execute step described below, you’ll need to make sure your host user’s UID is the same as your container user’s UID (i.e., the node user).

Usually the user in the container has a UID of 1000. To determine the host user’s UID, execute id -u. If it’s not 1000, uncomment the usermod instruction in Dockerfile and make sure the numbers match.
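
In practice, the check looks something like this (the 1001 is illustrative):

# On the host: what UID am I?
id -u
# -> 1001, say

# In the Dockerfile: uncomment and match the number
# RUN usermod -u 1001 node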

Docker Compose

docker-compose does all of the heavy lifting for building the Dockerfile and start-up/shut-down operations. Paste the following into a file called docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules
      - ./config/torrc:/etc/tor/torrc

Bring the whole thing online by running:

docker-compose up -d

Every now and then I get an error trying to obtain the GPG key:

gpg: keyserver receive failed: Cannot assign requested address

This usually solves itself on subsequent calls to docker-compose up.

Assuming the build and execution were successful, you can determine your .onion address like this:

docker-compose exec node cat /home/node/.tor/hidden_service/hostname

You should now be able to access your app from your favourite Tor web browser.

If you’re interested in poking around inside the container, access the bash prompt like this:

docker-compose exec node bash

Notes

This is the first step in configuring and deploying a hidden service on the Tor network. Since working out the initial details, I’ve already thought of potential improvements to this approach. As it stands, only one hidden service can be deployed. It would be far better to create a Tor container able to proxy multiple apps. I will also be looking into setting up .onion vanity URLs and HTTPS.


mongodump and mongorestore Between Docker-composed Containers

I’m trying to refine the process by which I back up and restore Dockerized MongoDB containers. My previous effort is basically a brute-force copy-and-paste job on the container’s data directory. It works, but I’m concerned about restoring data between containers running different versions of MongoDB. Apparently this is tricky enough even with the benefit of recovery tools like mongodump and mongorestore, which is the approach I take below.

In short, I need to dump my data from a data-only MongoDB container, bundle the files uploaded to my Express application, and restore it all on another server. Here’s how I did it…

Dump the data

I’m a big fan of docker-compose. I use it to manage all my containers. The following method requires that the composition be running so that mongodump can be run against the running Mongo container (which, in turn, accesses the data-only container). Assuming the name of the container is myapp_mongo_1:

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongodump --host $MONGO_PORT_27017_TCP_ADDR'

This will create a root-owned directory called myapp-mongo-dump in your current directory. It contains all the BSON and JSON meta-data for this database. For convenience, I change ownership of this resource:

sudo chown -R user:user myapp-mongo-dump

Then, for transport, I archive the directory:

tar zcvf myapp-mongo-dump.tar.gz myapp-mongo-dump

Archive the uploaded files

My app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar zcvf uploads.tar.gz uploads

Now I have two archived files: myapp-mongo-dump.tar.gz and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp myapp-mongo-dump.tar.gz uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I unpack both archives:

tar zxvf uploads.tar.gz
tar zxvf myapp-mongo-dump.tar.gz

Then I restore the data to the data-only container through the running Mongo instance (assumed to be called myapp_mongo_1):

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongorestore --host $MONGO_PORT_27017_TCP_ADDR'

With that, all data is restored. I didn’t even have to restart my containers to begin using the app on its new server.


MongoDB backup and restore between Dockerized Node apps

My bargain-basement cloud service provider, CloudAtCost, recently lost one of my servers and all the data on it. This loss was exacerbated by the fact that I didn’t back up my MongoDB data anywhere else. Now I’m working out the exact process after the fact so that I don’t suffer this loss again (it’s happened twice now with CloudAtCost, but hey, the price is right).

The following is a brute-force backup and recovery process. I suspect this approach has its weaknesses in that it may depend upon version consistency between the MongoDB containers. This is not ideal for someone like me who always installs the latest version when creating new containers. I aim to develop a more flexible process soon.

Context

I have a server running Ubuntu 16.04, which, in turn, is serving up a Dockerized Express application (Nginx, MongoDB, and the app itself). The MongoDB data is kept in a data-only container. To complicate matters, the application allows file uploads, which are stored on the file system in the project’s root.

I need to dump the data from the data-only container, bundle the uploaded files, and restore it all on another server. Here’s how I did it…

Dump the data

I use docker-compose to manage my containers. To obtain the name of the MongoDB data-only container, I simply run docker ps -a. Assuming the name of the container is myapp_mongo_data:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data/db

This will put a file called backup.tar in the app’s root directory. It may belong to the root user. If so, run sudo chown user:user backup.tar.

Archive the uploaded files

The app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar -zcvf uploads.tar.gz uploads

Now I have two archived files: backup.tar and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp backup.tar uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar -zxvf uploads.tar.gz

Then I restore the data to the data container:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar xvf /backup/backup.tar

Remove and restart containers

The project containers don’t need to be running when you restore the data in the previous step. If they are running, however, once the data is restored, remove the running containers and start again with docker-compose:

docker-compose stop
docker-compose rm
docker-compose up -d

I’m sure there is a reasonable explanation as to why removing the containers is necessary, but I don’t know what it is yet. In any case, removing the containers isn’t harmful because all the data is on the data-only container anyway.

Warning

As per the introduction, this process probably depends on version consistency between MongoDB containers.