

cors-anywhere deployment with Docker Compose

My ad-tracker Express app serves up queued advertisements with a client-side call to JavaScript’s fetch function. This, of course, raises the issue of Cross-Origin Resource Sharing.

I use the cors-anywhere node module to allow sites with ad-tracker advertisements to access the server. Naturally, docker-compose is my preferred deployment tool.

Set up the project

Create a project directory and initialize the application with npm:

mkdir -p sites/cors-anywhere-server && cd sites/cors-anywhere-server
npm init

Follow the npm init prompts.

Once initialized, add the cors-anywhere module to the project:

npm install cors-anywhere --save

Copy and paste the following into index.js (or whatever entry-point you specified in the initialization step):

// Listen on a specific host via the HOST environment variable
var host = process.env.HOST || '0.0.0.0';
// Listen on a specific port via the PORT environment variable
var port = process.env.PORT || 8080;

var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
  originWhitelist: [], // Allow all origins
  requireHeader: ['origin', 'x-requested-with'],
  removeHeaders: ['cookie', 'cookie2']
}).listen(port, host, function() {
  console.log('Running CORS Anywhere on ' + host + ':' + port);
});

This code is taken verbatim from the cors-anywhere documentation.

To execute the application:

node index.js

If it executes successfully, you should see:

Running CORS Anywhere on 0.0.0.0:8080
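Before exiting, you can optionally smoke-test the proxy from a second terminal. This is just a sketch: the Origin header and target URL are placeholders, and both headers are included to satisfy the requireHeader option above.

curl -i \
  -H "Origin: https://ads.example.com" \
  -H "X-Requested-With: XMLHttpRequest" \
  http://localhost:8080/https://example.com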

Exit the app.

Docker

To create the Dockerized application image, paste the following into Dockerfile:

FROM node
ENV NPM_CONFIG_LOGLEVEL warn
EXPOSE 8080
# App setup
USER node
ENV HOME=/home/node
WORKDIR $HOME
ENV PATH $HOME/app/node_modules/.bin:$PATH
ADD package.json $HOME
RUN NODE_ENV=production npm install
CMD ["node", "./index.js"]

This builds the cors-anywhere app into a Docker Node container.
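Note that index.js is never copied into the image; the Compose file below bind-mounts the project directory at runtime. If you want to sanity-check the image on its own first, you can mirror those mounts manually (the image tag here is arbitrary):

docker build -t cors-anywhere-server .
docker run --rm -p 8080:8080 \
  -v "$(pwd)":/home/node \
  -v /home/node/node_modules \
  cors-anywhere-server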

Docker Compose

Paste the following into docker-compose.yml:

version: '3'
services:
  node:
    build: .
    restart: always
    ports:
      - "8080"
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/node
      - /home/node/node_modules

Build the image and deploy in one step:

docker-compose up

The last line of the console output should read:

node_1 | Running CORS Anywhere on 0.0.0.0:8080
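Because the Compose file publishes container port 8080 to an ephemeral host port, ask Compose which host port it picked and test against that (the port shown and the URLs below are placeholders):

docker-compose port node 8080
# e.g. 0.0.0.0:32768
curl -i \
  -H "Origin: https://ads.example.com" \
  -H "X-Requested-With: XMLHttpRequest" \
  http://localhost:32768/https://example.com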

At this point, any request proxied through the cors-anywhere-server will be allowed access to cross-domain resources. Your client-side fetch calls can now leverage this functionality by prefixing the destination URL with the cors-anywhere-server URL. It may look something like this:

(function() {
  var CORS_SERVER = 'https://cors-server.example.com:8080';
  var AD_SERVER = 'https://ads.example.com';

  fetch(CORS_SERVER + '/' + AD_SERVER).then(function(response) {
    return response.json();    
  }).then(function(json) {     
    console.log('CORS request successful!');
    console.log(json); 
  });
})();

Done!


A Home-Based Ubuntu 16.04 Production Server with Salvaged Equipment

Preface

As I’ve often griped before, my cloud service provider (cloudatcost.com) is not exactly reliable. I’m currently on Day 3 waiting for their tech support to address several downed servers. Three days isn’t even that bad considering I’ve waited up to two weeks in the past. In any case, though I wish them success, I’m sick of their nonsense and am starting to migrate my servers out of the cloud and into my house. Given that they routinely lose their customers’ data, it’s prudent to prepare for their likely bankruptcy and closure.

I have an old smashed-up AMD Quad Core laptop I’m going to use as a server. The screen was totally broken, so as a laptop it’s kind of useless anyway. It’s a little lightweight on resources (only 4 GB of RAM), but this is much more than I’m used to. I used unetbootin to create an Ubuntu 16.04 bootable USB and installed the base system.

What follows are the common steps I take when setting up a production server. This bare minimum approach is a process I repeat frequently, so it’s worth documenting here. Once the OS is installed…

Change the root password

The install should put the created user into the sudo group. Change the root password with that user:

sudo su
passwd
exit

Update OS

An interesting thing happened during install… I couldn’t install additional software (e.g., OpenSSH), so I skipped it. When it came time to install vim, I discovered I didn’t have access to any of the repositories. The answer here shed some light on the situation, but didn’t really resolve anything.

I ended up copying the example sources.list from the documentation to fix the problem:

sudo cp /usr/share/doc/apt/examples/sources.list /etc/apt/sources.list

I found out later that this contained the repositories for Ubuntu 14.04, so I ended up manually pasting the following into /etc/apt/sources.list:

# deb cdrom:[Ubuntu 16.04 LTS _Xenial Xerus_ - Release amd64 (20160420.1)]/ xenial main restricted
deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
# deb http://archive.ubuntu.com/ubuntu xenial-proposed main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner

After that, update/upgrade worked (before swapping out sources.list, confirm that update/upgrade actually fails; there’s no need to mess around if it already works):

sudo apt update
sudo apt upgrade

Install OpenSSH

The first thing I do is configure my machine for remote access. As above, I couldn’t install OpenSSH during the OS installation, for some reason. After sources.list was sorted out, it all worked:

sudo apt-get install openssh-server

Check the official Ubuntu docs for configuration tips.

While I’m here, though, I need to set a static IP…

sudo vi /etc/network/interfaces

Paste this (or similar) under # The primary network interface, as per lewis4u.

auto enp0s25
iface enp0s25 inet static
  address 192.168.0.150
  netmask 255.255.255.0
  gateway 192.168.0.1

Then flush, restart, and verify that the settings are correct:

sudo ip addr flush enp0s25
sudo systemctl restart networking.service
ip add

Start ssh

My ssh didn’t start running automatically after install. I did this to make ssh run on startup:

sudo systemctl enable ssh

And then I did this, which actually starts the service:

sudo service ssh start
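To confirm it took, check the service status and, from another machine on the LAN, try logging in (the IP is the static address configured above; substitute your own username):

systemctl status ssh               # should report active (running)
ssh yourusername@192.168.0.150     # from another machine on the LAN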

Open a port on the router

This step, of course, depends entirely on the make and model of router behind which the server is operating. In my case, I access the administrative control panel by logging in at 192.168.0.1 on my LAN.

I found the settings I needed to configure on my Belkin router under Firewall -> Virtual Servers. I want to serve up web apps (both HTTP and HTTPS) and allow SSH access. As such, I configured three forwarding rules by providing the following information for each:

  1. Description
  2. Inbound ports (i.e., 22, 80, and 443)
  3. TCP traffic type (no UDP)
  4. The private/static address I just set on my server
  5. Inbound private ports (22, 80, and 443 respectively)
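Once the rules are saved, a quick sanity check is to probe the forwarded ports from outside the LAN (a phone hotspot works). This sketch assumes netcat is installed and that example.com stands in for your public hostname or IP:

for port in 22 80 443; do
  nc -zvw3 example.com "$port"   # -z scan only, -v verbose, 3 second timeout
done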

Set up DNS

Again, this depends on where you registered your domain. I pointed a domain I have registered with GoDaddy to my modem’s IP address, which now receives requests and forwards them to my server.
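Once the record propagates, a lookup should return the home connection’s public IP (example.com stands in for the real domain):

dig +short example.com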

Login via SSH

With my server, router, and DNS all properly configured, I don’t need to be physically sitting in front of my machine anymore. As such, I complete the following steps logged in remotely.

Set up app user

I like to have one user account control app deployment. Toward that end, I create an app user and add him to the sudo group:

sudo adduser app
sudo adduser app sudo

Install the essentials

git

Won’t get far without git:

sudo apt install git

vim

My favourite editor is vim, which is not installed by default.

sudo apt install vim

NERDTree

My favourite vim plugin:

mkdir -p ~/.vim/autoload ~/.vim/bundle
cd ~/.vim/autoload
wget https://raw.github.com/tpope/vim-pathogen/HEAD/autoload/pathogen.vim
vim ~/.vimrc

Add this to the .vimrc file:

call pathogen#infect()
map <C-n> :NERDTreeToggle<CR>
set softtabstop=2
set expandtab

Save and exit, then clone NERDTree into the bundle directory:

cd ~/.vim/bundle
git clone https://github.com/scrooloose/nerdtree.git

Now, when running vim, hit ctrl-n to toggle the file tree view.

Docker

Current installation instructions can be found here. The distilled process is as follows:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

You can verify the key fingerprint:

sudo apt-key fingerprint 0EBFCD88

Which should return something like this:

pub 4096R/0EBFCD88 2017-02-22
Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <docker@docker.com>
sub 4096R/F273FCD8 2017-02-22

Add repository and update:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update

Install:

sudo apt install docker-ce

Create a docker user group:

sudo groupadd docker

Add yourself to the group:

sudo usermod -aG docker $USER

Add the app user to the group as well:

sudo usermod -aG docker app

Log out, log back in, and test docker without sudo:

docker run hello-world

If everything works, you should see the usual “Hello from Docker!” message.

Configure docker to start on boot:

sudo systemctl enable docker

docker-compose

This downloads the current stable version. Cross-reference it with the latest release offered here.

su
curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Install command completion while still root:

curl -L https://raw.githubusercontent.com/docker/compose/master/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
exit

Test:

docker-compose --version

node

These steps are distilled from here.

cd ~
curl -sL https://deb.nodesource.com/setup_6.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh

Now install:

sudo apt-get install nodejs build-essential
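A quick version check confirms the NodeSource packages were picked up rather than the stock Ubuntu ones:

node -v   # should report a 6.x release
npm -v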

Ruby

The steps I followed conclude with installing Rails; I only install Ruby:

sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev nodejs

Using rbenv:

cd
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL
rbenv install 2.4.0

That last step can take a while…

rbenv global 2.4.0
ruby -v

Install Bundler:

gem install bundler

Done!

There it is… all my usual favourites on some busted up piece of junk laptop. I expect it to be exactly 1000% more reliable than cloudatcost.com.


mongodump and mongorestore Between Docker-composed Containers

I’m trying to refine the process by which I backup and restore Dockerized MongoDB containers. My previous effort is basically a brute-force copy-and-paste job on the container’s data directory. It works, but I’m concerned about restoring data between containers installed with different versions of MongoDB. Apparently this is tricky enough even with the benefit of recovery tools like mongodump and mongorestore, which is what I’m using below.

In short, I need to dump my data from a data-only MongoDB container, bundle the files uploaded to my Express application, and restore it all on another server. Here’s how I did it…

Dump the data

I’m a big fan of docker-compose. I use it to manage all my containers. The following method requires that the composition be running so that mongodump can be run against the running Mongo container (which, in turn, accesses the data-only container). Assuming the name of the container is myapp_mongo_1…

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongodump --host $MONGO_PORT_27017_TCP_ADDR'

This will create a root-owned directory called myapp-mongo-dump in your current directory. It contains all the BSON data and JSON metadata for this database. For convenience, I change ownership of this resource:

sudo chown -R user:user myapp-mongo-dump

Then, for transport, I archive the directory:

tar zcvf myapp-mongo-dump.tar.gz myapp-mongo-dump

Archive the uploaded files

My app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar zcvf uploads.tar.gz uploads

Now I have two archived files: myapp-mongo-dump.tar.gz and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp myapp-mongo-dump.tar.gz uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I unpack both archives:

tar zxvf uploads.tar.gz
tar zxvf myapp-mongo-dump.tar.gz

Then I restore the data to the data-only container through the running Mongo instance (assumed to be called myapp_mongo_1):

docker run --rm --link myapp_mongo_1:mongo -v $(pwd)/myapp-mongo-dump:/dump mongo bash -c 'mongorestore --host $MONGO_PORT_27017_TCP_ADDR'

With that, all data is restored. I didn’t even have to restart my containers to begin using the app on its new server.
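If you want to double-check the restore, you can list the collections through the running container; the database name myapp below is a placeholder for your own:

docker exec myapp_mongo_1 mongo myapp --eval 'db.getCollectionNames()'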


MongoDB backup and restore between Dockerized Node apps

My bargain-basement cloud service provider, CloudAtCost, recently lost one of my servers and all the data on it. This loss was exacerbated by the fact that I didn’t back up my MongoDB data somewhere else. Now I’m working out the exact process after the fact so that I don’t suffer this loss again (it’s happened twice now with CloudAtCost, but hey, the price is right).

The following is a brute-force backup and recovery process. I suspect this approach has its weaknesses in that it may depend upon version consistency between the MongoDB containers. This is not ideal for someone like myself who always installs the latest version when creating new containers. I aim to develop a more flexible process soon.

Context

I have a server running Ubuntu 16.04, which, in turn, is serving up a Dockerized Express application (Nginx, MongoDB, and the app itself). The MongoDB data is backed up in a data-only container. To complicate matters, the application allows file uploads, which are being stored on the file system in the project’s root.

I need to dump the data from the data-only container, bundle the uploaded files, and restore it all on another server. Here’s how I did it…

Dump the data

I use docker-compose to manage my containers. To obtain the name of the MongoDB data-only container, I simply run docker ps -a. Assuming the name of the container is myapp_mongo_data…

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data/db

This will put a file called backup.tar in the app’s root directory. It may belong to the root user. If so, run sudo chown user:user backup.tar.
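Before shipping the archive anywhere, it doesn’t hurt to peek inside and confirm the MongoDB data files are actually there:

tar tvf backup.tar | head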

Archive the uploaded files

The app allows file uploads, so the database is pointing to a bunch of files stored on the file system. My files are contained in a directory called uploads/.

tar -zcvf uploads.tar.gz uploads

Now I have two archived files: backup.tar and uploads.tar.gz.

Transfer backup to the new server

Here I use scp:

scp backup.tar uploads.tar.gz user@example.com:~

Restore the files

In the previous command, for simplicity, I transferred the files into the user’s home folder. These will need to be moved into the root of the project folder on the new server. Once there, assuming the same app has been set up and deployed, I first unpack the uploaded files:

tar -zxvf uploads.tar.gz

Then I restore the data to the data container:

docker run --volumes-from myapp_mongo_data -v $(pwd):/backup busybox tar xvf /backup/backup.tar

Remove and restart containers

The project containers don’t need to be running when you restore the data in the previous step. If they are running, however, once the data is restored, remove the running containers and start again with docker-compose:

docker-compose stop
docker-compose rm
docker-compose up -d

I’m sure there is a reasonable explanation as to why removing the containers is necessary, but I don’t know what it is yet. In any case, removing the containers isn’t harmful because all the data is on the data-only container anyway.

Warning

As per the introduction, this process probably depends on version consistency between MongoDB containers.


Install node and npm with no sudo

I’ve been using node and his good buddy npm for a couple of years now. Up until three days ago, I would happily prefix sudo whenever an npm package gave me an EACCES error. I’ve always known this is bad practice, but had never encountered an issue. This all changed when I attempted something fairly mundane: deploying a Hexo blog with Capistrano.

For better or worse, my Capistrano deployment runs npm install as part of its routine. I tried mucking around with sudo and visudo, but to no avail. The blog would simply not deploy because of the restrictive sudo npm install step. At some point it finally occurred to me that npm shouldn’t need sudo anyway, so I’d best fix the problem properly. Instead I did a quick hack on my production server:

The quick hack

Just kidding, this is actually legit, especially given that I’m the only one able to muck around in production. The following is adapted from here. Basically, it allows you to use the -g option without sudo, because all global npm packages get stored under your home directory:

mkdir ~/npm-global
npm config set prefix '~/npm-global'
vim ~/.profile

Append the following to the ~/.profile just opened (or created):

export PATH=~/npm-global/bin:$PATH

Now reload the profile so the new PATH takes effect:

source ~/.profile

Install something globally to make sure it works:

npm install -g jshint
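If the prefix change took effect, the package lands under the new directory and its binary resolves from your PATH:

npm config get prefix   # should point at the npm-global directory created above
which jshint            # should resolve to ~/npm-global/bin/jshint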

A nicer way, arguably

This is more appropriate for a production environment with multiple users. It is adapted from here.

Create a new group:

sudo groupadd nodegrp

Add the current user to the group (using logname):

sudo usermod -a -G nodegrp `logname`

Activate the new group membership:

newgrp nodegrp

You can check to see that the user has been added by running:

groups

Change group ownership on all the critical components:

sudo chgrp -R nodegrp /usr/lib/node_modules/
sudo chgrp nodegrp /usr/bin/node
sudo chgrp nodegrp /usr/bin/npm
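You can confirm the group change took effect (and spot any missing group-write permission) with a quick listing:

ls -ld /usr/lib/node_modules /usr/bin/node /usr/bin/npm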

On existing installs

After fixing the issue, there’s a real good chance that root still owns some of the packages installed in the user’s home directory. That’s easy to fix:

sudo chown -R $(whoami):$(whoami) ~/.npm