Wednesday, May 20, 2015

Series: How to create your own website based on Docker (Part 8 - Creating the ioJS REST API Docker container)

It's about time to add some application logic to our project

This is part 8 of the series: How to create your own website based on Docker.

In the last part of the series, we created our "dockerized" mongodb noSQL database server to read our persisted entries from. Based on our architecture, we decided that only the REST API (which will be based on ioJS) is allowed to talk to our database container.

So now it's about time to create the actual REST API that can be called via our nginx reverse proxy (using api.project-webdev.com) to read a person object from our database. We'll also create a very simple way to create a Person as well as list all available persons. As soon as you've understood how things work, you'll be able to implement more features of the REST API yourself - so consider this a pretty easy example.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)


Technologies to be used

Our REST API will use the following technologies:
  • ioJS as JavaScript application server
  • hapiJS as REST framework
  • mongoose as mongoDB driver, to connect to our database container
  • pm2 to run our nodejs application (and restart it if it crashes for some reason)

First things first - creating the ioJS image

Creating the ioJS image is basically the same every time. Let's create a new directory called /opt/docker/projectwebdev-api/ and within this new directory we'll create another directory called app and our Dockerfile:
# mkdir -p /opt/docker/projectwebdev-api/app/
# > /opt/docker/projectwebdev-api/Dockerfile
The new Dockerfile is based on the official ioJS Dockerfile, but I've added some application/image specific information, so that we can implement our ioJS application:

  • Added our Ubuntu base image (we're not using Debian wheezy like in the official image)
  • Installed the latest NPM, PM2 and gulp (for later; we're not using gulp for this little demo)
  • Added our working directories
  • Added some clean up code
  • Added PM2 as CMD (we'll talk about that soon)

So just create your /opt/docker/projectwebdev-api/Dockerfile with the following content:
# Pull base image.
FROM docker_ubuntubase

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update
RUN apt-get update --fix-missing
RUN curl -sL https://deb.nodesource.com/setup_iojs_2.x | bash -

RUN apt-get install -y iojs gcc make build-essential openssl node-gyp
RUN npm install -g npm@latest
RUN npm install -g gulp
RUN npm install -g pm2@latest
RUN apt-get update --fix-missing

RUN mkdir -p /var/log/pm2
RUN mkdir -p /var/www/html

# Cleanup
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN apt-get autoremove -y
RUN ln -s /usr/bin/nodejs /usr/local/bin/node

WORKDIR /var/www/html

CMD ["pm2", "start", "index.js","--name","projectwebdevapi","--log","/var/log/pm2/pm2.log","--watch","--no-daemon"]
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/projectwebdev-api/Dockerfile

Adding our REST API code to our container

Now let's create a simple application that listens to a simple GET request and returns an entry from our mongoDB container. Just to prove that it works, I'll create a REST API that returns a simple Person object that contains an id as well as a first and a last name.

In order to get this object later, I'd have to call http://api.project-webdev.com/person/{id} and it will return that object in JSON format. We'll also add a route to return all persons, as well as a route that allows us to add a new person - but we'll cover that in a second.

Since PM2 will only start (and not build) our ioJS application, we have to make sure that NPM (which comes packaged with ioJS/nodeJS) is installed on the server, so that we can build the project there.

So here is my simple flow:

  • I create the ioJS application on my local machine
  • Then I upload the files to my server
  • On my server I use npm install to fetch all dependencies
  • PM2 restarts the application automatically if it detects changes

In a later blog posting I will explain how you can setup a Git Push-To-Deploy mechanism which will take care of this automatically, but for this simple application we're doing it manually.

To get started, I'll create a new directory on my local machine (which has ioJS installed) and create a basic application:
# mkdir -p /home/mastixmc/development/projectwebdev-api && cd $_
# npm init
# npm install hapi mongoose --save
npm init will ask you a bunch of questions and will then write a package.json file for you, attempting to make reasonable guesses about what you want things to be set to. (Info: every nodeJS/ioJS application needs a package.json file as its descriptor.)

npm install hapi mongoose --save will download/install hapiJS and mongoose and will save the dependency in our package.json file, so our server can download it later as well.
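For reference, the resulting package.json could look roughly like this - the description and the version numbers are just placeholders on my side, yours will differ:
{
  "name": "projectwebdev-api",
  "version": "1.0.0",
  "description": "REST API for the projectwebdev website",
  "main": "index.js",
  "dependencies": {
    "hapi": "^8.4.0",
    "mongoose": "^4.0.2"
  }
}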

Creating the application

In our new directory, we'll create a file called index.js, with the following contents (we'll get into details afterwards):
var hapi = require('hapi');
var mongoose = require('mongoose');
// connect to database
mongoose.connect('mongodb://'+process.env.MONGODB_1_PORT_3333_TCP_ADDR+':'+process.env.MONGODB_1_PORT_3333_TCP_PORT+'/persons', function (error) {
    if (error) {
        console.log("Connecting to the database failed!");
        console.log(error);
    }
});
// Mongoose Schema definition
var PersonSchema = new mongoose.Schema({
    id: String,
    firstName: String,
    lastName: String
});
// Mongoose Model definition
var Person = mongoose.model('person', PersonSchema);
// Create a server with a host and port
var server = new hapi.Server();
server.connection({
    port: 3000
});
// Add the route to get a person by id.
server.route({
    method: 'GET',
    path:'/person/{id}',
    handler: PersonIdReplyHandler
});
// Add the route to get all persons.
server.route({
    method: 'GET',
    path:'/person',
    handler: PersonReplyHandler
});
// Add the route to add a new person.
server.route({
    method: 'POST',
    path:'/person',
    handler: PersonAddHandler
});
// Return all users in the database.
function PersonReplyHandler(request, reply){
    Person.find({}, function (err, docs) {
        reply(docs);
    });
}
// Return a certain user based on its id.
function PersonIdReplyHandler(request, reply){
    if (request.params.id) {
        Person.find({ id: request.params.id }, function (err, docs) {
            reply(docs);
        });
    }
}
// add new person to the database.
function PersonAddHandler(request, reply){
    var newPerson = new Person();
    newPerson.id = request.payload.id;
    newPerson.lastName = request.payload.lastname;
    newPerson.firstName = request.payload.firstname;
    newPerson.save(function (err) {
        if (!err) {
            reply(newPerson).created('/person/' + newPerson.id);    // HTTP 201
        } else {
            reply("ERROR SAVING NEW PERSON!!!"); // HTTP 403
        }
    });
}
// Start the server
server.start();
Disclaimer: Since this is just a little example, I hope you don't mind that I've put everything into one file - in a real project, I'd recommend structuring the project properly, so that it scales in larger deployments - but for now, we're fine. Also, I did not add any error checking whatsoever to this code, as it's just for demonstration purposes.

Now we can copy our index.js and package.json files to our server (/opt/docker/projectwebdev-api/app/), ssh into our server and run npm install within that directory. This will download all dependencies and create a node_modules folder for us. You'll now have a fully deployed ioJS application on your Docker host, which can be used by the projectwebdev-api container, since this directory is mounted into it.
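In concrete commands, that manual deployment could look roughly like this - a sketch assuming the user, SSH port and paths used throughout this series (adjust them to your own setup):
# scp -P 2233 index.js package.json johndoe@project-webdev.com:/opt/docker/projectwebdev-api/app/
# ssh johndoe@project-webdev.com -p 2233
# cd /opt/docker/projectwebdev-api/app/ && npm install
Thanks to the --watch flag we'll pass to PM2 below, the container will pick up changed files automatically after such an upload.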

Explaining the REST-API code

So what does this file do? Pretty simple:

HapiJS creates a server that will listen on port 3000 - I've also added the following routes including their handlers:

  • GET to /person, which will then call a PersonReplyHandler function, that uses Mongoose to fetch all persons stored in our database.
  • GET to /person/{id}, which will then call a PersonIdReplyHandler function, that uses Mongoose to fetch a person with a certain id from our database.
  • POST to /person, which will then call a PersonAddHandler function, that uses Mongoose to store a person in our database.

A Person consists of the following fields (we're using the Mongoose Schema here):
// Mongoose Schema definition
var PersonSchema = new mongoose.Schema({
    id: String,
    firstName: String,
    lastName: String
});
So the aforementioned handlers (e.g. PersonAddHandler) will make sure that this information is served or stored from/to the database.

Later, when you have set up your nginx reverse proxy, you'll be able to use the following requests to GET or POST persons. But we'll get into that in the last part!

Add a new person:
curl -X POST -H "Accept: application/json" -H "Content-Type: multipart/form-data" -F "id=999" -F "firstname=Sascha" -F "lastname=Sambale" http://api.project-webdev.com/person
Result:
[{
    "_id": "555c827959a2234601c5ddfa",
    "firstName": "Sascha",
    "lastName": "Sambale",
    "id": "999",
    "__v": 0
}]
Get all persons:
curl -X GET -H "Accept: application/json" http://api.project-webdev.com/person/
Result:
[{
    _id: "555c81f559a2234601c5ddf9",
    firstName: "John",
    lastName: "Doe",
    id: "15",
    __v: 0
}, {
    _id: "555c827959a2234601c5ddfa",
    firstName: "Sascha",
    lastName: "Sambale",
    id: "999",
    __v: 0
}]
Get a person with id 999:
curl -X GET -H "Accept: application/json" http://api.project-webdev.com/person/999
Result:
[{
    "_id": "555c827959a2234601c5ddfa",
    "firstName": "Sascha",
    "lastName": "Sambale",
    "id": "999",
    "__v": 0
}]
You'll be able to do that as soon as you've reached the end of this series! ;)

Explaining the database code

I guess the most important part of the database code is how we establish the connection to our mongodb container.
// connect to database
mongoose.connect('mongodb://'+process.env.MONGODB_1_PORT_3333_TCP_ADDR+':'+process.env.MONGODB_1_PORT_3333_TCP_PORT+'/persons', function (error) {
    if (error) {
        console.log("Connecting to the database failed!");
        console.log(error);
    }
});
Since we're using container links, we cannot know which IP address our mongodb container will get when it's started. So we have to use the environment variables that Docker provides us.

Docker uses a prefix of the form <name>_PORT_<port>_<protocol> (e.g. WEBDB_PORT_8080_TCP) to define three distinct environment variables:

  • The prefix_ADDR variable contains the IP Address from the URL, for example WEBDB_PORT_8080_TCP_ADDR=172.17.0.82.
  • The prefix_PORT variable contains just the port number from the URL, for example WEBDB_PORT_8080_TCP_PORT=8080.
  • The prefix_PROTO variable contains just the protocol from the URL, for example WEBDB_PORT_8080_TCP_PROTO=tcp.

If the container exposes multiple ports, an environment variable set is defined for each one. This means, for example, that if a container exposes 4 ports, Docker creates 12 environment variables - 3 for each port.

In our case the environment variables look like this:

  • MONGODB_1_PORT_3333_TCP_ADDR
  • MONGODB_1_PORT_3333_TCP_PORT
  • MONGODB_1_PORT_3333_TCP_PROTO

Where MONGODB is the service name and 3333 is the port number we've specified in our docker-compose.yml file:
mongodb:
    build: ./mongodb
    expose:
      - "3333"

    volumes:
        - ./logs/:/var/log/mongodb/
        - ./mongodb/db:/data/db
Docker Compose also creates environment variables with the name DOCKER_MONGODB, which we are not going to use, as we might switch from Docker Compose to something else in the future.

So Docker provides the environment variables and ioJS uses the process.env object to access them. We can therefore create a mongodb connection URL that looks like this:
mongodb://172.17.0.82:3333/persons
... which will be the link to our Docker container that runs mongodb on port 3333... Connection established!
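A small side note: if you ever want to run the API outside of Docker during development (where these link variables don't exist), you could add fallbacks to the lookup. This is just an optional sketch, assuming a hypothetical local mongodb on its default port 27017:
var mongoose = require('mongoose');
// use the Docker link variables if present, otherwise fall back to a local mongodb instance
var dbHost = process.env.MONGODB_1_PORT_3333_TCP_ADDR || '127.0.0.1';
var dbPort = process.env.MONGODB_1_PORT_3333_TCP_PORT || 27017;
var dbUrl = 'mongodb://' + dbHost + ':' + dbPort + '/persons';
console.log('Connecting to ' + dbUrl);
mongoose.connect(dbUrl);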

Running ioJS in production mode

As mentioned before, in order to start our REST API application (and automatically restart it when we update the application files or it crashes for some reason), we're using PM2, which is configured via command line parameters in our CMD instruction (see our Dockerfile):
CMD ["pm2", "start", "index.js","--name","projectwebdevapi","--log","/var/log/pm2/pm2.log","--watch","--no-daemon"]
So what does this command do?

  • "pm2", "start", "index.js" starts our application from within our WORKDIR (/var/www/html/).
  • "--name","projectwebdevapi" names our application projectwebdevapi.
  • "--log","/var/log/pm2/pm2-project.log" logs everything to /var/log/pm2/pm2-project.log (and since this is a mounted directory it will be stored on our docker host in /opt/docker/logs - see our docker-compose.yml file).
  • "--watch" watches our WORKDIR (/var/www/html/) for changes and will restart the application if something has changed. So you'll be able to update the application on your docker host and the changes will be reflected on the live site automatically.
  • "--no-daemon" runs PM2 in the foreground so the container does not exit and keeps running.

That's pretty much it - now, whenever you start your container later (in our case Docker Compose will start it), PM2 will start your application and will make sure that it keeps running.
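By the way, PM2 can also read the same options from a JSON process file instead of CLI flags. A rough sketch of what that could look like - the field names are taken from the PM2 documentation, so double-check them against the PM2 version you've installed:
{
  "apps": [{
    "name": "projectwebdevapi",
    "script": "index.js",
    "watch": true,
    "error_file": "/var/log/pm2/pm2.err.log",
    "out_file": "/var/log/pm2/pm2.out.log"
  }]
}
The CMD instruction would then shrink to something like CMD ["pm2", "start", "process.json", "--no-daemon"] - but for this series we'll stick with the command line parameters shown above.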

In the next part we'll create the frontend application that calls our new REST-API!

Friday, May 15, 2015

Series: How to create your own website based on Docker (Part 7 - Creating the mongodb Docker container)


Creating our mongodb database image

This is part 7 of the series: How to create your own website based on Docker.

It's about time to create our first image/container that is part of the real application stack. This container acts as the persistence layer for the REST API (which we will create in the next part of this series), so the only component that talks to the database is the ioJS REST API container. In this part of the series, we'll have a look at how you can create your own mongodb container based on the official mongodb image.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

Let's get started

Let's create a new directory called /opt/docker/mongodb/ and within this new directory we'll create two folders and one file:
# mkdir -p /opt/docker/mongodb/config/
# mkdir /opt/docker/mongodb/db/
# > /opt/docker/mongodb/Dockerfile
Since I don't want to reinvent the wheel, I'll have a look at the official mongodb Docker image, and we're basically using the same mongodb 3.0 Dockerfile for our design. Since we want to run this mongodb database on our own Ubuntu base container, we need to make some changes to the official mongodb Docker image.

The official mongodb Dockerfile uses Debian wheezy as base image, which is not what we want:
FROM debian:wheezy
We are going to use our own Ubuntu base image for the mongodb image, and since we use Docker Compose, we must specify the correct base image name, which is a concatenation of "docker_" and the service name that we have specified in our docker-compose.yml - so in our case that would be "docker_ubuntubase". So we're changing the aforementioned line to use our base image:
# Pull base image.
FROM docker_ubuntubase 
The original Dockerfile only allows us to mount /data/db as a volume, so we're extending it to also allow the mongodb log directory:

Replace the following line:
VOLUME /data/db
With this line:
VOLUME ["/data/db","/var/log/mongodb/"]
I'd like to have my configurations in a subfolder called "config", so we need to adjust another line:

Replace the following lines:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
With these lines:
COPY ./config/docker-entrypoint.sh /tmp/entrypoint.sh
RUN ["chmod", "+x", "/tmp/entrypoint.sh"]
ENTRYPOINT ["/tmp/entrypoint.sh"]
These lines copy a script called ./config/docker-entrypoint.sh to the /tmp/ folder in the container, make it executable and run it once the container has started. You can find the docker-entrypoint.sh file in the official mongodb Docker repository on GitHub. Just copy that file into the config directory, which you have to create if you haven't done so already.
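If you're curious what that entrypoint script actually does: in essence it just makes sure that mongod runs as the unprivileged mongodb user. Heavily simplified, it boils down to something like the following sketch (use the original file from the repository, this is only meant to illustrate the idea):
#!/bin/bash
set -e

if [ "$1" = 'mongod' ]; then
    # make sure the data directory belongs to the mongodb user
    chown -R mongodb /data/db
    # drop root privileges and start mongod as the mongodb user
    exec gosu mongodb "$@"
fi

exec "$@"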

Let's create our own mongodb configuration file to set some parameters.

To do so, create a file called /opt/docker/mongodb/config/mongodb.conf and add the following lines (important: YAML does not accept tabs; use spaces instead!):
systemLog:
   destination: file
   path: "/var/log/mongodb/mongodb-projectwebdev.log"
   logAppend: true
storage:
   journal:
      enabled: true
net:
   port: 3333
   http:
       enabled: false
       JSONPEnabled: false
       RESTInterfaceEnabled: false
Now add the following lines to your Dockerfile to copy our new custom config file to our image:
RUN mkdir -p /var/log/mongodb && chown -R mongodb:mongodb /var/log/mongodb
COPY ./config/mongodb.conf /etc/mongod.conf
Since we want to load our custom config now, we need to change the way we start mongodb, so we change the following line from
CMD ["mongod"]
to
CMD ["mongod", "-f", "/etc/mongod.conf"]
Our folder structure must look like this now:
+-- /opt/docker/mongodb
|   +-- config
|   |   +-- docker-entrypoint.sh
|   |   +-- mongodb.conf
|   +-- db
|   +-- Dockerfile
Another thing we can remove is the EXPOSE instruction, since we already specified that in our docker-compose.yml.

So the complete Dockerfile will look like this now:
# Pull base image.
FROM docker_ubuntubase

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mongodb && useradd -r -g mongodb mongodb

RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates curl \
numactl \
&& rm -rf /var/lib/apt/lists/*

# grab gosu for easy step-down from root
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu

# gpg: key 7F0CEB10: public key "Richard Kreuter <richard@10gen.com>" imported
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 492EAFE8CD016A07919F1D2B9ECBEC467F0CEB10

ENV MONGO_MAJOR 3.0
ENV MONGO_VERSION 3.0.3

RUN echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/$MONGO_MAJOR main" > /etc/apt/sources.list.d/mongodb-org.list

RUN set -x \
&& apt-get update \
&& apt-get install -y mongodb-org=$MONGO_VERSION \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mongodb \
&& mv /etc/mongod.conf /etc/mongod.conf.orig

RUN mkdir -p /data/db && chown -R mongodb:mongodb /data/db
RUN mkdir -p /var/log/mongodb && chown -R mongodb:mongodb /var/log/mongodb

VOLUME ["/data/db","/var/log/mongodb/"]

COPY ./config/docker-entrypoint.sh /tmp/entrypoint.sh
COPY ./config/mongodb.conf /etc/mongod.conf
RUN ["chmod", "+x", "/tmp/entrypoint.sh"]

ENTRYPOINT ["/tmp/entrypoint.sh"]

CMD ["mongod", "-f", "/etc/mongod.conf"]
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/mongodb/Dockerfile

This is pretty much it. That's all we need to create our mongodb database container, which will run on port 3333 - but will only be accessible from the REST API, since we linked it to the ioJS REST API container only - see our docker-compose.yml file again:
projectwebdevapi:
    build: ./projectwebdev-api
    expose:
        - "3000"
    links:
        - mongodb:db

    volumes:
        - ./logs/:/var/log/supervisor/
        - ./projectwebdev-api/app:/var/www/html
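By the way: once the Compose file and the Ubuntu base image from the other parts exist, you can sanity-check this database container on its own before wiring up the API. A rough sketch (docker_mongodb_1 is the container name Docker Compose derives from our /opt/docker directory - check docker ps if yours differs):
# cd /opt/docker
# docker-compose build ubuntubase
# docker-compose build mongodb
# docker-compose up -d mongodb
# docker exec -it docker_mongodb_1 mongo --port 3333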
In the next chapter it's getting more interesting: Let's create our REST API container, that talks to the mongodb container!

Series: How to create your own website based on Docker (Part 6 - Creating Ubuntu Base Image)

Creating our Ubuntu Base Image

This is part 6 of the series: How to create your own website based on Docker.

Before we can run a Docker container, we need to create a Docker image. So why is that? Well, Docker images are read-only templates to create Docker containers and these images define everything Docker needs to know to create containers (you can run several containers based on the same image).

In this part of the series, we're going to create the base image for all our other images (as they should all run on Ubuntu). This image will be a basic Ubuntu image that brings all the software/tools/drivers that is going to be needed by all images depending on it - that's why I'm calling it the "base image".

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

Using the official Ubuntu Base Image

There are two cool things that Docker provides and we will make use of:
  • You can use the Docker Hub to get already created images (you can also upload your image to Docker Hub, but we're not doing it here).
  • Whatever we install/set up in our Ubuntu image can be reused in all other containers that are built on this base image - so these settings are shared among all containers based on that image.
Important notice: When running servers in production, I always recommend using official images as base images, since with images from unknown sources you can't be sure that some bad guy hasn't added malicious stuff - and let's be honest, it's much cooler to create the image yourself anyway! To get a list of available (official) images, just visit the Docker Hub Registry.

For our projectwebdev website we're using the official Ubuntu image and will add our custom software and settings. You will see that the steps we're doing now are pretty much the same every time, so I'll write them down in a more generic way first, before we get into details:
  1. Create a directory within your Docker directory (in our case that would be /opt/docker/ubuntu-base).
  2. Create a Dockerfile (the template that describes the image) within that new directory (so the file would be /opt/docker/ubuntu-base/Dockerfile).
  3. Create additional directories that contain either scripts or config files that should be added to the image/container.
  4. Create additional directories to act as volume mounts for the containers.
As mentioned before, we're going to use the official image and will then install the software that will be needed by all containers that are based on that image. So let's create our Dockerfile (/opt/docker/ubuntu-base/Dockerfile) by adding the image to pull from the Docker Hub.
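In concrete commands, steps 1 and 2 boil down to the same pattern we use for every other image in this series:
# mkdir -p /opt/docker/ubuntu-base/
# > /opt/docker/ubuntu-base/Dockerfile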

Get the base image

If we want to use the latest available Ubuntu distribution, we can type something like this:
# Pull base image.
FROM ubuntu:latest
In our case that would be Ubuntu 15.04 (Vivid Vervet), but I always recommend specifying the version number explicitly, so we don't get surprised by a new Ubuntu version as soon as we rebuild our images after October, when Ubuntu 15.10 (Wily Werewolf) is released.
# Pull base image.
FROM ubuntu:15.04
As you can see here, there are more versions of Ubuntu available, so you could also pull Ubuntu 14.04 LTS (Trusty Tahr) from the hub.

Install all needed packages

Since we're working with several technologies later, it's good to have a solid software base under the hood. That's why I've installed the following packages in my base image, just to make sure they are available:
  • build-essential
  • curl
  • git
  • man
  • software-properties-common
  • unzip
  • vim
  • wget
I always run apt-get update and apt-get upgrade every time I build my images, to make sure that I'm on the latest packages whenever I rebuild everything - I also clean up some space by deleting the package lists afterwards.

So the second part of our Dockerfile looks like this:
# Install the software/packages needed for all other containers
RUN \
  sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
  apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y build-essential software-properties-common curl git man unzip vim wget && \
  rm -rf /var/lib/apt/lists/*

Setup some variables and run

The next step is to set an environment variable (ENV) and a WORKDIR (which sets the working directory for the instructions that follow, like RUN and CMD).
# Set environment variables.
ENV HOME /root 
# Define working directory.
WORKDIR /root
Now we're telling our image what command to run when the container is started - in our case that would be bash:
# Define default command.
CMD ["bash"]
That's it - that's our Dockerfile - so here it is completely:
# Pull base image.
FROM ubuntu:15.04
# Install the software needed for all other containers
RUN \
  sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
  apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y build-essential software-properties-common curl git man unzip vim wget && \
  rm -rf /var/lib/apt/lists/*
# Set environment variables.
ENV HOME /root
# Define working directory.
WORKDIR /root
# Define default command.
CMD ["bash"]
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/ubuntu-base/Dockerfile

Testing the Ubuntu Base Image/Container without Docker Compose

You can basically test your new container without using Docker Compose. First, build your new image like this:
# cd /opt/docker/ubuntu-base
# docker build -t projectwebdev/ubuntu-base .
Result:
Sending build context to Docker daemon 3.584 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:15.04
15.04: Pulling from ubuntu
b68f8c8d2140: Pull complete
1d57666667e5: Pull complete
a216ec781532: Pull complete
bd94ae587483: Already exists
ubuntu:15.04: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:6994d0f1e915ff22a9b77433c19ce619eda61e5a431a7ba89230327b2f289a95
Status: Downloaded newer image for ubuntu:15.04
 ---> bd94ae587483
Step 1 : RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list &&   apt-get update &&   apt-get -y upgrade &&   apt-get install -y build-essential &&   apt-get install -y software-properties-common &&   apt-get install -y byobu curl git htop man unzip vim wget &&   rm -rf /var/lib/apt/lists/*
 ---> Running in 2d49adfb4fce
Ign http://archive.ubuntu.com vivid InRelease
Ign http://archive.ubuntu.com vivid-updates InRelease
[...]
Step 2 : ENV HOME /root
 ---> Running in 1923ed0e21a0
 ---> a5b574f11a0f
Removing intermediate container 1923ed0e21a0
Step 3 : WORKDIR /root
 ---> Running in 4bfbeeea1733
 ---> dff8fc1d0b06
Removing intermediate container 4bfbeeea1733
Step 4 : CMD bash
 ---> Running in 6b086dd626ac
 ---> cfad3f94c992
Removing intermediate container 6b086dd626ac
Successfully built cfad3f94c992
Now we can run our container and connect to bash (remember that we have set bash as CMD in our Dockerfile):
# docker run --name ubuntu-base-test -t -i projectwebdev/ubuntu-base
Here's the result - we've been able to trigger commands within our new Ubuntu container - I've also called env to show you the environment variable HOME that we have set:
# docker run --name ubuntu-base-test -t -i projectwebdev/ubuntu-base
root@06913990a3c8:~# env
HOSTNAME=06913990a3c8
[...]
HOME=/root
[...]

Cleanup image file and docker container

Since this was just a basic test, we'll have to remove the image and the container to save some space.
# docker rm -f ubuntu-base-test
# docker rmi -f projectwebdev/ubuntu-base
That's it! Our Ubuntu base image is up and running and can therefore be started via Docker Compose. Let's create the mongodb container in the next part!

Thursday, May 14, 2015

Series: How to create your own website based on Docker (Part 5 - Creating our Docker Compose file)

Let's implement our docker container architecture

This is part 5 of the series: How to create your own website based on Docker.

In the last part of the series, we have planned and created our Docker container architecture. So now it's about time to turn this architecture into a real scenario - and that's what we need Docker Compose for.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

What is Docker Compose?

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Compose is great for development environments, staging servers, and CI. We don't recommend that you use it in production yet. (Source: https://docs.docker.com/compose)

There are three steps involved when using docker compose:

  1. We need image files for each container (we'll start with that in the next chapter)
  2. Then we need to create a docker-compose.yml file that tells Docker Compose which containers must be started, including all options (like volumes, links, ports, ...)
  3. Finally, we need to run docker-compose up to start up our container architecture (the configuration from the YAML file)
Since we have just created our architecture, we're starting with step 2 now and will create the image files later. This will show you how we can create a docker compose yaml file based on our architecture.

Implementing our container design

Let's recap - this is what our architecture looks like:


We're going to create a website called projectwebdev, so the following container names are based on the name of the site. In the diagram above we can see that we have the following containers and options:
  1. nginx reverse proxy
    • links:
      • nginx website
      • ioJS REST API
    • volumes:
      • log files (/opt/docker/logs)
  2. nginx web site
    • links:
      • none
    • volumes:
      • web site files (/opt/docker/projectwebdev/html)
      • log files (/opt/docker/logs)
  3. ioJS REST API
    • links:
      • mongoDB database
    • volumes:
      • ioJS application files (/opt/docker/projectwebdev-api/app)
      • log files (/opt/docker/logs)
  4. mongoDB database
    • links:
      • none
    • volumes:
      • mongoDB files (/opt/docker/mongodb/db)
      • log files (/opt/docker/logs)

The Docker directory structure on my VM

I will use the following folder structure on my Ubuntu VM to host all Docker images/containers:
/opt/docker/
├── logs
├── mongodb
├── nginx-reverse-proxy
├── projectwebdev
├── projectwebdev-api
├── ubuntu-base
└── docker-compose.yml
So the docker-compose.yml file will be in the root directory of all Docker image directories (which we will dive into later). With this setup, I can later just copy the /opt/docker/ folder onto another server and then just run docker-compose up to get everything up and running again.
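Such a move could look roughly like this - just a sketch, assuming rsync is available on both machines and using a hypothetical target host:
# rsync -az /opt/docker/ johndoe@newserver.example.com:/opt/docker/
# ssh johndoe@newserver.example.com
# cd /opt/docker && docker-compose up -d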

You can also see that this directory structure already contains a logs/ directory, which will be the collection point for all container logs we've been talking about in the last part of this series.

Create the Docker Compose file

If you've never heard of YAML before, let me just tell you what it is. YAML is a recursive acronym for "YAML Ain't Markup Language". Early in its development, YAML was said to mean "Yet Another Markup Language", but it was then reinterpreted (backronyming the original acronym) to distinguish its purpose as data-oriented, rather than document markup. YAML’s purpose is to have a human friendly data serialization standard for all programming languages. (see: http://yaml.org)

In our YAML file we will tell Docker Compose how our containers must be started, which volumes should be mounted, which containers should be linked together and what ports should be exposed. So it's basically everything from that list above.

Let's get into details - this is what our docker-compose.yml file looks like:
ubuntubase:
    build: ./ubuntu-base
projectwebdev:
    build: ./projectwebdev
    expose:
        - "8081"
    volumes:
        - ./logs/:/var/log/nginx/
        - ./projectwebdev/html:/var/www/html:ro
projectwebdevapi:
    build: ./projectwebdev-api
    expose:
        - "3000"
    links:
        - mongodb:db
    volumes:
        - ./logs/:/var/log/pm2/
        - ./projectwebdev-api/app:/var/www/html
mongodb:
    build: ./mongodb
    expose:
        - "3333"
    volumes:
        - ./logs/:/var/log/mongodb/
        - ./mongodb/db:/data/db
nginxreverseproxy:
    build: ./nginx-reverse-proxy
    expose:
        - "80"
        - "443"
    links:
        - projectwebdev:blog
        - projectwebdevapi:blogapi
    ports:
        - "80:80"
    volumes:
        - ./logs/:/var/log/nginx/
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/docker-compose.yml

Let's pick the nginx reverse proxy to explain our settings. Besides all other Docker Compose YAML possibilities, we'll only use build, expose, links, ports and volumes.

build: This is the path to the directory containing the Dockerfile for the image. We have supplied that value as a relative path, which means that it is interpreted relative to the location of the YAML file itself. This directory is also the build context that is sent to the Docker daemon. All files belonging to the nginx reverse proxy reside in the folder ./nginx-reverse-proxy, so we tell Docker Compose to build the image based on the Dockerfile /opt/docker/nginx-reverse-proxy/Dockerfile, which we're going to create later.

expose: This section specifies the ports to be exposed without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified - see the architecture diagram above, these exposed ports are the ports with the purple background color.

links: Here we specify all links to containers in other services. You can either specify both the service name and the link alias (SERVICE:ALIAS), or just the service name (which will also be used for the alias). In our design we'll use aliases, so we'll tell Docker that whenever we want to talk to our containers we want them to use blog (for the projectwebdev website) and blogapi (for our ioJS REST API).

ports: The ports we want to expose to the Docker host - see the yellow port in the architecture diagram above. You can either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen). Since we want to make sure that it's always the same port (in our case it's port 80), we specify the HOST and the CONTAINER port explicitly (which in both cases would be 80). If the nginx reverse proxy in your container uses port 8000 and you want that port to be accessible from outside via port 80, you'd specify it like this: "80:8000". Important: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML will parse numbers in the format xx:yy as sexagesimal (base 60). For this reason, Docker recommends always explicitly specifying your port mappings as strings.
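So, sticking to that example, always quote the mapping so that YAML treats it as a string:
ports:
    - "80:8000"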

volumes: This section contains all mount paths as volumes, optionally specifying a path on the host machine (HOST:CONTAINER), or an access mode (HOST:CONTAINER:ro). The latter one (:ro = readonly) is used in our projectwebdev container, since we don't want the container to change the files for any reason. Only our host may provide the markup that is needed for the website.

We have now implemented our architecture with Docker Compose! Let's create each image and container so we can fire up docker compose. We'll start with our Ubuntu Base Image!
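For reference, once all the images from the following parts exist, step 3 from above is just a matter of running a few commands from /opt/docker/ (docker-compose ps shows the state of the containers, docker-compose logs their combined output):
# cd /opt/docker
# docker-compose up -d
# docker-compose ps
# docker-compose logs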

Tuesday, May 12, 2015

Series: How to create your own website based on Docker (Part 4 - Planning Docker container architecture)

Let's design our docker container architecture

This is part 4 of the series: How to create your own website based on Docker.

Docker and Docker Compose are now up and running. So it's about time to let them all play together.

Before we start planning our container architecture, we need to make sure that we understand what we're trying to achieve.
  1. We want to be able to port our apps to any other platform as easily as possible
  2. We want our applications to be as separated as possible (every container should have one purpose)
  3. We want to create more instances of an application container if needed
  4. We don't want crashed applications to crash other applications
So based on this list we need to figure out what components will be needed for the site.

Let's define all components to be "dockerized"

Component 1: nginx reverse proxy

I usually start with nginx as reverse proxy in front of all other services. This allows me to have a single entry point for all requests and to distribute them internally to all containers which should be accessible from the web. This reverse proxy will listen on port 80 and will redirect all requests based on the context root, the subdomain and/or the hostname.

Here's an example:
  • www.project-webdev.com:80 => redirects to my blog which listens on port 8081 internally (the projectwebdev blog will be hosted on another nginx machine).
  • api.project-webdev.com:80 => redirects to my REST API which listens on port 3000 internally (the API will be an ioJS/nodeJS application).
  • Also possible: www.project-webdev.com:80/api => redirects to my REST API which listens on port 3000 internally (the API will be an ioJS/nodeJS application).

As you can see, all requests will go through the nginx reverse proxy and will be redirected internally to the appropriate service.

What's important to mention: The internal ports (e.g. 8081, 3000, ...) should not be exposed to the public, so it should not be possible to access the blog directly (like www.project-webdev.com:8081).

This would be our first component, right? Wait! What should our new nginx reverse proxy run on?

We need an operating system... so basically our first component would actually be a container that provides the operating system. For that I'm going to use Ubuntu 15.04.

So right now our current architecture would look like this:
nginx reverse proxy docker container

Component 2: nginx web server for our website

The next thing we'll need for our website is the markup for it. So all we need now is another nginx service, but this time it will not act as a reverse proxy, but as a real web server hosting our files. Since we don't want any port conflicts, my basic rule is that 4-digit ports are never exposed to the public (only ports 80 and 443 (SSL) may be accessed from outside).

Since Docker works with container links, we need to add a link to our reverse proxy, so that it can communicate internally with our blog application (nginx web site), which listens on port 8081.


As mentioned before, port 8081 cannot be accessed from outside - it's therefore painted in purple.

You can also see that we've mounted a directory on the docker host into our blog's docker container. We're doing this, because we want to be able to change the website from outside later, without restarting the container.

Technically it would look like the following - I'll go into details later:
On the Docker host (our provider's Ubuntu server), we'll create a directory called /opt/docker/projectwebdev/html/, which will be mounted as /var/www/html/ in the container (the directory nginx loads the HTML, CSS and JS files from later). So whenever nginx (the one in our nginx website container) receives a request from a visitor, it will load the files from our real server (from /opt/docker/projectwebdev/html/) and serve them to the visitor - I think you've got it, right? It's not that hard.
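Just to illustrate the mechanics of such a mount outside of Docker Compose: with a plain docker run it would look roughly like this - the image name is only a placeholder, and Compose will handle all of this for us later:
# docker run -d --name blog-test -v /opt/docker/projectwebdev/html:/var/www/html:ro some-nginx-image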


Component 3: ioJS REST API container

Our website should fetch information from a REST API. Therefore we will need an ioJS application that will provide all data for the website asynchronously. The website will use a call to http://api.project-webdev.com to fetch the contents or any other information needed.

Since this is a new URL, we need to link this container to our nginx as well, so that we can build our redirect to that container internally.

Component 4: mongoDB database container

A REST API does not make sense without any data persistence in the back. Therefore we need to add our mongoDB to that architecture as well. This instance listens on port 3333 and should only be accessible via REST API (and therefore implicitly via our nginx reverse proxy), which is why we need to add a link to our REST API so that it can access the data in the mongoDB.


Additional component: Logs

When running this application stack in the wild later, it's very important to be able to analyse the logs (e.g. using the ELK stack). Since we have several containers, it does not make sense to collect the logs from each instance separately. So we're creating another volume mount which acts as a central storage for all log files of all used containers. This directory can later be used by log file analysis tools, so you can analyse the hell out of your logs. :)

Conclusion

We have now several containers that act as "platform" for a certain purpose. They are all completely encapsulated and are sharing their resources (thanks to Docker).
  1. nginx reverse proxy
    • links:
      • nginx website
      • ioJS REST API
    • volumes:
      • log files (/opt/docker/logs)
  2. nginx web site
    • links:
      • none
    • volumes:
      • web site files (/opt/docker/projectwebdev/html)
      • log files (/opt/docker/logs)
  3. ioJS REST API
    • links:
      • mongoDB database
    • volumes:
      • ioJS application files (/opt/docker/projectwebdev-api/app)
      • log files (/opt/docker/logs)
  4. mongoDB database
    • links:
      • none
    • volumes:
      • mongoDB files (/opt/docker/mongodb/db)
      • log files (/opt/docker/logs)
That's it... that's our "dockerized" architecture for our projectwebdev website based on Docker containers. Let's create our Docker Compose file now... in the next part of this series.

Monday, May 11, 2015

Series: How to create your own website based on Docker (Part 3 - Installing Docker)


It's time to get really started...

This is part 3 of the series: How to create your own website based on Docker.

If you still don't know what Docker is or what it does, just read the official "What is docker" document!

In this part of the series, we're going to install Docker and Docker Compose - although Docker does not recommend using Docker Compose in production yet, we'll still give it a shot. If you've never heard of Docker Compose, let me tell you in a few words what it does.

Docker Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running. In short: You'll define a YAML file in which you'll specify how the Docker containers must be started and how they are linked together. Docker Compose will then start them for you and will make sure that they are started in the right order. It will also take care of naming these containers based on your settings. But we'll get into that in one of the next parts.

Let's install Docker & Docker Compose

As I've mentioned before, Docker requires a 64-bit installation regardless of your Ubuntu version. Additionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version or a newer maintained version are also acceptable.

Run the following command, which will download a shell script and will trigger the installation of Docker. When running this command, it will ask you for your password. Just provide the password that you have set for johndoe in part 2 of this series.
# wget -qO- https://get.docker.com/ | sh
To verify that Docker has been installed correctly, just type the following to see the options that you have when running Docker:
# sudo docker --help
Now let's install Docker Compose by typing the following:
# curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
Let's add johndoe to a new group called docker:
# sudo usermod -aG docker johndoe
Now you should be able to run docker containers without using sudo.
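A quick way to verify this is to start a small throw-away container as johndoe, without sudo - for example:
# docker run --rm hello-world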

In order to start our service automatically when we reboot the system, we need to add it to the default runlevel:

# sudo update-rc.d docker defaults
Finally, we also need to set up our UFW again, so that it can work with Docker, by allowing another port (remember, we've done that already with the SSH port):
# sudo ufw allow 2375/tcp
We also need to change some UFW settings to make Docker work correctly:
# sudo vi /etc/default/ufw
Set the DEFAULT_FORWARD_POLICY policy to:
DEFAULT_FORWARD_POLICY="ACCEPT"
Reload UFW to use the new setting.
# sudo ufw reload
Now check the status of the firewall again:
# sudo ufw status
It should now look similar to this:
Status: active
To               Action      From
--               ------      ----
2233/tcp         ALLOW       Anywhere
2375/tcp         ALLOW       Anywhere
Now Docker as well as Docker Compose are installed - let's think about a Docker architecture! :)

Series: How to create your own website based on Docker (Part 2 - Setting up Ubuntu for production use)

Setting up Ubuntu as docker host

Let's get started

This is part 2 of the series: How to create your own website based on Docker.

So why do I want to create my website completely based on Docker containers? Well, that's pretty easy, because a) Docker is cool and b) Docker allows me to move my containers to new providers quickly.

Let's imagine that a new cheap & fast cloud service appears on the web, or my hosting provider gives me the chance to move to newer/faster hardware - then it would be cool to quickly move my whole page (including all databases, apps and other services that might be running on my "old" machine) to the new appliance.

That's exactly what I need Docker for, because then my "dockerized" apps are completely portable and can run everywhere - on a cloud, on a virtual machine or on my local computer - Linux is pretty much mandatory, though.

If you still don't know what Docker is or what it does, just read the official "What is docker" document!

Setting up Ubuntu for production use

The big advantage of using Docker is that we don't have to spend that much time creating a production-ready server. This machine will basically only act as a platform for all my containers (from now on called "Docker host") and therefore only needs a minimal security configuration. You still have to take care of application security inside the Docker containers, of course - but we'll get to that later.

Although Docker is supported on Ubuntu from 13.10 & up, I'm using the latest greatest Ubuntu distribution: Ubuntu 15.04 (Vivid Vervet). It comes with the latest kernel and is therefore well-prepared for my Docker installation (remember Docker requires a 64-bit installation regardless of your Ubuntu version. Additionally, your kernel must be 3.10 at minimum).

I'm going to install everything on this (small) machine to test its performance and have the opportunity to move to a faster one once everything has been set up: Hetzner vServer VX11 (Dual Core, 2GB of RAM). Let's see how this small machine performs in the real world, running nginx, nodejs and mongodb containers.

Let's start setting up the Ubuntu machine - our new Docker host.

Add a new user to the system


First of all, you need to make sure that no one (besides you) can access your machine via SSH, so we need to create a new user. Let's say our new user is called "johndoe" - please use your own user name here - but for the sake of simplicity I'll keep using johndoe as our new user in this series.
# adduser johndoe
This command will ask you some questions (including your password) and will then create a new user for you.

Since you want to be able to use sudo later, you need to add root privileges to that user.

Just type the following command and an editor will appear that allows you to add the user to the sudoers file:
# visudo
Now find the following section: #user privilege specification and add the following line below the root entry:
root ALL=(ALL:ALL) ALL
johndoe ALL=(ALL:ALL) ALL
Now hit Control+O to save and Control+X to exit the nano editor.

While you're at it, you should also change the root password - just to make sure that no one else knows it. Just enter the following command and type your new password twice:
# passwd

Secure your SSH access correctly


Now that you have your user set up, you can log out from your machine (just type exit) and log in with your new user again. Although it's not strictly necessary, I recommend doing so, so that you can test whether your SSH login (with your new user) works as expected.

On a Unix based machine (e.g. Linux or OSX), you would connect to SSH like that:
# ssh johndoe@yourmachine.com
Now that we're logged in as johndoe, we will secure our SSH access. So type the following command to get into the ssh daemon configuration - I will be using vi from now on - but you can also use nano as editor:
# sudo vi /etc/ssh/sshd_config
Now we will change the standard SSH port, so that port sniffers will have a hard time guessing your port - for that, please change the following line (in vi, just press "i" to switch to insert mode):
# What ports, IPs and protocols we listen for
Port 2233
We've now changed the port from 22 to 2233. Please write down the port number and don't forget the value you have specified here - otherwise you won't be able to log in via SSH anymore! This will not prevent hackers from port scanning your server, but it will stop scripts that try to access your machine on the standard SSH port.

Now we'll tell SSH to not allow the root user to login - so we're changing the value of PermitRootLogin to "no"!
LoginGraceTime 120
PermitRootLogin no
StrictModes yes
If you're the only one accessing this machine you can also add the following line to the end of the file:
AllowUsers johndoe
Having changed that, only johndoe can access this machine via SSH now.

Now just hit ESC and enter ":wq" to write the changes to the file, then reload the SSH daemon:
# sudo service ssh restart
Ok... let's test our new secured SSH service. Just exit from the machine and log in via SSH again, but this time you'll have to specify the port:
 # ssh johndoe@yourmachine.com -p 2233
You can also try to login as root user, but that should not work as we've told the daemon not to allow the root user to login:
  # ssh root@yourmachine.com -p 2233

  Install & enable the Ubuntu firewall

Now that we have set up our SSH access, we should set up our firewall - this will make sure that you can only access the machine via port 2233 (your new SSH port):

Let's install the UFW - Uncomplicated Firewall by typing the following command:
# sudo apt-get install ufw
Once the firewall is installed, check its status:
# sudo ufw status
It will probably tell you that it's not enabled - that's ok for now. So we'll tell it to allow incoming requests to our new port:
# sudo ufw allow 2233/tcp
And we'll also tell it to deny all incoming and allow all outgoing requests by default:
# sudo ufw default deny incoming
# sudo ufw default allow outgoing
Now let's enable the firewall:
# sudo ufw enable
Now check the status of the firewall again:
# sudo ufw status
It should now look similar to this:
Status: active
To               Action      From
--               ------      ----
2233/tcp         ALLOW       Anywhere
That's pretty much it - your Ubuntu machine is now secured from illegal SSH access.

Ok... let's test our new more-secured SSH service. Just exit from the machine and log in via SSH again - again, you'll have to specify the new port:
 # ssh johndoe@yourmachine.com -p 2233
Now you should be logged in and ready to install docker (next part of the series)!

Series: How to create your own website based on Docker (Part 1 - Introduction)

What you are going to learn in this series

This is my first official blog post ever and it's not going to be the last. You probably came here, because you searched for a way to learn how to create a website using Docker and state-of-the-art web technologies. If yes - then you're right. This series will, however, not get into details of the actual implementation, but will basically explain how an Ubuntu machine can be set up to act as Docker host for a web site, based on different docker containers.

In other blog posts, besides writing about new technologies in the world of web development, I will also tell you how to write AngularJS 2.0 applications, what coding guidelines are important for web applications and what can be done to turbo-charge your website so that it performs well in slower networks.

In this series the basic story is that I'm going to build a simple web site using the following technologies:
  • Ubuntu - hosting platform
  • Docker - lightweight container virtualization
  • Docker Compose - orchestration for my Docker containers
  • ioJS - JavaScript backend for my REST API
  • nginx - reverse proxy and web server for the blog
  • mongodb - database for blog entries and meta data
  • Twitter Bootstrap - the basic grid layout for the page
  • Gulp - the JavaScript build system
  • Yeoman - A scaffolding tool for projects
  • Hapi - an easy-to-use REST framework for nodeJS.
  • HTML5 - markup language for the front end
  • JavaScript - for the front- and backend code
  • AngularJS 2.0 - for the frontend logic (MVVM)
  • CSS3 - we need to style the page somehow, right?
The goal is to create a simple website that talks to a REST API (and a database) - and everything will be based on Docker. You can use this setup later to build your own full-blown website.

I will also use this technology to create my own blog later, which means that I'm currently using this blog (http://project-webdev.blogspot.de) only for documenting my way to my own blog, until I switch to the new one. The new blog will be available under the following URL: http://www.project-webdev.com - so stay tuned!

Who I am

Since this is Part 1 of the series, let me just start with an introduction of myself! :)

My name is Sascha Sambale and I'm a Software Engineer specialized in Java and JavaScript (as well as other frontend technologies). I work as a Software Architect (specialized in web development and web performance) for Robert Bosch GmbH in Stuttgart, Germany. This blog is not affiliated with Robert Bosch GmbH in any way - it's strictly personal and I'm only sharing my personal opinions and thoughts, which are in no way related to my job.

What will be covered

The blog that I'm going to create will be all about web technologies - new trends, tutorials, hints, tips and tricks. All you need to know to get started with new web technologies. It's not my goal to make money with this blog - it's all about knowledge sharing.

About this series

This series will show you how you can set up a web site using the aforementioned technologies (especially Docker). It will consist of several parts and will be updated as soon as I've managed to work on a new chapter. It will basically start with setting up Ubuntu as Docker host. I will include all necessary scripts and steps that will help you to get your system up and running.

Last words

Make sure that you add this blog to your RSS reader to get updated as soon as I publish a new chapter of this series. I am not going to promise that I'll update this blog every day, but you'll get updates as soon as I've implemented the next milestone.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

The following parts are/will be available

Have fun and I hope you can learn something from this blog,

Sascha