Democratizing orchestration with Docker

I’ve always wanted a tool that lets me easily set up my own distributed application without caring too much about what infrastructure to use or how to configure it.

With the upcoming release of Docker 1.12 this is becoming possible. One of the main features of Docker 1.12 is built-in orchestration, which makes it easy to distribute sets of tasks across different nodes. Note that I wrote tasks here, not containers. A task is currently synonymous with a container, but in the future it could be something else, for instance a unikernel. The set that contains the tasks is called a service, whose actual state is constantly reconciled with the declared desired state.

The built-in orchestration is optional and only used if you run
docker swarm init
To then create a service and declare its desired state you run
docker service create

The swarm mode architecture looks like this:

Swarm mode architecture

A node is either a manager or a worker where a manager maintains the desired state and a worker executes tasks.
The swarm architecture uses the Raft consensus protocol to provide fault tolerance, so you no longer need to install and configure a third-party tool such as Consul or etcd. The swarm strives to maintain the desired state, so if a task goes down a new one is automatically launched.
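You can see this reconciliation in action by killing a task's container by hand and watching the swarm replace it. A minimal sketch, assuming a running swarm with a service named web (the container ID is hypothetical):

```shell
# Find the container backing one of the web service's tasks on this node
docker ps --filter name=web

# Kill it by hand; this makes the actual state diverge from the desired state
docker rm -f <container-id>

# Shortly afterwards the manager schedules a replacement task
docker service tasks web
```

The replacement may land on a different node, since the manager only cares that the declared number of replicas is running somewhere in the cluster.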

Another nice improvement is that swarm mode is secure by default, meaning it uses mutual TLS authentication and encryption across the entire cluster. The managers in the swarm issue certificates to new nodes, using either a self-signed root certificate (generated by the swarm) or an external CA.
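Depending on the exact 1.12 build, the manager can also print the credentials a new node needs to join securely. A sketch, assuming the swarm has already been initialized:

```shell
# Print the full join command (including the secret token) for a new worker
docker swarm join-token worker

# Managers use a separate token
docker swarm join-token manager
```

A node presenting the right token is then issued its own TLS certificate by the managers, so encryption between nodes requires no extra setup.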

Let’s see how we can create a swarm cluster using Docker 1.12:

First we need to enter swarm mode by initializing the swarm:

docker swarm init --listen-addr <manager-host-ip>:2377

This will create a manager node; the IP address is the address of the host the swarm manager is running on.

Let’s create a swarm worker machine:

docker-machine create -d virtualbox worker1

and join the new node to the swarm:

docker-machine ssh worker1 docker swarm join <manager-host-ip>:2377

docker node ls will print out the nodes in our cluster:

christian@christian-XPS-15-9530:~$ docker node ls
ID                           NAME                   MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
bqw4uiiaprmm4ympnrqnqkujq    worker1                Accepted    Ready   Active        
cjoldi2ycyjvrwo69v27e75yw *  christian-XPS-15-9530  Accepted    Ready   Active        Leader
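From a manager you can also take a node out of scheduling rotation, for instance before maintenance; its running tasks are rescheduled onto other nodes. A sketch, using the worker1 node from above:

```shell
# Stop scheduling new tasks on worker1 and move its current tasks elsewhere
docker node update --availability drain worker1

# Put it back into rotation when maintenance is done
docker node update --availability active worker1
```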

We can now create a service containing a couple of tasks (i.e. containers):

docker service create --name web --replicas 2 --publish 5000/tcp mrjana/simpleweb

Executing `docker service tasks web` will list all the tasks in the service:

ID                          NAME   SERVICE  IMAGE             LAST STATE           DESIRED STATE  NODE
9e5tlv3ehem4rrni06ma6qfg7   web.1  web      mrjana/simpleweb  Preparing 5 seconds  Running        christian-XPS-15-9530
5y97k8j57hu103gxe2xxwvx58   web.2  web      mrjana/simpleweb  Preparing 5 seconds  Running        worker1

And we see that two tasks have been launched on two different nodes.
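Because the service only declares a desired state, changing the number of replicas is a one-liner and the swarm converges to the new state on its own. A sketch, using the web service from above:

```shell
# Raise the desired replica count from 2 to 4
docker service scale web=4

# Equivalently, via the generic update command:
# docker service update --replicas 4 web
```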

But how can we enable the containers to talk to each other? This is where overlay networks come in. They provide a multi-host network on which the containers can communicate:

docker network create --driver overlay mynetwork

If we now create a service on this network, its containers will be able to communicate:

docker service create --network mynetwork --name web mrjana/simpleweb

If we inspect the service with docker service inspect web it will print out the current exposed node port for the service:

       "ExposedPorts": [
               "Protocol": "tcp",
               "Port": 5000,
               "HostPort": 30000

We can now call the service using the IP of any node in the cluster on this port. The request will be automatically routed to a node that is running a task of the service.
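A quick way to verify this is to curl the published port on any node, including one that isn't running a task itself. A sketch, using the host port from the inspect output above:

```shell
# Any node's IP works; the swarm forwards the request to a running task
curl http://<any-node-ip>:30000
```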


There’s a lot more to say about the orchestration features in Docker 1.12 but what’s shown above hopefully gives you an idea of how it provides a powerful yet simple way to build highly available clusters.
