Docker in Docker with Jenkins and Supervisord

I’ve been searching for a way to host Jenkins in a Docker container and, inside this container, also be able to run integration tests in other Docker containers. This approach is called Docker in Docker (or DIND) and requires a bit of Docker knowledge and some tweaks to get working properly. To follow this blog post you’ll need some basic knowledge of Docker and Supervisord.

Building a Jenkins image with a Docker daemon

First of all, Jenkins has an official Docker image that we’d like to use. This image defines an ENTRYPOINT that points to a script starting Jenkins. However, the image doesn’t contain Docker, so we have to create our own Dockerfile that inherits from this image and adds Docker. This can be achieved like this:
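A minimal sketch of such a Dockerfile (the exact package list is an assumption, based on Docker’s Debian installation requirements at the time):

```dockerfile
FROM jenkins

# The jenkins image switches to user "jenkins" at the end, so switch
# back to root in order to install packages
USER root

# Install Docker prerequisites, then Docker itself via the official
# installation script
RUN apt-get update -qq && apt-get install -qqy \
    apt-transport-https \
    ca-certificates \
    curl \
    lxc \
    iptables
RUN curl -sSL https://get.docker.com/ | sh
```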

This will install the latest version of Docker available at the time the image is built.
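One way to check which Docker version ended up in the image is to override the Jenkins ENTRYPOINT for a one-off command (the johanhaleby/jenkins tag is the one used throughout this post):

```bash
docker run --rm --entrypoint docker johanhaleby/jenkins --version
```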

However, the Docker daemon is not started automatically when we start a container based on this image. This is because Docker ignores init systems such as upstart and systemd, so we need to handle this ourselves. Since Docker is designed to run one process per container, we need to use a process control system. Supervisor is very popular for this use case in the Docker world and it’s what we’re going to use as well. This means that we need to update our Dockerfile to also install supervisor (we’ll see a complete Dockerfile later, so don’t worry). Supervisor needs a configuration file in order to know which processes to provision. This is an example of a working supervisor config file that starts Jenkins and the Docker daemon:
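A sketch of such a supervisord.conf, reconstructed from the description in this post (the log paths and the jenkins.sh startup script location are assumptions based on the official Jenkins image):

```ini
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:docker]
command=/usr/bin/docker -d
stdout_logfile=/var/log/docker/docker.log
redirect_stderr=true

[program:jenkins]
command=/bin/bash -c "/usr/local/bin/jenkins.sh"
user=jenkins
; Supervisor doesn't pick up the jenkins user's own environment,
; so pass the important variables along explicitly
environment=JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="jenkins"
stdout_logfile=/var/log/jenkins/jenkins.log
redirect_stderr=true
```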

To start supervisor we must copy supervisord.conf from our file system into the image and add a command to our Dockerfile that starts it:
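Something along these lines (the conf.d path matches the Debian supervisor package’s default include directory):

```dockerfile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```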

Since we’re starting supervisord when the container is started, the Jenkins ENTRYPOINT (which previously ran the startup script in /usr/local/bin/) is now ignored. This is why we also start Jenkins from supervisord.

It’s very important to set the environment parameter for Jenkins. Supervisor doesn’t include environment variables declared for another user (jenkins in this case), so we need to pass them along ourselves. The most important part is probably that we need to set JENKINS_HOME to /var/jenkins_home, otherwise the volume defined by the Jenkins image will not work as expected and Jenkins will assume the home folder to be /var/jenkins_home/.jenkins. Also note that Jenkins must be executed as the “jenkins” user and not as root.

Here’s an example of the Dockerfile so far:
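A sketch of what the Dockerfile looks like at this point (reconstructed; the package list and log folders are assumptions consistent with the rest of the post):

```dockerfile
FROM jenkins

# Switch to root so that we can install packages (the jenkins image
# switches to user "jenkins" at the end)
USER root

# Install Docker prerequisites and supervisor
RUN apt-get update -qq && apt-get install -qqy \
    apt-transport-https \
    ca-certificates \
    curl \
    lxc \
    iptables \
    supervisor

# Install Docker via the official installation script
RUN curl -sSL https://get.docker.com/ | sh

# Create log folders for supervisor, jenkins and docker
RUN mkdir -p /var/log/supervisor /var/log/docker /var/log/jenkins

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```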

Starting the container

Alright, we’ve managed to build an image (docker build -t johanhaleby/jenkins .), but there are some things we need to think about when starting a container based on it. First of all, if we’re going to use Docker in Docker we need to run our container in so-called privileged mode. This is done by passing the --privileged flag when starting the container. Another thing we may need is a different DNS provider than the default one. It’s common for /etc/resolv.conf to point to a local resolver on your host, but this doesn’t work from inside the Docker container. A workaround is to use the --dns flag and point to, for example, Google’s public DNS service at Here’s an example:
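A start script along these lines matches the description in the following paragraph (the host folder, container name and root user are taken from the surrounding text; the port mapping is an assumption):

```bash
docker run -d --privileged --dns \
    -u root \
    -p 8080:8080 \
    -v /home/johanhaleby/jenkins_data:/var/jenkins_home \
    --name myjenkins \
    johanhaleby/jenkins
```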

This particular script mounts the Jenkins volume (/var/jenkins_home) into the folder /home/johanhaleby/jenkins_data so that the data is persisted across restarts. Note that you could also use the data volume container pattern if you need to make the data portable between different servers. We also start the container as the root user, which is not recommended in a production environment. Another solution would be to install sudo and then configure the jenkins user as a sudoer (preferably with a password), but this is out of scope for this blog post.

Now that the container is started, let’s log in:
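Using docker exec to get a shell inside the running container:

```bash
docker exec -it myjenkins bash
```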

If you try running something like docker info inside the container at this point, you’ll run into an error.

After some help from Stack Overflow things started to clear up. We can’t just start the Docker daemon process as we did in our supervisord config (command=/usr/bin/docker -d). The Docker daemon requires other things to be in place (such as cgroups) before it can start. Luckily there’s a project called DIND that has solved this issue for us. DIND contains a magic wrapper script that takes care of all the prerequisites needed to start Docker inside another Docker container. We must download this script, copy it into our image from our Dockerfile and update our supervisor config to call this script instead of /usr/bin/docker -d.
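The Dockerfile additions look roughly like this (assuming the wrapdocker script has been downloaded next to the Dockerfile; the /usr/local/bin/wrapdocker location follows the DIND project’s instructions):

```dockerfile
# Install the DIND magic wrapper script
ADD ./wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
```

The command in the [program:docker] section of supervisord.conf then changes from /usr/bin/docker -d to /usr/local/bin/wrapdocker.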

Once this is done we are able to run Docker images from within the Jenkins container. But there’s still one more caveat. Jenkins itself runs as the jenkins user, which can’t use Docker without sudo (and sudo is not installed). Thus we need to allow the jenkins user to run Docker without sudo. We can do this by adding the jenkins user to the docker group:
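In the Dockerfile, for example with gpasswd (usermod -aG docker jenkins works equally well):

```dockerfile
RUN gpasswd -a jenkins docker
```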

Once this is done Jenkins should be able to run Docker images from inside jobs while Jenkins itself is also running in Docker. How to actually create Jenkins jobs taking advantage of this is the subject of another blog post.

Stopping and restarting

It’s obviously nice to be able to stop the container and restart it with everything working as it did prior to the restart. However, when starting a stopped Jenkins container (using docker start myjenkins, or whatever your container is named), you can run into device-mapper related errors when running a Docker job inside Jenkins. The workaround is to run dmsetup mknodes before starting Docker.

dmsetup manages logical devices that use the device-mapper driver, and this command ensures that all nodes in /dev/mapper correspond to mapped devices currently loaded by the device-mapper kernel driver. This is not something that is included in the wrapdocker script from DIND, so what I’ve done is to update it by adding dmsetup mknodes as the first command in the script.

After doing this and rebuilding our Docker image it’s possible to restart the container for example after a system reboot.

Update 2015-03-16: I submitted a pull request to the DIND project (which was accepted) that adds dmsetup mknodes to the wrapdocker script, so this workaround should no longer be necessary.

What we lose

What we lose with this approach is the ability to pass command-line arguments to Jenkins when starting a container (since we’re starting Jenkins from supervisor). For example, running the official Jenkins image allows us to do things like:
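With the official image, anything appended after the image name is forwarded to Jenkins:

```bash
docker run jenkins <jenkins_arguments>
```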

I.e. we lose the ability to specify <jenkins_arguments> when starting the container. If this is required, the workaround is to append the arguments to the command in the supervisor config:
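For example, passing a (hypothetical) --httpPort argument would look like this in the [program:jenkins] section:

```ini
[program:jenkins]
command=/bin/bash -c "/usr/local/bin/jenkins.sh --httpPort=8080"
```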


As we’ve seen, there are a couple of obstacles we need to overcome in order to get Docker in Docker working with Jenkins. Here’s a complete Dockerfile for reference:
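A reconstruction of the complete Dockerfile, pieced together from the steps above (the package list and log paths remain assumptions):

```dockerfile
FROM jenkins

# Switch to root so that we can install packages (the jenkins image
# switches to user "jenkins" at the end)
USER root

# Install Docker prerequisites and supervisor
RUN apt-get update -qq && apt-get install -qqy \
    apt-transport-https \
    ca-certificates \
    curl \
    lxc \
    iptables \
    supervisor

# Install Docker via the official installation script
RUN curl -sSL https://get.docker.com/ | sh

# Create log folders for supervisor, jenkins and docker
RUN mkdir -p /var/log/supervisor /var/log/docker /var/log/jenkins

# Allow the jenkins user to run docker without sudo
RUN gpasswd -a jenkins docker

# Install the DIND magic wrapper script (with dmsetup mknodes added
# as its first command)
ADD ./wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```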

and here’s the supervisord config:
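A reconstruction of the supervisord config, now pointing at the wrapdocker script (log paths and the jenkins.sh location are assumptions based on the official Jenkins image):

```ini
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log

[program:docker]
command=/usr/local/bin/wrapdocker
stdout_logfile=/var/log/docker/docker.log
redirect_stderr=true

[program:jenkins]
command=/bin/bash -c "/usr/local/bin/jenkins.sh"
user=jenkins
environment=JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="jenkins"
stdout_logfile=/var/log/jenkins/jenkins.log
redirect_stderr=true
```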

To build the image run:
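Using the tag from earlier:

```bash
docker build -t johanhaleby/jenkins .
```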

and to start it run:
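For example, combining the options discussed earlier (privileged mode, DNS, the data volume and the root user; the port mapping is an assumption):

```bash
docker run -d --privileged --dns \
    -u root \
    -p 8080:8080 \
    -v /home/johanhaleby/jenkins_data:/var/jenkins_home \
    --name myjenkins \
    johanhaleby/jenkins
```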


When recreating the Jenkins container (using docker run) all downloaded images are lost (don’t worry, they will be downloaded automatically by Docker, so you won’t lose anything except that the Jenkins jobs using Docker will take longer to run the first time). A better approach would be to mount the folder where Docker keeps its images (/var/lib/docker) as a volume (or use a data volume container if it needs to be portable).

This Post Has 7 Comments

  1. I followed the steps you mentioned here. After creating the container out of the jenkins image and when I login into ‘myjenkins’ container using – docker exec -it myjenkins bash
    and execute – docker info I get the following error:
    FATA[0000] Get http:///var/run/docker.sock/v1.18/info: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?

    I’m using the latest docker 1.6 version on a ubuntu 14.04 host.

  2. Thank you for the pointer Johan. It’s working like a charm now!
    Installing the “apparmor” package solved the issue. I’ve also used the latest docker installation commands instead of setting up the repositories. So, my Dockerfile looks like this:

    FROM jenkins

    # Switch user to root so that we can install apps (jenkins image switches to user “jenkins” in the end)
    USER root

    # Install Docker prerequisites
    RUN apt-get update -qq && apt-get install -qqy \
    apt-transport-https \
    apparmor \
    ca-certificates \
    lxc

    # Create log folder for supervisor, jenkins and docker
    RUN mkdir -p /var/log/supervisor
    RUN mkdir -p /var/log/docker
    RUN mkdir -p /var/log/jenkins

    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

    # Install Docker from Docker Inc. repositories.
    RUN curl -sSL | sh

    # Add jenkins user to the docker groups so that the jenkins user can run docker without sudo
    RUN gpasswd -a jenkins docker

    # Install the magic wrapper
    ADD ./wrapdocker /usr/local/bin/wrapdocker
    RUN chmod +x /usr/local/bin/wrapdocker

    CMD /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf

  3. Thanks Johan for this article. I would like to point out what I tried based on what Jérôme Petazzoni has written in his blog about Docker-in-Docker challenges, and his solution to create siblings instead of children.

    The variant does the following:
    1. Install docker within the container
    2. Do not attempt to start the docker daemon
    3. Instead while starting the docker container map the docker unix socket of the container to the host socket by passing “-v /var/run/docker.sock:/var/run/docker.sock”
    It works super awesome also considering that the docker daemon requires special configuration in most environments like setting private registry certificates, proxies etc. You need to do this only in the parent host and all the containers that start in this host can internally launch docker containers using the same configuration.

    1. Thanks for your comment. I’m aware of this approach as well and in hindsight it might have been a better option (since it’s WAY easier to get working). However you don’t get the same amount of isolation. For example, you can see and mess up Docker images from within the Jenkins container.

  4. I am having an issue while building an image. It’s throwing this error:

    System error: write /sys/fs/cgroup/docker/01f5670fbee1f6687f58f3a943b1e1bdaec2630197fa4da1b19cc3db7e3d3883/cgroup.procs: no space left on device

    This is the disk usage before building the image:

    root@73ae68a87981:/# df -h
    Filesystem Size Used Avail Use% Mounted on
    none 60G 13G 44G 23% /
    tmpfs 4.9G 0 4.9G 0% /dev
    tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
    /dev/vda2 60G 13G 44G 23% /etc/hosts
    shm 64M 0 64M 0% /dev/shm

    This is after the error:

    root@73ae68a87981:/# df -h
    Filesystem Size Used Avail Use% Mounted on
    none 60G 60G 0G 100% /
    tmpfs 4.9G 0 4.9G 0% /dev
    tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
    /dev/vda2 60G 60G 0G 100% /etc/hosts
    shm 64M 0 64M 0% /dev/shm

    Is there a way to increase the disk space?

