Creating containerized build environments with the Jenkins Pipeline plugin and Docker. Well, almost.

Docker and Jenkins are like the chocolate and peanut butter of the DevOps world. The combination of the two presents a ton of new opportunities and headaches. I’m going to talk about both.

For this post, I’m assuming you are already familiar with setting up Jenkins and comfortable with Docker. Rather than rehash a lot of existing posts on Jenkins and Docker, I would suggest heading on over to the Riot Games Engineering blog, where they have a ton of excellent articles on integrating Docker and Jenkins. I’m going to focus on my specific setup, but I’ve borrowed a lot of ideas from them.

Target setup

I say “target” because all of the pieces don’t yet do what I’d like them to do. It’s simple really: set up a Jenkins master in a container on one host with multiple JNLP agent containers across multiple hosts. The agent hosts could run in different AWS VPCs and/or accounts using ECS.

My goal here was to have a generic agent configuration that could be deployed onto any host. Each project would then be responsible for defining its own build environment, and that is expressed through a container. This puts the build environment configuration in the hands of the development team rather than the team managing the Jenkins infrastructure. I REALLY wanted to avoid having agents with a specific set of build tools. Containerized build environments can do this; it’s just getting everything to play nice that is the real challenge.

To get me there, I’m also leveraging the Jenkins Pipeline/Workflow plugin. This set of plugins gives you a very elegant DSL for describing build pipelines. Even better, it has pretty slick support for using containerized build environments via the Cloudbees Docker Pipeline plugin. It’s pretty simple to do something like so:

node('test-agent') {
    stage "Container Prep"
    // do the thing in the container
    docker.image('maven:3.3.3-jdk-8').inside {
        // get the codez
        stage 'Checkout'
        git url: 'https://github.com/damnhandy/Handy-URI-Templates.git'
        stage 'Build'
        // Do the build
        sh "./mvnw clean install"
    }
}

This pipeline will execute the build on a Jenkins agent named “test-agent” and will attempt to run the build inside a container based on the maven:3.3.3-jdk-8 image. This particular pipeline runs fine when the agent runs directly on the host, but it fails when the Jenkins agent runs in a container.

Don’t do Docker in Docker

By having either the Jenkins master or slave in a container, one might assume that I’d need to run the container in privileged mode and do the whole “Docker-in-Docker” thing. I’m not. Jérôme Petazzoni published a very informative post titled “Using Docker-in-Docker for your CI or testing environment? Think twice.” You should read it. The rest of this post assumes that you did.

If you’re still using a copy of the wrapdocker script, you should ask yourself “why?”. It’s much simpler to do something like so:

docker run -v ${JENKINS_HOME}:/var/jenkins_home \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v $(which docker):/bin/docker -p 8080:8080 \
     -p 50000:50000 damnhandy/jenkins

This will bring up Jenkins and it will be able to call the docker command and do everything a “Docker-in-Docker” setup can do. There’s no need for privileged mode or the wrapdocker script.

One caveat here: you’re not going to be able to simply reuse the official Jenkins image to do this, because the jenkins user needs to be a part of the docker and/or users group in order to make use of the socket. Once you do that, Jenkins can happily call docker from within the container, and you can build and run other containers with ease.
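
As a quick sanity check, you can confirm from inside the container that the jenkins user can actually reach the socket. This is just a sketch: it assumes you gave the master container the name jenkins, and that the group owning the socket on the host maps onto a group the jenkins user belongs to inside the container.

# See which group owns the socket as seen from inside the container
docker exec jenkins ls -l /var/run/docker.sock
# Confirm the jenkins user is a member of that group
docker exec -u jenkins jenkins id
# If the GIDs line up, this lists containers instead of failing with a permission error
docker exec -u jenkins jenkins docker ps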

The Jenkins JNLP agent container

The Jenkins agent container follows similar rules as the master. It too needs access to the docker socket and executable, so you can do something like this:

docker run -v ${JENKINS_HOME}:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/bin/docker --name=jenkins-slave \
    -d damnhandy/jenkins-slave -url http://192.168.99.100:8080/ \
    a0a1b92971030d5f5dd69bd972c6cd899f705ddd3699ca3c5e92f937d860be7e test-agent

Like the Jenkins master, you have to ensure that the jenkins user is in a group that has the privileges to access the docker socket. I’m using a fork of the Jenkins JNLP slave container and adding the necessary groups. Once you do this, your agent will come up and you’ll be able to execute builds against the agent. Almost.

The exact moment where the wheels came off

The moment you start to execute a build that runs within a container, things go off the rails pretty quickly. The problem is that you have the agent container binding to a host directory ${JENKINS_HOME}:/var/jenkins_home and then the build container needs access to the same directory. The Cloudbees Docker Pipeline plugin will execute the following when using the docker.inside() function:

docker run -t -d -u 1000:1000 -w /var/jenkins_home/workspace/uri-templates-in-docker \
-v /var/jenkins_home/workspace/uri-templates-in-docker:/var/jenkins_home/workspace/uri-templates-in-docker:rw \
-e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** \
-e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** \
maven:3.3.3-jdk-8 cat

The container is trying to mount the host directory /var/jenkins_home/workspace/uri-templates-in-docker into this containerized build environment for Maven 3.3.3 and tries to set that directory as the current working directory. This all works great if the Jenkins agent is running directly on the host, outside of a container. When the agent itself runs inside a container, I’m essentially asking Docker to mount a host path that only exists inside the agent container.

And this absolutely does not work. Because I’m mapping the docker socket from the host to the Jenkins agent container, any volumes that are mounted in this “faux docker-in-docker” manner are actually referenced from the host, not from the perspective of the Jenkins agent container. So assuming ${JENKINS_HOME} on the host was something like /opt/jenkins_home, something like this “should” work:

docker run -t -d -u 1000:1000 -w /opt/jenkins_home/workspace/uri-templates-in-docker \
-v /opt/jenkins_home:/var/jenkins_home/workspace/uri-templates-in-docker:rw \
-e ******** \
maven:3.3.3-jdk-8 cat

But there are a few problems with this approach:

  • Since we’re kind of running “docker-in-docker”, getting the path of the host directory is tricky.
  • It’s not exactly portable, since the containers need to have more intimate knowledge of the host’s directory structure.

There is a better way.

The beauty of Docker data volume containers

It’s taken me about 18 months to finally understand why one would want to use a container for storing data. Now I get it. For this use case, a docker volume container is an incredibly elegant way of sharing a volume between multiple containers. It provides a clean abstraction around the volume and a host-independent way of referencing it. With data volume containers, the agent container and the build container can share the same workspace volume without either one needing to know where it actually lives on the host.

Again, borrowing some ideas from Maxfield at Riot Games, I created a data volume container pretty much the same way he describes. Now while Docker 1.9+ has the ability to create named volumes, there are a few major issues with using them right now:

  • The documentation is seriously lacking. And when I say lacking, I mean it doesn’t exist. See issue #20465
  • Volumes created with docker volume create will always be owned by root (see the quick illustration below). This is being fixed for Docker 1.11, but it doesn’t help much when you’re using Docker 1.9 and 1.10. Since Jenkins runs as jenkins, this doesn’t work.
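
A quick way to see the ownership problem for yourself (a throwaway sketch using the alpine image, not part of the setup above):

docker volume create --name jenkins-test
# The mount point is created as root:root rather than as the jenkins user (uid 1000)
docker run --rm -v jenkins-test:/data alpine ls -ld /data
docker volume rm jenkins-test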

Since my target environment is Amazon ECS, which is using Docker 1.9, I’ll continue with data volume containers. I used Maxfield’s Dockerfile verbatim and created the container like so:

docker create --name=jenkins-data damnhandy/jenkins-data
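
If you’re curious where the data actually ends up, you can ask Docker rather than poking around the host: the Mounts section of the inspect output points at a path managed by Docker under /var/lib/docker.

docker inspect jenkins-data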

And now start the Jenkins agent like so:

docker run --volumes-from=jenkins-data \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/bin/docker --name=jenkins-slave \
    -d damnhandy/jenkins-slave -url http://192.168.99.100:8080/ \
    a0a1b92971030d5f5dd69bd972c6cd899f705ddd3699ca3c5e92f937d860be7e test-agent

So far so good. The bad news is that the Docker Pipeline plugin still insists on mounting the workspace as a host directory, a path which in my case exists only inside the Jenkins agent container (via the data volume) and not on the host. So for now, the Cloudbees Docker Pipeline plugin is a non-starter.

However, it is possible to bypass the Docker Pipeline plugin and change the pipeline script to be as follows:

node('test-agent') {
    // Get some code from a GitHub repository
    git url: 'https://github.com/damnhandy/Handy-URI-Templates.git'
    sh 'docker run -t -u 1000:1000 --volumes-from=jenkins-data -w /var/jenkins_home/workspace/uri-templates-in-docker maven:3.3.3-jdk-8 ./mvnw package'
}

And this mostly works. The project will build but fails on the tests because the pipeline git task doesn’t handle submodules very well. However, this is an issue with the specific project, and we at least have the build failing two-thirds of the way through, in the target containerized build environment.
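
If you hit the same submodule issue, one possible workaround (just a sketch, not something wired into the pipeline above) is to initialize the submodules with an extra shell step right after the checkout:

# Run in the workspace after the git step has checked out the repository
git submodule update --init --recursive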

Wrapping Up

Containerized build environments are such a great idea and will save a lot of hassle down the line. I’m also loving the new Jenkins pipeline plugins, even though there are a few rough edges. I’ve posted the code for a working environment that illustrates this setup here:

https://github.com/damnhandy/jenkins-pipeline-docker

Remember, not everything here works as desired, but it at least demos what could be possible. I hope this post helped folks better understand how to execute docker builds in a Jenkins container and get a better grasp of how Docker manages data volumes. There’s still a lot more to learn and a few PRs to create 😉

3 thoughts on “Creating containerized build environments with the Jenkins Pipeline plugin and Docker. Well, almost.”

  1. Hi. Great article, and thanks to the GitHub sources I managed to try it.
    Unfortunately, I still feel like I am missing something here…

    DinD + jenkins-slave might be something you want to avoid, but it works really well…

    As you wrote it, I tried to mount docker.sock and the docker client into the container, but I got a “cannot open shared object file” error due to OS differences.

    Then I got back to http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

    It now says: “Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries. If you want to use e.g. Docker from your Jenkins CI system, you have multiple options:
    installing the Docker CLI using your base image’s packaging system (i.e. if your image is based on Debian, use .deb packages) […] ”

    I ended up installing docker in the jenkins-jnlp-agent image from your project. But with docker.sock mounted, it won’t be isolated anymore, as containers will be siblings (and not children) of the jenkins-jnlp-agent container.

    As I said, I might be missing something about running the Jenkins slave in a Docker container. Your article helped me understand docker volumes shared between master and slave, so that’s still a good point.


  2. Hey, great minds think alike! https://dkastner.github.io/gocd-docker-pres/#1

    Anyway, good article, it’s interesting to see how things are handled on the Jenkins side.

    A couple things I’ve found from working with docker-based CI systems:

    Docker may be adding some kind of transparent volume sharing ability, but I’ve found it’s just easier to not worry about mapping volumes to containers or “data-volume” containers. If I need to stick a build artifact into a container, I simply create a new temporary image.

    In my Dockerfile:

    FROM myservice
    ADD /file/from/build/agent

    docker build -t temp-image-1234 .
    ID=$(docker run -d temp-image-1234 tests)

    Then if I want artifacts back from a build run, I use docker cp to get them back:

    docker cp $ID:/path/to/artifact .

    An advantage of this is that you can keep the build container around for debugging if you wish.

    It’s been much easier to configure agents with settings and credentials to connect to a remote machine running docker, rather than messing with volume sharing. The idea is that an agent can be spun up on any cluster or machine and not worry about volume sharing paths. That is, when the agent is started, it is configured with DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH that point to the remote docker build machine. This way the build machine can be cleaned/restarted independently of the cluster of agents.
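
    For reference, that boils down to pointing the standard Docker client environment variables at the remote build machine (a sketch; the host name and certificate path are placeholders):

    # Point the docker CLI on the agent at a remote, TLS-protected daemon
    export DOCKER_HOST=tcp://build-machine.example.com:2376
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=/path/to/certs
    # Subsequent docker commands now run against the remote machine
    docker ps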

