Docker + Jenkins: Dynamically Provisioning SLES 11 Build Containers


Using the Jenkins Docker Plugin, we can dynamically spin up SLES 11 build slaves on demand to run our builds. One of the hurdles to getting there was creating a SLES 11 Docker base image, since there are no SLES 11 container images available at the Docker Hub Registry. We used SUSE’s Kiwi imaging tool to create a base SLES 11 Docker image for ourselves, and then layered our build environment and Jenkins build-slave support on top of it. After configuring Jenkins’ Docker plugin to use our home-grown SLES image, we were off and running with our containerized SLES builds!

Jenkins/Docker Plugin

The path to Docker-izing our build slaves started with stumbling across this Docker Plugin for Jenkins: This plugin allows one to use Docker to dynamically provision a build slave, run a single build, and then tear down that slave, optionally saving it. This is very similar in workflow to the build-VM provisioning system that I created while working in VMware’s Release Engineering team, but much lighter weight. Compared to VMs, Docker containers can be spun up in milliseconds instead of in a few minutes, and Docker containers are much lighter on hardware resources.

The above link to the Jenkins wiki provides details about how to configure your environment as well as how to configure your container images. Some high-level notes:

  • Your base OS needs to have Docker listening on a TCP port. By default, Docker only listens on a Unix socket.
  • The container needs to run “sshd” for Jenkins to connect to it. I suspect that once the container is provisioned, Jenkins just treats it as a plain old SSH slave.
  • In my testing, the Docker plugin was not able to connect via SSH to the containers it provisioned when using Docker 1.2.0. After some trial and error, I found that the current version of the Jenkins plugin (0.6) works well with Docker 1.0 through 1.1.2, but not with Docker 1.2.0+. I used Puppet to make sure that our Ubuntu build server base VMs only had Docker 1.1.2 installed. Ex:
      # VW-10576: install docker on the ubuntu master/slaves
      # * Have Docker listen on a TCP port per instructions at:
      # * Use Docker 1.1.2 and not anything newer. At the time of writing this
      #   comment, Docker 1.2.0+ does not work with the Jenkins/Docker
      #   plugin (the port for sshd fails to map to an external port).
      class { 'docker':
        tcp_bind => 'tcp://',
        version  => '1.1.2',
      }
  • There is a sample Docker/Jenkins slave image based on “ubuntu:latest” available at: I would recommend getting that working as a proof of concept before venturing into building your own custom build-slave containers. It’s helpful to be familiar with the “Dockerfile” for that image as well:
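If you are not managing the hosts with Puppet, the same version pinning and TCP configuration can be done by hand on Ubuntu. A sketch, assuming Docker’s own apt repository (which shipped versioned “lxc-docker” packages at the time) and Ubuntu’s stock “/etc/default/docker” init configuration:

```shell
# Pin Docker to 1.1.2; the Jenkins plugin (v0.6) does not work with 1.2.0+.
sudo apt-get install -y lxc-docker-1.1.2

# Have the daemon listen on TCP port 4243 in addition to the Unix socket.
echo 'DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock"' | \
    sudo tee -a /etc/default/docker
sudo service docker restart

# Verify the pinned version and the TCP listener.
docker version
sudo netstat -tlnp | grep 4243
```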

Once you have the Docker Plugin installed, you need to go to your Jenkins “System Configuration” page and add your Docker host as a new cloud provider. In my proof-of-concept case, this is an Ubuntu 12.04 VM running Docker 1.1.2, listening on port 4243, configured to use the “evarga/jenkins-slave” image, and providing the “docker-slave” label, to which I can then restrict my Jenkins build job. The Jenkins configuration looks like this:

Jenkins' "System Configuration" for a Docker host
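Before wiring this into Jenkins, it is worth sanity-checking that the Docker remote API is actually reachable from the Jenkins master. A quick check along these lines (the hostname “docker-host” is a placeholder for your Ubuntu VM):

```shell
# Confirm the Docker remote API answers on the configured TCP port.
curl http://docker-host:4243/version

# The plugin will run the slave image on that host, so make sure it pulls.
docker -H tcp://docker-host:4243 pull evarga/jenkins-slave
```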

I then configured a job named “docker-test” to use that “docker-slave” label and run a shell script with basic commands like “ps -eafwww”, “cat /etc/issue”, and “java -version”. Running that job, I can see that it successfully spins up a container of “evarga/jenkins-slave” and runs my little script. Note the hostname at the top of the log and the output of “ps” in the screenshot below:

A proof-of-concept of spinning up a Docker container on demand
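The job’s shell step was nothing fancier than a few sanity checks:

```shell
# Run inside the freshly provisioned container:
ps -eafwww        # sshd and the Jenkins slave agent should be nearly the only processes
cat /etc/issue    # confirms which distribution the container is running
java -version     # confirms the JDK that the slave image provides
```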


Creating Our SLES 11 Base Image

Having built up the confidence that we can spin up other people’s containers on demand, we then turned to creating our SLES 11 Docker build image. For reasons that I can only assume are licensing-related, SLES 11 does not have a base image on the Docker Hub Registry in the same vein as the images that Ubuntu, Fedora, CentOS, and others have available.

Luckily I stumbled upon the following blog post:

At Virtual Instruments we were already using Kiwi to build the OVAs of our build VMs, so it wasn’t much more work to follow that blog post and get Kiwi to generate a tarball that could be consumed by “docker import”. This worked well for the next proof-of-concept phase, but ultimately we decided to go down another path.
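The Kiwi-to-Docker handoff boils down to a single “docker import”. A sketch, where the tarball name and the “sles11-base” image name are our own choices, not anything Kiwi mandates:

```shell
# Import the root-filesystem tarball that Kiwi produced as a Docker base image.
docker import - sles11-base < sles11-rootfs.tar.gz

# Smoke-test the imported image.
docker run --rm sles11-base cat /etc/SuSE-release
```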

Rather than have Kiwi generate fully configured build images for us, we decided it’d be best to follow the conventions of the “Docker Way” and have Kiwi generate a SLES 11 base image which we could then use with a “FROM” statement in a “Dockerfile”, installing the build environment via the Dockerfile. One of the advantages of this is that we only have to use Kiwi to generate the base image once; from there we can stay in Docker-land to build the subsequent images. Additionally, having a shared base image among all of our build-image tags should allow for space savings, as Docker optimizes the layering of filesystems over a common base image.
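With the base image imported, each build-environment layer becomes an ordinary Dockerfile build. A minimal sketch; the “sles11-base” image name and the zypper package list are illustrative stand-ins for our internal ones:

```shell
# Layer the build environment on top of the imported base image.
cat > Dockerfile <<'EOF'
FROM sles11-base
RUN zypper --non-interactive install gcc make rpm-build
EOF
docker build -t sles11-build-env .
```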

Configuring the Image for Use with Jenkins

Taking a SLES 11 image with our build environment installed and getting it to work with the Jenkins Docker plugin took a little bit of work, mostly spent getting “sshd” configured correctly. Below is the “Dockerfile” that builds upon a SLES image with our build environment installed and prepares it for use with Jenkins:

# This Dockerfile is used to build an image containing basic
# configuration to be used as a Jenkins slave build node.

MAINTAINER Dan Tehranian <>

# Add user & group "jenkins" to the image and set its password
RUN groupadd jenkins
RUN useradd -m -g jenkins -s /bin/bash jenkins
RUN echo "jenkins:jenkins" | chpasswd

# Having "sshd" running in the container is a requirement of the Jenkins/Docker
# plugin. See:

# Create the ssh host keys needed for sshd
RUN ssh-keygen -A

# Fix sshd's configuration for use within the container. See VW-10576 for details.
RUN sed -i -e 's/^UsePAM .*/UsePAM no/' /etc/ssh/sshd_config
RUN sed -i -e 's/^PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Expose the standard SSH port
EXPOSE 22
# Start the ssh daemon
CMD ["/usr/sbin/sshd", "-D"]
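Before handing the image to Jenkins, it is worth confirming by hand that sshd comes up and accepts the “jenkins” login, since that is exactly what the plugin will do. A sketch; the image and container names are our own choices:

```shell
# Build the Jenkins-slave image from the Dockerfile above.
docker build -t sles11-jenkins-slave .

# Run it and let Docker map the container's port 22 to a random host port.
docker run -d -P --name slave-test sles11-jenkins-slave
PORT=$(docker port slave-test 22 | cut -d: -f2)

# Log in the same way the Jenkins plugin will (password is "jenkins").
ssh -p "$PORT" jenkins@localhost true && echo "sshd is working"

# Clean up the test container.
docker rm -f slave-test
```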

Running a Maven Build Inside of a SLES 11 Docker Container

Having created this new image and pushed it to our internal Docker repo, we can now go back to Jenkins’ “System Configuration” page and add the new image to our Docker cloud provider. After creating a new Jenkins “Maven job” that utilizes this new SLES 11 image and running a build, we can see our SLES 11 container getting spun up, code getting checked out from our internal Git repo, and Maven being invoked:

Hooray! A successful Maven build inside of a Docker container!

Output from the Maven Build that was run in the container. LGTM!
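For reference, the “push to our internal Docker repo” step mentioned above looked roughly like the following; “” is a stand-in for your registry’s address:

```shell
# Tag and push the slave image to the internal registry.
docker tag sles11-jenkins-slave

# The Docker host that Jenkins drives can now pull it on demand.
docker -H tcp://docker-host:4243 pull
```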



There are a whole slew of benefits to a system like this:

  • We don’t have to run & support SLES 11 VMs in our infrastructure alongside the easier-to-manage Ubuntu VMs. We can just run Ubuntu 12.04 VMs as the base OS and spin up SLES slaves as needed. This makes testing of our Puppet repository a lot easier as this gives us a homogeneous OS environment!
  • We can have portable and separate build environment images for each of our branches. Ex: legacy product branches can continue to have old versions of the JDK and third party libraries that are updated only when needed, but our mainline development can have a build image with tools that are updated independently.
    • This is significantly better than the “toolchain repository” solution that we had at VMware, where several hundreds of GBs of binaries were checked into a monolithic Perforce repo.
  • Thanks to Docker image tags, we can tag the build image at each GA release and keep that build environment saved. This makes reproducing builds significantly easier!
  • Having a Docker image of the build environment allows our developers to do local builds via their IDEs, if they so choose. Using Vagrant’s Docker provider, developers can spin up a Docker container of the build environment for their respective branch on their local machines, regardless of their host OS – Windows, Mac, or Linux. This allows developers to build RPMs with the same libraries and tools that the build system would!
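Even without Vagrant, a developer with Docker installed gets the same effect from a single command. A sketch; the “sles11-build-env” image name and the Maven goals are illustrative:

```shell
# Build the current source tree with the branch's toolchain image,
# using the same libraries and tools the build system would.
docker run --rm -v "$(pwd)":/src -w /src sles11-build-env mvn clean package
```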

17 thoughts on “Docker + Jenkins: Dynamically Provisioning SLES 11 Build Containers”

  1. Nice write-up Dan! How did you manage to get the SLES 11 Docker image to work on Ubuntu machines? They are using different kernel versions, are they not? At the end you mentioned devs can use the Vagrant Docker plugin to run the Docker images. Is that what the Jenkins slaves are doing as well, or does SLES simply use the Debian kernel that your Ubuntu host OS uses?

    • Hi Scott. Running a SLES 11 Docker image on Ubuntu is really no different than running Fedora or CentOS on Ubuntu. To borrow terms from VMware that we’re both familiar with, the host OS’s kernel is shared with the guest OS. So yes, the SLES container is using the Ubuntu kernel. This is best seen by running “uname -a” from within the container and observing that the kernel version has “-ubuntu” appended to it.

      re: Are the Jenkins slaves running Vagrant? – No, the Jenkins slaves have Docker installed and their respective Docker daemons are listening on a public TCP port (port #4243). The Jenkins master uses the Docker plugin to contact that remote daemon and tells it to spin up a new container, which runs “sshd”. Then Jenkins can connect to that new container via “ssh” as the “jenkins” user.

  2. Pingback: Building Docker Images within Docker Containers via Jenkins | Dan Tehranian's Blog

  3. Hi Dan,
    Thanks for this great article!
    Would you mind sharing your previous one about the VMware Jenkins slave dynamic provisioning?
    (It seems like the link you provided leads to your LinkedIn profile.)

    • Hi Michael,

      There should be high-level details in my LinkedIn profile under my VMware experience. We basically built Python automation around vSphere 4.x’s APIs to provision linked clones on demand.

      Is there anything more specific you’d like to know about?

      • Hi Dan,
        Thanks for getting back.
        Our plan is to have a number of dynamic slaves forked on each job run, perform the tests on these slaves, and tear them down after the job is complete.
        I am trying things like pysphere, pyvmomi, and the Jenkins vSphere plugin.
        Would like to hear your experience with those (or maybe something else?). What would you advise?

      • Hi Michael,

        We used “pyvmomi”, which IIRC was still closed-source at the time (2010-2011). We had to write a custom solution instead of leveraging the Jenkins vSphere plugin because VMware was using a custom build system which pre-dated Jenkins.

        Once I’d left VMware and started using Jenkins, I did discover the Jenkins vSphere plugin and thought that it looked like a very compelling solution because it would save months of development work around writing the “pyvmomi” automation yourself. Ultimately I never had a need to use that plugin, but I’ve been following its development & changelog and it seems pretty mature. I would give that a shot if using multiple VMs is necessary.

        If your jobs are capable of being run inside of containers, you may want to give the Jenkins Docker Build Step plugin a shot as well. It will be lighter weight than running full-blown VMs:

  4. Hi,

    I was trying to test with version 1.5.0, though you explicitly mentioned problems with 1.2.0+.

    I’ve combined some RUN commands in my Dockerfile along with your Dockerfile (i.e. myDocker + dind-jenkins-slave + evarga/jenkins-slave).

    Just want to verify with you: on the master I’ve started Docker using
    docker -H tcp:// -d &

    I’m able to connect via SSHD, getting the version in Jenkins configuration as 1.5.0

    But it fails to launch the slave node when trying to build on Jenkins.

  5. Thank you for the article. I was able to successfully set up the docker cloud provider and test button returns success, but confused on the settings for the actual Jenkins job. Can you post screen shot of how to use the evarga/jenkins-slave container to run a shell script? Thank you.

      • Thank you Dan, I thought that was how it worked, and when I put the name in, it says 1 slave instance, but when I run the build, I just get the build in gray saying: (pending—Waiting for next available executor). Does the container need to be started on the Docker host first? How does that bootstrap mechanism work?

      • re: “Waiting for the next available executor” – Right, the Docker Jenkins plugin should automatically spin up the container on the host(s) you configured in your Docker Cloud.

      • I see in the Jenkins logs the following:

        Asked to provision 1 slave(s) for: docker-dind
        Jun 16, 2015 11:38:50 PM INFO com.nirima.jenkins.plugins.docker.DockerCloud provision
        Will provision “tehranian/dind-jenkins-slave” for: docker-dind
        Jun 16, 2015 11:38:50 PM WARNING com.nirima.jenkins.plugins.docker.DockerCloud provision
        Bad template tehranian/dind-jenkins-slave: null. Trying next template…

        Which just gets repeated indefinitely till I cancel the build.

  6. Pingback: Let’s Use Docker as a Jenkins Slave | 실용주의 이야기

  7. Pingback: Succeeding through Laziness and Open Source | Dan Tehranian's Blog
