Recently, while attending DevOps Days Austin 2015, I participated in a breakout session focused on how to test code for configuration management tools like Puppet, Chef, and Ansible. Having started using Ansible to manage our infrastructure at Delphix, I was searching for a way to automate the testing of our configuration management code across a variety of platforms, including Ubuntu, CentOS, RHEL, and Delphix’s custom Illumos-based OS, DelphixOS. Testing across all of those platforms is a daunting task, to say the least!
Intro to Test Kitchen
The conversation in that breakout session introduced me to Test Kitchen (GitHub), a tool that I’ve been very impressed by and have had quite a bit of fun writing tests for. Test Kitchen is a tool for automated testing of configuration management code written for tools like Ansible. It automates the process of spinning up test VMs, running your configuration management tool against those VMs, executing verification tests against those VMs, and then tearing down the test VMs.
What makes Test Kitchen so powerful and useful is its modular design:
- Test Kitchen supports provisioning VMs in a variety of infrastructure environments, including Vagrant, VMware vSphere, Amazon EC2, Docker, and more.
- Test Kitchen supports a variety of provisioners, including Chef, Puppet, Ansible, and Salt.
- Through its Busser framework, Test Kitchen supports a variety of test execution engines, including ServerSpec, RSpec, Bash Automated Testing System (BATS), Cucumber, and more. You can write tests in whatever language suits your needs or skill-set!
Using Test Kitchen
After learning about Test Kitchen at the DevOps Days conference, I did some more research and stumbled across the following presentation, which was instrumental in getting me started with Test Kitchen and Ansible: Testing Ansible Roles with Test Kitchen, Serverspec and RSpec (SlideShare).
In summary, you add three files to your Ansible role to begin using Test Kitchen:
- A “.kitchen.yml” file at the top-level. This file describes:
- The driver to use for VM provisioning. Ex: Vagrant, AWS, Docker, etc.
- The provisioner to use. Ex: Puppet, Chef, Ansible.
- A list of one or more operating systems to test against. Ex: Ubuntu 12.04, Ubuntu 14.04, CentOS 6.5, or even a custom VM image specified by URL.
- A list of test suites to run.
- A “test/integration/test-suite-name/test-suite-name.yml” file which contains the Ansible playbook to be applied.
- One or more test files in “test/integration/test-suite-name/test-driver-name/”. For example, when using the BATS test-runner with a test suite named “default”, tests live in “test/integration/default/bats/”, where they are automatically discovered. Ex: “test/integration/default/bats/my-test.bats”.
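To make this concrete, here’s a sketch of what a minimal “.kitchen.yml” might look like for an Ansible role using the Vagrant driver and the kitchen-ansible provisioner. The platform names and suite name here are illustrative; adjust them to the OSes you actually care about:

```yaml
---
driver:
  name: vagrant

provisioner:
  name: ansible_playbook
  hosts: all

platforms:
  - name: ubuntu-14.04
  - name: centos-6.5

suites:
  - name: default
```

With this in place, Test Kitchen will build one test instance per platform/suite combination (here, “default-ubuntu-1404” and “default-centos-65”).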
Running Test Kitchen
Using Test Kitchen couldn’t be easier. From the directory that contains your “.kitchen.yml” file, just run “kitchen test” to automatically create your VMs, configure them, and run tests against them:
```
$ kitchen test
-----> Starting Kitchen (v1.4.1)
-----> Cleaning up any prior instances of
-----> Destroying ...
       Finished destroying (0m0.00s).
-----> Testing
-----> Creating ...
       Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'opscode-ubuntu-14.04'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: kitchen-ansible-package-caching-proxy-default-ubuntu-1404_default_1435180384440_80322
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
..
...
-----> Running bats test suite
 ✓ Accessing the apt-cacher-ng vhost should load the configuration page for Apt-Cacher-NG
 ✓ Hitting the apt-cacher proxy on the proxy port should succeed
 ✓ The previous command that hit ftp.debian.org should have placed some files in the cache
 ✓ Accessing the devpi server on port 3141 should return a valid JSON response
 ✓ Accessing the devpi server via the nginx vhost should return a valid JSON response
 ✓ Downloading a Python package via our PyPI proxy should succeed
 ✓ We should still be able to install Python packages when the devpi contianer's backend is broken
 ✓ The vhost for the docker registry should be available
 ✓ The docker registry's /_ping url should return valid JSON
 ✓ The docker registry's /v1/_ping url should return valid JSON
 ✓ The front-end serer's root url should return http 204
 ✓ The front-end server's /_status location should return statistics from our web server
 ✓ Accessing http://www.google.com through our proxy should always return a cache miss
 ✓ Downloading a file that is not in the cache should result in a cache miss
 ✓ Downloading a file that is in the cache should result in a cache hit
 ✓ Setting the header 'X-Refresh: true' should result in a bypass of the cache
 ✓ Trying to purge when it's not in the cache should return 404
 ✓ Downloading the file again after purging from the cache should yield a cache miss
 ✓ The yum repo's vhost should return HTTP 200

19 tests, 0 failures

       Finished verifying (1m52.26s).
-----> Kitchen is finished. (1m52.49s)
```
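Each ✓ line in that output corresponds to a single BATS test. As a rough sketch of what such a test file looks like (the URLs, ports, and paths here are assumptions specific to my example setup, not anything Test Kitchen mandates):

```bats
#!/usr/bin/env bats

# BATS discovers each @test block in this file and runs it; if any
# command in the block exits non-zero, the test fails.

@test "Hitting the apt-cacher proxy on the proxy port should succeed" {
  # --fail makes curl exit non-zero on HTTP errors (4xx/5xx)
  curl --silent --fail --output /dev/null http://localhost:3142/acng-report.html
}

@test "Accessing the devpi server on port 3141 should return a valid JSON response" {
  curl --silent --fail http://localhost:3141/ | grep -q "{"
}
```

Because BATS tests are just annotated Bash, anything you can script against the converged VM can become a test.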
And there you have it, one command to automate your entire VM testing workflow!
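When you’re iterating on a role, it’s also handy to run the phases that “kitchen test” chains together as individual commands:

```shell
kitchen create     # spin up the test VM(s) defined in .kitchen.yml
kitchen converge   # run the provisioner (here, Ansible) against them
kitchen verify     # run the test suites (e.g. BATS, via Busser)
kitchen login      # SSH into an instance to poke around or debug
kitchen destroy    # tear everything back down
```

“kitchen converge” in particular is great for a tight edit/apply/test loop, since it re-runs your playbook against an already-running VM instead of rebuilding from scratch.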
Giving individual developers on our team the ability to quickly run a suite of automated tests is a big win, but that’s only the first step. The workflow we’re planning is to have Jenkins also run these automated Ansible tests every time someone pushes to our git repo. If those tests succeed, we can automatically trigger a run of Ansible against our production inventory. If, on the other hand, the Jenkins job that runs the tests is failing (red), we can use that to prevent Ansible from running against our production inventory. That gate would let us validate infrastructure changes before they ever reach production.