VMware AppCatalyst First Impressions

As previously mentioned in my DockerCon 2015 Wrap-Up, one of the more practical announcements from last week’s DockerCon was VMware’s release of a free variant of Fusion called AppCatalyst. The availability of AppCatalyst, along with a corresponding plugin for Vagrant written by Fabio Rapposelli, gives developers a less buggy and more performant alternative to Oracle’s VirtualBox as their VM provider for Vagrant.

Here is the announcement itself along with William Lam‘s excellent guide to getting started w/AppCatalyst & the AppCatalyst Vagrant plugin.

Taking AppCatalyst for a Test Drive

One of the first things I did when I returned to work after DockerCon was to download AppCatalyst and its Vagrant plugin, and take them for a spin. By and large, it works as advertised. Getting the VMware Project Photon VM running in Vagrant per William’s guide was a cinch.

Having gotten that Photon VM working, I immediately turned my attention to getting an arbitrary “vmware_desktop” Vagrant box from HashiCorp’s Atlas working. Atlas is HashiCorp’s commercial service, but they make a large collection of community-contributed Vagrant boxes for various infrastructure platforms freely available. I figured that I should be able to use Vagrant to automatically download one of the “vmware_desktop” boxes from Atlas and then spin it up locally with AppCatalyst using only a single command, “vagrant up”.
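
For reference, the whole flow boils down to something like the following. The box name is just a placeholder for any “vmware_desktop” box on Atlas, the provider name is the one registered by Fabio’s plugin, and the first two commands are one-time setup:

$ vagrant plugin install vagrant-vmware-appcatalyst
$ vagrant init <atlas-user>/<some-vmware_desktop-box>
$ vagrant up --provider=vmware_appcatalyst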

In practice, I hit an issue for which Fabio was quick to provide a workaround: https://github.com/vmware/vagrant-vmware-appcatalyst/issues/5

The crux of the issue is that AppCatalyst is geared towards provisioning Linux VMs and not other OS types, e.g. Windows. This is quite understandable, as VMware would not want to cannibalize sales from folks who buy Fusion to run Windows on their Macs. Unfortunately, this OS-identification logic seems to be driven by the “guestos” setting in the box’s .VMX file, and apparently many of the “vmware_desktop” boxes on Atlas do not use a value for that VMX key that AppCatalyst will accept. As Fabio suggested, the workaround is to override that setting with a value that AppCatalyst will accept.
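
If you’d rather not wait for a fixed box, the same idea can also be applied by hand by editing the guest OS value inside the .VMX file that ships in the downloaded box. This is the blunt cousin of the Vagrantfile-level override described in the issue above; the box path pattern and the “ubuntu-64” value here are illustrative, not something AppCatalyst specifically documents:

# Vagrant unpacks downloaded boxes under ~/.vagrant.d/boxes/<box>/<version>/<provider>/
# (the VMX key may be spelled "guestos" or "guestOS" depending on the box)
$ sed -i '' 's/^guest[oO][sS].*/guestOS = "ubuntu-64"/' \
    ~/.vagrant.d/boxes/<box-name>/*/vmware_desktop/*.vmx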

A Tip for Starting the AppCatalyst Daemon Automatically

Another minor issue I hit when trying AppCatalyst for the first time was that I’d forgotten to manually start the AppCatalyst daemon, “/opt/vmware/appcatalyst/bin/appcatalyst-daemon”. D’oh!

Because I found it annoying to launch a separate terminal window to start this daemon every time I wanted to interact with AppCatalyst, I followed through on a co-worker’s suggestion to automate the starting of this process on my Mac via launchd. (Thanks Dan K!)

Here’s how I did it:

$ cat >~/Library/LaunchAgents/com.vmware.appcatalyst.daemon.plist <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.vmware.appcatalyst.daemon</string>
    <key>Program</key>
    <string>/opt/vmware/appcatalyst/bin/appcatalyst-daemon</string>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/appcatalyst.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/appcatalyst.log</string>
  </dict>
</plist>
EOF
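
If you don’t feel like logging out and back in, you should also be able to load the agent immediately (assuming the plist path used above):

$ launchctl load ~/Library/LaunchAgents/com.vmware.appcatalyst.daemon.plist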

Once the agent has been loaded (or after logging out and logging back in), the AppCatalyst daemon should be running and its log file will be at “/tmp/appcatalyst.log”. Ex:

$ tail -f /tmp/appcatalyst.log
2015/06/30 09:41:03 DEFAULT_VM_PATH=/Users/dtehranian/Documents/AppCatalyst
2015/06/30 09:41:03 DEFAULT_PARENT_VM_PATH=/opt/vmware/appcatalyst/photonvm/photon.vmx
2015/06/30 09:41:03 DEFAULT_LOG_PATH=/Users/dtehranian/Library/Logs/VMware
2015/06/30 09:41:03 PORT=8080
2015/06/30 09:41:03 Swagger path: /opt/vmware/appcatalyst/bin/swagger
2015/06/30 09:41:03 appcatalyst daemon started.

Since AppCatalyst is still in Tech Preview, I’m hoping VMware adds this sort of auto-start functionality for the daemon before the final release of the software.

Conclusion

If you or your development team is using VirtualBox as your VM provider for Vagrant, go try out AppCatalyst. It’s built on Fusion’s significantly better technical core, and if it grows in popularity, maybe one day it will become the default provider for Vagrant! 🙂

DockerCon 2015 Wrap-Up

I attended DockerCon 2015 in San Francisco from June 22-23. The official wrap-ups for Day 1 and Day 2 are available from Docker, Inc. Keynote videos are posted here. Slides from every presentation are available here.

Here are my personal notes and take-aways from the conference:

The Good

  • Attendance was much larger than I expected, reportedly at 2,000 attendees. It reminded me a lot of VMworld back in 2007. Lots of buzz.
  • There were many interesting announcements in the keynotes:
    • Diogo Mónica unveiled and demoed Notary, a tool for publishing and verifying the authenticity of content. (video)
    • Solomon Hykes announced that service discovery is being added into the Docker stack. Currently one needs to use external tools like registrator, Consul, and etcd for this.
    • Solomon announced that multi-host networking is coming.
    • Solomon announced that Docker is splitting out its internal plumbing from the Docker daemon. First up is splitting out the container runtime plumbing into a new project called RunC. The net effect is that this creates a reusable component that other software can use for running containers, and it will also make it easy to run containers without the full Docker daemon.
    • Solomon announced the Open Container Project & Open Container Format – Basically Docker, Inc. and CoreOS have buried the hatchet and are working with the Linux Foundation and over a dozen other companies to create open standards around containers. Libcontainer and RunC are being donated to this project by Docker, while CoreOS is contributing the folks who were working on AppC. More info on the announcement here.
    • Docker revealed how they will start to monetize their success. They announced an on-prem Docker registry with a support plan starting at $150/month for 10 hosts.
  • Diptanu Choudhury unveiled Netflix’s Titan system in Reliably Shipping Containers in a Resource Rich World using Titan. Titan is a combination of Docker and Apache Mesos, providing a highly resilient and dynamic PaaS that is native to public clouds and runs across multiple geographies.
  • VMware announced the availability of AppCatalyst, a free, CLI-only version of VMware Fusion. That software, combined with the Vagrant plugin for AppCatalyst that Fabio Rapposelli released, means that developers no longer need to pay for VMware Fusion in order to have a more stable and performant alternative to Oracle’s VirtualBox for use with Vagrant. William Lam has written a great Getting Started Guide for AppCatalyst.
  • The prize for most entertaining presentation goes to Bryan Cantrill for Running Aground: Debugging Docker in Production. Praise for his talk & funny excerpts from it were all over Twitter.

The Bad

I was pretty disappointed with most of the content of the presentations on the “Advanced” track. There were a lot of fluffy talks about micro-services, service discovery, and auto-scaling groups. Besides lacking deep technical detail, these talks frustrated me because there was essentially no net-new content for anyone who frequents meetups in the Bay Area, follows Hacker News, or follows a few key accounts on Twitter.

Speaking to other attendees, I found that I was not the only one who felt that these talks were very high-level and repetitive. Bryan Cantrill even alluded to this in his own talk when he mentioned “micro-services” for the first time, adding, “Don’t worry, this won’t be one of those talks.”

Closing Thoughts

I had a great time at DockerCon 2015. The announcements and presentations around security and networking were particularly interesting to me because there was genuinely new material being announced in those areas. I could have done w/o all of the fluffy talks about micro-services and auto-scaling.

It was also great to meet new people and catch up with former colleagues. I got to hear a lot of interesting ways developers are using Docker in their development and production environments and can’t wait to implement some of the things I learned at my current employer.

Testing Ansible Roles with Test Kitchen

Recently while attending DevOps Days Austin 2015, I participated in a breakout session focused on how to test code for configuration management tools like Puppet, Chef, and Ansible. Having started to use Ansible to manage our infrastructure at Delphix, I was searching for a way to automate the testing of our configuration management code across a variety of platforms, including Ubuntu, CentOS, RHEL, and Delphix’s custom Illumos-based OS, DelphixOS. Testing across all of those platforms is a daunting task, to say the least!

Intro to Test Kitchen

The conversation in that breakout session introduced me to Test Kitchen (GitHub), a tool that I’ve been very impressed by and have had quite a bit of fun writing tests for. Test Kitchen is a tool for automated testing of configuration management code written for tools like Ansible. It automates the process of spinning up test VMs, running your configuration management tool against those VMs, executing verification tests against those VMs, and then tearing down the test VMs.

What makes Test Kitchen so powerful and useful is its modular design: drivers for provisioning test VMs locally or in the cloud, provisioners for different configuration management tools, and pluggable test runners (e.g. BATS or Serverspec) for verification.

Using Test Kitchen

After learning about Test Kitchen at the DevOps Days conference, I did some more research and stumbled across the following presentation which was instrumental in getting started with Test Kitchen and Ansible: Testing Ansible Roles with Test Kitchen, Serverspec and RSpec (SlideShare).

In summary, you need to add three files to your Ansible role to begin using Test Kitchen (a minimal sketch of all three follows this list):

  • A “.kitchen.yml” file at the top-level. This file describes:
    • The driver to use for VM provisioning. Ex: Vagrant, AWS, Docker, etc.
    • The provisioner to use. Ex: Puppet, Chef, Ansible.
    • A list of one or more operating systems to test against. Ex: Ubuntu 12.04, Ubuntu 14.04, CentOS 6.5, or even a custom VM image specified by URL.
    • A list of test suites to run.
  • A “test/integration/test-suite-name/test-suite-name.yml” file which contains the Ansible playbook to be applied.
  • One or more test files in “test/integration/test-suite-name/test-driver-name/”. For example, when using the BATS test-runner to run a test suite named “default”: “test/integration/default/bats/my-test.bats”.
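
To make that concrete, here is a minimal sketch of those three files for a hypothetical suite named “default”. The file contents are illustrative rather than copied from a real role, and the provisioner options reflect my reading of the kitchen-ansible docs, so double-check them against your own setup:

$ mkdir -p test/integration/default/bats

$ cat >.kitchen.yml <<EOF
---
driver:
  name: vagrant            # requires the kitchen-vagrant gem

provisioner:
  name: ansible_playbook   # from the kitchen-ansible gem
  hosts: all
  playbook: test/integration/default/default.yml

platforms:
  - name: ubuntu-14.04

suites:
  - name: default
EOF

$ cat >test/integration/default/default.yml <<EOF
---
- hosts: all
  roles:
    - role-under-test      # hypothetical role name
EOF

$ cat >test/integration/default/bats/role-under-test.bats <<EOF
#!/usr/bin/env bats
# Busser discovers the bats/ directory and runs these tests on the test VM.

@test "the role should have dropped its config file" {
  [ -f /etc/role-under-test.conf ]
}
EOF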

Example Code

A full example of Test Kitchen w/Ansible is available via the delphix.package-caching-proxy Ansible role in Delphix’s GitHub repo, which contains all of the aforementioned files/directories.

Running Test Kitchen

Using Test Kitchen couldn’t be easier. From the directory that contains your “.kitchen.yml” file, just run “kitchen test” to automatically create your VMs, configure them, and run tests against them:

$ kitchen test
-----> Starting Kitchen (v1.4.1)
-----> Cleaning up any prior instances of <default-ubuntu-1404>
-----> Destroying <default-ubuntu-1404>...
 Finished destroying <default-ubuntu-1404> (0m0.00s).
-----> Testing <default-ubuntu-1404>
-----> Creating <default-ubuntu-1404>...
 Bringing machine 'default' up with 'virtualbox' provider...
 ==> default: Importing base box 'opscode-ubuntu-14.04'...
==> default: Matching MAC address for NAT networking...
 ==> default: Setting the name of the VM: kitchen-ansible-package-caching-proxy-default-ubuntu-1404_default_1435180384440_80322
 ==> default: Clearing any previously set network interfaces...
 ==> default: Preparing network interfaces based on configuration...
 default: Adapter 1: nat
 ==> default: Forwarding ports...
 default: 22 => 2222 (adapter 1)
 ==> default: Booting VM...
 ==> default: Waiting for machine to boot. This may take a few minutes...

..  ...

-----> Running bats test suite
 ✓ Accessing the apt-cacher-ng vhost should load the configuration page for Apt-Cacher-NG
 ✓ Hitting the apt-cacher proxy on the proxy port should succeed
 ✓ The previous command that hit ftp.debian.org should have placed some files in the cache
 ✓ Accessing the devpi server on port 3141 should return a valid JSON response
 ✓ Accessing the devpi server via the nginx vhost should return a valid JSON response
 ✓ Downloading a Python package via our PyPI proxy should succeed
 ✓ We should still be able to install Python packages when the devpi contianer's backend is broken
 ✓ The vhost for the docker registry should be available
 ✓ The docker registry's /_ping url should return valid JSON
 ✓ The docker registry's /v1/_ping url should return valid JSON
 ✓ The front-end serer's root url should return http 204
 ✓ The front-end server's /_status location should return statistics from our web server
 ✓ Accessing http://www.google.com through our proxy should always return a cache miss
 ✓ Downloading a file that is not in the cache should result in a cache miss
 ✓ Downloading a file that is in the cache should result in a cache hit
 ✓ Setting the header 'X-Refresh: true' should result in a bypass of the cache
 ✓ Trying to purge when it's not in the cache should return 404
 ✓ Downloading the file again after purging from the cache should yield a cache miss
 ✓ The yum repo's vhost should return HTTP 200

 19 tests, 0 failures
 Finished verifying <default-ubuntu-1404> (1m52.26s).
-----> Kitchen is finished. (1m52.49s)

And there you have it, one command to automate your entire VM testing workflow!
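
When iterating on a role, it’s often faster to run the individual phases instead of the full destroy/create/converge/verify cycle that “kitchen test” performs. These are all standard Test Kitchen subcommands:

$ kitchen create     # spin up the test VM(s) only
$ kitchen converge   # run the provisioner (i.e. your Ansible playbook)
$ kitchen verify     # run the test suites (e.g. the BATS tests)
$ kitchen login      # ssh into a test VM to poke around
$ kitchen destroy    # tear the test VM(s) back down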

Next Steps

Giving individual developers on our team the ability to quickly run a suite of automated tests is a big win, but that’s only the first step. The workflow we’re planning is to have Jenkins also run these automated Ansible tests every time someone pushes to our git repo. If those tests succeed we can automatically trigger a run of Ansible against our production inventory. If, on the other hand, the Jenkins job which runs the tests is failing (red), we can use that to prevent Ansible from running against our production inventory. This would be a big win for validating infrastructure changes before pushing them to production.
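
As a rough sketch, the gating job’s build step could be as simple as the following. The inventory and playbook names are made up for illustration, and the production run would live in a separate downstream job that only fires when the test job is green:

# Hypothetical Jenkins build step: gate on the Test Kitchen run.
$ kitchen test --destroy=always

# Hypothetical downstream job, triggered only when the job above succeeds:
$ ansible-playbook -i production-inventory site.yml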


Ansible Role for Package Hosting & Caching

The Operations Team at Delphix has recently published an Ansible role called delphix.package-caching-proxy. We are using this role internally to host binary packages (ex. RPMs, Python “pip” packages), as well as to locally cache external packages that our infrastructure and build process depend upon.

This role provides a private Docker registry, a PyPI server/cache (devpi), an apt-cacher-ng proxy for apt packages, and a yum repository, all served and cached through a common Nginx front-end.

It also provides the hooks for monitoring the front-end Nginx server through collectd. More details are in the role’s README (linked below).
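
For the collectd piece, the relevant wiring is collectd’s stock nginx plugin pointed at the front-end’s status page. The config path and URL below are illustrative (the role exposes a “/_status” location on the front-end, per its test suite):

$ cat >/etc/collectd.d/nginx.conf <<EOF
# collectd's bundled nginx plugin scrapes Nginx's stub_status output.
LoadPlugin nginx
<Plugin nginx>
  URL "http://localhost/_status"
</Plugin>
EOF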

Why Is this Useful?

This sort of infrastructure can be useful in a variety of situations, for example:

  • When your organization has remote offices/employees whose productivity would benefit from having fast, local access to large binaries like ISOs, OVAs, or OS packages.
  • When your dev process depends on external dependencies from services that are susceptible to outages, ex. NPM or PyPI.
  • When your dev process depends on third-party artifacts that are pinned to certain versions and you want a local copy of those pinned dependencies in case those specific versions become unavailable in the future.

Sample Usage

While there are a variety of configuration options for this role, the default configuration can be deployed with an Ansible playbook as simple as the following:

---
- hosts: all
  roles:
    - delphix.package-caching-proxy
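
Assuming you pull the role down from Ansible Galaxy, deploying it is the usual two-step (the inventory and playbook file names are whatever you use locally):

$ ansible-galaxy install delphix.package-caching-proxy
$ ansible-playbook -i hosts site.yml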

Underneath the Covers

This role works by deploying a front-end Nginx webserver to do HTTP caching, and by configuring several Nginx server blocks (analogous to Apache vhosts) that delegate to the Docker containers running the apps themselves, e.g. the Docker Registry and the PyPI server.
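
As an illustration of that pattern, each server block is little more than a proxy_pass to a container port published on the loopback interface. The server name and file path below are hypothetical, though 5000 is the Docker registry’s conventional port:

$ cat >/etc/nginx/conf.d/docker-registry.conf <<'EOF'
# Illustrative sketch: a front-end vhost that delegates to the Docker
# registry container published on the host's loopback interface.
server {
    listen 80;
    server_name docker-registry.example.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $http_host;
    }
}
EOF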

Downloading, Source Code, and Additional Documentation/Examples

This role is hosted in Ansible Galaxy at https://galaxy.ansible.com/list#/roles/3008, and the source code is available on GitHub at: https://github.com/delphix/ansible-package-caching-proxy.

Additional documentation and examples are available in the README.md file in the GitHub repo, at: https://github.com/delphix/ansible-package-caching-proxy/blob/master/README.md

Acknowledgements

Shoutouts to some deserving folks:

  • My former Development Infrastructure Engineering Team at VMware, who proved this idea out by implementing a similar set of caching proxy servers for our global remote offices in order to improve developer productivity.
  • The folks who conceived the Snakes on a Plane Docker Global Hack Day project.