Best VMworld 2014 Sessions

The videos from the VMworld 2014 Sessions have slowly been making their way online over the past few months. Some videos are freely available on the VMworld 2014 YouTube Playlist. Other videos are still only available to attendees of the conference who have credentials to access the VMworld 2014 video site. Hopefully all of the videos will be made available to the public on YouTube in the near future.

I’ve gone through most of the videos and found the following sessions to be especially helpful in our team’s day-to-day activities:

Testing Puppet Code with Vagrant

At Virtual Instruments we use Vagrant boxes to locally test our Puppet changes before pushing those changes into production. Here are some details about how we do this.

Puppet Support in Vagrant

Vagrant has built-in support for using Puppet as a machine provisioner, either by contacting a Puppet master to receive modules and manifests or by running “puppet apply” with a local set of modules and manifests (a.k.a. masterless Puppet). We chose masterless Puppet for our Vagrant test environment because it is simpler to set up.

Starting with a Box for the Base OS

Before we can use Puppet to provision our machine, we need a base OS with Puppet installed. At Virtual Instruments our R&D infrastructure is standardized on Ubuntu 12.04, which means we want our Vagrant base box to be an otherwise minimal installation of Ubuntu 12.04 with Puppet installed. Luckily this is a very common configuration, and pre-made Vagrant boxes are available for download at VagrantCloud.com. We’re going to use the box named “puppetlabs/ubuntu-12.04-64-puppet”.

If you are using a different OS you can search the Vagrant Cloud site for a Vagrant base box that matches the OS of your choice. See: https://vagrantcloud.com/discover/featured

If you can find a base box for your OS, but not one with Puppet pre-installed, you can use one of @mitchellh’s nifty Puppet-bootstrap scripts with a Vagrant shell provisioner to get Puppet installed into your base box. See the README included in that repo for details: https://github.com/hashicorp/puppet-bootstrap/blob/master/README.md
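
For example, a minimal “Vagrantfile” along these lines will run the bootstrap script before any Puppet provisioners. This is just a sketch: the box name is illustrative, and “ubuntu.sh” is one of the scripts from the puppet-bootstrap repo, assumed to be copied into the project directory.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # A base box for your OS that does not have Puppet pre-installed
  # (box name is illustrative)
  config.vm.box = "my-org/ubuntu-12.04-bare"

  # Install Puppet with the puppet-bootstrap script so that later
  # Puppet provisioners have something to run
  config.vm.provision :shell, path: "puppet-bootstrap/ubuntu.sh"
end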

The Vagrantfile

Having found a suitable base box, one can use the following “Vagrantfile” to start that box and invoke Puppet to provision the machine.

VAGRANTFILE_API_VERSION = "2"

# Set the following hostname to a name that Puppet will match against. ex:
# "vi-cron9.lab.vi.local"
MY_HOSTNAME = "vi-nginx-proxy9.lab.vi.local"


Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # From: https://vagrantcloud.com/search?utf8=✓&sort=&provider=&q=puppetlabs+12.04
  config.vm.box = "puppetlabs/ubuntu-12.04-64-puppet"
  config.vm.hostname = MY_HOSTNAME

  # Needed to load Hiera data for Puppet
  config.vm.synced_folder "hieradata", "/data/puppet-production/hieradata"

  # Vagrant/Puppet docs:
  #   http://docs.vagrantup.com/v2/provisioning/puppet_apply.html
  config.vm.provision :puppet do |puppet|
    puppet.facter = {
      "is_vagrant_vm" => "true"
    }
    puppet.hiera_config_path = "hiera.yaml"
    puppet.manifest_file  = "site.pp"
    puppet.manifests_path = "manifests"
    puppet.module_path = "modules"
    # puppet.options = "--verbose --debug"
  end

end

Breaking Down the Vagrantfile

Setting Our Hostname

# Set the following hostname to a name that Puppet will match against. ex:
# "vi-cron9.lab.vi.local"
MY_HOSTNAME = "vi-nginx-proxy9.lab.vi.local"

Puppet determines which resources to apply based on the hostname of our VM. For ease of use, our “Vagrantfile” has a variable called “MY_HOSTNAME” defined at the top of the file which allows users to easily define which node they want to provision locally.
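
The “site.pp” that Vagrant applies can then select per-node configuration with ordinary node definitions. A simplified sketch (the class names are illustrative, not our actual code):

# manifests/site.pp (sketch)
node 'vi-nginx-proxy9.lab.vi.local' {
  include profile::nginx_proxy  # illustrative class
}

node 'vi-cron9.lab.vi.local' {
  include profile::cron  # illustrative class
}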

Defining Which Box to Use

# From: https://vagrantcloud.com/search?utf8=✓&sort=&provider=&q=puppetlabs+12.04
config.vm.box = "puppetlabs/ubuntu-12.04-64-puppet"

The value for “config.vm.box” is the name of the box we found on vagrantcloud.com. This allows Vagrant to automatically download the base VM image from the Vagrant Cloud service.
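
If you would rather pre-fetch the image than have your first “vagrant up” download it, you can pull it ahead of time:

vagrant box add puppetlabs/ubuntu-12.04-64-puppet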

Puppet-Specific Configurations

  # Needed to load Hiera data for Puppet
  config.vm.synced_folder "hieradata", "/data/puppet-production/hieradata"

  # Vagrant/Puppet docs:
  #   http://docs.vagrantup.com/v2/provisioning/puppet_apply.html
  config.vm.provision :puppet do |puppet|
    puppet.facter = {
      "is_vagrant_vm" => "true"
    }
    puppet.hiera_config_path = "hiera.yaml"
    puppet.manifest_file  = "site.pp"
    puppet.manifests_path = "manifests"
    puppet.module_path = "modules"
    # puppet.options = "--verbose --debug"
  end

Here we are setting up the configuration of the Puppet provisioner. See the full documentation for Vagrant’s masterless Puppet provisioner at: https://docs.vagrantup.com/v2/provisioning/puppet_apply.html
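
The “hiera_config_path” setting points at a Hiera configuration whose datadir must line up with the synced folder we mounted earlier. Here is a minimal sketch, assuming a plain YAML backend (the hierarchy itself is illustrative):

# hiera.yaml (sketch); the :datadir: matches the synced folder mount point
:backends:
  - yaml
:yaml:
  :datadir: /data/puppet-production/hieradata
:hierarchy:
  - "%{::fqdn}"
  - common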

Basically, this provisioner configuration:

  • Sets up a shared folder to make our Hiera data available to the guest OS
  • Sets a custom Facter fact called “is_vagrant_vm” to “true”. This fact can then be used by our manifests to handle edge-cases around running VMs locally, like routing collectd/SAR data to a non-production Graphite server to avoid polluting the production Graphite server (see the sketch after this list)
  • Tells the Puppet provisioner where the root Puppet manifest file is and where necessary Puppet modules can be found.
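
To illustrate that second bullet, a manifest can branch on the fact. A hedged sketch, where the variable and host names are illustrative rather than our actual code:

# Facter facts arrive in Puppet as strings, so compare against 'true'
if $::is_vagrant_vm == 'true' {
  $graphite_host = 'graphite-dev.lab.vi.local'  # illustrative dev host
} else {
  $graphite_host = 'graphite.lab.vi.local'      # illustrative prod host
}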

Conclusion

Vagrant is a powerful tool for testing Puppet code changes locally. With a simple “vagrant up” one can fully provision a VM from scratch. One can also use the “vagrant provision” command to locally test incremental updates to Puppet code as it is being developed, or to test changes to long-running mutable VMs.
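
In practice, the edit-test loop looks something like this:

# Build the VM from scratch, applying Puppet along the way
vagrant up

# Edit the modules/manifests, then re-apply Puppet to the running VM
vagrant provision

# Start over from a clean slate
vagrant destroy -f && vagrant up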

Building Vagrant Boxes with Nested VMs using Packer

In “Improving Developer Productivity with Vagrant” I discussed the productivity benefits gained from using Vagrant in our software development tool chain. Here are some more details about the mechanics of how we created those Vagrant boxes as part of every build of our product.

Using Packer to Build VMware-Compatible Vagrant Boxes

Packer is a tool for creating machine images, also written by HashiCorp, the authors of Vagrant. It can build machine images for almost any type of environment, including Amazon AWS, Docker, Google Compute Engine, KVM, Vagrant, VMware, Xen, and more.

We used Packer’s built-in VMware builder and Vagrant post-processor to create the Vagrant boxes for users to run on their local desktops/laptops via VMware Fusion or Workstation.
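
A heavily trimmed sketch of what such a Packer template looks like (the ISO URL and checksum are placeholders; see the Packer docs for the full set of “vmware-iso” options):

{
  "builders": [{
    "type": "vmware-iso",
    "iso_url": "http://example.com/ubuntu-12.04-server-amd64.iso",
    "iso_checksum_type": "md5",
    "iso_checksum": "<md5 of the ISO>",
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "shutdown_command": "echo vagrant | sudo -S shutdown -P now"
  }],
  "post-processors": ["vagrant"]
}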

Note: this required each user to install Vagrant’s for-purchase VMware plugin. In our experience running Vagrant boxes locally, the VMware virtualization providers delivered far better IO performance and stability than the free Oracle VirtualBox provider. In short, the for-purchase Vagrant-VMware plugin was worth every penny!
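
Installing and licensing the plugin is a one-time step per workstation (“license.lic” stands in for the license file purchased from HashiCorp):

vagrant plugin install vagrant-vmware-workstation
vagrant plugin license vagrant-vmware-workstation license.lic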

Running VMware Workstation VMs Nested in ESXi

One of the hurdles I came across in integrating the building of the Vagrant boxes into our existing build system is that Packer’s VMware builder needs to spin up a VM using Workstation or Fusion in order to configure the Vagrant box. Given that our builds were already running in static VMs, this meant we needed to run Workstation VMs nested within an ESXi VM with a Linux guest OS!

This sort of VM nesting was somewhat complicated to set up in the days of vSphere 5.0, but with vSphere 5.1+ it has become much simpler: make sure the ESXi VM is running with “Virtual Hardware Version 9” or newer, and enable “Hardware assisted virtualization” for the VM within the vSphere web client.
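
Under the hood, that checkbox corresponds to a single line in the ESXi VM’s .vmx file (vSphere 5.1+), which can also be set directly:

vhv.enable = "TRUE"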

Here’s what the correct configuration for supporting nested VMs looks like:

[Screenshot: the VM’s settings in the vSphere web client, with “Hardware assisted virtualization” enabled]

Packer’s Built-in Remote vSphere Hypervisor Builder

One question that an informed user of Packer may correctly ask is: “Why not use Packer’s built-in Remote vSphere Hypervisor Builder and create the VM directly on ESXi? Wouldn’t this remove the need for running nested VMs?”

I agree that this would be a better solution in theory. There are several reasons why I chose to go with nested VMs instead:

  1. The “Remote vSphere Hypervisor Builder” requires manually running an “esxcli” command on your ESXi boxes to enable some sort of “GuestIP hack”. Doing this type of configuration on our production ESXi cluster seemed sketchy to me.
  2. The “Remote vSphere Hypervisor Builder” doesn’t work through vSphere, but instead directly ssh’es into your ESXi boxes as a privileged user in order to create the VM. The login credentials for that privileged ESXi/ssh user must be kept in the Packer build script or some other area of our build system. Again, this seems less than ideal to me.
  3. As far as I can tell from the docs, the “Remote vSphere Hypervisor Builder” only works with the “vmware-iso” builder and not the “vmware-vmx” builder. This would’ve painted us into a corner as we had plans to switch from the “vmware-iso” builder to the “vmware-vmx” builder once it had become available.
  4. The “Remote vSphere Hypervisor Builder” was not available when I implemented our nested VM solution because we were early adopters of Packer. It was easier to stick with a working solution that we already had 😛

Automating the Install of VMware Workstation via Puppet

One other mechanical piece I’ll share is how we automated the installation of VMware Workstation 10.0 into our static build VMs. Since all of the build VM configuration is done via Puppet, we could automate the installation of Workstation 10 with the following bit of Puppet code:

# Install VMware Workstation 10
$vmware_installer = '/mnt/devops/software/vmware/VMware-Workstation-Full-10.0.0-1295980.x86_64.bundle'
$vmware_installer_options = '--eulas-agreed --required'

exec { 'Install VMware Workstation 10':
  command => "${vmware_installer} ${vmware_installer_options}",
  # Makes the exec idempotent: skip the installer if this file already exists
  creates => '/usr/lib/vmware/config',
  user    => 'root',
  # The installer bundle lives on the /mnt/devops mount, and kernel headers
  # are needed to build the VMware kernel modules
  require => [Mount['/mnt/devops'], Package['kernel-default-devel']],
}