Succeeding through Laziness and Open Source

Back in mid-2014 I was in the midst of Docker-izing the build process at Virtual Instruments. As part of that work I’d open sourced one component of that system, the Docker-in-Docker Jenkins build slave which I’d created.

While claiming that I was driven by altruistic motivations when posting this code to GitHub (GH) would make for a great after-the-fact narrative, I have to admit that the real reasons for making the code publicly available were much more practical:

  • At the time the Docker image repositories on the Docker Hub Registry had to be tied to a GitHub repo (They’ve added Bitbucket support since then).
  • I was too cheap to pay for a private GitHub repo.

… And thus the code for the Docker-in-Docker Jenkins slave became open source! 😀

Unfortunately, making this image publicly available presented some challenges soon thereafter: Folks started linking their blog posts to it, people I’d never met emailed me asking for help getting set up with this system, others started filing issues against the project on either GH or the Docker Hub Registry, and I started receiving pull requests (PRs) to my GH repo.

Having switched employers just a few months after posting the code to GH, dealing with the issues and PRs was a bit of a challenge: My new employer didn’t have a Dockerized build system (yet), and short of setting up my own personal Jenkins server and Dockerized build slaves, there was no way for me to verify issues/fixes/PRs for this side-project. And so “tehranian/dind-jenkins-slave” stagnated on GH with relatively little participation from me.

Having largely forgotten about this project, I was quite surprised a few weeks ago when, while perusing Disqus’s GH repos, I discovered that the engineering team at Disqus had forked my repo and had been actively committing changes to their fork!

Their changes had:

  • Optimized the container’s layers to make it smaller in size,
  • Updated the image to work with new versions of Docker,
  • And also modified some environment variable names to avoid collisions with names that popular frameworks would use.

Prompted by this, I went back to my own GH repo, looked at the graph of all other forks, and saw that several others had forked my GH repo as well.

One such fork had updated my image to work with Docker Swarm and also to be able to easily use SSH keys for authenticating with the build slave instead of using password-based auth.

“How cool!”, I thought. I’d put an idea into the public domain a year ago, and others had found it and improved it in ways that I couldn’t have imagined. Further, their improvements were now available for me and others to use!

My Delphix colleague Michael Coyle summed this all up very nicely, saying “As a software developer I can only realistically work for one organization at a time. Open source allows developers from different organizations to collaborate with each other without boundaries. In that way one actually can contribute to more than one organization at once.”

In hindsight I’m absolutely delighted that my unwillingness to purchase a private GitHub repo led to me contributing the Docker-in-Docker Jenkins slave to the public domain. There was nothing proprietary that Virtual Instruments could have used in its product, and by making it available, other organizations like Disqus and CloudBees have been able to benefit, along with software developers on the other side of the planet. How exciting!

Managing Secrets with Ansible Vault – The Missing Guide (Part 2 of 2)

(This post is part 2/2 in a series. For part 1 see: Managing Secrets with Ansible Vault – The Missing Guide (Part 1 of 2))

How to use Ansible Vault with Test Kitchen

Once you’ve codified all of your secrets into Ansible “var files” and encrypted them with Ansible Vault, you’ll probably want to test the deployment of these secrets with Test Kitchen. Unfortunately you will quickly find that Test Kitchen does not play nicely with Vault: in order for Test Kitchen to run “ansible-playbook”, it now needs the password to your Vault so that it can decrypt the secrets within the var files.

How does the “kitchen-ansible” plugin expect to receive the password to your Vault? Via a plain-text file on your filesystem, as specified by the “ansible_vault_password_file” parameter in your “.kitchen.yml” file. Oh boy!
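
For reference, this is roughly what that configuration looks like in a “.kitchen.yml” (the password-file path is just illustrative):

# .kitchen.yml -- the approach we'd rather avoid
provisioner:
  name: ansible_playbook
  # A plain-text copy of the Vault password on every developer's machine. Yikes!
  ansible_vault_password_file: ~/.vault_pass.txt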

This does not seem like a scalable solution to me… I hardly trust myself to manage a plain-text file with the password to our Vault. Beyond that, I would be terrified to let an entire organization of folks know the password to the Vault and instruct them to store that password in a plain-text file on their own respective file systems just so that they could run Test Kitchen tests as they iterate with Ansible. In practice this would be only marginally better than simply checking the secrets into git as plain text, as all this structure around Vault and Ansible vars would only be pushing the problem of secret management one level higher.

So how can we test with Test Kitchen when using Ansible Vault? Here’s a nifty solution to the problem that builds upon the solution that we implemented in Part 1 of this guide:

  • Define a well-known Unix hostname for your Test Kitchen VM. Ex: “test-kitchen”
  • Create two versions of your vars files: one for production which is encrypted, and one for your test environment which is unencrypted. The structure of the files will be largely the same (e.g. the files to be placed, with their respective owner, group, and mode), but the contents of the files for production will differ from those for your test environment.
  • In “tasks/main.yml”, use “include_vars” to include the appropriate var file for whichever environment you happen to be in. This can be done by using the “with_first_found” arg to “include_vars”. See example below.
# .kitchen.yml
---
# Set the hostname of our Test Kitchen-created VM to be “test-kitchen”
driver:
  name: vagrant
  vm_hostname: test-kitchen
...<snip>...

##########

# vars/vpn-secrets-prod.yml - A Vault-encrypted file
$ANSIBLE_VAULT;1.1;AES256
34336333316361306432303864336464623165316461396266626562393232316565383263663234
3963633535363737613136656535343436613335636663380a373766653966663337666539613166
32313738303263303130353665333031373930353938653766653732623061326462633065393134
3135386639333637630a393439343733616439373731383932383562356164633832363639636633
64373237333661653066346566366135326539636564343632666363663866653264396564396162
62353461326435373433633034313338376265396130363965313464656332373737306462323433
34646361363065656331336337313763313939303533646138323834336330323533353239363663
...<snip>...

# vars/vpn-secrets-test-kitchen.yml
---
vpn_secret_files:
  /etc/openvpn/easy-rsa/keys/ec2-openvpn.key:
    owner: root
    group: root
    mode: "u=r,go="
    content: |
      -----BEGIN PRIVATE KEY-----
      MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQD5koXgI24E360f
      nhxCfOPVORzFW1CN7u/zOQdvKoIStogF0UQifDCnY/POEjoBmzBrg/UyAmsqLIli
      xMtRIuvEhwaGEUQPoZNCaRW+1XtJ3kDvr9MVTlJTcNGOlGe/E+HyAKBq5vinxzzM
      9ba8M9Nc1PQ93B1OTUY1QGHVYRvSFYDJ5Fnz23xKeNsnY3hmRkV7CDZXSdy9nbmy
      1X9uz7z5bG7PKUVD3JZjI75CHAEDJKtscBv9ez/z16YTxwahIL3CXfqBq8peyAZ0
      n4Mzj4Lt8Cwaw2Kw3w3gMhbhf4fy284+hYqHe9uqYJC6dJJSKDIXqoLSD+e8aN+v
      BAEQcAWXAgMBAAECggEAbmHJ6HqDHJC5h3Rs11NZiWL7QKbEmCIH6rFcgmRwp0oo
      GzqVQhNfiYmBubECCtfSsJrqhbXgJAUStqaHrlkdogx+bCmSyr8R3JuRzJerMd6l
      Jd3EJHZBnzoU1VT6Fd77Xge868tASySp1ZUPv2nEoBhn9jw2kf1HgiH5o2CR53ZP
      pnL72Ng7MHpKuyoAZ9DtUU7yGG4RTCN2JuPGD6IwKoXBs1b7tqsMncz86u6Iibwk
      Np4j3vPmSLfQxvBP85T0xzSURlnP+bFCaJDPfXYIgDLROkrFAgJ2ADCm4gwfk93i
      Z/wnk8tFjnxUy2V5UbtWqqkVHmvdHHCc/6bZfcNOsQKBgQD/v94YX3vhgZRiz1kZ
      c0v2lxFZqNgMPC7EADmO34nFq7KtmVXYQfpoiooGDfQXTqfVGQsyTcpg5HLZvlyb
      qm9oaXpZY4yP/SLF6Pc00/iDTleSxGROyqhsaBotXpqSSC3rv92D9Zas/Xdz3lHD
      NSY9EVsiFId7O4OkvLuZVDvZQwKBgQD50Rs873/yUdyCwKx9/GF4yWVRg7//FTyQ
      Cj1KCBK5tDqOc+hiIS1GF0HRkcvIot71owTe+PG9OouXlUuxWrtc+fzgGSPaYjMp
      Ub69EcSNtUsK8MUS+VADbR5VDzS27OM1g+pJO7BbHpPWuEI1cjYmW/+3cCzFYnIV
      5z6OctbjHQKBgEVQWP8+EbMijXbiP4G4T+Q7OUaVjkhynzIb5X2ldA+Q41JNdoiw
      CRAATDwr1/XhKXeF3BT8JFdyUvZUs4C1BpDD1ZcYdeYocx40b5tvv7DGsNFkTNNV
      9aO76yxUsYvn6Bo22/CBxR6Ja7CJlptTclOmuo5YBggOLzWcuTNrMvVFAoGASIoV
      lK4ewuhOVZFJBRRB4Wbpiq/tEk7CVTkD7vlFJrNUxYSWl9f2Y4HhVM83Ez1n7H+3
      rF8xIrdbTVrGresguLDGYvQp2wHkxTy9W/1Ky7M25ShgsU+/kh8fTaeqsOs8Vo/F
      ehpg7TSFzTWX1Bkj7COOr19dQLuDUSTin05tY2kCgYB35ZHVDMR6TlW0Kp/l7gAx
      FQx5hojllzHr3RRv8a4rBbhsdAJGBr5QHZbzVeuw1z6NlDc/4brer3y52FnnHbD3
      fkUrvh+g1xHeXF4Yekr5Mu2D7PoQoFRRai2hjPnIHRLmHI45EPri3USoHuNPl+qB
      l23chS70zQ9VDmqEs9gjLA==
      -----END PRIVATE KEY-----
  /etc/openvpn/easy-rsa/keys/ca.key:
    owner: root
    group: root
    mode: "u=r,go="
    content: |
      -----BEGIN PRIVATE KEY-----
      MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDPm22e2QTeTnLN
      PT//6kyB8tM/2kE6+LsFD3TFA4XvS3gwNZLybjXpPtncF4qLxjq3c4uSBp2tuAa2
      VvWUCAyQX4EcOuCFhh1AIUHX9O4F2JhLtNH366D6LmfGE7Lck85R6bzErYJ5OzBN
      /3WSGtWmLbQWhXTvNwG5re17Ds7DLQ6/XRXCg91lAbtGqYCvw9F6X8N3VNdcovqN
      Ud+tJ4XjmGfPD8ZgSk/iVKeLzz5fuNxON+ygdUJ9IQJGu7kvJOhWD1F3p3lzuS4E
      7zyR8r9QK6lGdk2/ifmY5f+tmI92fvVl2HD2DroEVp42hCYEpNogm8BKXHFHBA9N
      0mugGMzVAgMBAAECggEAAK5a2rWNjYkmWUQFLLrBC4AXb1Mw+ZeNTYPydx7+1n0h
      5M6YL9Fqvdwl7NHq83BwCuAHKjB5XfOHmhuI7LZmDCc0DjqnN+jruaUiSSoVidFf
      Foh+U9jjC08RqhWwdYbKm3wv0VlcXzdxfiADa7pIzyXBPH2tl4dPqyNF7yxqQzum
      F42D4IExbYYkGR7bP6RePrUiaO3iU/EwDL5Dey4+93K+EaxbdxIhMLclvnQ8I0tl
      tFGn4AbbOqPqzPxWZhWk2gT//jMTtJh6FxQLQkvDoEnta5UYQ2E38r33jK+Wasga
      lGZEyNOTMq1MMdPrCzXloJSnerCXC4vTFt62AOdIQQKBgQD3SvUeaXV7Xf67vL0t
      EdBG9YL0Zz2MxxoVAth44svMzQ4gR6/pkakEhMzR51I/Skl/wzCJFHY1Z4nq9DoA
      RY5APjO63uHdZEKKYZ1MTXmO6F+IkUY5MCBvyCtLsnkAcToyuyDuhV4NBfjydw6E
      L5S1H9NI7klvaPxq5I7KkzSeJQKBgQDW6sDBvi6ctV3w8GCTUjP5Ker1FuKYL7Yn
      HI6RIGnWB2hS8NbEe8ODgzsVOVnC6x+WCNBiu/GmF8wlue7PCH7rLEa8diiM+J9/
      QYXtezfLIhPqhPJZDj5IX7bIotkvUzv+ywvUfCtJ3aCAu8DMi09x1GRgU6go/4ZK
      SCmVmj588QKBgQCNhr2gCRTuZM37nbnayF4drjajL06/eddIfRdsn8epTxWtjbl0
      gCNt7Z7W5n9gr2A/GXN2kFpSmA4LhHiJXUVbKP4sDZDQRqf6UIFYgOJ30i+SlinN
      Yui9cJ6utNahVSvMiuH/AB7iby+ZfF+3cQ+3VR5zl8Q5WalUd7fs4bB0bQKBgBI1
      x+lipO5wS6pro7M35uF41Mi5jK+ac1OzDr1rQqx46jUE5R224uUUzH/K4Tkr1PxQ
      eN+0zw/kuk6EB6ERNjfVA5VaaaswMcuFkMSDiUGz/H4Fj8dN9qcJPSKY8dAZvF6l
      c7YoYz6aAcyGnBp4v12EwpCK5he7NvS6UpOzgxHxAoGBAOjiBQtwikKLzLYwg1gF
      QYh1TLvEJIRFYEFQveVUKxmSskN4W6VQrTrcqobYHM9tOSbSe+Ib/y/khpaEz0PE
      E5gxeUbxhTj0PVvOKJmyCKWDPL8o61MGVhX1nAJarfbdP1XM9fl4S3pZH14bIhOU
      FG0e4jNsDq6vdwytV9R/GyAv
      -----END PRIVATE KEY-----

#######

# tasks/main.yml
#
# Leverage the fact that our ".kitchen.yml" file is setting the hostname of
# test VMs to "test-kitchen". Using "with_first_found" we can load the
# unencrypted "vpn-secrets-test-kitchen.yml" for test VMs, otherwise load the
# Ansible Vault-encrypted "vpn-secrets-prod.yml" file.
#
# Use "no_log: true" to keep from echoing the key contents to stdout.
# See: http://docs.ansible.com/faq.html#how-do-i-keep-secret-data-in-my-playbook
#
- name: VPN Server | Load VPN secret keys
  include_vars: "{{ item }}"
  no_log: true
  with_first_found:
    - "vpn-secrets-{{ ansible_hostname }}.yml"
    - "vpn-secrets-prod.yml"

- name: VPN Server | Copy secret files
  copy:
    dest="{{ item.key }}"
    content="{{ item.value.content }}"
    owner="{{ item.value.owner }}"
    group="{{ item.value.group }}"
    mode="{{ item.value.mode }}"
  with_dict: vpn_secret_files
  no_log: true
  notify:
    - restart openvpn

The magic lies in the “with_first_found” argument above. In the Test Kitchen environment, “vpn-secrets-{{ ansible_hostname }}.yml” will interpolate to “vpn-secrets-test-kitchen.yml” because of our well-known hostname. Since that “vpn-secrets-test-kitchen.yml” file exists in unencrypted form under “vars/”, Ansible will grab it for your Test Kitchen environment. If the hostname is anything other than “test-kitchen” (e.g. production), then “with_first_found” will fall through to the “vpn-secrets-prod.yml” var file, which is encrypted with Vault and will require a password to unlock.
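
To make that concrete, here’s roughly how the two invocations differ (inventory and playbook names are illustrative):

# In the Test Kitchen VM the unencrypted test vars file is found first,
# so no Vault password is needed:
kitchen converge

# Everywhere else the encrypted "vpn-secrets-prod.yml" is loaded, so
# Ansible must be given the Vault password:
ansible-playbook -i production site.yml --ask-vault-pass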

Sanity Checking Ourselves with Serverspec

Now that we have Vault working nicely with Test Kitchen, a final step would be to add automated tests to make sure that we are indeed deploying files with the correct permissions, now and in the future. For more details on using Ansible & Test Kitchen with Serverspec, see Testing Ansible Roles with Test Kitchen. Here’s what a Serverspec test for our above files would look like:

# test/integration/default/serverspec/secret_keys_spec.rb

require 'serverspec'

# Run the checks directly on the converged Test Kitchen VM.
set :backend, :exec

# Secret keys should not be world readable.
secret_keys = [
  '/etc/openvpn/dh2048.pem',
  '/etc/openvpn/ipp.txt',
  '/etc/openvpn/openvpn.key',
  '/etc/openvpn/ta.key',
  '/etc/openvpn/easy-rsa/keys/ca.key',
  '/etc/openvpn/easy-rsa/keys/ec2-openvpn.key'
]

secret_keys.each do |secret_key|
  describe file(secret_key) do
    it { should be_file }
    it { should be_mode 400 }
    it { should be_owned_by 'root' }
    it { should be_grouped_into 'root' }
  end
end

Deploying to Production with Jenkins

A final piece of the puzzle to figure out was how to actually run “ansible-playbook” with a code base that utilizes Ansible Vault within the context of a job-runner like Jenkins. In other words, how do we provide Jenkins with the password to unlock the Vault? I found a few options here:

  • Put the Vault password into a locked-down file (mode 400) on your Jenkins slaves that run Ansible. This only works if your Jenkins slaves have some level of security around the users that Jenkins uses. I’m not crazy about passwords in text files, but in theory this shouldn’t be any worse than a locked-down, 400-mode file like those in “/etc/sudoers.d/…”.
  • Modify the Jenkins job that runs Ansible to require a password parameter, run “ansible-playbook” within that job with that password parameter fed in, and then use the Jenkins Mask Passwords plugin to mask the password in your build logs (see the sketch after this list). The downside is that this complicates automated execution of the Jenkins job that invokes Ansible, since the job now requires a password each time it is invoked.
  • Store the Ansible Vault password in another secret management system like HashiCorp’s Vault. This starts to get pretty meta 🙂
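
For what it’s worth, here’s a rough sketch of the second option as a Jenkins “Execute shell” build step, assuming a masked password parameter named ANSIBLE_VAULT_PASS and writing it to a throwaway file rather than echoing it on stdin (parameter, inventory, and playbook names are illustrative):

# Jenkins "Execute shell" build step (option #2 above)
set +x                                     # don't echo the password into the build log
echo "$ANSIBLE_VAULT_PASS" > .vault_pass.txt
chmod 600 .vault_pass.txt
ansible-playbook -i production site.yml --vault-password-file .vault_pass.txt
rm -f .vault_pass.txt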

Ultimately you have to decide which of these three options fits best within your infrastructure and workflow.

Conclusion

There you have it, my two-part guide to using Ansible Vault from soup to nuts. Hopefully you’ve found these notes useful for putting together an end-to-end system for securely managing your infrastructure’s secrets. Please let me know in the comments if I’ve left anything out. Thanks!

Managing Secrets with Ansible Vault – The Missing Guide (Part 1 of 2)

(This post is part 1/2 in a series. For part 2 see: Managing Secrets with Ansible Vault – The Missing Guide (Part 2 of 2))

Background and Introduction to Ansible Vault

Once you’ve started using Ansible to codify the configuration of your infrastructure, you will undoubtedly run into a situation where you need to manage some of your infrastructure’s “secrets”. Examples of such secrets include SSH private keys, SSL certificates, and passwords. How do you codify and automate the distribution of these secrets? By checking them into a source control system or posting them in plain text to a code review tool, you’d instantly be making them visible to a large number of people within your organization.

Luckily Ansible has created a tool to address this: Ansible Vault. The documentation for Ansible Vault describes its easy-to-use interface for encrypting, decrypting, and re-keying your secrets for storage in source control. Unfortunately the documentation provides little information on best practices: how to use Ansible Vault to deploy those secrets via a playbook, how to prevent the contents of those secrets from being echoed in plain text to STDOUT when running in “--verbose” mode (ouch!), how to test your playbooks when they contain such encrypted secrets, and how to integrate all of this into Jenkins.
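
If you haven’t used it yet, the basic interface looks something like this (file names are just examples):

# Encrypt an existing var file in place
ansible-vault encrypt vars/vpn-secrets-prod.yml

# Decrypt, open in $EDITOR, and re-encrypt on save
ansible-vault edit vars/vpn-secrets-prod.yml

# Change the Vault password
ansible-vault rekey vars/vpn-secrets-prod.yml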

Having recently spent time writing an Ansible role for deploying an OpenVPN server and having had to figure out the answer to a lot of these issues, I’m now happy to present “The Missing Guide to Managing Secrets with Ansible Vault”.

Storing and Deploying Secret Files

The first mental hurdle to overcome when deploying secret files (e.g. SSH private keys) with Ansible Vault is that one must use a totally different mechanism than the traditional Ansible copy mechanism for non-secret files. Typically one would check non-secret files into the “files/” directory of an Ansible role and drop those files into place on the remote host with Ansible’s “copy” module, using the “src” and “dest” parameters. Easy as pie.
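
For contrast, the familiar non-secret pattern looks something like this (file names are illustrative):

# tasks/main.yml -- the usual pattern for non-secret files
- name: Copy a non-secret config file from the role's "files/" directory
  copy:
    src: server.conf          # i.e. files/server.conf within the role
    dest: /etc/openvpn/server.conf
    owner: root
    group: root
    mode: "u=rw,go=r"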

Things work quite differently for encrypted secret files, however, as mentioned in this StackOverflow post. Instead of checking an encrypted version of the file into the “files/” subdirectory, one must place the contents of the file into an Ansible variable and deploy that file using the “content” arg of the copy module. Here’s a working example:

# Unencrypted version of “vars/vpn-secrets.yml”
---
vpn_secret_files:
  /etc/openvpn/easy-rsa/keys/ec2-openvpn.key:
    owner: root
    group: root
    mode: "u=r,go="
    content: |
      -----BEGIN PRIVATE KEY-----
      MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQD5koXgI24E360f
      nhxCfOPVORzFW1CN7u/zOQdvKoIStogF0UQifDCnY/POEjoBmzBrg/UyAmsqLIli
      xMtRIuvEhwaGEUQPoZNCaRW+1XtJ3kDvr9MVTlJTcNGOlGe/E+HyAKBq5vinxzzM
      9ba8M9Nc1PQ93B1OTUY1QGHVYRvSFYDJ5Fnz23xKeNsnY3hmRkV7CDZXSdy9nbmy
      1X9uz7z5bG7PKUVD3JZjI75CHAEDJKtscBv9ez/z16YTxwahIL3CXfqBq8peyAZ0
      n4Mzj4Lt8Cwaw2Kw3w3gMhbhf4fy284+hYqHe9uqYJC6dJJSKDIXqoLSD+e8aN+v
      BAEQcAWXAgMBAAECggEAbmHJ6HqDHJC5h3Rs11NZiWL7QKbEmCIH6rFcgmRwp0oo
      GzqVQhNfiYmBubECCtfSsJrqhbXgJAUStqaHrlkdogx+bCmSyr8R3JuRzJerMd6l
      Jd3EJHZBnzoU1VT6Fd77Xge868tASySp1ZUPv2nEoBhn9jw2kf1HgiH5o2CR53ZP
      pnL72Ng7MHpKuyoAZ9DtUU7yGG4RTCN2JuPGD6IwKoXBs1b7tqsMncz86u6Iibwk
      Np4j3vPmSLfQxvBP85T0xzSURlnP+bFCaJDPfXYIgDLROkrFAgJ2ADCm4gwfk93i
      Z/wnk8tFjnxUy2V5UbtWqqkVHmvdHHCc/6bZfcNOsQKBgQD/v94YX3vhgZRiz1kZ
      c0v2lxFZqNgMPC7EADmO34nFq7KtmVXYQfpoiooGDfQXTqfVGQsyTcpg5HLZvlyb
      qm9oaXpZY4yP/SLF6Pc00/iDTleSxGROyqhsaBotXpqSSC3rv92D9Zas/Xdz3lHD
      NSY9EVsiFId7O4OkvLuZVDvZQwKBgQD50Rs873/yUdyCwKx9/GF4yWVRg7//FTyQ
      Cj1KCBK5tDqOc+hiIS1GF0HRkcvIot71owTe+PG9OouXlUuxWrtc+fzgGSPaYjMp
      Ub69EcSNtUsK8MUS+VADbR5VDzS27OM1g+pJO7BbHpPWuEI1cjYmW/+3cCzFYnIV
      5z6OctbjHQKBgEVQWP8+EbMijXbiP4G4T+Q7OUaVjkhynzIb5X2ldA+Q41JNdoiw
      CRAATDwr1/XhKXeF3BT8JFdyUvZUs4C1BpDD1ZcYdeYocx40b5tvv7DGsNFkTNNV
      9aO76yxUsYvn6Bo22/CBxR6Ja7CJlptTclOmuo5YBggOLzWcuTNrMvVFAoGASIoV
      lK4ewuhOVZFJBRRB4Wbpiq/tEk7CVTkD7vlFJrNUxYSWl9f2Y4HhVM83Ez1n7H+3
      rF8xIrdbTVrGresguLDGYvQp2wHkxTy9W/1Ky7M25ShgsU+/kh8fTaeqsOs8Vo/F
      ehpg7TSFzTWX1Bkj7COOr19dQLuDUSTin05tY2kCgYB35ZHVDMR6TlW0Kp/l7gAx
      FQx5hojllzHr3RRv8a4rBbhsdAJGBr5QHZbzVeuw1z6NlDc/4brer3y52FnnHbD3
      fkUrvh+g1xHeXF4Yekr5Mu2D7PoQoFRRai2hjPnIHRLmHI45EPri3USoHuNPl+qB
      l23chS70zQ9VDmqEs9gjLA==
      -----END PRIVATE KEY-----
  /etc/openvpn/easy-rsa/keys/ca.key:
    owner: root
    group: root
    mode: "u=r,go="
    content: |
      -----BEGIN PRIVATE KEY-----
      MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDPm22e2QTeTnLN
      PT//6kyB8tM/2kE6+LsFD3TFA4XvS3gwNZLybjXpPtncF4qLxjq3c4uSBp2tuAa2
      VvWUCAyQX4EcOuCFhh1AIUHX9O4F2JhLtNH366D6LmfGE7Lck85R6bzErYJ5OzBN
      /3WSGtWmLbQWhXTvNwG5re17Ds7DLQ6/XRXCg91lAbtGqYCvw9F6X8N3VNdcovqN
      Ud+tJ4XjmGfPD8ZgSk/iVKeLzz5fuNxON+ygdUJ9IQJGu7kvJOhWD1F3p3lzuS4E
      7zyR8r9QK6lGdk2/ifmY5f+tmI92fvVl2HD2DroEVp42hCYEpNogm8BKXHFHBA9N
      0mugGMzVAgMBAAECggEAAK5a2rWNjYkmWUQFLLrBC4AXb1Mw+ZeNTYPydx7+1n0h
      5M6YL9Fqvdwl7NHq83BwCuAHKjB5XfOHmhuI7LZmDCc0DjqnN+jruaUiSSoVidFf
      Foh+U9jjC08RqhWwdYbKm3wv0VlcXzdxfiADa7pIzyXBPH2tl4dPqyNF7yxqQzum
      F42D4IExbYYkGR7bP6RePrUiaO3iU/EwDL5Dey4+93K+EaxbdxIhMLclvnQ8I0tl
      tFGn4AbbOqPqzPxWZhWk2gT//jMTtJh6FxQLQkvDoEnta5UYQ2E38r33jK+Wasga
      lGZEyNOTMq1MMdPrCzXloJSnerCXC4vTFt62AOdIQQKBgQD3SvUeaXV7Xf67vL0t
      EdBG9YL0Zz2MxxoVAth44svMzQ4gR6/pkakEhMzR51I/Skl/wzCJFHY1Z4nq9DoA
      RY5APjO63uHdZEKKYZ1MTXmO6F+IkUY5MCBvyCtLsnkAcToyuyDuhV4NBfjydw6E
      L5S1H9NI7klvaPxq5I7KkzSeJQKBgQDW6sDBvi6ctV3w8GCTUjP5Ker1FuKYL7Yn
      HI6RIGnWB2hS8NbEe8ODgzsVOVnC6x+WCNBiu/GmF8wlue7PCH7rLEa8diiM+J9/
      QYXtezfLIhPqhPJZDj5IX7bIotkvUzv+ywvUfCtJ3aCAu8DMi09x1GRgU6go/4ZK
      SCmVmj588QKBgQCNhr2gCRTuZM37nbnayF4drjajL06/eddIfRdsn8epTxWtjbl0
      gCNt7Z7W5n9gr2A/GXN2kFpSmA4LhHiJXUVbKP4sDZDQRqf6UIFYgOJ30i+SlinN
      Yui9cJ6utNahVSvMiuH/AB7iby+ZfF+3cQ+3VR5zl8Q5WalUd7fs4bB0bQKBgBI1
      x+lipO5wS6pro7M35uF41Mi5jK+ac1OzDr1rQqx46jUE5R224uUUzH/K4Tkr1PxQ
      eN+0zw/kuk6EB6ERNjfVA5VaaaswMcuFkMSDiUGz/H4Fj8dN9qcJPSKY8dAZvF6l
      c7YoYz6aAcyGnBp4v12EwpCK5he7NvS6UpOzgxHxAoGBAOjiBQtwikKLzLYwg1gF
      QYh1TLvEJIRFYEFQveVUKxmSskN4W6VQrTrcqobYHM9tOSbSe+Ib/y/khpaEz0PE
      E5gxeUbxhTj0PVvOKJmyCKWDPL8o61MGVhX1nAJarfbdP1XM9fl4S3pZH14bIhOU
      FG0e4jNsDq6vdwytV9R/GyAv
      -----END PRIVATE KEY-----
...<snip>...

#######

# tasks/main.yml

#
# Use "no_log: true" to keep from echoing secrets to stdout.
# See: http://docs.ansible.com/faq.html#how-do-i-keep-secret-data-in-my-playbook
#
---
- name: VPN Server | Load VPN secret keys
  include_vars: "vpn-secrets.yml"
  no_log: true

- name: VPN Server | Copy secret files
  copy:
    dest="{{ item.key }}"
    content="{{ item.value.content }}"
    owner="{{ item.value.owner }}"
    group="{{ item.value.group }}"
    mode="{{ item.value.mode }}"
  with_dict: vpn_secret_files
  no_log: true
  notify:
    - restart openvpn

Here we see that “vars/vpn-secrets.yml” contains a multi-level hash where the first level is the destination file name (e.g. “/etc/openvpn/easy-rsa/keys/ec2-openvpn.key”) and the second level contains the attributes for that secret file (owner, group, mode, and file contents). Those attributes are passed straight through as args to the “copy” task, which iterates over the keys of the hash via the “with_dict: vpn_secret_files” argument.

Also note the use of “no_log: true” for both the “include_vars” and “copy” tasks above. This is necessary; otherwise Ansible will echo the contents of your secret files to STDOUT when executing those tasks.

So what does this look like when run with “ansible-playbook --ask-vault-pass …”?

       TASK: [dlpx.vpn-server | VPN Server | Load VPN secret keys] *******************
       ok: [localhost] => {"censored": "results hidden due to no_log parameter"}

       TASK: [dlpx.vpn-server | VPN Server | Copy secret files] **********************
       changed: [localhost] => {"censored": "results hidden due to no_log parameter", "changed": true}
       changed: [localhost] => {"censored": "results hidden due to no_log parameter", "changed": true}
       changed: [localhost] => {"censored": "results hidden due to no_log parameter", "changed": true}
       changed: [localhost] => {"censored": "results hidden due to no_log parameter", "changed": true}
       changed: [localhost] => {"censored": "results hidden due to no_log parameter", "changed": true}
       changed: [localhost] => {"censored": "results hidden due to no_log parameter", "changed": true}

       NOTIFIED: [dlpx.vpn-server | restart openvpn] *********************************
        REMOTE_MODULE service name=openvpn state=restarted
       changed: [localhost] => {"changed": true, "name": "openvpn", "state": "started"}

Hooray! Encrypted files are copied to the remote host securely. We now have a logical framework to re-use throughout our Ansible code base.

An Aside on Unix File Modes (Here be Dragons!)

I typically use the octal representation for Unix file modes instead of the string-based symbolic representation, but I had difficulty using octal representations with this deployment method. Although one could represent the file mode as an integer within the Ansible variable file, the “mode” arg to “copy” needs to have that value quoted as a string because of the double curly braces that Jinja needs to interpolate the variable. (Curly braces have a special meaning in YAML and thus need to be quoted).

One could theoretically re-cast that string back to an int() using Jinja’s “|int” filter, but I couldn’t seem to get this to work, so I eventually broke down and used symbolic file modes. Oh well, we can always write Test Kitchen tests to verify the correct permissions on these files later…

The Next Steps

Thus far we’ve covered how to use Ansible Vault to store your secrets safely in source control, and how to organize your Ansible variables/tasks to securely deploy those secrets. In Part 2 of this guide we’ll go over how to use Vault when testing with Test Kitchen, and also different ways that this could be integrated into a Jenkins job.

On to part 2 of “Managing Secrets with Ansible Vault – The Missing Guide”

Ansible vs Puppet – Hands-On with Ansible

This is part 2/2 in a series. For part #1 see: Ansible vs Puppet – An Overview of the Solutions.

Notes & Findings From Going Hands-On with Ansible

After playing with Ansible for a week to Ansible-ize Graphite/Grafana (via Docker) and Jenkins (via an Ansible Galaxy role), here are my notes about Ansible:

  • “Batteries Included” and OSS Module Quality
    • While Ansible does include more modules out of the box, the “batteries included” claim is misleading. IMO an Ansible shop will have to rely heavily upon Ansible Galaxy to find community-created roles (e.g. for installing Jenkins, dockerd, or ntp), just as a Puppet shop would have to rely upon PuppetForge.
    • The quality and quantity of the roles on Ansible Galaxy are about on par with the modules available on PuppetForge. Just as with PuppetForge, there are multiple implementations for any given need (e.g. nginx, ntp, jenkins), each with its own quirks, strengths, and deficiencies.
    • Perhaps this is a deficiency of all of the configuration management systems. Ultimately a shop’s familiarity with Python or Ruby may add some preference here.
  • Package Installations
    • Coming from Puppet-land this seemed worth pointing out: Ansible does not abstract the OS’s package manager the way that Puppet does with its “package” resource. Users explicitly call out the package manager to be used, e.g. the “apt” module or the “yum” module, so Ansible provides a tad less abstraction (see the short comparison after this list). FWIW, a package installed via “pip” or “gem” in Puppet still requires explicit naming of the package provider. Not saying that either is better or worse here; it’s just a noticeable difference to an Ansible newbie.
  • Programming Language Constructs
  • Noop Mode
  • Agent-less
    • Ansible’s agent-less, SSH-based push workflow actually was notably easier to deal with than a Puppetmaster, slave agents, SSL certs, etc.
  • Learning Curve
    • If I use my imagination and pretend that I was starting to use a configuration management tool for the first time, I perceive that I’d have an easier time picking up Ansible. Even though I’m not a fan of YAML by any stretch of the imagination, Ansible playbooks are a bit easier to write & understand than Puppet manifests.
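
To illustrate the package-manager point above, here’s a minimal side-by-side (the package name is arbitrary):

# Ansible: the package manager is named explicitly in the task
- name: Install ntp on a Debian/Ubuntu host
  apt:
    name: ntp
    state: present

# Puppet: the "package" resource picks the provider for the platform
#   package { 'ntp': ensure => installed }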

Conclusions

After three years of using Puppet at VMware and Virtual Instruments, the thought of not continuing to use the market leader in configuration management tools seemed like a radical idea when it was first suggested to me. After spending several weeks researching Ansible and using it hands-on, I came to the conclusion that Ansible is a perfectly viable alternative to Puppet. I tend to agree with Lyft’s conclusion that if you have a centralized Ops team in charge of deployments, they can own a Puppet codebase. On the other hand, if you want more widespread ownership of your configuration management scripts, a tool with a shallower learning curve like Ansible is a better choice.

Building Docker Images within Docker Containers via Jenkins

If you’re like me and you’ve Dockerized your build process by running your Jenkins builds from within dynamically provisioned Docker containers, where do you turn next? You may want the creation of any Docker images themselves to also happen within Docker containers. In other words, running Docker nested within Docker (DinD).

I’ve recently published a Docker image to facilitate building other Docker images from within Jenkins/Docker slave containers. Details at:

Why would one want to build Docker images nested within Docker containers?

  1. For consistency. If you’re building your JARs, RPMs, etc, from within Docker containers, it makes sense to use the same high-level process for building other artifacts such as Docker images.
  2. For Docker version freedom. As I mentioned in a previous post, the Jenkins/Docker plugin can be finicky with regards to compatibility with the version of Docker that you are running on your base OS. In other words, Jenkins/Docker plugin 0.7 will not work with Docker 1.2+, so if you really need a feature from a newer version of Docker when building your images you either have to wait for a fix from the Jenkins plugin author, or you can run Docker-nested-in-Docker with the Jenkins plugin-compatible Docker 1.1.x on the host and a newer version of Docker nested within the container. Yes, this actually works!
  3. This:

Docker + Jenkins: Dynamically Provisioning SLES 11 Build Containers

TL;DR

Using Jenkins’ Docker Plugin, we can dynamically spin up SLES 11 build slaves on-demand to run our builds. One of the hurdles to getting there was creating a SLES 11 Docker base image, since there are no SLES 11 container images available at the Docker Hub Registry. We used SUSE’s Kiwi imaging tool to create a base SLES 11 Docker image for ourselves, and then layered our build environment and Jenkins build slave support on top of it. After configuring Jenkins’ Docker plugin to use our home-grown SLES image, we were off and running with our containerized SLES builds!

Jenkins/Docker Plugin

The path to Docker-izing our build slaves started with stumbling across this Docker Plugin for Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin. This plugin allows one to use Docker to dynamically provision a build slave, run a single build, and then tear down that slave, optionally saving it. This is very similar in workflow to the build VM provisioning system that I created while working in VMware’s Release Engineering team, but much lighter weight. Compared to VMs, Docker containers can be spun up in milliseconds instead of in a few minutes, and they are much lighter on hardware resources.

The above link to the Jenkins wiki provides details about how to configure your environment as well as how to configure your container images. Some high-level notes:

  • Your base OS needs to have Docker listening on a TCP port. By default, Docker only listens on a Unix socket.
  • The container needs to run “sshd” so that Jenkins can connect to it. I suspect that once the container is provisioned, Jenkins just treats it as a plain-old SSH slave.
  • In my testing, the Docker/Jenkins plugin was not able to connect via SSH to the containers it provisioned when using Docker 1.2.0. After trial and error, I found that the current version of the Jenkins plugin (0.6) works well with Docker 1.0-1.1.2, but Docker 1.2.0+ did not work with this Jenkins Plugin. I used Puppet to make sure that our Ubuntu build server base VMs only had Docker 1.1.2 installed. Ex:
    • # VW-10576: install docker on the ubuntu master/slaves
      # * Have Docker listen on a TCP port per instructions at:
      # https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
      # * Use Docker 1.1.2 and not anything newer. At the time of writing this
      # comment, Docker 1.2.0+ does not work with the Jenkins/Docker
      # plugin (the port for sshd fails to map to an external port).
      class { 'docker':
        tcp_bind => 'tcp://0.0.0.0:4243',
        version  => '1.1.2',
      }
  • There is a sample Docker/Jenkins slave based on “ubuntu:latest” available at: https://registry.hub.docker.com/u/evarga/jenkins-slave/. I would recommend getting that working as a proof-of-concept before venturing into building your own custom build slave containers. It’s helpful to be familiar with the “Dockerfile” for that image as well: https://registry.hub.docker.com/u/evarga/jenkins-slave/dockerfile/

Once you have the Docker Plugin installed, you need to go to your Jenkins “System Configuration” page and add your Docker host as a new cloud provider. In my proof-of-concept case, this is an Ubuntu 12.04 VM running Docker 1.1.2, listening on port 4243, configured to use the “evarga/jenkins-slave” image, providing the “docker-slave” label which I can then configure my Jenkins build job to be restricted to. The Jenkins configuration looks like this:

Jenkins’ “System Configuration” for a Docker host

I then configured a job named “docker-test” to use that “docker-slave” label and run a shell script with basic commands like “ps -eafwww”, “cat /etc/issue”, and “java -version”. Running that job, I see that it successfully spins up a container of “evarga/jenkins-slave” and runs my little script. Note the hostname at the top of the log, and output of “ps” in the screenshot below:
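
The build step itself was nothing fancy, something along these lines:

# "docker-test" job: Execute shell build step
ps -eafwww
cat /etc/issue
java -version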

A proof-of-concept of spinning up a Docker container on demand

Creating Our SLES 11 Base Image

Having built up the confidence that we can spin up other people’s containers on-demand, we now turned to creating our SLES 11 Docker build image. For reasons that I can only assume are licensing issues, SLES 11 does not have a base image up on the Docker Hub Registry in the same vein as the images that Ubuntu, Fedora, CentOS, and others have available.

Luckily I stumbled upon the following blog post: http://flavio.castelli.name/2014/05/06/building-docker-containers-with-kiwi/

At Virtual Instruments we were already using Kiwi to build the OVAs of our build VMs, so we were familiar with the tool. It wasn’t much more work to follow that blog post and get Kiwi to generate a tarball that could be consumed by “docker import”. This worked well for the next proof-of-concept phase, but ultimately we decided to go down another path.

Rather than have Kiwi generate fully configured build images for us, we decided it’d be best to follow the conventions of the “Docker Way”: have Kiwi generate a SLES 11 base image, reference it with a “FROM” statement in a “Dockerfile”, and install the build environment via that Dockerfile. One of the advantages of this is that we only have to use Kiwi to generate the base image the first time; from there we can stay in Docker-land to build the subsequent images. Additionally, having a shared base image among all of our build image tags should allow for space savings as Docker optimizes the layering of filesystems over a common base image.
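
The hand-off from Kiwi to Docker is a one-time step that looks roughly like this (the tarball and image names are illustrative):

# Import the Kiwi-generated root filesystem tarball as a Docker base image
docker import sles11-base.tar.gz vi-docker.lab.vi.local/sles11-base

# Push it to our internal registry so it can be referenced in Dockerfiles
docker push vi-docker.lab.vi.local/sles11-base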

Configuring the Image for Use with Jenkins

Taking a SLES 11 image with our build environment installed and getting it to work with the Jenkins Docker plugin took a little bit of work, mainly spent trying to configure “sshd” correctly. Below is the “Dockerfile” that builds upon a SLES image with our build environment installed and prepares it for use with Jenkins:

# This Dockerfile is used to build an image containing basic
# configuration to be used as a Jenkins slave build node.

FROM vi-docker.lab.vi.local/pa-dev-env-master
MAINTAINER Dan Tehranian <REDACTED@virtualinstruments.com>


# Add user & group "jenkins" to the image and set its password
RUN groupadd jenkins
RUN useradd -m -g jenkins -s /bin/bash jenkins
RUN echo "jenkins:jenkins" | chpasswd


# Having "sshd" running in the container is a requirement of the Jenkins/Docker
# plugin. See: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin

# Create the ssh host keys needed for sshd
RUN ssh-keygen -A

# Fix sshd's configuration for use within the container. See VW-10576 for details.
RUN sed -i -e 's/^UsePAM .*/UsePAM no/' /etc/ssh/sshd_config
RUN sed -i -e 's/^PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Expose the standard SSH port
EXPOSE 22

# Start the ssh daemon
CMD ["/usr/sbin/sshd", "-D"]

Running a Maven Build Inside of a SLES 11 Docker Container

Having created this new image and pushed it to our internal docker repo, we can now go back to Jenkins’ “System Configuration” page and add a new image to our Docker cloud provider. Creating a new Jenkins “Maven Job” which utilizes this new SLES 11 image and running a build, we can see our SLES 11 container getting spun up, code getting checked out from our internal git repo, and Maven being invoked:

Hooray! A successful Maven build inside of a Docker container!

Output from the Maven Build that was run in the container. LGTM!

Wins

There are a whole slew of benefits to a system like this:

  • We don’t have to run & support SLES 11 VMs in our infrastructure alongside the easier-to-manage Ubuntu VMs. We can just run Ubuntu 12.04 VMs as the base OS and spin up SLES slaves as needed. This makes testing of our Puppet repository a lot easier as this gives us a homogeneous OS environment!
  • We can have portable and separate build environment images for each of our branches. Ex: legacy product branches can continue to have old versions of the JDK and third party libraries that are updated only when needed, but our mainline development can have a build image with tools that are updated independently.
    • This is significantly better than the “toolchain repository” solution that we had at VMware, where several hundred gigabytes of binaries were checked into a monolithic Perforce repo.
  • Thanks to Docker image tags, we can tag the build image at each GA release and keep that build environment saved. This makes reproducing builds significantly easier!
  • Having a Docker image of the build environment allows our developers to do local builds via their IDEs, if they so choose. Using Vagrant’s Docker provider, developers can spin up a Docker container of the build environment for their respective branch on their local machines, regardless of their host OS – Windows, Mac, or Linux (see the sketch below). This allows developers to build RPMs with the same libraries and tools that the build system would!
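
As a rough sketch, assuming the build-environment image referenced in the Dockerfile above, a developer’s Vagrantfile could look like:

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image   = "vi-docker.lab.vi.local/pa-dev-env-master"
    d.has_ssh = true
  end
end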

A Local Caching Proxy for “pypi.python.org” via Docker

TL;DR

If your infrastructure automation installs packages from PyPI (the Python Package Index) via “pip” or similar tools, you can save yourself from annoying “pypi.python.org timeout” errors by running a local caching proxy of the PyPI service. After trying several of these services, I found “devpi” to be the most resilient. It’s available both as a Python package and as a Docker container that you can run in your data center.

Problem

If you have infrastructure automation that tries to install packages from PyPI then you’ve undoubtedly encountered availability issues with the PyPI web service, hosted at “pypi.python.org”. Example email alerts that we see from our Puppet infrastructure look like:

Tue Sep 02 23:47:23 -0700 2014 /Stage[main]/Jenkins::Dev_jenkins_slave/Package[jenkins-check-for-success]
(err): Could not evaluate: Could not get latest version: 
Timeout while contacting pypi.python.org: execution expired

One could choose to ignore these sorts of connectivity issues since they are transient, but there are quite a few negative consequences to doing that:

  • If you’re spinning up new machines on-demand and they require a Python package as part of their configuration, then your ability to consistently spin up these machines successfully has become compromised by a dependency which is completely out of your own control.
  • If your infrastructure automation is configured to send email alerts on these types of errors, you’ll be getting un-actionable emails that add to the noise of alerts coming from your infrastructure. This effectively makes your alerting system less valuable, as your team will be trained to ignore their email alerts.
  • As you scale your infrastructure to hundreds or thousands of nodes, you’ll be receiving a lot of alerts about connectivity issues with “pypi.python.org” throughout the day and night time hours. When “pypi.python.org” goes down hard for a prolonged period of time, you’ll end up with all of the nodes in your infrastructure simultaneously bombarding you with alerts about not being able to contact “pypi.python.org”.

Solution

The solution for this problem is to run a local caching proxy for “pypi.python.org” within your data center. The Python community has developed proxy packages like pypiserver, chishop, devpi, and others specifically for this use case. After extensive research and trying several of them out, I’ve found devpi to be the most resilient as well as the most actively developed as of this writing.

One can either install devpi as a Python package (see the instructions on their website) or run it via a Docker container. Since our infrastructure has been making the move to “All Docker Everything”, I’ll write up the steps I took to set up the Docker container running devpi and how I configured our clients to use it.

Devpi Server Installation

Here’s some sample Puppet code for how to download & run the “scrapinghub/devpi” container with an nginx proxy in front of it. (I discussed why having an nginx proxy in front is advantageous in Private Docker Registry w/Nginx Proxy for Stats Collection)

You’ll want to change “DEVPI_PASSWORD” and the hostname for the Nginx vhost below.

# devpi server & nginx configuration
docker::image { 'scrapinghub/devpi': }
docker::run { 'devpi':
    image => 'scrapinghub/devpi',
    ports => ['3141:3141',],
    use_name => true,
    env => ['DEVPI_PASSWORD=1234',],
}

nginx::resource::upstream { 'pypi_app':
    members => ['localhost:3141',],
}
nginx::resource::vhost { 'vi-pypi.lab.vi.local':
    proxy => 'http://pypi_app',
}

Once your container is running you can run “docker logs” to see what it is up to. You can see the “devpi” proxy saving your bacon when “pypi.python.org” occasionally becomes unavailable via log statements like this:

172.17.42.1 - - [22/Jul/2014 22:12:09] "GET /root/public/+simple/argparse/ HTTP/1.0" 200 4316
2014-07-22 22:12:09,251 [INFO ] requests.packages.urllib3.connectionpool: Resetting dropped connection: pypi.python.org
2014-07-22 22:12:09,301 [INFO ] devpi_server.filestore: cache-streaming: https://pypi.python.org/packages/source/a/argparse/argparse-1.2.1.tar.gz, target root/pypi/+f/2fb/ef8cb61e506c706957ab6e135840c/argparse-1.2.1.tar.gz
2014-07-22 22:12:09,301 [INFO ] devpi_server.filestore: starting file iteration: root/pypi/+f/2fb/ef8cb61e506c706957ab6e135840c/argparse-1.2.1.tar.gz (size 69297)

Python Client Configuration

On the client side we need to configure both “pip” and “easy_install” to use the devpi container we just instantiated. This requires creating a special configuration file for each of those Python package managers. The configuration file tells those package managers to use your devpi proxy server for their package index URL.

You’ll want to change the URL to point to the hostname you use within your own infrastructure.

# ~/.pip/pip.conf

[global]
index-url = http://vi-pypi.lab.vi.local/root/public/
# ~/.pydistutils.cfg

[easy_install]
index_url = http://vi-pypi.lab.vi.local/root/public/

But Wait There’s More – Uploading Your Own Python Packages

One of the additional benefits to running a local PyPI proxy is that it becomes a distribution point for your private Python packages. Instead of clumsily checking out SCM repos full of your own custom Python scripts to each machine in your infrastructure, you can install your Python scripts as first-order Python packages, the same way you would install packages from PyPI. This lets you properly version your packages and define dependency requirements between your packages.

Creating a “setup.py” file for each of your Python projects is outside the scope of this post, but details can be found online. Once your Python project has its “setup.py” file, uploading your versioned package to your local devpi instance requires just a few simple commands. From our Jenkins job which publishes a new version of a Python package upon a git push:

# from the cwd of "setup.py"
devpi use http://vi-pypi.lab.vi.local/root/public/
devpi login root --password 1234 
devpi upload

More details at: http://doc.devpi.net/latest/quickstart-releaseprocess.html

Conclusion

By running a local caching proxy of “pypi.python.org” we’re able to improve the reliability of our infrastructure because we are no longer beholden to the availability of an external dependency. We also get the added benefit of having a proper Python package distribution point, which allows us to have better development & deployment practices. Finally, this local caching proxy provides better performance for installing packages, as local network copies are significantly faster than downloading from an external website.