Automating Linux Security Best Practices with Ansible

Suppose you’re setting up a new Linux server in Amazon AWS, Digital Ocean, or even within your private network. What low-hanging fruit can one knock out to improve the security of a Linux system? One of my favorite posts on this topic is “My First 5 Minutes On A Server; Or, Essential Security for Linux Servers”.

In that post Bryan Kennedy provides several tips, including:

  • Installing fail2ban – a daemon that monitors login attempts to a server and blocks suspicious activity as it occurs.
  • Modifying the configuration of “sshd” to permit neither “root” logins nor password-based logins (i.e., allow authentication with key pairs only).
  • Configuring a software-based firewall with “iptables”.
  • Enabling automatic software security updates through the OS’s package manager.

These all seem like sane things to do on any new server, but how can one automate these steps for all of the machines in their infrastructure for speed, review-ability, and consistency? How can one do this in a way that allows them to easily modify the above rules for the different types of machines in their environment (ex. web servers should have port 80 open, but DB servers should have port 5432 open)? What are some gotchas to be aware of when doing this? Here are my notes on how to automatically bring these security best practices to the Linux machines in your infrastructure, using Ansible.

Community-Contributed Roles which Implement the Best Practices

The first rule of automating anything is: Don’t write a single line of code. Instead, see if someone else has already done the work for you. It turns out that Ansible Galaxy serves us well here.

Ansible Galaxy is full of community-contributed roles, and as it relates to this post, there are existing roles that implement each of the best practices named above:

  • fail2ban: tersmitten.fail2ban
  • sshd configuration: willshersystems.sshd
  • A software firewall: franklinkim.ufw (UFW is a frontend to “iptables”)
  • Automatic security updates: jnv.unattended-upgrades

Combining these well-maintained, open source roles, we can automate the implementation of the best practices from Bryan Kennedy’s blog post.

Making Security Best Practices Your Default

The pattern that I would use to automatically incorporate these third-party roles into your infrastructure is to make them dependencies of a “common” role that gets applied to all machines in your inventory. A “common” role is a good place to put all sorts of default-y configuration, like setting up your servers to use NTP, installing VMware Tools, etc.

Assuming that your common role is called “my-company.common”, making these third-party roles dependencies of it is as simple as the following:

# requirements.yml
---
- src: franklinkim.ufw
- src: jnv.unattended-upgrades
- src: tersmitten.fail2ban
- src: willshersystems.sshd

# roles/my-company.common/meta/main.yml
---
dependencies:
  - { role: franklinkim.ufw }
  - { role: jnv.unattended-upgrades }
  - { role: tersmitten.fail2ban }
  - { role: willshersystems.sshd }

Wow, that was easy! All of Bryan Kennedy’s best practices can be implemented with just two files: One for downloading the required roles from Ansible Galaxy, and the other for including those roles for execution.
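
With those two files in place, the roles can be downloaded by running “ansible-galaxy install -r requirements.yml”. From there, a minimal playbook applies the common role (and, via its dependencies, all four security roles) to every machine; the file name “site.yml” here is just illustrative:

# site.yml -- applying "my-company.common" pulls in the four
# security roles automatically via meta/main.yml dependencies
---
- hosts: all
  become: true
  roles:
    - my-company.common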

Creating Your Own Default Configuration for Security-Related Roles

Each of those roles comes with its own default configuration. That configuration can be seen by looking through the “defaults/main.yml” file in each role’s respective GitHub repo.

Suppose you wanted to provide your own configuration for these roles, overriding the authors’ defaults. How would you do that? A good place to put your own defaults for the configuration of your infrastructure is the “group_vars/all” file in your Ansible repo. Variables defined in “group_vars/all” take precedence over the default variables from the roles themselves.
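
For reference, here’s roughly how these pieces might sit in your Ansible repo (the layout is illustrative; only the “group_vars/all” and “roles/my-company.common/meta/main.yml” paths matter for this post):

# ansible.cfg
# requirements.yml
# site.yml
# group_vars/
#   all                          <- your security defaults (below)
# host_vars/
# roles/
#   my-company.common/
#     meta/main.yml              <- pulls in the four security roles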

The variable names and structure can be obtained by reading the docs for each respective role. Here’s an example of what your custom default security configuration might look like:

# group_vars/all
---
# Configure "franklinkim.ufw"
ufw_default_input_policy: DROP
ufw_default_output_policy: ACCEPT
ufw_default_forward_policy: DROP
ufw_rules:
  - { ip: '127.0.0.1/8', rule: allow }
  - { port: 22, rule: allow }

# Configure "jnv.unattended-upgrades"
unattended_automatic_reboot: false
unattended_package_blacklist: [jenkins, nginx]
unattended_mail: 'it-alerts@your-company.com'
unattended_mail_only_on_error: true

# Configure "tersmitten.fail2ban"
fail2ban_bantime: 600
fail2ban_maxretry: 3
fail2ban_services:
  - name: ssh
    enabled: true
    port: ssh
    filter: sshd
    logpath: /var/log/auth.log
    maxretry: 6
    findtime: 600

# Configure "willshersystems.sshd"
sshd_defaults:
  Port: 22
  Protocol: 2
  HostKey:
    - /etc/ssh/ssh_host_rsa_key
    - /etc/ssh/ssh_host_dsa_key
    - /etc/ssh/ssh_host_ecdsa_key
    - /etc/ssh/ssh_host_ed25519_key
  UsePrivilegeSeparation: yes
  KeyRegenerationInterval: 3600
  ServerKeyBits: 1024
  SyslogFacility: AUTH
  LogLevel: INFO
  LoginGraceTime: 120
  PermitRootLogin: no
  PasswordAuthentication: no
  StrictModes: yes
  RSAAuthentication: yes
  PubkeyAuthentication: yes
  IgnoreRhosts: yes
  RhostsRSAAuthentication: no
  HostbasedAuthentication: no
  PermitEmptyPasswords: no
  ChallengeResponseAuthentication: no
  X11Forwarding: no
  PrintMotd: yes
  PrintLastLog: yes
  TCPKeepAlive: yes
  AcceptEnv: LANG LC_*
  Subsystem: "sftp {{ sshd_sftp_server }}"
  UsePAM: yes

The above should be tailored to your needs, but hopefully you get the idea. By placing this content in your “group_vars/all” file, you provide the default security configuration for every machine in your infrastructure (i.e., every member of the group “all”).

Overriding Your Own Defaults for Sub-Groups

The above configuration provides a good baseline of defaults for any machine in your infrastructure, but what about machines that need modifications or exceptions to these rules? Ex: your HTTP servers need firewall rules that allow traffic on ports 80 & 443, your PostgreSQL servers need a firewall rule for port 5432, or maybe there’s some random machine that needs X11 forwarding over SSH turned on. How can we override our own defaults?

We can continue to use the power of “group_vars” and “host_vars” to model the configuration for subsets of machines. If, for example, we wanted our web servers to have ports 80 & 443 open, but PostgreSQL servers to have port 5432 open, we could override the variables from “group_vars/all” with respective variables in “group_vars/webservers” and “group_vars/postgresql-servers”. Ex:

# "group_vars/webservers"
---
ufw_rules:
  - { ip: '127.0.0.1/8', rule: allow }
  - { port: 22, rule: allow }
  - { port: 80, rule: allow }
  - { port: 443, rule: allow }

# "group_vars/postgresql-servers"
---
ufw_rules:
  - { ip: '127.0.0.1/8', rule: allow }
  - { port: 22, rule: allow }
  - { port: 5432, rule: allow }
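
For these “group_vars” files to take effect, the group names must match the groups defined in your inventory. A minimal sketch of such an inventory (the host names are made up):

# inventory
[webservers]
web1.your-company.com
web2.your-company.com

[postgresql-servers]
db1.your-company.com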

Rules for individual hosts can be set using “host_vars/<hostname>”. Ex: “host_vars/nat-proxy1.your-company.com”:

# host_vars/nat-proxy1.your-company.com
---
# Allow IP traffic forwarding for our NAT proxy
ufw_default_forward_policy: ACCEPT

Ansible’s “group_vars”, “host_vars”, and variable precedence are pretty powerful features. For additional reading, see the Ansible documentation on variables and variable precedence.
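
A quick way to sanity-check which values a given machine will actually end up with is a throwaway playbook that prints the resolved variables (the file name “check-vars.yml” is hypothetical):

# check-vars.yml -- print the security variables each host will receive
---
- hosts: all
  gather_facts: false
  tasks:
    - debug: var=ufw_rules
    - debug: var=ufw_default_forward_policy

Running “ansible-playbook check-vars.yml --limit webservers” should show exactly the firewall rules that your web servers will receive after all of the precedence rules have been applied.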

Gotchas

There are a couple of “gotchas” to be aware of with the aforementioned scheme:

  1. Variable replacing vs. merging – When Ansible applies the precedence of variables from different contexts, it uses variable “replacement” as the default behavior. This makes sense in the context of scalar variables like strings and integers, but may surprise you in the context of hashes, where you might have expected the sub-keys to be automatically merged. If you want your hashes to be merged then you need to set “hash_behaviour=merge” in your “ansible.cfg” file (see the snippet after this list). Be warned that this is a global setting, though.
  2. Don’t ever forget to leave port 22 open for SSH – Building upon the first gotcha: knowing that variables are replaced and not merged, you must remember to always include an “allow” rule for port 22 when overriding the “ufw_rules” variable. Ex:
    ufw_rules:
      - { ip: '127.0.0.1/8', rule: allow }
      - { port: 22, rule: allow }

    Omitting the rule for port 22 may leave your machine unreachable, except via console.

  3. Unattended upgrades for daemons – Unattended security upgrades are a great way to keep yourself protected from the next #HeartBleed or #ShellShock security vulnerability. Unfortunately, unattended upgrades can themselves cause outages.
    Imagine, for example, that you have unattended upgrades enabled on a machine with Jenkins and/or nginx installed: when the OS’s package manager checks for potential upgrades and finds that a new version of Jenkins/nginx is available, it will merrily go ahead and upgrade your package, which in turn will likely cause the package’s install/upgrade script to restart your service, causing temporary and unexpected downtime. Oops!
    You can prevent specific packages from being automatically upgraded by listing them in the “unattended_package_blacklist” variable. This is especially useful for daemons. Ex:

    unattended_package_blacklist: [jenkins, nginx]
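
For reference, here is what the merge setting from the first gotcha looks like. Again, note that it applies globally, to every hash variable in your playbooks, not just the security-related ones above:

# ansible.cfg -- "hash_behaviour" is a global setting
[defaults]
hash_behaviour = merge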
    

Conclusion

Ansible and the community-contributed code at Ansible Galaxy make it quick and easy to implement the best practices described in “My First 5 Minutes On A Server; Or, Essential Security for Linux Servers”. Ansible’s notions of “group_vars”, “host_vars”, and variable precedence provide a powerful way to configure these security best practices for groups of machines in your infrastructure. Using all of these together, one can automate and codify the security configuration of their machines for speed of deployment, review-ability, and consistency.

What other sorts of best-practices would be good to implement across your infrastructure? Feel free to leave comments below. Thanks!


Succeeding through Laziness and Open Source

Back in mid-2014 I was in the midst of Docker-izing the build process at Virtual Instruments. As part of that work I’d open sourced one component of that system: the Docker-in-Docker Jenkins build slave that I’d created.


While claiming that I was driven by altruistic motivations when posting this code to GitHub (GH) would make for a great ex-post narrative, I have to admit that the real reasons for making the code publicly available were much more practical:

  • At the time the Docker image repositories on the Docker Hub Registry had to be tied to a GitHub repo (They’ve added Bitbucket support since then).
  • I was too cheap to pay for a private GitHub repo.

… And thus the code for the Docker-in-Docker Jenkins slave became open source! 😀

Unfortunately, making this image publicly available presented some challenges soon thereafter: Folks started linking their blog posts to it, people I’d never met emailed me asking for help in getting set up with this system, others started filing issues against me on either GH or the Docker Hub Registry, and I started receiving pull-requests (PRs) to my GH repo.

Having switched employers just a few months after posting the code to GH, dealing with the issues and PRs was a bit of a challenge: My new employer didn’t have a Dockerized build system (yet), and short of setting up my own personal Jenkins server and Dockerized build slaves, there was no way for me to verify issues/fixes/PRs for this side-project. And so “tehranian/dind-jenkins-slave” stagnated on GH with relatively little participation from me.

Having largely forgotten about this project, I was quite surprised a few weeks ago while perusing Disqus’s GitHub repos: I accidentally discovered that the engineering team at Disqus had forked my repo and had been actively committing changes to their fork!

Their changes had:

  • Optimized the container’s layers to make it smaller in size,
  • Updated the image to work with new versions of Docker,
  • And also modified some environment variable names to avoid collisions with names that popular frameworks would use.

Prompted by this, I went back to my own GH repo, looked at the graph of all other forks, and saw that several others had forked my GH repo as well.

One such fork had updated my image to work with Docker Swarm and also to be able to easily use SSH keys for authenticating with the build slave instead of using password-based auth.

“How cool!”, I thought. I’d put an idea into the public domain a year ago, others had found it, and improved it in ways that I couldn’t have imagined. Further, their improvements were now available for myself and others to use!

My Delphix colleague Michael Coyle summed this all up very nicely, saying “As a software developer I can only realistically work for one organization at a time. Open source allows developers from different organizations to collaborate with each other without boundaries. In that way one actually can contribute to more than one organization at once.”

In hindsight I’m absolutely delighted that my unwillingness to purchase a private GitHub repo led to me contributing the Docker-in-Docker Jenkins slave to the public domain. There was nothing proprietary that Virtual Instruments could have used in its product, and by making it available, other organizations like Disqus and CloudBees have been able to benefit, along with software developers on the other side of the planet. How exciting!