Using create_specs to refactor Puppet

After writing this, it was pointed out to me that Corey Osman has written another tool called Retrospec that also auto-generates Rspec code; it is worth having a look at as well.

In this post I document a new method for complex Puppet code refactoring, which involves a simple tool that I wrote, create_specs.

I have been using this method for a while now; I find it easier than catalog-diff, and consider it safer as well.

The tool create_specs automatically generates Rspec test cases covering all aspects of the compiled catalog that is passed to it as input. Of course, all but the simplest Puppet modules can compile an effectively unlimited number of different catalogs, depending on their parameters and facts. To have confidence in a real refactoring effort, therefore, we would need to compile a representative set of these catalogs and apply the method described here to each of them. That is out of scope for today, but extending the method is trivial.
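
To give a sense of what these generated test cases assert, here is a minimal, hand-written sketch of the kind of catalog expectations I mean (illustrative only, not actual create_specs output; the resource names anticipate the NTP example below):

spec/classes/ntp_spec.rb:

require 'spec_helper'

describe 'ntp' do
  # Illustrative parameters; create_specs derives its expectations from the
  # compiled catalog it is given rather than from hand-written params.
  let(:params) do
    { :servers => ['0.pool.ntp.org', '1.pool.ntp.org'] }
  end

  it { is_expected.to compile.with_all_deps }

  it { is_expected.to contain_package('ntp').with('ensure' => 'installed') }

  it do
    is_expected.to contain_service('ntp').with(
      'ensure' => 'running',
      'enable' => true,
    ).that_subscribes_to('File[/etc/ntp.conf]')
  end

  it { is_expected.to contain_file('/etc/ntp.conf').that_requires('Package[ntp]') }
end

The refactoring check then amounts to re-running a spec file like this, generated from the pre-refactoring catalog, against the refactored code.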

Here, I provide a simple Puppet module that manages an NTP service in a single class, and then refactor it to split the module into several classes. I then show how this method demonstrates that the refactoring left the compiled catalog unchanged, and therefore introduced no bugs.

I assume the reader already understands how to set up Rspec-puppet; if not, have a look at my earlier post.

Sample code

The sample code is a simple Puppet class that installs and configures NTP.

(Note: all of the code for this blog post is available at Github here. The reader can step through the revision history to see the examples before and after the refactoring.)

class ntp (
  Array $servers,
) {
  package { 'ntp':
    ensure => installed,
  }
  file { '/etc/ntp.conf':
    content => template("${module_name}/ntp.conf.erb"),
    require => Package['ntp'],
  }
  service { 'ntp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}

read more

Introducing programmatic editing of Hiera YAML files

Introduction

If you have ever maintained a complicated, multi-team deployment of Hiera, you have probably seen data keys repeated in flagrant violation of the Don’t Repeat Yourself principle.

To an extent, this is avoidable. It is possible to declare a value once in Hiera and reference it from other keys by calling the hiera interpolation function within Hiera data, and the alias function can be used in the same way to reference complex (non-string) data.

Meanwhile, the hiera_hash function can eliminate the need to repeat Hash keys at multiple levels of the hierarchy, although Puppet 3’s automatic parameter lookup will not return merged hash lookups.
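
For example (the key names and hierarchy paths here are made up for illustration), a value can be declared once and referenced from other keys with the hiera interpolation function, an alias can carry complex data without stringifying it, and a hash can be split across hierarchy levels:

common.yaml:

---
# Declare once, reference elsewhere via the hiera interpolation function.
smtp_server: 'mail.example.com'
profiles::mail::relayhost: "%{hiera('smtp_server')}"

# alias preserves the data type, so this key resolves to the same array.
ntp_pool:
  - '0.pool.ntp.org'
  - '1.pool.ntp.org'
ntp::servers: "%{alias('ntp_pool')}"

# One part of a hash that node-level data can extend.
sysctl_settings:
  net.ipv4.ip_forward: 0

nodes/db01.example.com.yaml:

---
sysctl_settings:
  vm.swappiness: 10

A manifest that calls hiera_hash('sysctl_settings') receives the hash merged from both levels, whereas Puppet 3’s automatic parameter lookup returns only the highest-priority value.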

On the other hand, many Puppet users don’t know about these features, and even when they do, tight project deadlines tempt the best of us to take shortcuts.

Bulk updating of Hiera data

The problem that arises can be stated as follows: Given many Hiera files, possibly in separate Git repos and maintained by separate teams, how would you update a similar block of Hiera data in all of these files?

I spent several hours on a Friday afternoon writing a simple Ruby script to double-check that I’d manually updated around 10 YAML files with changes to what were essentially the same data keys, and I wondered whether there was a better way.

Python and ruamel.yaml

To my surprise, I discovered that programmatically updating human-edited YAML files is effectively impossible in Ruby, because its YAML parsers do not preserve comments or formatting.

Mike Pastore states in his comment at Ruby-Forums.com:

Most YAML libraries I’ve worked with don’t preserve formatting or comments. Some quick research turns up only one that does—and it’s for Python (ruamel.yaml). In my experience, YAML is great for human-friendly, machine-readable configuration files and not much else. It loses its allure the second you bring machine-writeability into the picture.

So to the Ruby community: someone needs to write a YAML parser that preserves commenting and formatting!

In the meantime, all power to Anthon van der Neut, who has forked the PyYAML project and solved a good 80% of the problem of preserving the commenting and formatting. He also proved to be incredibly helpful in answering questions about the parser on Stack Overflow, and in responding to bug reports.
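
To illustrate what ruamel.yaml makes possible (the file and key names here are hypothetical), a round-trip load, in-memory edit, and dump leaves existing comments and key ordering intact:

#!/usr/bin/env python
# Minimal sketch of a round-trip edit with ruamel.yaml.
import ruamel.yaml

with open('common.yaml') as f:
    data = ruamel.yaml.round_trip_load(f)    # comments and ordering are preserved

# Edit the data in memory just like an ordinary dict.
data['ntp::servers'] = ['0.pool.ntp.org', '1.pool.ntp.org']

with open('common.yaml', 'w') as f:
    ruamel.yaml.round_trip_dump(data, f)     # comments survive the write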

hiera-bulk-edit.py

I realised that a script that could execute snippets of arbitrary Python code on the YAML files in memory would provide a powerful and flexible interface for bulk editing of Hiera files. In the remainder of the post, I’ll show how various data editing – and viewing – problems can be solved using my new tool.

Installing the script

To install the script, just clone my Git repository and install the Python dependencies with pip:

$ git clone https://github.com/alexharv074/hiera-bulk-edit
$ cd hiera-bulk-edit
$ pip install -r requirements.txt

read more

Verifying file contents in a Puppet catalog

One of the most useful applications of Rspec-puppet I have found is in the verification of ERB-generated file content. However, it is not always obvious how to actually do this.

I discovered the verify_contents method one day when pondering a question at Ask.puppet.com (ref). An undocumented feature of puppetlabs_spec_helper, it is used in a few Forge modules to allow testers to say, “the catalog should contain a file X, whose contents should contain lines A, B, ..”. For example, in the Haproxy module here.
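
To make that concrete, a typical call looks something like the following (the file path and expected lines are loosely modelled on the haproxy example and are illustrative only):

it 'renders the expected lines into haproxy.cfg' do
  verify_contents(catalogue, '/etc/haproxy/haproxy.cfg', [
    'global',
    '  chroot  /var/lib/haproxy',
  ])
end

Here, catalogue is the compiled catalog provided by Rspec-puppet, the second argument is the title of a file resource in that catalog, and the array lists lines that must appear somewhere in its content.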

In this post I’m going to document how I’ve used the verify_contents method, and improved upon it, when testing ERB-generated file content.

Basic usage

The basic usage of verify_contents is as follows:

spec/spec_helper.rb:

require 'puppetlabs_spec_helper/module_spec_helper'

read more

Mocking with rspec-puppet-utils

This post came out of a question that I answered at ask.puppet.com.

I decided to write some Rspec-puppet tests for a class that used the razorsedge/network module, and along the way chose to mock some of the functions that it normally delivers.

It was an excuse to try out Tom Poulton’s rspec-puppet-utils project.

In this post I’m going to show how to use Tom’s project to mock functions; how to mock the Hiera function; how to test template logic; and also how to validate Hiera data directly.
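
As a taste of the function-mocking piece, here is a rough sketch from memory (the function name is hypothetical, and the exact MockFunction API may differ between rspec-puppet-utils versions, so check the project README):

spec/classes/network_spec.rb:

require 'spec_helper'
require 'rspec-puppet-utils'

describe 'profiles::network' do
  # Mock a function normally delivered by the razorsedge/network module so
  # that the catalog compiles without the real implementation.
  let!(:validate_ip_address) do
    MockFunction.new('validate_ip_address') { |f| f.stubs(:call).returns(true) }
  end

  it { is_expected.to compile }
end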

If you’d like to follow along, I have this code at Github here.

The problem

The question that prompted me to set all this up involved the network::hiera interface of the razorsedge/network module.

So, we have the following class:

manifests/init.pp:

class profiles::network {
  include network::hiera
}

read more

Integration testing using Ansible and Test Kitchen

Introduction

I recently wrote a master-slave BIND 9 solution using Ansible, and in this post I describe a multi-node integration testing approach for it using Test Kitchen and Neill Turner’s Kitchen-Ansible extensions.

To be sure, Test Kitchen lacks proper support for multi-node integration testing, and its maintainers have explained their reasons in this thread here. Suffice it to say, Puppet’s Beaker had multi-node support some five or six years ago, as did Puppet’s earlier, now retired, Rspec-system project.

This lack of multi-node support is, indeed, a quite serious limitation in an otherwise excellent framework. I am encouraged that the team has an issue open to track adding multi-node support here.

General approach

The aforementioned limitation aside, it is still possible to do rudimentary integration testing, as long as we tolerate a few manual steps and design our tests so that all testing on the first node can be completed before testing on the subsequent nodes begins.
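
Concretely, tolerating those manual steps means converging and verifying each suite one at a time, in order, rather than letting kitchen test tear everything down between runs. A sketch of the workflow (the suite names anticipate the .kitchen.yml shown below; the actual instance names also include the platform):

$ kitchen converge master && kitchen verify master
$ kitchen converge slave1 && kitchen verify slave1
$ kitchen converge slave2 && kitchen verify slave2
$ kitchen destroy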

In the case of my BIND 9 solution, this means that I’ll write one test suite for the DNS master, a second suite for the first DNS slave, and a third suite for the second DNS slave. The first suite will prove that the DNS master has the BIND 9 packages installed, that zone files and other configuration files are in place, that the BIND 9 service runs, and that name resolution works. The second suite will prove that a DNS slave is built, and receives a zone transfer as soon as it comes online. The third suite simply proves that the solution can handle more than one DNS slave.
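
By way of illustration, the master suite’s Serverspec checks might look something like this (the package name, zone file, and test domain are assumptions for the sketch, not the role’s actual tests):

test/integration/master/serverspec/master_spec.rb:

require 'serverspec'
set :backend, :exec

describe package('bind') do
  it { is_expected.to be_installed }
end

describe service('named') do
  it { is_expected.to be_enabled }
  it { is_expected.to be_running }
end

describe file('/var/named/example.com.zone') do
  it { is_expected.to be_file }
end

describe command('dig +short www.example.com @localhost') do
  its(:exit_status) { is_expected.to eq 0 }
  its(:stdout) { is_expected.to match(/\d+\.\d+\.\d+\.\d+/) }
end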

The approach would fall short if we had a requirement, say, to add a new DNS record after the master was created, update its serial number, and see that all the slaves received the update.  But as I say, it’s a lot better than nothing.

I must acknowledge Maxim Chernyak for documenting the Kitchen hack that this work is based on.

BIND 9 solution

The Ansible role that we will be testing configures a simple BIND 9 system with a single master that is also a master for all of its zones, and one or more slaves that receive zone transfers and respond to recursive DNS queries.

The following figure shows the high-level architecture:

[Figure: high-level architecture of the BIND 9 master/slave solution]

Ansible role

The code for this solution is available online at Github here.  It’s not my intention here to discuss the Ansible code itself, except where it is relevant to the integration testing procedure.

Kitchen config

To learn more about my Kitchen config, please see my earlier post where I described the general config.

The .kitchen.yml file

The .kitchen.yml I have for the role is as follows:

---
driver:
  name: vagrant

platforms:
  - name: centos-7.2
    driver_plugin: vagrant
    driver_config:
      box: puppetlabs/centos-7.2-64-nocm

provisioner:
  name: ansible_playbook
  hosts: test-kitchen
  ansible_verbose: false
  ansible_verbosity: 2
  require_ansible_repo: false
  require_ansible_omnibus: true
  require_chef_for_busser: false

verifier:
  name: serverspec
  bundler_path: '/usr/local/bin'
  rspec_path: '/usr/local/bin'

suites:
  - name: master
    verifier:
      patterns:
        - roles/ansible-bind/test/integration/master/serverspec/master_spec.rb
    driver_config:
      network:
        - ['private_network', {ip: '10.0.0.10'}]
  - name: slave1
    verifier:
      patterns:
        - roles/ansible-bind/test/integration/slave1/serverspec/slave1_spec.rb
    driver_config:
      network:
        - ['private_network', {ip: '10.0.0.11'}]
  - name: slave2
    verifier:
      patterns:
        - roles/ansible-bind/test/integration/slave2/serverspec/slave2_spec.rb
    driver_config:
      network:
        - ['private_network', {ip: '10.0.0.12'}]

read more

Testing an Ansible role using Test Kitchen

Updated with thanks to Bill Wang for his feedback and pull request.

I have recently experimented with using Test Kitchen and Neill Turner’s Kitchen Ansible extension to set up automated testing for Ansible roles, and in this post I document the working configuration that I ended up with.

Acknowledgements go to Neill for writing all of these extensions, as well as to Martin Etmajer of Dynatrace for his DevOpsDays presentation on this topic, and to the Zufallsheld blog post from which I’ve borrowed a graphic.

Kitchen CI architecture

[Figure: Kitchen CI architecture]

At a high level, I have used an architecture that combines Test Kitchen with the kitchen-docker driver and the kitchen-ansible provisioner, which in turn installs Ansible via Neill’s omnibus-ansible, plus his kitchen-verifier-serverspec test runner to run the Serverspec tests. Using kitchen-verifier-serverspec means that I do not depend on the Busser runner, and therefore have no need for Chef Omnibus in the picture just to run the tests, as was the case in earlier incarnations of this stack.
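
A minimal Gemfile for this stack might look like the following (a sketch; pin versions to whatever suits your environment):

Gemfile:

source 'https://rubygems.org'

gem 'test-kitchen'                 # the test harness itself
gem 'kitchen-docker'               # driver: runs instances as Docker containers
gem 'kitchen-ansible'              # provisioner: installs Ansible and runs the role
gem 'kitchen-verifier-serverspec'  # verifier: runs Serverspec without Busser or Chef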

How to set this up

I assume that we already have an Ansible role that we want to test; in my case, I forked a Galaxy role for Sonatype Nexus by jhinrichsen and added the Kitchen CI configuration. My code is available at Github here.

Prerequisites

I assume we have installed the following:

  • Git
  • Docker
  • Ruby
  • Ruby Gems
  • Bundler

How to use and install these is out of scope for today, but here’s what I have before we start:

$ git --version
git version 2.5.4 (Apple Git-61)
$ docker -v
Docker version 1.11.1, build 5604cbe
$ ruby -v
ruby 2.0.0p481 (2014-05-08 revision 45883) [universal.x86_64-darwin14]
$ gem -v
2.0.14
$ bundler -v
Bundler version 1.10.5

read more