Puppet: configure many machines the easy way

Repetition is the sysadmin’s bane. That’s why we have Puppet, an ingenious system for configuring multiple machines at once.

Why do this?

  • Automate repetitive jobs
  • Quickly roll out large-scale deployments
  • Learn a vital tool for the brave new world of cloud computing

Puppet is a configuration management utility which has been designed to aid in the automation of many tasks across various systems. Configuration management manifest files are created using Puppet’s own language syntax and then applied to a Linux (or Unix, Mac or Windows) system. This allows for system administration tasks to be automated, reducing the tedium and time spent on repetitive tasks – the ultimate sysadmin goal.

While these manifests can be run on systems locally to perform said tasks, storing these files on a central server running the aptly titled Puppet Master service allows the management of a whole host of machines. Farming out configurations to an entire estate drastically simplifies the management of servers and workstations across entire networks.

Take the scenario of a company running 50 servers all having statically assigned IP details. A new DNS server is brought online, so each of the 50 servers requires a change to the /etc/resolv.conf. Without configuration management tools this would mean either SSH sessions to all servers and editing the files, or copying the files to each using scp, which would take an inordinate amount of time. With Puppet a small manifest ordering all the connected Puppet clients (or agents) to copy the configuration file takes care of your required configuration change the next time they check in.

As mentioned previously there are two elements to a Puppet configuration management system: the Puppet Master where all the manifests are stored, and the Puppet agents, which run on the client servers, or workstations. The agents poll the master on a given schedule (by default every 30 minutes) and check for differences in the configuration manifests.
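The 30-minute default poll interval can be changed with the runinterval setting in the agent's puppet.conf. A brief config sketch (the value shown is just an example):

```ini
# /etc/puppet/puppet.conf on an agent
[agent]
# How often the agent polls the master, in seconds (default 1800 = 30 minutes)
runinterval = 1800
```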


RPM packages can be found at yum.puppetlabs.com whereas Deb packages live at apt.puppetlabs.com.

In this guide we will walk through the installation and setup of a Puppet Master and connected agents resulting in the application of a shared configuration to said agents. We will use CentOS 7 as the distribution here, but Puppet is readily available on most, if not all, distributions.

Before we get to the nitty-gritty there are a few prerequisites to installing Puppet:

  • Hostnames configured: This will ensure the correct information is transferred when configuring clients.
  • DNS: As with most projects, DNS or hosts file entries are useful to ensure nodes can communicate using friendly names rather than IP addresses. Out of the box, Puppet agents look for the Puppet Master server on the network using the hostname ‘puppet’ – while each client can be configured to look for a different hostname, it’s far easier to have a DNS or hosts file entry for puppet.
  • NTP: Accurate time is vital for Puppet to work correctly, mainly because the master server also acts as the certificate authority. If there is a time discrepancy between the Puppet Master and the agents, certificates could appear to be expired and policies would then not be applied.


We’re starting Puppet from systemd. Try not to get carried away when you’re called the Puppet Master.

For this guide let’s assume we have three servers, each with a CentOS 7 minimal install: one will run the Puppet Master service and the other two the agent. We will assume hostnames of server1, server2 and server3, each with a statically assigned IP address.

Append the following to the hosts files on all three machines, with each line preceded by the relevant server’s IP address:

server1.localdomain server1 puppet
server2.localdomain server2
server3.localdomain server3

Now we need to install the packages on the above servers. PuppetLabs, the people behind the software, provide software repositories with the very latest version. They also provide an Enterprise edition of Puppet, which is not to be confused with the open source offering that we’re using here.

Let’s install the PuppetLabs repository, which is where we will get the Puppet packages from:

yum localinstall http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm

Once that is installed we can grab the software:

yum install puppet-server

This will install the software required to create a Puppet Master server and its dependencies.

The first thing that needs to happen now we have the software installed is to generate an SSL certificate. This certificate is used during the operation of Puppet to ensure secure communication between the master and its agents; the Puppet Master will sign the certificate requests from agents when they initially connect, and this initial generation is the first step in that process. There are multiple ways to generate the certificate depending upon the desired configuration – for example, if you have multiple Puppet Masters on the same network – but we are building a simple setup with a single master, so the process is straightforward. We need to launch Puppet Master non-daemonised:

puppet master --verbose --no-daemonize

You will see something along the lines of:

[root@server1 ~]# puppet master --verbose --no-daemonize
Info: Creating a new SSL key for ca
Info: Creating a new SSL certificate request for ca
Info: Certificate Request fingerprint (SHA256): EF:E8:17:9D:FD:DA:40:38:D8:96:74:BE:CD:1C:45:7C:14:51:1C:F9:D9:D6:40:3F:1B:B7:9D:D4:D8:0C:F0:36
Notice: Signed certificate request for ca
Info: Creating a new certificate revocation list
Info: Creating a new SSL key for server1.localdomain
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for server1.localdomain
Info: Certificate Request fingerprint (SHA256): F0:D0:94:C6:76:17:14:14:B1:99:D7:C4:04:93:BD:A3:63:E8:DD:3B:63:63:E2:F5:0B:7E:9F:90:D4:D3:0B:A0
Notice: server1.localdomain has a waiting certificate request
Notice: Signed certificate request for server1.localdomain
Notice: Removing file Puppet::SSL::CertificateRequest server1.localdomain at '/var/lib/puppet/ssl/ca/requests/server1.localdomain.pem'
Notice: Removing file Puppet::SSL::CertificateRequest server1.localdomain at '/var/lib/puppet/ssl/certificate_requests/server1.localdomain.pem'
Notice: Starting Puppet master version 3.7.4

Once you see the notice that the Puppet Master is starting, certificate generation is complete and we can continue. Hit Ctrl+C to kill the process so we can enable and launch the master service as a daemon:

systemctl start puppetmaster
systemctl enable puppetmaster

We need to ensure the firewall is open to allow agents to connect:

firewall-cmd --add-port=8140/tcp --permanent
firewall-cmd --reload

For the purposes of this guide we will be disabling SELinux to ensure that doesn’t stand in our way; run these two commands to disable it:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

The Puppet Master service is now installed and running on server1, and a similar process can be followed on server2 and server3 to install the agent.

First of all it’s a good idea to watch the syslog on the master server so that we see any inbound connection requests from the agents:

tailf /var/log/messages

On both server2 and server3, run:

yum -y localinstall http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm

to install the PuppetLabs repository configuration. Then we can install the Puppet agent software:

yum -y install puppet

The next step is to start and enable the agent service:

systemctl start puppet
systemctl enable puppet

We can also start and enable the puppet service on the Puppet Master server, server1; after all, it too is a server requiring configuration.

The first time the agent is started it will send a certificate request to the Puppet Master. As described earlier this is all part of ensuring the communication between master and agents is nice and secure. When the agent is started and the certificate request is sent you should see syslog entries appear on server1 detailing these happenings.

Feb 2 14:11:07 puppet puppet-master[20580]: server2.localdomain has a waiting certificate request

Notice we haven’t had to perform any configuration on the agent machines. This is due to the previously added hosts file entry of puppet as an alias to server1, which the Puppet agent defaults to, making the whole process much simpler.

Once you have installed and started the agent service on both server2 and server3 we can head back to the master server and look at those certificate requests (you may need to open another terminal if you do not wish to close the syslog tail).

Running the command:

puppet cert list

will show any pending certificate requests, which can be signed using:

puppet cert sign fqdn

fqdn being the fully qualified domain name of the server requesting a certificate to be signed – it shows up when the list command is given. We should see two requests waiting for us, for server2 and server3, so let’s go ahead and sign them:

puppet cert sign server2.localdomain
puppet cert sign server3.localdomain

All certificates, both signed and unsigned, can be seen by issuing the command

puppet cert list --all

Signing the certificates is the last step in getting Puppet up and running in this simple configuration. We can now go ahead and start pushing configurations to the agents.

On a CentOS system the configurations are stored at /etc/puppet on the master server. Within this directory are several sub-directories; of importance to us for this guide are the manifests and modules folders. The puppet configuration catalog that the agent pulls always starts within the manifests directory with a file called site.pp. In this file we can declare the agents that will be connecting, which are defined as nodes. The configurations that these nodes will retrieve are defined as classes.

There are two node definitions we will concern ourselves with here: the default node definition and the hostname-based node definition. The default definition acts as a catch-all for nodes that haven’t been declared specifically, while a hostname-based definition targets a host specifically. Let’s take a look at a simple /etc/puppet/manifests/site.pp:

node default {
  include resolvconf
}
node 'server2.localdomain' {
  include resolvconf
  include test
}

This site.pp manifest includes two node definitions: the default and one for server2. Within these definitions are the configuration classes which will be applied to the nodes. So for our sample environment, server1 and server3 receive the default configuration class, resolvconf, as they haven’t been explicitly defined, while server2 receives a unique configuration containing the class resolvconf plus the additional class test.
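Besides default and exact hostnames, node definitions can also match by regular expression, which is handy for fleets with numbered hostnames. A brief sketch (the web hostnames here are hypothetical, not part of our three-server setup):

```puppet
# Matches web1.localdomain, web2.localdomain, and so on
node /^web\d+\.localdomain$/ {
  include resolvconf
}
```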

Let’s look at the classes we have defined for our nodes. These classes define the configuration received, and can be placed inside a module. A Puppet module is a good way to bundle Puppet configuration manifests and associated data together. Taking our example class of test, we can create a module for this configuration element and place our manifest inside it. Best practice for Puppet states that nearly all manifests should belong inside modules, with the sole exception of site.pp, which we saw earlier. Modules are placed as subdirectories within the /etc/puppet/modules directory, under which further subdirectories are created for the various elements of the module.

Manifests associated with modules reside in a manifests directory within the module and start at the init.pp file, which will contain the class definitions (the class name must match the module name). Our test example would have its manifest file here:

/etc/puppet/modules/test/manifests/init.pp
Let’s look at a class definition:

class test {
}

Here we have defined the class test. At this point it doesn’t actually perform any function; for that we need to introduce resources. A resource describes an aspect of the system you are planning to configure, i.e. a package to be installed, a service to control or a file to modify. In order to manage a resource on a node we need to declare it within our class. For our test class we are looking to send a simple notification, so we use the notify resource type. A notification message would look something like:

notify { "I'm notifying you.": }
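Every resource declaration follows the same pattern: a type, a title and a set of attribute => value pairs. The notify resource happens to need no further attributes, but a fuller sketch of the pattern (the file path here is purely illustrative) looks like:

```puppet
# type { 'title': attribute => value, }
file { '/tmp/example.txt':
  ensure  => file,
  content => "managed by Puppet\n",
}
```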

Completing our test module with the notify resource type would look like this:

class test {
  notify { "I'm notifying you.": }
}

When the Puppet agent on server2 polls the master, a notification would then be found in the resultant downloaded catalog, although you would need to view the syslog to see this notification.

Let’s try this out. On the master, edit /etc/puppet/manifests/site.pp to contain:

node 'server2.localdomain' {
  include test
}

Then create the module’s manifests directory:

mkdir -p /etc/puppet/modules/test/manifests

and edit /etc/puppet/modules/test/manifests/init.pp to contain:

class test {
  notify { " test notification ": }
}

Perform a manual poll on server2 by running the command:

puppet agent -t

You should see something similar to the following with the notify message being present:

[root@server2 ~]# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for server2.localdomain
Info: Applying configuration version '1426284530'
Notice: test notification
Notice: /Stage[main]/Test/Notify[ test notification ]/message: defined 'message' as ' test notification '
Notice: Finished catalog run in 0.05 seconds

Now we have the basics of creating a manifest we can start to do useful things. One of the classes mentioned in our initial site.pp manifest was resolvconf. Let’s build this module to create and manage the /etc/resolv.conf file on our servers. To do this we will use the file resource type, which will instruct our agents to download a file from what is known as a file bucket. A file bucket is a directory stored inside the module directory alongside the manifests directory. In our case we will store a complete resolv.conf file in a file bucket for our resolvconf module. The directory structure would look like this:

/etc/puppet/modules/resolvconf
- manifests/init.pp
- files/resolv.conf

For this module our manifest file will contain the details for the resolvconf class, and point to the resolv.conf that the agent needs to download.

class resolvconf {
  file { "/etc/resolv.conf":
    ensure => file,
    source => 'puppet:///modules/resolvconf/resolv.conf',
    path   => "/etc/resolv.conf",
    owner  => root,
    group  => root,
    mode   => '644',
  }
}

This manifest tells our agents to download the resolv.conf file from our file bucket, store it at /etc/resolv.conf, apply ownership and permissions (owner, group and mode) and ensure the file exists. The manifest ensures resolv.conf on all our servers remains correctly configured; any changes made to the local version of the file will be overwritten on the next agent poll.
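The file resource becomes even more powerful when combined with Puppet’s relationship metaparameters, require and subscribe. As a sketch of the common package/file/service pattern (the ntpconf module here is an assumption for illustration, not part of this tutorial’s setup), a changed config file can automatically restart its service:

```puppet
class ntpconf {
  package { 'ntp':
    ensure => installed,
  }
  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/ntpconf/ntp.conf',
    require => Package['ntp'],           # install the package first
  }
  service { 'ntpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],  # restart when the file changes
  }
}
```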

The contents of the /etc/puppet/modules/resolvconf/files/resolv.conf file will be:

search localdomain
nameserver

To ensure this module is applied, we can revert to the first site.pp mentioned, which includes the class resolvconf in both the default and specific node definitions. Re-running the command

puppet agent -t

should see this configuration applied to every node: explicitly defined nodes include the class in their definitions, and undefined nodes pick it up from the default node definition.

We have barely scratched the surface with our configuration manifests here. There’s so much more to Puppet, allowing deployment of packages, files, and control of services. It covers pretty much every component of every sysadmin task, allowing automation of mundane repetitive jobs but also allowing the orchestration of software stacks to aid in quick deployments, which is key in today’s world of cloud services and scalable systems.

Jon Archer is a Fedora ambassador, founder of RossLUG, and local government IT chap in rainy Lancashire.