
Probes

At times you need to gather information from a client machine before you can generate its configuration. For example, if some of your machines have both a local scratch disk and a system disk while others only have the system disk, you would want to know this information to correctly generate an /etc/auto.master autofs config file for each type. Here we will look at how to do this.

Probes also allow dynamic group assignment for clients; see Dynamic Group Assignment below.

First, create a Probes directory in your top-level repository location:

mkdir /var/lib/bcfg2/Probes

This directory will hold any small scripts we want to use to grab information from client machines. These scripts can be in any scripting language; the shebang line (the #!/usr/bin/env some_interpreter_binary line at the very top of the script) is used to determine the script’s interpreter.
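
For instance, a probe need not be a shell script at all. Here is a minimal sketch of a hypothetical Probes/arch probe written in Python (the probe name and its output are purely illustrative):

#!/usr/bin/env python
# Hypothetical probe: report the client's hardware architecture.
# Whatever the script prints is sent back to the server as the probe's output.
import platform
print(platform.machine())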

Note

Bcfg2 uses Python's mkstemp to create the probe scripts on the client. If your /tmp directory is mounted noexec, you will likely need to set the TMPDIR environment variable so that the bcfg2 client creates its temporary files in a directory from which it can execute them.
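
For example, assuming /var/tmp is not mounted noexec on your clients, you could run the client as:

TMPDIR=/var/tmp bcfg2 -v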

Note

New in version 1.3.0.

A probe script must exit with a return value of 0. If it exits with a non-0 return value, the client will abort its run. This behavior can be disabled by setting exit_on_probe_failure = 0 in the [client] section of bcfg2.conf.
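
That is, to let client runs continue even when a probe fails, you would add the following to bcfg2.conf on the client:

[client]
exit_on_probe_failure = 0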

Now we need to figure out what exactly we want to do. In this case, we want to hand out an /etc/auto.master file that looks like:

/software  /etc/auto.software --timeout 3600
/home      /etc/auto.home --timeout 3600
/hometest  /etc/auto.hometest --timeout 3600
/nfs       /etc/auto.nfs --timeout 3600
/scratch   /etc/auto.scratch --timeout 3600

for machines that have a scratch disk. For machines without an extra disk, we want to get rid of that last line:

/software  /etc/auto.software --timeout 3600
/home      /etc/auto.home --timeout 3600
/hometest  /etc/auto.hometest --timeout 3600
/nfs       /etc/auto.nfs --timeout 3600

So, from the Probes standpoint we want to create a script that counts the number of SCSI disks in a client machine. To do this, we create a very simple Probes/scratchlocal script:

#!/bin/sh
grep -c Vendor /proc/scsi/scsi

Running this on a node with n disks will print n+1, since the SCSI controller is also counted as a device. To differentiate between the two classes of machines we care about, we just need to check whether the script's output is greater than 2. We do this in the template.
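
For example, on a hypothetical machine with a system disk plus one scratch disk (two disks plus the controller), running the script by hand would look like:

$ grep -c Vendor /proc/scsi/scsi
3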

Note

This example uses Cheetah templates, but Cheetah is not required for Probes to operate properly.

For the template, we create a Cfg/etc/auto.master directory to hold the template of the file in question. Inside this template we check the result of the probe script that ran and act accordingly. The Cfg/etc/auto.master/auto.master.cheetah file looks like:

/software  /etc/auto.software --timeout 3600
/home      /etc/auto.home --timeout 3600
/hometest  /etc/auto.hometest --timeout 3600
/nfs       /etc/auto.nfs --timeout 3600
#if int($self.metadata.Probes["scratchlocal"]) > 2
/scratch   /etc/auto.scratch --timeout 3600
#end if

Any probe script you run will store its output in $self.metadata.Probes["scriptname"], which is how we access our scratchlocal script's output above. (See Handling Probe Output, below, for more information on how this is done.) Note that we had to wrap the output in an int() call; the script output is treated as a string, so it must be converted before it can be tested numerically.
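
For comparison, since Cheetah is not required, a hypothetical Genshi text-template version of the same file (Cfg/etc/auto.master/auto.master.genshi) might look like this; note that Genshi templates access the metadata object directly rather than through $self:

/software  /etc/auto.software --timeout 3600
/home      /etc/auto.home --timeout 3600
/hometest  /etc/auto.hometest --timeout 3600
/nfs       /etc/auto.nfs --timeout 3600
{% if int(metadata.Probes["scratchlocal"]) > 2 %}\
/scratch   /etc/auto.scratch --timeout 3600
{% end %}\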

With all of these pieces in place, the following series of events will happen when the client is run:

  1. The client runs.
  2. The server hands down our scratchlocal probe script.
  3. The client runs the scratchlocal probe script and hands its output back up to the server.
  4. The server generates /etc/auto.master from its template, performing any templating substitutions/actions needed in the process.
  5. The server hands /etc/auto.master down to the client.
  6. The client puts the file contents in place.

Now we have a nicely dynamic /etc/auto.master that gracefully handles machines with different numbers of disks. All that's left is to add /etc/auto.master to a Bundle:

<Path name='/etc/auto.master'/>
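
A minimal sketch of such a bundle (the bundle name autofs is hypothetical) might look like:

<Bundle name='autofs'>
  <Path name='/etc/auto.master'/>
</Bundle>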

Dynamic Group Assignment

Any line of probe output that begins with "group:" is used to dynamically assign the client to a group. These dynamic groups need not already exist in Metadata/groups.xml. If a dynamic group is defined in Metadata/groups.xml, clients assigned to it will also receive all of the groups and bundles that group definition includes.

Consider the following output of a probe:

group:debian-wheezy
group:amd64

This assigns the client to the groups debian-wheezy and amd64.
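
A probe that produces output along those lines might look like the following sketch; the emitted group names depend on the client (platform.machine() reports, e.g., x86_64 rather than amd64):

#!/usr/bin/env python
# Hypothetical probe: assign the client to a group named after its
# hardware architecture. Lines starting with "group:" become groups.
import platform
print("group:%s" % platform.machine())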

To prevent clients from manipulating their probe output and thereby choosing unexpected groups (and receiving those groups' potentially sensitive files), you can use the allowed_groups option in the [probes] section of bcfg2.conf on the server. This option is a whitespace-separated list of anchored regular expressions (each must match the complete group name) that controls dynamic group assignment: only matching groups are allowed. The default allows all groups.

New in version 1.3.4.

Example:

[probes]
allowed_groups = debian-(squeeze|wheezy|sid) i386

This allows the groups debian-squeeze, debian-wheezy, debian-sid and i386. With the probe output from above, this setting would disallow the group amd64.

Handling Probe Output

Bcfg2 stores output from probes in the Probes property of a client’s metadata object. To access this data in Genshi Templates, for instance, you could do:

${metadata.Probes['script-name']}

This is not the full output of the probe; any lines that start with "group:" have been stripped from it. The data is a string-like object with some salient extra features:

  • If the data is a valid XML document, then metadata.Probes['script-name'].xdata will be an lxml.etree._Element object representing the XML data.
  • If the data is a valid JSON document, and either the Python json or simplejson module is installed, then metadata.Probes['script-name'].json will be a data structure representing the JSON data.
  • If the data is a valid YAML document, and either the Python yaml or syck module is installed, then metadata.Probes['script-name'].yaml will be a data structure representing the YAML data.

If these conditions are not met, the named properties will be None. In all other respects, probe data objects act like strings.
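
For example, a hypothetical diskinfo probe could emit structured data as JSON:

#!/usr/bin/env python
# Hypothetical "diskinfo" probe: report structured data as JSON.
import json
print(json.dumps({"scratch": True, "ndisks": 2}))

A Genshi template could then test ${metadata.Probes['diskinfo'].json['scratch']} directly, with no string parsing or int() conversion required.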

Host- and Group-Specific Probes

Bcfg2 has the ability to alter probes based on client hostname and group membership. These files work similarly to files in Cfg.

If multiple files with the same basename apply to a client, the most specific one is used. Only one instance of a probe is served to a given client, so if both a host-specific version and a generic version apply, only the host-specific one will be used.
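
Assuming the same naming scheme as Cfg, a probe directory might contain the following (the group name, priority, and hostname are hypothetical):

Probes/scratchlocal                        (generic version)
Probes/scratchlocal.G50_server             (group-specific: group "server", priority 50)
Probes/scratchlocal.H_node1.example.com    (host-specific)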

If you want to detect information about the client operating system, the Ohai plugin can help.

Data Storage

New in version 1.3.0.

The Probes plugin stores the output of client probes locally on the Bcfg2 server, both to ensure that probe data and dynamic groups are available at server startup (rather than having to wait until all probes have run every time the server is restarted) and to make them available to bcfg2-info and related tools. There are two options for storing this data: Probes/probed.xml, a plain XML file stored in the Bcfg2 specification, or a database.

Advantages and disadvantages of using the database:

  • The database is easier to query from other machines, for instance if you run bcfg2-info or bcfg2-test on a machine that is not your Bcfg2 server.
  • The database allows multiple Bcfg2 servers to share probe data.
  • The database is likely to handle probe data writes (which happen on every client run) more quickly, since it can write only the probes whose data has changed.
  • The database is likely to handle probe data reads (which happen only on server startup) more slowly, since it must query a database rather than the local filesystem. Once the data has been read in initially (from XML file or from the database) it is kept in memory.

To use the database-backed storage model, set use_database in the [probes] section of bcfg2.conf to true. You will also need to configure the Global Database Settings.
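
For example:

[probes]
use_database = true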

The file-based storage model is the default, although that is likely to change in future versions of Bcfg2.
