Introduction

opConfig 3.1.1 introduces a new capability: plugins (written in Perl) can now be used to collect or transform configuration data. This document describes that feature.

General Requirements

All plugins have to be written in Perl (they may run other programs, but the general scaffolding must be Perl).

Plugins are only considered if they're valid Perl and if they fulfil the requirements for a Perl module: the plugin file must declare a package (whose name matches the file name, e.g. package SpecialDeviceHelper; in SpecialDeviceHelper.pm), and the file must end with a true value (i.e. 1;).

It's strongly recommended that the plugin have a version declaration right after the package line, e.g. our $VERSION = "1.2.3";

The plugin may load extra Perl modules with 'use', but it must not use any package-level global variables. All its variables and any objects that it might create must have local scope.
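
A minimal plugin skeleton that satisfies these requirements could therefore look like this (the package name ExamplePlugin is a placeholder):

package ExamplePlugin;
our $VERSION = "1.0.0";

use strict;
use warnings;

# plugin functions (collect_configuration and/or process_configuration)
# go here; all variables they use must be lexically scoped.

1;    # a module must end with a true value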

If opConfig encounters an invalid plugin, it will ignore it and log a message about the problem.


The global configuration option 'opconfig_plugin_timeout' (default: 20 seconds) sets the maximum execution time for any opConfig plugin function. 

The configuration option 'opconfig_raise_alert_events' (default: true) controls whether opConfig sends any alert events to NMIS.
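
As a rough sketch only (where exactly these options are set depends on your installation's opConfig configuration; the values shown are illustrative, not the defaults):

'opconfig_plugin_timeout' => 30,            # allow plugin functions up to 30 seconds
'opconfig_raise_alert_events' => 'false',   # do not send alert events to NMIS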

Collecting device configuration data with a plugin

To collect device data using a plugin you need a plugin that provides the necessary collection functionality, and you need to configure one or more commands to be delegated to that plugin.

How to delegate collection to a plugin

To delegate data collection, the configuration for the command in question has to have the property "use_collection_plugin" set to the name of the desired plugin package (i.e. "X", not "X.pm").

This can be done for an individual command, or for all commands belonging to the command set in question (if given in the 'scheduling_info' section). The specification for an individual command overrides any setting in scheduling_info.

Data collection can be delegated to exactly one plugin. If collection is delegated, then opConfig does not connect to the node in any way! Instead the plugin has to do anything and everything that is required to procure configuration data for opConfig.

Here is a minimal example command set file (for conf/command_sets.d/) that demonstrates how a command is delegated:

%hash = (
    'collect_with_plugin' => {
        'os_info' => {
            'os' => '/some special device/',
        },
        # ...omitted scheduling info etc.
        'commands' => [
            {
                'command' => "external command",
                'use_collection_plugin' => "SpecialDeviceHelper",
            },
            # ...
        ],
    }
);


What is expected of a plugin for configuration data collection

A plugin can be used for collection if it offers the function collect_configuration.

This collect_configuration function will be invoked with the following named arguments: node (the name of the node in question), node_info, command, credential_set, logger and opconfig. These structures are described in the Plugin Input Argument Structures section below.

The function must not modify any of the arguments that are passed to it.

The function must return a hash reference with the following supported keys:

If ignore is 1, then the command is ignored altogether and opConfig does not save anything.
If success is 1, then and only then is the returned configuration_data processed and stored by opConfig.
The error property is ignored if success is 1; otherwise the error message is logged and reported.

Example Collection Plugin

Here is a minimal collection plugin which uses an external program and reports its output as configuration data back to opConfig:


package SpecialDeviceHelper;
our $VERSION = "1.0.0";

use strict;
use warnings;

sub collect_configuration
{
    my (%args) = @_;

    my ($node, $node_info, $command, $credential_set, $logger, $opconfig)
            = @args{qw(node node_info command credential_set logger opconfig)};
    $logger->info("Plugin ".__PACKAGE__." about to collect data for $node, command $command->{command}");

    # maybe we need to shell out to some program?
    # use a lexical filehandle so that no package-level globals are created
    open(my $ph, "-|", "/usr/local/bin/complicated_operation", $command->{command}, $node_info->{host})
            or return { error => "failed to start complicated_operation: $!" };
    my $goodies = join("", <$ph>);    # capture the program's complete output
    close $ph;
    return { error => "complicated_operation failed: $?" } if ($?);

    return { success => 1, configuration_data => $goodies };
}
1;

How to filter or transform configuration data with a plugin

To transform, filter, analyse or otherwise process configuration data further, you need one or more plugins that provide the desired processing functionality, and the command in question needs to be configured so that those plugins are invoked.

How to activate processing plugins

One or more processing plugins can be applied to a command's configuration data. The selected plugins will be invoked in a pipeline sequence, i.e. each plugin is passed the results of the previous plugin for further work. At the end of this pipeline opConfig continues with the processing of the final results.

To apply plugins, the command in question has to set the property "use_processing_plugins" to a list of the desired plugin package names (i.e. "X", not "X.pm").

This can be done for an individual command, or for all commands belonging to a command set (i.e. if given in the 'scheduling_info' section). The specification for an individual command overrides the setting in scheduling_info.

Here is a minimal command set example that configures the command "show running configuration" for plugin processing:

%hash = (
    'process_with_plugin' => {
        # ...omitted os selector, scheduling info etc.
        'commands' => [
            {
                'command' => "show running configuration",
                'use_processing_plugins' => [ "FilterPlugin", "TranslatorPlugin" ], 
            },
# ...
        ],
    }
);

What is expected of a processing plugin

A plugin can be used for configuration data processing if it offers the function process_configuration. This function is expected to transform a newly collected command output (or configuration data) before opConfig handles change detection and storage.

The process_configuration function will be called with the following named arguments:

The function must return a hash reference with the following supported keys:

If and only if success is 1 will configuration_data, derived_info, alerts and conditions be processed by opConfig.

If a plugin signals an error, then the error message is logged and reported, and opConfig then continues with the other plugins in the pipeline. Note that any data returned by the failed plugin is ignored!

If configuration_data is returned, then it replaces the original input and is passed to the next pipeline stage. Otherwise the original input is used.

If derived_info, alerts or conditions are returned, then they're merged with any already existing information. In other words, each processing plugin can only add to these, not overwrite what a previous plugin has reported.

How to filter or transform configuration data

TBD - just return new data, show example

Example Filtering Plugin
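
Here is a minimal sketch based only on the process_configuration contract described above. The package name FilterPlugin, the argument names (in particular the assumption that the collected output arrives as a configuration_data argument), and the filtering rule itself are illustrative placeholders:

package FilterPlugin;
our $VERSION = "1.0.0";

use strict;
use warnings;

sub process_configuration
{
    my (%args) = @_;
    # assumption: the current command output is passed in as configuration_data
    my ($node, $command, $config_data, $logger) = @args{qw(node command configuration_data logger)};

    $logger->debug("FilterPlugin processing command $command->{command} for node $node");

    # example transformation: strip lines that change on every collection
    # (here: anything that looks like a "last change" timestamp comment)
    # so that they don't show up as configuration changes
    my @keep = grep { $_ !~ /^\s*!\s*Last configuration change at/i } split(/^/m, $config_data);

    # returning configuration_data replaces the input for the next pipeline stage
    return { success => 1, configuration_data => join("", @keep) };
}
1;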

How to raise alerts and open (or close) NMIS events with a plugin

TBD - return alerts, point to values

Example Alerting Plugin
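
Here is a minimal sketch that uses the simplest form of the alerts response structure (event name mapped to 1 to raise or 0 to close an event; see Plugin Response/Result Structures below). The package name AlertPlugin, the assumed configuration_data argument and the check itself are placeholders:

package AlertPlugin;
our $VERSION = "1.0.0";

use strict;
use warnings;

sub process_configuration
{
    my (%args) = @_;
    # assumption: the current command output is passed in as configuration_data
    my ($node, $config_data, $logger) = @args{qw(node configuration_data logger)};

    # placeholder check: does the collected config contain an SNMP community of "public"?
    my $insecure = ($config_data =~ /^\s*snmp-server community public/mi) ? 1 : 0;

    $logger->info("AlertPlugin: insecure SNMP community "
            . ($insecure ? "found" : "not found") . " on $node");

    # value 1 raises, value 0 closes the named event for the current node
    return { success => 1, alerts => { "Insecure SNMP Community" => $insecure } };
}
1;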

How to use a plugin to prepare derived information (or knowledge) for opConfig

TBD - what is this all about

Example Plugin
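
Here is a minimal sketch that reports derived information under a plugin-specific outer key, as described in the derived_info section below. The package name DerivedInfoPlugin, the assumed configuration_data argument and the extracted value are placeholders:

package DerivedInfoPlugin;
our $VERSION = "1.0.0";

use strict;
use warnings;

sub process_configuration
{
    my (%args) = @_;
    # assumption: the current command output is passed in as configuration_data
    my ($node, $config_data, $logger) = @args{qw(node configuration_data logger)};

    # placeholder: count interface stanzas in the collected configuration
    my $ifcount = () = ($config_data =~ /^interface\s+\S+/mg);

    # derived_info uses a unique outer key per plugin; the inner structure
    # can be of any depth (see Plugin Response/Result Structures below)
    return {
        success      => 1,
        derived_info => {
            "DerivedInfoPlugin" => { interface_count => $ifcount },
        },
    };
}
1;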

Plugin Input Argument Structures

node_info

contains the same data that you get when running ./bin/opnode_admin.pl act=export node=XYZ, i.e. the node's configuration, connection information and OS information.

credential_set

holds the details of the credential set that was configured for this node.
The credential set structure carries the same properties as the credential set editing GUI screen, with the following keys:

command

contains the command in question, plus some metadata.

This is structured similarly to the command set definition, but with some extended or expanded properties:

logger

refers to an OMK::Log instance, a subclass of Mojo::Log.

You can use the following methods to log information, in order of severity: fatal, error, warn, info, debug (and debug2 to debug9).

Your information is logged if the global verbosity level is at least as verbose as your chosen method; opConfig's default level is 'info'. For example, $logger->warn("this is pretty bad") will very likely end up in the opConfig log, while $logger->debug5("nuisance") will almost certainly be suppressed.

opconfig

refers to an OMK::opConfig instance, which can be used to schedule command execution, retrieve other command revisions and the like.

Please consult the opConfig API documentation at FIXME/TBA for details.

Plugin Response/Result Structures

alerts

must be a hashref if present. In the simplest case, the key is the alert/event name, and the value is 0 or 1. In this case opConfig raises (value 1) or closes (value 0) an event with the given event name for the current node.

As an alternative, the value may be a hash with any of the following keys instead of 0/1:

conditions

must be a hashref if present, and must be two levels deep.

The outer key is not displayed by opConfig at this time, but it separates conditions signalled by different plugins, so we recommend that each plugin picks a unique outer key.
The data structure behind that outer key must be a hashref again.

Its key must be the name of the condition that is to be signalled, and the value must be one of 0, 1 or undef.
0 means a bad state for this condition, 1 means a good state for the condition, and undef signals that the state of this condition is not known.
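
For example, a plugin using the (hypothetical) outer key ComplianceCheck might return a conditions structure like this:

'conditions' => {
    'ComplianceCheck' => {
        'NTP servers configured' => 1,      # good state
        'Telnet access disabled' => 0,      # bad state
        'Banner matches policy'  => undef,  # state unknown
    },
},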

derived_info

must be a hashref if present, and can have any depth.

As with conditions and alerts, the outer key is used to separate information reported by different plugins (or different kinds of information); a suitably unique key needs to be chosen.

The following keys have special meaning and should not be used for other purposes in your inner datastructure: type, tag, value, title, labels, rows.

All derived_info data is stored, but at this time the opConfig GUI will only display derived info entries whose inner structure conforms to the following layout: