VL_SIMPLENN - Evaluate a SimpleNN network.

RES = VL_SIMPLENN(NET, X) evaluates the convnet NET on data X.

RES = VL_SIMPLENN(NET, X, DZDY) evaluates the convnet NET and its derivative on data X and output derivative DZDY (forward+backward pass).

RES = VL_SIMPLENN(NET, X, [], RES) evaluates NET on X, reusing the structure RES.

RES = VL_SIMPLENN(NET, X, DZDY, RES) evaluates NET on X and its derivative, reusing the structure RES.
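A minimal usage sketch of the calling forms above (assuming NET is a loaded SimpleNN network and IM_ a preprocessed input array of size H x W x C x N):

```matlab
% Forward pass only, e.g. for prediction.
res = vl_simplenn(net, im_) ;
scores = squeeze(gather(res(end).x)) ;   % output of the last layer

% Forward + backward pass: DZDY is the derivative of the objective with
% respect to the network output (often simply 1 for a scalar loss).
dzdy = single(1) ;
res = vl_simplenn(net, im_, dzdy) ;
```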

This function processes networks stored in the SimpleNN wrapper format. Such networks are 'simple' in the sense that they consist of a linear sequence of computational layers. For more complex topologies, use the dagnn.DagNN wrapper instead, or write your own wrapper around MatConvNet's computational blocks for even greater flexibility.

The format of the network structure NET and of the result structure RES is described in some detail below. Most networks expect the input data X to be standardized, for example by rescaling the input image(s) and subtracting a mean. Doing so is left to the user, but the information needed to do it (such as the expected image size and the average image) is usually contained in the net.meta field of the NET structure (see below).
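For example, a typical standardization step for a pretrained model might look as follows. The exact fields available depend on the model; the net.meta.normalization fields used here are common in models distributed with MatConvNet but are not guaranteed for every network:

```matlab
% Load an image and preprocess it to the format expected by the network.
im = imread('peppers.png') ;
im_ = single(im) ;                                        % convert to single precision
im_ = imresize(im_, net.meta.normalization.imageSize(1:2)) ;  % resize to expected size
im_ = im_ - net.meta.normalization.averageImage ;         % subtract the mean image
```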

The NET structure may need to be updated as new features are introduced in MatConvNet. Use the VL_SIMPLENN_TIDY() function to bring an old network up to date, as well as to clean up and check the structure of an existing network.
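For instance, upgrading an older pretrained model is typically a one-liner (the file name here is illustrative):

```matlab
% Load an older pretrained model and upgrade it to the current format.
net = load('imagenet-vgg-f.mat') ;
net = vl_simplenn_tidy(net) ;
```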

Networks can run either on the CPU or GPU. Use VL_SIMPLENN_MOVE() to move the network parameters between these devices.
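A sketch of moving a network to the GPU and back; note that the input data must reside on the same device as the network parameters:

```matlab
net = vl_simplenn_move(net, 'gpu') ;      % move parameters to the GPU
res = vl_simplenn(net, gpuArray(im_)) ;   % input must also be a gpuArray
net = vl_simplenn_move(net, 'cpu') ;      % move parameters back to the CPU
```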

To print or obtain a summary of the network structure, use the VL_SIMPLENN_DISPLAY() function.

VL_SIMPLENN(NET, X, DZDY, RES, 'OPT', VAL, ...) takes the following options:

The result format

SimpleNN returns the result of its calculations in the RES structure array. RES(1) contains the input to the network, while RES(2), RES(3), ... contain the output of each layer, from first to last. Each entry has the following fields:
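As an illustration of this layout, after a forward+backward pass the intermediate results and derivatives can be accessed as follows (field names follow the SimpleNN result format; res(i).x holds layer outputs, res(i).dzdx and res(i).dzdw the derivatives computed in the backward pass):

```matlab
res = vl_simplenn(net, im_, single(1)) ;  % forward + backward pass

output = res(end).x ;     % output of the last layer
dzdin  = res(1).dzdx ;    % derivative of the objective w.r.t. the input X
dzdw1  = res(2).dzdw ;    % derivatives w.r.t. the first layer's parameters
```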

The network format

The network is represented by the NET structure, which contains two fields:
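To make the format concrete, here is a minimal hand-built SimpleNN network with one convolutional layer followed by a ReLU. net.layers is a cell array of layer structures, each with a 'type' field, and net.meta holds auxiliary information; passing the result through vl_simplenn_tidy fills in any missing defaults:

```matlab
% A minimal two-layer SimpleNN network (illustrative sizes).
net.layers = {} ;
net.layers{end+1} = struct(...
  'type', 'conv', ...
  'weights', {{0.01*randn(5,5,3,10,'single'), zeros(1,10,'single')}}, ...
  'stride', 1, ...
  'pad', 0) ;
net.layers{end+1} = struct('type', 'relu') ;
net.meta = struct() ;
net = vl_simplenn_tidy(net) ;   % normalize and check the structure
```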

SimpleNN is aware of the following layers: