NeuralNetworkTrain

NeuralNetworkTrain [/Q/Z] [keyword = value] ...

Warning: This operation is deprecated.

The NeuralNetworkTrain operation trains a three-layer neural network. The training produces two 2D waves that store the interconnection weights between the network neurodes. Once you obtain the weights, you can use them with NeuralNetworkRun.

Flags

/Q: Suppresses printing information in the History area.
/Z: No error reporting.

Parameters

keyword is one of the following:

Input=inWave: Specifies the input patterns for training. inWave is a 2D wave in which each row corresponds to a single training event and each column corresponds to an input value. The number of rows in inWave (the number of training sets) must equal the number of rows in the output wave. inWave must be single or double precision and all entries must be in the range [0,1].
Iterations=num: Specifies the number of iterations. The default is 10000.
LearningRate=val: Sets the network learning rate, which is used in the back-propagation calculation. The default is 0.15.
MinError=val: Terminates training when the total error drops below val (the default is 1e-8). The total error is normalized; it is the sum of the squared errors divided by the number of training sets times the number of outputs.
Momentum=val: Specifies a coefficient used in the back-propagation algorithm. It adds to the change in a given weight a contribution proportional to the error in the previous iteration. The default momentum is 0.075.
NHidden=num: Specifies the number of hidden neurodes. You do not need to use the Structure keyword with NHidden because the network is then completely specified by the training waves and NHidden.
NReport=num: Specifies how often, in iterations (default 1000), the global RMS error is printed to the history area. Ignored with /Q.
Output=outWave: Specifies the expected outputs corresponding to the entries in the input wave. The number of rows in outWave (the number of training sets) must equal the number of rows in the input wave. outWave must be single or double precision and all entries must be in the range [0,1].
Restart: Allows you to specify your own set of weights as the starting values. Use this to chain training runs, feeding the output weights of one training session in as the input of the next (see the sketch under Details below).
Structure={Ni, Nh, No}: Specifies the structure of the network. Ni is the number of neurodes at the input, Nh is the number of hidden neurodes, and No is the number of output neurodes. Structure is unnecessary when you use NHidden because the remaining numbers are determined by the sizes of the input and output waves.
WeightsWave1=w1: Specifies the weights for propagation from the first layer to the second. w1 must be a double-precision 2D wave with as many rows as there are inputs and as many columns as there are hidden neurodes.
WeightsWave2=w2: Specifies the weights for propagation from the second layer to the third. w2 must be a double-precision 2D wave with as many rows as there are hidden neurodes and as many columns as there are outputs.
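
A minimal training call might look like the following sketch. The wave names inputs and outputs and the network dimensions are hypothetical (100 training events, 4 inputs, 2 outputs, 6 hidden neurodes):

	Make/D/N=(100,4) inputs		// one training event per row, values in [0,1]
	Make/D/N=(100,2) outputs	// expected outputs, values in [0,1]
	// ... fill inputs and outputs with training data ...
	NeuralNetworkTrain Input=inputs, Output=outputs, NHidden=6, Iterations=5000

After training, the weights are in M_Weights1 and M_Weights2 (see Details) and can be passed to NeuralNetworkRun.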

Details

NeuralNetworkTrain is the first half of the implementation of a three-layer neural network in which both inputs and outputs are taken as normalized quantities in the range [0,1]. Network training uses back-propagation to iteratively minimize the error between the network output and the expected output for each training set. Training creates two 2D waves that contain the interconnection weights between the neurodes: M_Weights1 contains the weights between the input layer and the hidden layer, and M_Weights2 contains the weights between the hidden layer and the output layer. During the iterations, global error information can be printed in the history area.
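
For example, a second training session could be started from the weights produced by a first one. This is a sketch; in particular, the assumption that Restart is combined with WeightsWave1 and WeightsWave2 to supply the starting weights follows from the keyword descriptions above:

	Duplicate/O M_Weights1, startW1		// keep the weights from the previous session
	Duplicate/O M_Weights2, startW2
	NeuralNetworkTrain Input=inputs, Output=outputs, NHidden=6, Restart, WeightsWave1=startW1, WeightsWave2=startW2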

The algorithm computes the output of the kth neurode by

V_k = \left[ 1 + \exp\left( -\sum_{i=1}^{n} w_i s_i \right) \right]^{-1},

where w_i is the weight corresponding to input i, s_i is the signal on that input, and n is the number of inputs connected to the neurode.
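
As a minimal sketch of this formula in code (not part of the operation itself; the function name NeurodeOutput is hypothetical):

	Function NeurodeOutput(w, s)
		Wave w, s					// weights and input signals, equal length
		Variable acc = 0
		Variable i
		for(i = 0; i < numpnts(w); i += 1)
			acc += w[i] * s[i]		// weighted sum of the inputs
		endfor
		return 1 / (1 + exp(-acc))	// sigmoid activation, output in (0,1)
	End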

The total error is defined as the sum (over all training sets and all outputs) of the squared differences between the network outputs and the expected values, normalized by the product of the number of training sets and the number of outputs. The value printed to the history (see the NReport keyword) is the square root of this total error, that is, the RMS error. The square root of the error computed at the end of the final iteration is stored in the variable V_rms.
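
In symbols (a restatement of the definition above; the names N_s, N_o, V, and T are introduced here only for clarity and are not used by the operation):

E = \frac{1}{N_s N_o} \sum_{t=1}^{N_s} \sum_{k=1}^{N_o} \left( V_{t,k} - T_{t,k} \right)^2, \qquad V_{\mathrm{rms}} = \sqrt{E},

where N_s is the number of training sets, N_o is the number of outputs, V_{t,k} is the network output, and T_{t,k} is the expected output for output k of training set t.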

See Also

NeuralNetworkRun