8.1. Modifications of the Growing Neural Network Algorithm

The original algorithm was modified in several respects, which were partly introduced and described in [28] and are partly introduced in this work, to better fit the needs of the calibration of sensor data sets and to improve the generalization ability of the networks built:

1. Not only neurons with two input links and one output link but also neurons with one input link and one output link can be added to the network. In addition, links can be added between any neuron and a neuron of a preceding layer, ensuring that practically any feedforward network topology can be built. In contrast to stepwise algorithms, the addition of a neuron with two input links takes the interaction of two variables into account in a single addition step; higher-order interactions can be modeled later by adding further links.
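The set of insertable elements described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer representation, the restriction to one- and two-input neurons, and the link candidates are modeled after the text, while names such as `candidate_elements` are hypothetical.

```python
from itertools import combinations

def candidate_elements(layers):
    """Enumerate network elements that could be inserted (sketch).

    `layers` is a list of lists of neuron ids, input layer first.
    Yields new neurons with one or two input links drawn from any
    preceding layer, plus extra links from any preceding neuron to an
    existing neuron, so practically any feedforward topology is
    reachable (links already present are not filtered out here).
    """
    for li in range(1, len(layers)):
        # every neuron in an earlier layer may feed layer li
        preceding = [n for layer in layers[:li] for n in layer]
        for src in preceding:                    # one-input neurons
            yield ("neuron", (src,), li)
        for pair in combinations(preceding, 2):  # two-input neurons
            yield ("neuron", pair, li)
        for dst in layers[li]:                   # additional links
            for src in preceding:
                yield ("link", (src,), dst)

# toy topology: three inputs, one hidden neuron, one output
layers = [["x1", "x2", "x3"], ["h1"], ["y"]]
cands = list(candidate_elements(layers))
```

Each candidate names an element type, its source neurons, and where it would be attached; the growth step below then decides which one is actually kept.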

2. The estimation of the reduction of the calibration error was replaced by temporarily inserting a network element, training the network, and subsequently predicting a monitor data set. This procedure is repeated for all possible locations and all possible elements. The type and the location of the new element are decided by the maximum reduction of the prediction error on the monitor data, which are not used for training. This ensures that the neural network not only approximates the calibration data well but, above all, generalizes well. Using different data subsets for the calibration of the model (training data) and for the building of the model (monitor data) prevents the introduction of the bias demonstrated by Kupinsky et al. [11]. Changing the network topology by adding a network element between two training steps helps to escape local training minima, similar to a random mutation in genetic algorithms.
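One growth step of this insert-train-evaluate procedure can be sketched as below. The helpers `insert`, `train`, and `monitor_error` are assumptions standing in for the actual network operations; only the selection logic, choosing the element with the largest monitor-error reduction, follows the text.

```python
import copy

def grow_step(network, candidates, insert, train, monitor_error):
    """Try every candidate element and keep the best one (sketch).

    Each candidate is inserted into a copy of the network, the copy is
    trained on the calibration (training) data, and its prediction
    error on the separate monitor set is computed.  The element giving
    the largest monitor-error reduction wins, so the choice rewards
    generalization rather than fit to the training data alone.
    """
    best, best_err = None, monitor_error(network)
    for cand in candidates:
        trial = insert(copy.deepcopy(network), cand)
        train(trial)                # train on the calibration data
        err = monitor_error(trial)  # predict the monitor data set
        if err < best_err:
            best, best_err = trial, err
    return best, best_err           # best is None if nothing improved
```

Because every candidate is trained and evaluated, one growth step costs on the order of one full training run per possible element and location, which is the price paid for replacing the original error-reduction estimate with a direct measurement.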

3. The stopping criterion of an absolute error limit was replaced by a stopping criterion of a minimal relative error decrease, which is independent of the scaling of the data sets. The insertion of network elements is thus repeated until the insertion of a new network element improves the prediction error by less than this prescribed relative limit.
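The scale-independent stopping rule amounts to a single comparison; a minimal sketch, assuming a hypothetical threshold value (the text prescribes a relative limit but not its magnitude):

```python
def keep_growing(prev_err, new_err, rel_limit=0.01):
    """Continue only while a newly inserted element reduces the
    monitor prediction error by more than `rel_limit` relative to the
    previous error.  Being relative, the criterion does not depend on
    how the data sets are scaled.  The default of 1% is an assumed
    example value, not taken from the text."""
    return (prev_err - new_err) / prev_err > rel_limit
```

In the growing loop, this test is applied after each accepted element; the first insertion that fails it terminates the algorithm.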

4. The algorithm can start with practically any network topology, not only with an empty network. As the current implementation of the algorithm supports only networks with one output neuron, a separate network has to be used for each analyte.
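The one-network-per-analyte constraint leads to a driver loop of the following shape. This is purely illustrative: `grow` stands in for the whole growing algorithm, and starting every analyte's network from the same (possibly non-empty) initial topology is an assumption consistent with, but not stated in, the text.

```python
import copy

def calibrate(analytes, initial_topology, grow):
    """Grow one single-output network per analyte (sketch).

    Each network starts from its own copy of the arbitrary initial
    topology, since the implementation supports only networks with a
    single output neuron."""
    return {a: grow(copy.deepcopy(initial_topology), target=a)
            for a in analytes}
```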