2.8.9. Topology Optimization by Genetic Algorithms
The application of genetic algorithms to evolving neural networks is not limited to the selection of input variables; genetic algorithms can also be used to optimize the complete topology of the neural networks [118]-[121].
The different approaches found in the literature can be classified according to their encoding mechanism into direct and indirect methods. In the direct methods, all information about the structure is represented explicitly in the chromosome. The most common way is to represent the connections in a matrix (connectivity matrix) and then to link this matrix row by row into the chromosome. The indirect encoding methods are also called grammatical encoding methods, as the chromosome contains development rules, which have to be interpreted to build the corresponding net. This allows a compressed representation of the topology, resulting in a shorter chromosome and thus better scalability.
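To make the two encoding schemes concrete, the following minimal Python sketch (purely illustrative; the function names, the 4-neuron net, and the toy grammar are assumptions of this sketch, not taken from the cited works) first links a connectivity matrix row by row into a chromosome (direct method) and then grows a connectivity matrix from a small set of development rules (indirect, grammatical method):

import numpy as np

# --- Direct encoding: connectivity matrix -> flat chromosome ---
def encode_direct(conn_matrix):
    # Link the matrix into the chromosome row by row.
    return conn_matrix.flatten().tolist()

def decode_direct(chromosome, n_neurons):
    # Rebuild the connectivity matrix from the chromosome.
    return np.array(chromosome).reshape(n_neurons, n_neurons)

# Entry [i, j] = 1 encodes a connection from neuron i to neuron j.
conn = np.array([[0, 1, 1, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]])
chromosome = encode_direct(conn)              # 16 genes for 4 neurons
assert (decode_direct(chromosome, 4) == conn).all()

# --- Indirect (grammatical) encoding: development rules -> matrix ---
rules = {'S': [['A', 'B'], ['B', 'A']],       # each rule rewrites one
         'A': [['1', '0'], ['0', '1']],       # symbol into a 2x2 block
         'B': [['0', '0'], ['0', '1']]}

def expand(symbol):
    # Recursively apply the rules until only 0/1 terminals remain.
    if symbol in '01':
        return np.array([[int(symbol)]])
    return np.block([[expand(s) for s in row] for row in rules[symbol]])

conn_indirect = expand('S')                   # 4x4 matrix from 3 rules

Note that the direct chromosome needs n^2 genes for a net with n neurons, whereas the grammatical chromosome stores only the development rules; this difference is the compression and the better scalability mentioned above.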
A promising genetic algorithm for the optimization of neural network topologies was proposed by Boozarjohehry et al. [145] using a grammatical encoding procedure. Besides simple benchmark problems, this algorithm was also applied to real-world data of a neutralization process. Yet a problem of this approach is that the solutions found by the algorithm depend randomly on the initial weights and on the parameter settings. Another, very complex approach for evolving neural networks by genetic algorithms using a direct encoding was proposed by Braun et al. [146],[147].
The corresponding software ENZO is available for free [148] and has been applied to several real-world problems [149]-[151].
The problem of this approach is its complexity, with more than 100 parameters that can be adjusted by the user. Although the default settings work well in many cases, the excellent results demonstrated in the references mentioned above require adjustments of these parameters, rendering a general application of this approach with only little input by the analyst virtually impossible.
In general, all approaches to optimizing the topology of neural networks by genetic algorithms suffer from poor scalability [152] and from complex genetic operations [147]. An example is the structural mapping problem, which affects the crossover operator: for two networks with an identical topology, the contributions of the hidden neurons to the overall solution may be internally permuted (visible only as a permutation of the weights). If a crossover operator is applied to these networks, one offspring is created with partly duplicated internal contributions and one offspring with partly missing internal contributions.
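As a minimal numerical illustration of this permutation problem (a hypothetical 2-2-1 network with hand-picked weights, not an example from the cited references), consider two networks that compute exactly the same function but store their hidden neurons in opposite order; a naive crossover at the hidden-neuron level then duplicates one internal contribution and loses the other:

import numpy as np

# Two functionally identical 2-2-1 networks: net b is net a with its
# hidden neurons swapped (rows of W1 and entries of w2 permuted).
W1_a = np.array([[1.0, -2.0],    # input weights of hidden neuron 1
                 [0.5,  3.0]])   # input weights of hidden neuron 2
w2_a = np.array([2.0, -1.0])     # output weights of the hidden neurons

perm = [1, 0]
W1_b, w2_b = W1_a[perm], w2_a[perm]   # same function, permuted internals

def output(W1, w2, x):
    return w2 @ np.tanh(W1 @ x)

x = np.array([0.3, -0.7])
assert np.isclose(output(W1_a, w2_a, x), output(W1_b, w2_b, x))

# Naive crossover at the hidden-neuron level: hidden neuron 1 from
# parent a, hidden neuron 2 from parent b.
W1_c = np.vstack([W1_a[0], W1_b[1]])
w2_c = np.array([w2_a[0], w2_b[1]])
print(W1_c)   # both rows are [1.0, -2.0]: one contribution is doubled,
              # the contribution of the original second neuron is lost

The complementary offspring (hidden neuron 2 from parent a, hidden neuron 1 from parent b) correspondingly contains the second contribution twice and the first one not at all.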
In most cases, the optimization of the neural network topology has been demonstrated only on simple benchmarks like the XOR problem [152]. Another general problem is that, similar to the pruning algorithms, the networks cannot become bigger and more complex than a largest possible reference network predefined by the user (see also section 2.8.8). Due to these quite complex problems, genetic algorithms are used in this work only for the variable selection and not for the optimization of the topology.