Dr. Frank Dieterle, Ph. D. Thesis
2. Theory: Fundamentals of the Multivariate Data Analysis — 2.8. Too Much Information Deteriorates Calibration

2.8.9.   Topology Optimization by Genetic Algorithms

The application of genetic algorithms for evolving neural networks is not limited to the selection of input variables; it can also be used to optimize the complete topology of the neural networks [118]-[121]. The different approaches found in the literature can be classified according to their encoding mechanism as direct and indirect methods. In the direct methods, all information about the structure is represented directly in the chromosome. The most common way is to represent the connections in a matrix (connectivity matrix) and then to concatenate this matrix row by row into the chromosome. The indirect encoding methods are also called grammatical encoding methods, as the chromosome contains development rules that have to be interpreted to build the corresponding network. This allows a compression of the topology, resulting in a shorter chromosome and thus better scalability.
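The direct encoding scheme described above can be sketched in a few lines. This is a minimal illustration, not the exact scheme of the cited references; the function names and the 4-neuron example are hypothetical:

```python
import numpy as np

def encode(conn_matrix):
    """Direct encoding: flatten a binary connectivity matrix row by row
    into a chromosome (a flat bit string)."""
    return conn_matrix.flatten()

def decode(chromosome, n_neurons):
    """Rebuild the connectivity matrix from the chromosome."""
    return chromosome.reshape(n_neurons, n_neurons)

# Example: 4 neurons; entry [i, j] = 1 encodes a connection from neuron i to j.
rng = np.random.default_rng(0)
matrix = rng.integers(0, 2, size=(4, 4))

chromosome = encode(matrix)   # chromosome of length 16 for a 4-neuron net
restored = decode(chromosome, 4)
assert (restored == matrix).all()
```

Note that the chromosome length grows quadratically with the number of neurons, which illustrates the scalability advantage of the indirect (grammatical) encodings mentioned above.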

A promising approach of genetic algorithms for the optimization of neural network topologies was proposed by Boozarjohehry et al. [145] using a grammatical encoding procedure. Besides simple benchmark problems, this algorithm was also applied to real-world data of a neutralization process. Yet, a problem of this approach is that the solutions found by the algorithm depend randomly on the initial weight initialization and on the parameter settings. Another, very complex approach for evolving neural networks by genetic algorithms using a direct encoding was proposed by Braun et al. [146],[147]. The corresponding software ENZO is freely available [148] and has been applied to several real-world problems [149]-[151]. The drawback of this approach is its complexity, with more than 100 parameters that can be adjusted by the user. Although the default settings work well in many cases, the excellent results demonstrated in the references mentioned above required adjustments of these parameters, rendering a general application of this approach with only little input by the analyst virtually impossible.

In general, all approaches that optimize the topology of neural networks by genetic algorithms suffer from poor scalability [152] and from complex genetic operations [147]. An example is the structural mapping problem, which affects the crossover operator: for two networks with an identical topology, the contributions of the hidden neurons to the overall solution may be internally permuted (visible only as a permutation of the weights). If a crossover operator is applied to such networks, one offspring is created with partly duplicated internal contributions and one offspring is created with partly missing internal contributions. Consequently, in most cases the optimization of the neural network topology has been used only for simple benchmarks like the XOR problem [152]. Another general problem is that, similar to the pruning algorithms, the networks cannot become bigger and more complex than a largest possible reference network predefined by the user (see also section 2.8.8). Due to these quite complex problems, genetic algorithms are used in this work only for variable selection and not for the optimization of the topology.
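The structural mapping problem described above can be made concrete with a toy example (hypothetical two-hidden-neuron networks, not code from this thesis): two parents compute exactly the same function with their hidden neurons in swapped order, yet a naive crossover at the hidden-neuron boundary produces an offspring in which one neuron's contribution is duplicated and the other's is lost.

```python
import numpy as np

def forward(x, w_hidden, w_out):
    """Tiny network: one tanh hidden layer, linear output."""
    return np.tanh(w_hidden @ x) @ w_out

# Parent A: two hidden neurons h1 and h2 (one weight row per neuron).
w_hidden_a = np.array([[1.0, -2.0],   # hidden neuron h1
                       [0.5,  3.0]])  # hidden neuron h2
w_out_a = np.array([2.0, -1.0])

# Parent B: the SAME network, but with the hidden neurons permuted.
w_hidden_b = w_hidden_a[::-1].copy()
w_out_b = w_out_a[::-1].copy()

x = np.array([0.3, -0.7])
# Both parents are functionally identical despite different chromosomes.
assert np.isclose(forward(x, w_hidden_a, w_out_a),
                  forward(x, w_hidden_b, w_out_b))

# Naive crossover at the hidden-neuron boundary: first neuron from A,
# second neuron from B. Because B is the permutation of A, the offspring
# contains h1 twice and h2 not at all.
w_hidden_c = np.vstack([w_hidden_a[0], w_hidden_b[1]])
w_out_c = np.array([w_out_a[0], w_out_b[1]])

# The offspring no longer computes the parents' function.
assert not np.isclose(forward(x, w_hidden_a, w_out_a),
                      forward(x, w_hidden_c, w_out_c))
```

This is why crossover on directly encoded topologies tends to be destructive unless the hidden neurons of the two parents are first matched up, which is one of the complex genetic operations referred to above.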

© Dr. Frank Dieterle, 14.08.2006