
2.7.2. Topology of Neural Networks

All networks in this work are fully connected, except for the non-uniform growing neural networks introduced in chapter 8. Fully connected means that each neuron is connected to all neurons of the preceding layer. All networks except the growing neural networks contain one layer of hidden neurons. If no special optimization technique is used, the number of hidden neurons is optimized by a stepwise algorithm: starting with 1 hidden neuron, the algorithm adds fully connected neurons to the hidden layer until the error of prediction no longer improves. For the hidden neurons, the hyperbolic tangent was used as activation function, which offers advantages in the convergence speed of learning compared with other nonlinear functions [59]. The activation function of the output neurons is a linear function. This combination of linear and nonlinear activation functions allows an effective modeling of both nonlinear and linear data sets.
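The topology and the stepwise hidden-layer search described above can be illustrated as follows. This is a minimal sketch, not the implementation used in this work: the network is a fully connected one-hidden-layer net with tanh hidden neurons and linear outputs, and `evaluate` is a hypothetical callback that is assumed to train a network with the given number of hidden neurons and return its error of prediction on a validation set.

```python
import math
import random

def init_network(n_inputs, n_hidden, n_outputs, seed=0):
    # Fully connected: every hidden neuron receives all inputs,
    # every output neuron receives all hidden activations.
    rng = random.Random(seed)
    w_hid = [[rng.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]  # +1: bias
             for _ in range(n_hidden)]
    w_out = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
             for _ in range(n_outputs)]
    return w_hid, w_out

def forward(net, x):
    w_hid, w_out = net
    # Hidden layer: hyperbolic tangent activation.
    h = [math.tanh(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1])
         for w in w_hid]
    # Output layer: linear activation (no squashing), suited to
    # regression targets of arbitrary magnitude.
    return [sum(w[i] * hi for i, hi in enumerate(h)) + w[-1]
            for w in w_out]

def grow_hidden_layer(evaluate, max_hidden=20):
    # Start with 1 hidden neuron and add neurons one at a time
    # until the error of prediction no longer improves.
    best_err, best_n = evaluate(1), 1
    for n in range(2, max_hidden + 1):
        err = evaluate(n)
        if err >= best_err:
            break
        best_err, best_n = err, n
    return best_n
```

The stopping rule is greedy: the search halts at the first size whose validation error does not improve on the previous best, which keeps the networks small and limits overfitting.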

In principle, neural networks can model several responses simultaneously. It is therefore possible either to use one neural network with as many output neurons as there are responses, or to use a separate network with a single output neuron for each response. In agreement with Despagne et al. [8] and Moore et al. [60], several tests showed that for calibration and prediction, single networks with one output are superior in terms of lower errors of prediction. Thus, networks with single outputs are used for all calibrations unless stated otherwise. For the optimization of networks, such as a variable selection, the choice of network type significantly influences the results: single-output networks select the variables that are most predictive for one individual response, whereas multi-output networks select the variables that model the ensemble of responses best. This issue is further discussed in section 10.1.
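The single-output strategy amounts to a simple wrapper: one model is trained per response column, and their predictions are collected at prediction time. The sketch below assumes a hypothetical `train_one` callback that trains a one-output network on a single response column and returns a predictor function; it is an illustration of the scheme, not the training code used in this work.

```python
def fit_single_output_models(train_one, X, Y):
    # X: list of sample vectors; Y: list of response vectors
    # (one entry per response to model).
    # train_one(X, y) is assumed to train a single-output network
    # on one response column and return a predictor f(x) -> float.
    n_responses = len(Y[0])
    models = [train_one(X, [row[j] for row in Y])
              for j in range(n_responses)]

    def predict(x):
        # One independent single-output model per response.
        return [model(x) for model in models]

    return predict
```

Because each model sees only its own response, any variable selection performed per model can specialize to that response, which is exactly the behavior discussed above for single-output networks.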

© Dr. Frank Dieterle, 14.08.2006