Multilevel Data Classification and Function Approximation Using Hierarchical Neural Networks

Author(s): Mohammed Sadiq Al-Rawi, Kamal R. Al-Rawi

In this chapter, we study the equivalence between multilayer feedforward neural networks that contain only summation (Sigma) activation units, referred to as Ordinary Neural Networks (ONNs), and multilayer feedforward Higher Order Neural Networks (HONNs), which contain both Sigma and product (PI) activation units. Since they were introduced by Giles and Maxwell (1987), HONNs have been used in many supervised classification and function approximation tasks. As of the time of writing this chapter, the most cited HONN article indexed by the ISI Thomson Web of Knowledge is the work of Kosmatopoulos et al. (1995), which introduced recurrent HONN modeling. A simple comparison with ONNs is usually performed to demonstrate the performance of a newly introduced HONN architecture. Is it true that HONNs outperform ONNs? How much do they differ, and how much do they have in common? Does an equivalence exist between a HONN and an ONN? Is it possible to convert a HONN to an equivalent ONN? And how is neural network equivalence defined? This chapter tries to answer most of these questions. Given the huge number of neural network architectures in the literature, the authors of this work believe that equivalence studies are necessary to provide abstract definitions and unified approaches, which might lead to a better understanding of HONN performance and design.

In contrast to most previous works, where HONN weights are non-negative integers, HONNs are given in this chapter in a form in which the weights are adjustable real-valued numbers. This gives HONNs more expressive power, but it also increases the probability of producing complex-valued neuron outputs. To enable the use of real-valued weights that may result in complex-valued neuron outputs, we introduce a normalization of the input data as well as a modification of the neuron activation functions. Using simple mathematics and the proposed input normalization, we show that HONNs are equivalent to ONNs: the converted ONN possesses the features of the original HONN and has exactly the same functionality and output.

The proposed conversion of a HONN to an ONN permits the use of the large body of existing optimization algorithms to speed up HONN convergence and/or to find better topologies. Recurrent HONNs, cascade-correlation HONNs, or any other complicated HONN can simply be defined via their equivalent ONNs and then trained with backpropagation, scaled conjugate gradient, the Levenberg-Marquardt algorithm, brain damage algorithms (Duda et al., 2000), etc. Using the developed equivalence model, this chapter also gives an easy bottom-up approach for converting a HONN to its equivalent ONN. Results on the XOR and function approximation problems show that ONNs obtained from their corresponding HONNs converge well to a solution. Different optimization training algorithms were tested on equivalent ONNs with feedforward and/or cascade-correlation structure, where the latter showed outstanding function approximation results.
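To make the equivalence claim concrete, the following is a minimal sketch (ours, not necessarily the chapter's exact construction) of the standard identity that rewrites a product (PI) unit as a summation (Sigma) unit. The role of the input normalization is to keep every input strictly positive, so that the logarithm, and hence a real-valued weight w_i, is well defined:

% A PI unit with real-valued exponents applied to normalized inputs
% x_i > 0 is a Sigma unit on log-transformed inputs, followed by an
% exponential activation:
\[
  \prod_{i=1}^{n} x_i^{w_i}
  \;=\;
  \exp\!\Bigl( \sum_{i=1}^{n} w_i \ln x_i \Bigr),
  \qquad x_i > 0 .
\]

For x_i <= 0 the left-hand side is complex-valued for non-integer w_i, which is why the abstract pairs real-valued weights with input normalization and modified activation functions.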


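The same identity can be checked numerically. The sketch below is our illustration, not code from the chapter; the normalization scheme, function names, and the single-unit setting are assumptions. It evaluates a HONN-style PI unit with real-valued weights directly and through its equivalent Sigma form, and verifies that the two agree on normalized inputs:

import numpy as np

# Minimal sketch (ours, not the chapter's code): a single HONN-style
# product (PI) unit with real-valued weights, and its equivalent
# ordinary (Sigma) unit acting on log-transformed inputs.

def normalize(X, eps=1e-6):
    """Map each feature into (eps, 1] so logarithms stay real.
    Illustrative choice; the chapter's normalization may differ."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return eps + (1.0 - eps) * (X - lo) / np.maximum(hi - lo, eps)

def pi_unit(x, w):
    """HONN PI unit: product of inputs raised to real-valued weights."""
    return np.prod(x ** w)

def sigma_unit(x, w):
    """Equivalent ONN unit: weighted sum of log inputs, exp activation."""
    return np.exp(np.dot(w, np.log(x)))

rng = np.random.default_rng(0)
X = normalize(rng.uniform(-5.0, 5.0, size=(4, 3)))  # 4 samples, 3 inputs
w = rng.normal(size=3)                               # adjustable real weights

for x in X:
    assert np.isclose(pi_unit(x, w), sigma_unit(x, w))
print("PI unit and its equivalent Sigma unit agree on all samples.")

Stacking such units reproduces the bottom-up conversion mentioned in the abstract: each PI unit becomes a Sigma unit sandwiched between a fixed log transform and an exp activation, after which standard trainers (backpropagation, scaled conjugate gradient, Levenberg-Marquardt) can be applied to the resulting ONN.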
