On the Equivalence between Ordinary Neural Networks and Higher Order Neural Networks

Author(s):  
Mohammed Sadiq Al-Rawi
Kamal R. Al-Rawi

In this chapter, we study the equivalence between multilayer feedforward neural networks referred to as Ordinary Neural Networks (ONNs), which contain only summation (Sigma) activation units, and multilayer feedforward Higher Order Neural Networks (HONNs), which contain both Sigma and product (PI) activation units. Since they were introduced by Giles and Maxwell (1987), HONNs have been used in many supervised classification and function approximation tasks. Up to the date of writing this chapter, the most cited HONN article according to the ISI Thomson Web of Knowledge is the work of Kosmatopoulos et al. (1995), in which a recurrent HONN model was introduced. A simple comparison with ONNs is usually performed in order to demonstrate the performance of a newly introduced HONN architecture. Is it true that HONNs outperform ONNs? How much do they differ, and how much do they have in common? Does equivalence exist between a HONN and an ONN? Is it possible to convert a HONN to an equivalent ONN? And how is neural network equivalence defined? This chapter tries to answer most of these questions. Given the huge number of neural network architectures in the literature, the authors of this work believe that equivalence studies are necessary to provide abstract definitions and unified approaches, which might help in better understanding HONN performance and design. In contrast to most previous works, where HONN weights are non-negative integers, HONNs are given in this chapter in a form whose weights are adjustable real-valued numbers. In doing so, HONNs might have more expressive power, but there is an increased probability of producing complex-valued neuron outputs. To enable the use of real-valued weights that may result in complex-valued neuron outputs, we introduce a normalization of the input data as well as a modification of the neuron activation functions. Using simple mathematics and the proposed normalization of the input data, we show that HONNs are equivalent to ONNs. The converted equivalent ONN possesses the features of the HONN and has exactly the same functionality and output. The proposed conversion of a HONN to an ONN permits using the wide range of existing optimization algorithms to speed up the convergence of the HONN and/or find a better topology. Recurrent HONNs, cascade-correlation HONNs, or any other complicated HONN can simply be defined via their equivalent ONNs and then trained with backpropagation, scaled conjugate gradient, the Levenberg-Marquardt algorithm, brain damage algorithms (Duda et al., 2000), etc. Using the developed equivalence model, this chapter also gives an easy bottom-up approach to convert a HONN to its equivalent ONN. Results on XOR and function approximation problems showed that ONNs obtained from their corresponding HONNs converged well to a solution. Different optimization training algorithms have been tested on equivalent ONNs having a feedforward and/or cascade-correlation structure, where the latter showed outstanding function approximation results.
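The core intuition behind such an equivalence can be illustrated with the standard log-exp identity: a product unit with real-valued exponent weights, prod_i x_i^{w_i}, equals exp(sum_i w_i log x_i), i.e., a Sigma unit preceded by a log transform and followed by an exp activation, provided the inputs are kept strictly positive (hence the need for input normalization, and the risk of complex-valued outputs when negative inputs meet real exponents). The following is a minimal sketch of that identity only, assuming NumPy and a toy min-max normalization; it is not the chapter's exact construction or activation modification.

```python
import numpy as np

def pi_unit(x, w):
    """Higher order (PI) unit: product of inputs raised to real-valued weights,
    i.e. prod_i x_i ** w_i."""
    return np.prod(np.power(x, w))

def equivalent_sigma_unit(x, w):
    """Equivalent ordinary (Sigma) unit: exp of a weighted sum of log-transformed
    inputs, exp(sum_i w_i * log(x_i)). Requires x_i > 0, hence the normalization."""
    return np.exp(np.sum(w * np.log(x)))

# Toy check: normalize raw inputs into (0, 1] so that log() stays real-valued.
rng = np.random.default_rng(0)
x_raw = rng.uniform(-5.0, 5.0, size=4)
x = (x_raw - x_raw.min()) / (x_raw.max() - x_raw.min()) + 1e-3  # assumed normalization
w = rng.normal(size=4)                                          # real-valued weights

print(pi_unit(x, w), equivalent_sigma_unit(x, w))  # the two outputs coincide
```

With this identity, a PI node can be expressed entirely in terms of summation units and elementwise transforms, which is the sense in which a HONN layer can be rewritten as an ONN layer and then handed to any standard ONN training or pruning algorithm.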

Author(s):  
Yuehui Chen
Peng Wu
Qiang Wu

Artificial Neural Networks (ANNs) have become very important in making stock market predictions. Much research on the applications of ANNs has proven their advantages over statistical and other methods. In order to identify the main benefits and limitations of previous ANN applications, a comparative analysis of selected applications is conducted. It can be concluded from this analysis that ANNs and HONNs are most often applied to forecasting stock prices and stock modeling. The aim of this chapter is to study higher order artificial neural networks for stock index modeling problems. New network architectures and their corresponding training algorithms are discussed. These structures demonstrate processing capabilities beyond those of traditional ANN architectures, with a reduction in the number of processing elements. In this chapter, the performance of classical neural networks and higher order neural networks for stock index forecasting is evaluated. We also highlight a novel sliding-window method for data forecasting: with each slide over the observed data, the model adjusts its variables dynamically. Simulation results show the feasibility and effectiveness of the proposed methods.
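For readers unfamiliar with the general sliding-window setup for time-series forecasting, the sketch below shows the basic idea of turning a price series into (input window, next value) training pairs. Function and parameter names here are illustrative assumptions, not the chapter's exact formulation of its slide-window method.

```python
import numpy as np

def sliding_windows(series, window, horizon=1):
    """Build (input, target) pairs from a 1-D series: each input is `window`
    consecutive observations; the target is the value `horizon` steps ahead."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.array(X), np.array(y)

# Toy usage on a synthetic "index" series
prices = np.cumsum(np.random.default_rng(1).normal(size=200)) + 100.0
X, y = sliding_windows(prices, window=5)
print(X.shape, y.shape)  # (195, 5) (195,)
```

In a dynamic variant such as the one highlighted in the chapter, the model parameters would be re-estimated (or adjusted) each time the window slides forward over newly observed data.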


Author(s):  
Junichi Murata

A Pi-Sigma higher order neural network (Pi-Sigma HONN) is a type of higher order neural network in which, as the name implies, weighted sums of the inputs are calculated first and then multiplied by each other to produce the higher order terms that constitute the network outputs. This type of higher order neural network has good function approximation capabilities. In this chapter, the structural features of Pi-Sigma HONNs are discussed in contrast to other types of neural networks. The reason for their good function approximation capabilities is given based on a pseudo-theoretical analysis together with empirical illustrations. Then, based on the analysis, an improved version of Pi-Sigma HONNs is proposed which has even better function approximation capabilities.
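A minimal sketch of the Sigma-then-Pi computation described above: K first-order weighted sums are formed, their product yields a single K-th order term, and an output activation is applied. The weight shapes and the tanh output activation are illustrative assumptions, not the chapter's specific architecture.

```python
import numpy as np

def pi_sigma_forward(x, W, b, activation=np.tanh):
    """Forward pass of one Pi-Sigma unit: K weighted sums of the inputs
    (Sigma layer), multiplied together (Pi node), then an activation.
    W has shape (K, n_inputs), b has shape (K,)."""
    sums = W @ x + b           # K first-order weighted sums
    product = np.prod(sums)    # a single K-th order term in the inputs
    return activation(product)

# Toy usage: a 3rd-order Pi-Sigma unit on a 4-dimensional input
rng = np.random.default_rng(2)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
print(pi_sigma_forward(x, W, b))
```

Because only the Sigma-layer weights are trainable while the Pi node is a fixed product, the number of adjustable parameters grows linearly with the order K rather than combinatorially, which is one structural reason often cited for the efficiency of this family of networks.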


2020, Vol. 12 (5), pp. 52-72
Author(s):  
Noor Aida Husaini
Rozaida Ghazali
Nureize Arbaiy
Ayodele Lasisi

The standard method to train Higher Order Neural Networks (HONN) is the well-known Backpropagation (BP) algorithm. Yet, the current BP algorithm has several limitations, including getting easily stuck in local minima, particularly when dealing with highly non-linear problems, and requiring computationally intensive training. The current BP algorithm also relies heavily on the initial weight values and the other parameters chosen. Therefore, in an attempt to overcome the drawbacks of BP, we investigate a method called Modified Cuckoo Search-Markov chain Monte Carlo for optimising the weights in HONN and boosting the learning process. This method, which lies in the Swarm Intelligence area, is notably successful in optimisation tasks. We compared its performance with several HONN-based network models and a standard Multilayer Perceptron on four (4) time series datasets: Temperature, Ozone, Gold Close Price and Bitcoin Closing Price from various repositories. Simulation results indicate that this swarm-based algorithm outperformed, or was at least on par with, the network models trained with the current BP algorithm in terms of lower error rate.
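For orientation, the sketch below shows the plain, textbook cuckoo search metaheuristic applied to a flat weight vector: candidate solutions ("nests") are perturbed with Lévy flights, better candidates replace worse ones, and a fraction of poor nests is abandoned and re-seeded. This is only the standard algorithm under assumed parameter names and values; the chapter's Modified Cuckoo Search-Markov chain Monte Carlo variant adds mechanisms not shown here.

```python
import math
import numpy as np

def cuckoo_search(loss, dim, n_nests=15, pa=0.25, alpha=0.01, iters=200, seed=0):
    """Plain cuckoo search over a flat weight vector (Lévy flights via the
    Mantegna approximation). Minimises `loss`; returns the best weights found."""
    rng = np.random.default_rng(seed)
    nests = rng.normal(size=(n_nests, dim))
    fitness = np.array([loss(n) for n in nests])

    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    for _ in range(iters):
        best = nests[np.argmin(fitness)]
        # Lévy-flight step around the current best nest
        u = rng.normal(scale=sigma, size=(n_nests, dim))
        v = rng.normal(size=(n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)
        new = nests + alpha * step * (nests - best)
        new_fit = np.array([loss(n) for n in new])
        improved = new_fit < fitness
        nests[improved], fitness[improved] = new[improved], new_fit[improved]
        # Abandon a fraction pa of nests and re-seed them randomly
        abandon = rng.random(n_nests) < pa
        nests[abandon] = rng.normal(size=(int(abandon.sum()), dim))
        fitness[abandon] = np.array([loss(n) for n in nests[abandon]])

    return nests[np.argmin(fitness)]

# Toy usage: recover a known weight vector by minimising squared error
target = np.array([0.5, -1.0, 2.0])
best_w = cuckoo_search(lambda w: float(np.sum((w - target) ** 2)), dim=3)
print(best_w)
```

In the neural-network setting, `loss` would evaluate the training error of a HONN whose weights are unpacked from the candidate vector, so the search replaces (or initialises) gradient-based BP updates.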

