Artificial Higher Order Neural Networks for Modeling and Simulation
Latest Publications


TOTAL DOCUMENTS: 17 (five years: 0)

H-INDEX: 2 (five years: 0)

Published By IGI Global

ISBN: 9781466621756, 9781466621763

Author(s):  
Hiromi Miyajima,
Noritaka Shigei,
Shuji Yatsuki

This chapter presents macroscopic properties of higher order neural networks. Randomly connected Neural Networks (RNNs) are known as a convenient model for investigating the macroscopic properties of neural networks, and they are analyzed with the statistical method of neuro-dynamics. Applying this approach to higher order neural networks clarifies their macroscopic properties. The approach establishes that: (a) in some cases of the digital state model and the analog state model, the stability of RNNs differs from that of Randomly connected Higher Order Neural Networks (RHONNs); (b) in other cases of the digital state model and the analog state model, there is no difference in stability between RNNs and RHONNs; (c) for neural networks with oscillation, there are large differences between RNNs and RHONNs in both the digital state model and the analog state model; that is, complex dynamics exist in each model; (d) the behavior of groups composed of RHONNs can be represented as a combination of the behavior of each RHONN.


Author(s):  
Shuxiang Xu

An Extreme Learning Machine (ELM) randomly chooses hidden neurons and analytically determines the output weights (Huang et al., 2005, 2006, 2008). With the ELM algorithm, only the connection weights between the hidden layer and the output layer are adjusted. The ELM algorithm tends to generalize well at a very fast learning speed: it can learn thousands of times faster than conventional popular learning algorithms (Huang et al., 2006). Artificial Neural Networks (ANNs) have been widely used as powerful information processing models and adopted in applications such as bankruptcy prediction, cost prediction, revenue forecasting, forecasting share prices and exchange rates, document processing, and many more. Higher Order Neural Networks (HONNs) are ANNs in which the net input to a computational neuron is a weighted sum of products of its inputs. Real-life data are usually imperfect: they contain wrong, incomplete, or vague entries. Hence, missing data are commonly found in the information sources used. Missing data is a common problem in statistical analysis (Little & Rubin, 1987). This chapter uses the ELM algorithm for HONN models and applies it to several significant business cases involving datasets with missing values. The experimental results demonstrate that HONN models with the ELM algorithm offer significant advantages over standard HONN models, such as faster training and improved generalization abilities.
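
The core ELM recipe described above — random hidden weights, output weights solved in closed form — can be sketched as follows. This is a minimal illustration on a synthetic regression target, not the chapter's HONN-ELM implementation; the helper names, the tanh activation, and the constants are assumptions.

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Basic ELM: random hidden layer, output weights found analytically."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input-to-hidden weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy-free sine curve with the one-shot analytic solution
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)
```

The only "training" is the pseudoinverse solve for `beta`, which is what makes the method so fast compared with iterative backpropagation.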


Author(s):  
Madan M. Gupta,
Ivo Bukovsky,
Noriyasu Homma,
Ashu M. G. Solo,
Zeng-Guang Hou

In this chapter, the authors provide fundamental principles of Higher Order Neural Units (HONUs) and Higher Order Neural Networks (HONNs) for modeling and simulation. The essential core of HONNs lies in higher order weighted combinations or correlations of the input variables to a HONU. Beyond the high quality of nonlinear approximation achieved by static HONUs, the capability of dynamic HONUs for modeling dynamic systems is shown and compared to conventional recurrent neural networks when a practical learning algorithm is used. In addition, the potential of continuous dynamic HONUs to approximate systems of high dynamic order is discussed, as adaptable time delays can be implemented. Using some typical examples, this chapter describes how and why higher order combinations or correlations can be effective for modeling systems.
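
The higher order weighted combinations at the heart of a HONU can be illustrated with a small sketch. The helper names and the choice of a 2nd-order unit are assumptions for illustration, not the authors' code.

```python
import numpy as np
from itertools import combinations_with_replacement

def honu_features(x, order=2):
    """Augmented input of a HONU: the constant term, the raw inputs,
    and all products (correlations) of inputs up to the given order."""
    feats = [1.0]
    for r in range(1, order + 1):
        for idx in combinations_with_replacement(range(len(x)), r):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

def honu_output(x, w, order=2):
    """Static HONU: a weighted sum over the higher order input correlations."""
    return w @ honu_features(x, order)

# A 2nd-order HONU on two inputs has six terms: 1, x1, x2, x1^2, x1*x2, x2^2
x = np.array([2.0, 3.0])
print(honu_features(x))  # [1. 2. 3. 4. 6. 9.]
```

Because the unit is linear in its weights `w`, such models can be trained with simple least-squares or gradient rules even though they are nonlinear in the inputs.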


Author(s):  
Ming Zhang

Real-world financial data are often discontinuous and non-smooth, and accuracy becomes a problem if we attempt to simulate such functions with ordinary neural networks. Neural network group models can perform this task with greater accuracy. Both Polynomial Higher Order Neural Network Group (PHONNG) and Trigonometric polynomial Higher Order Neural Network Group (THONNG) models are studied in this chapter. These PHONNG and THONNG models are open-box, convergent models capable of approximating any kind of piecewise continuous function to any degree of accuracy. Moreover, they are capable of handling higher frequency, higher order nonlinear, and discontinuous data. Results obtained using PHONNG and THONNG financial simulators are presented, which confirm that the group models converge without difficulty and are considerably more accurate (by 0.7542% - 1.0715%) than individual Polynomial Higher Order Neural Network (PHONN) and Trigonometric polynomial Higher Order Neural Network (THONN) models.
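
The trigonometric-polynomial flavor of these models can be sketched as a least-squares fit over a sin/cos basis. This is a simplified stand-in for a THONN applied to a synthetic discontinuous signal, not the chapter's financial data; the basis construction and the order are assumptions.

```python
import numpy as np

def thonn_basis(x, order):
    """Trigonometric polynomial basis in the spirit of a THONN:
    [1, sin(x), cos(x), sin(2x), cos(2x), ..., sin(order*x), cos(order*x)]."""
    cols = [np.ones_like(x)]
    for k in range(1, order + 1):
        cols += [np.sin(k * x), np.cos(k * x)]
    return np.column_stack(cols)

# Least-squares fit of a discontinuous, square-wave-like target
x = np.linspace(0, 2 * np.pi, 400)
y = np.sign(np.sin(x))                     # piecewise-constant signal
A = thonn_basis(x, order=9)
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # closed-form weights
y_hat = A @ w
```

Trigonometric terms capture the jumps far better than low-order polynomials, which is the intuition behind combining both basis families in a group model.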


Author(s):  
Mehdi Fallahnezhad,
Hashem Yousefi

Precise insertion of a medical needle as an end-effector of a robotic or computer-aided system into biological tissue is an important issue and should be considered in operations such as brain biopsy, prostate brachytherapy, and percutaneous therapies. A proper understanding of the whole procedure leads to better performance by an operator or system. In this chapter, the authors use a 0.98 mm diameter needle with real-time recording of the force, displacement, and velocity of the needle through biological tissue during in-vitro insertions. Using constant-velocity experiments from 5 mm/min up to 300 mm/min, a data set for the force-displacement graph of insertion was gathered. Tissue deformation with a small puncture and constant-velocity penetration are the first two phases of the needle insertion process. The direct effects of different parameters and their correlations during the process are modeled using a polynomial neural network. The authors develop separate 2nd- and 3rd-order networks to model these first two phases of insertion. Modeling accuracies were 98% and 86% in phases 1 and 2, respectively.
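
The phase-wise polynomial modeling can be illustrated on synthetic data. The sketch below is an ordinary 3rd-order polynomial fit, a simpler stand-in for the chapter's polynomial neural network, and the force-displacement curve is invented for illustration, not the authors' measurements.

```python
import numpy as np

# Synthetic stand-in for a phase-1 force-displacement curve:
# force rising nonlinearly with displacement before puncture.
disp = np.linspace(0, 10, 100)              # displacement, mm
force = 0.02 * disp ** 2 + 0.1 * disp       # illustrative nonlinear trend, N
force += np.random.default_rng(1).normal(scale=0.02, size=disp.size)  # sensor noise

# 3rd-order polynomial model of force as a function of displacement
coeffs = np.polyfit(disp, force, deg=3)
pred = np.polyval(coeffs, disp)

# Goodness of fit (coefficient of determination)
r2 = 1 - np.sum((force - pred) ** 2) / np.sum((force - force.mean()) ** 2)
```

Fitting each insertion phase with its own low-order model, as the chapter does, keeps every fit simple while respecting the change of mechanics at the puncture point.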


Author(s):  
Michel Lopez-Franco,
Alma Y. Alanis,
Nancy Arana-Daniel,
Carlos Lopez-Franco

In this chapter, a Recurrent Higher Order Neural Network (RHONN) is used to identify the plant model of discrete-time nonlinear systems, under the assumption that the full state is available for measurement. The Extended Kalman Filter (EKF) is then used to train the RHONN. The applicability of this scheme is illustrated by identification of an electrically driven nonholonomic mobile robot. Traditionally, the modeling of mobile robots considers only their kinematics, yet it is well known that actuator dynamics are an important part of the complete robot dynamics. However, most results reported in the literature do not consider all parametric uncertainties for mobile robots at the actuator level, because the modeling problem becomes extremely difficult as the complexity of the system dynamics increases when the mobile robot model includes the uncertainties of the actuator dynamics as well as those of the robot kinematics and dynamics.
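
Because a RHONN is linear in its weights, the EKF weight update reduces, in a scalar noiseless case, to a Kalman/recursive-least-squares step. The identification loop can be sketched as follows; the plant, the choice of regressor terms, and all constants are assumptions for illustration, not the chapter's robot model.

```python
import numpy as np

def rhonn_z(x):
    """High-order regressor vector of a simple scalar RHONN:
    [tanh(x), tanh(x)^2] (an illustrative choice of terms)."""
    s = np.tanh(x)
    return np.array([s, s * s])

def plant(x):
    """Assumed plant to identify: x_{k+1} = 0.8*tanh(x_k) + 0.2*tanh(x_k)^2."""
    return 0.8 * np.tanh(x) + 0.2 * np.tanh(x) ** 2

w = np.zeros(2)          # RHONN weights to be learned
P = np.eye(2) * 100.0    # weight-error covariance
R = 1e-3                 # assumed measurement-noise covariance
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)           # measured state (full state assumed available)
    z = rhonn_z(x)
    y = plant(x)                         # measured next state
    K = P @ z / (z @ P @ z + R)          # Kalman gain
    w = w + K * (y - w @ z)              # weight update from prediction error
    P = P - np.outer(K, z) @ P           # covariance update
```

After the loop, `w` recovers the plant coefficients (0.8, 0.2); the full EKF scheme in the chapter handles the vector-valued, uncertain robot dynamics the same way, one neuron at a time.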


Author(s):  
Mehdi Fallahnezhad,
Salman Zaferanlouei

Considering high order correlations of selected features alongside the raw input features can facilitate target pattern recognition. In artificial intelligence, this is addressed by Higher Order Neural Networks (HONNs). In general, HONN structures offer advantages over traditional neural networks, e.g., resolving the dilemma of choosing the number of neurons and layers, better fitting, faster training, and open-box transparency. This chapter introduces a hybrid structure of higher order neural networks, which can be applied in various branches of pattern recognition. The structure, learning algorithm, and network configuration are introduced, and the structure is applied either as a classifier (where it is called HHONC) to different benchmark statistical data sets or as a functional behavior approximator (where it is called HHONN) to a heat and mass transfer problem. In each case, results are compared with previous studies, which show its superior performance in addition to the other advantages mentioned.


Author(s):  
George S. Eskander,
Amir Atiya

This chapter reviews a recent HONN-like model called the Symbolic Function Network (SFN). The model is designed to offer more flexibility than both traditional neural networks and HONNs. The main idea behind this scheme is that different functional forms suit different applications and that no single architecture is best for all. Accordingly, the model is designed as an evolving network that can discover the best functional basis, adapt its parameters, and select its structure simultaneously. Despite the high modeling capability of SFN, it can be considered a starting point for developing more powerful models. This chapter aims to open a door for researchers to propose new formulations and techniques that impart more flexibility and result in sparser and more accurate models. The theoretical basis of SFN is discussed, and the model's optimization computations are illustrated in depth to enable researchers to implement and test the model easily.


Author(s):  
Yuxin Ding

Traditional Hopfield networks have been widely used to solve combinatorial optimization problems. However, high order Hopfield networks, as an extension of traditional Hopfield networks, are seldom used for this purpose. In theory, compared with low order networks, high order networks have better properties, such as stronger approximation ability and faster convergence rates. In this chapter, the authors focus on how to use high order networks to model combinatorial optimization problems. First, the high order discrete Hopfield network is introduced; then the authors discuss how to find the high order inputs of a neuron. Finally, the construction method of the energy function and the neural computing algorithm are presented. The N queens problem and the crossbar switch problem are used as examples to illustrate how to model practical problems using high order neural networks, and the authors discuss the performance of high order networks in modeling these two combinatorial optimization problems.
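
The energy-function construction can be illustrated for the N queens problem. The sketch below writes the constraint penalties as plain functions rather than network weights, and uses quadratic terms only; the chapter's high-order formulation maps these penalties onto products of neuron states.

```python
import numpy as np

def queens_energy(V):
    """Penalty energy over an N x N binary grid V (V[r, c] = 1 means a
    queen at row r, column c). The energy is zero iff each row and each
    column holds exactly one queen and no diagonal holds more than one."""
    n = V.shape[0]
    E = np.sum((V.sum(axis=1) - 1) ** 2)        # one queen per row
    E += np.sum((V.sum(axis=0) - 1) ** 2)       # one queen per column
    for d in range(-(n - 1), n):                # main diagonals
        c = np.trace(V, offset=d)
        E += c * (c - 1)                        # penalize each conflicting pair
    for d in range(-(n - 1), n):                # anti-diagonals
        c = np.trace(np.fliplr(V), offset=d)
        E += c * (c - 1)
    return E

good = np.zeros((4, 4))                 # a valid 4-queens placement
for r, c in [(0, 1), (1, 3), (2, 0), (3, 2)]:
    good[r, c] = 1

bad = np.zeros((4, 4))
bad[0, :] = 1                           # all four queens in one row
```

A network that performs (asynchronous) updates decreasing this energy settles into configurations where every penalty term vanishes, i.e., valid placements.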


Author(s):  
Jean X. Zhang

This chapter proposes a nonlinear artificial Higher Order Neural Network (HONN) model to study the relation between manager compensation and performance in the governmental sector. Using a HONN simulator, this study analyzes city manager compensation as a function of local government performance and compares the results with those from a linear regression model. The chapter shows that the nonlinear HONN model has a smaller Root Mean Squared Error (RMSE) of 0.0020, compared to 0.06598 for the linear regression model. This study shows that an artificial HONN is an effective tool for modeling city manager compensation.
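
The HONN-versus-linear-regression comparison can be sketched on synthetic data. The data-generating relation below is invented for illustration (it is not the chapter's compensation data), and the HONN is reduced to its linear-in-weights second-order feature form.

```python
import numpy as np

rng = np.random.default_rng(0)
perf = rng.uniform(0, 1, size=(200, 2))                        # two performance measures
comp = 1.0 + perf[:, 0] * perf[:, 1] + 0.5 * perf[:, 0] ** 2   # assumed nonlinear relation

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

# Linear regression: features [1, p1, p2]
A_lin = np.column_stack([np.ones(len(perf)), perf])
w_lin, *_ = np.linalg.lstsq(A_lin, comp, rcond=None)

# Second-order HONN-style model: adds the product/correlation terms
A_honn = np.column_stack(
    [A_lin, perf[:, 0] ** 2, perf[:, 0] * perf[:, 1], perf[:, 1] ** 2]
)
w_honn, *_ = np.linalg.lstsq(A_honn, comp, rcond=None)

err_lin = rmse(comp, A_lin @ w_lin)
err_honn = rmse(comp, A_honn @ w_honn)
```

When the underlying relation contains interaction or squared terms, the higher order features capture it exactly while the linear model leaves a systematic residual, mirroring the RMSE gap reported in the chapter.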

