Artificial Higher Order Neural Networks for Computer Science and Engineering
Latest Publications

Total documents: 22 (last five years: 0)
H-index: 4 (last five years: 0)

Published by IGI Global
ISBN: 9781615207114, 9781615207121

Author(s): Abhijit Das, Frank L. Lewis, Kamesh Subbarao

The dynamics of a quadrotor are a simplified form of helicopter dynamics that exhibit the same basic problems of strong coupling, multi-input/multi-output design, and unknown nonlinearities. The Lagrangian model of a typical quadrotor, with four inputs and six outputs, results in an underactuated system. Several design techniques are available for nonlinear control of underactuated mechanical systems; one of the most popular is backstepping. Backstepping is a well-known recursive procedure in which the underactuation of the system is resolved by defining 'desired' virtual control and virtual state variables. A virtual control variable is determined in each recursive step by assuming the corresponding subsystem is Lyapunov stable, and the virtual states are typically the errors between the actual and desired virtual control variables. The application of backstepping becomes even more interesting when a virtual control law is applied to a Lagrangian subsystem. The information needed to select virtual control and state variables for these systems can be obtained through model identification methods, one of which uses neural network approximation to identify the unknown parameters of the system. The unknown parameters may include uncertain aerodynamic force and moment coefficients or unmodeled dynamics, and these aerodynamic coefficients are generally functions of higher order state polynomials. In this chapter we discuss how linear-in-parameter first order neural network approximation can be used to identify these unknown higher order state polynomials in every recursive step of the backstepping. The first order neural network thus eventually estimates the higher order state polynomials, acting in effect as a higher order like neural network (HOLNN). Moreover, when these neural networks are placed into a control loop, they become dynamic neural networks in which only the weights are tuned.
Due to the inherent characteristics of the quadrotor, the Lagrangian form of the position dynamics is bilinear in the controls, which is confronted using a bilinear inverse kinematics solution. The result is a controller with an intuitively appealing structure: an outer kinematics loop for position control and an inner dynamics loop for attitude control. The stability of the control law is guaranteed by a Lyapunov proof. The control approach described in this chapter is robust, since it explicitly deals with unmodeled state-dependent disturbances without requiring any prior knowledge of them. A simulation study validates the results, such as decoupling and tracking, obtained in the chapter.
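The linear-in-parameter approximation idea can be sketched in a few lines: a first order network whose regressor holds higher order state monomials recovers an unknown polynomial coefficient function with a simple gradient rule. The target polynomial, learning rate, and grid of training states below are illustrative assumptions, not the chapter's quadrotor model.

```python
# A first order (linear in the parameters) approximator whose regressor
# holds higher order state monomials -- the "HOLNN" idea in miniature.
# Target function, learning rate and training grid are assumptions.

def basis(x1, x2):
    # higher order state polynomial terms up to second order
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

def train(samples, eta=0.1, epochs=300):
    w = [0.0] * 6
    for _ in range(epochs):
        for (x1, x2), y in samples:
            phi = basis(x1, x2)
            e = y - sum(wi * pi for wi, pi in zip(w, phi))
            w = [wi + eta * e * pi for wi, pi in zip(w, phi)]  # gradient rule
    return w

# "unknown" coefficient function f(x) = 2*x1^2 - x1*x2 (assumed for the demo)
grid = [(a / 4.0, b / 4.0) for a in range(-4, 5) for b in range(-4, 5)]
samples = [((x1, x2), 2 * x1 * x1 - x1 * x2) for x1, x2 in grid]
w = train(samples)  # w[3] -> 2, w[4] -> -1, other weights -> 0
```

Because the network is linear in its weights, the familiar gradient rule applies unchanged even though the basis itself is a higher order polynomial of the state.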


Author(s): Yiannis S. Boutalis, M. A. Christodoulou, Dimitris C. Theodoridis

A new definition of adaptive dynamic fuzzy systems (ADFS) is presented in this chapter for the identification of unknown nonlinear dynamical systems. The proposed scheme uses the concept of adaptive fuzzy systems operating in conjunction with high order neural networks (HONNs). Since the plant is considered unknown, we first propose its approximation by a special form of an adaptive fuzzy system, and then the fuzzy rules are approximated by appropriate HONNs. The identification scheme thus leads to a recurrent high order neural network, which nevertheless takes into account the fuzzy output partitions of the initial ADFS. Weight updating laws for the involved HONNs are provided, which guarantee that the identification error converges to zero exponentially fast. Simulations illustrate the potency of the method, and comparisons on well-known benchmarks are given.
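As a rough illustration of the identification idea (not the chapter's ADFS construction), the sketch below runs a scalar recurrent high order neural network identifier whose regressor is built from high order sigmoid terms and whose weights follow a simple gradient updating law; the plant, gains, and regressor choice are assumptions.

```python
import math

# Scalar recurrent high order neural network (RHONN) identifier with a
# gradient weight updating law; plant, gains and regressor are assumptions.

def s(x):
    return 1.0 / (1.0 + math.exp(-x))  # sigmoid building block

def identify(steps=2000, eta=0.5, a=0.2):
    w = [0.0, 0.0, 0.0]   # HONN weights
    x, xh = 0.1, 0.0      # plant state and its estimate
    e = 0.0
    for _ in range(steps):
        z = [s(x), s(x) ** 2, s(x) * s(2.0 * x)]   # high order sigmoid terms
        xh_next = a * xh + sum(wi * zi for wi, zi in zip(w, z))
        x = 0.5 * x + 0.3 * math.sin(x)            # "unknown" plant (assumed)
        e = x - xh_next                            # identification error
        w = [wi + eta * e * zi for wi, zi in zip(w, z)]  # updating law
        xh = xh_next
    return abs(e)

final_error = identify()
```

The recurrent term `a * xh` makes the identifier dynamic: only the weights of the high order regressor are adapted, while the error between plant and model drives the update.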


Author(s): Satchidananda Dehuri, Sung-Bae Cho

In this chapter, the primary focus is on a theoretical and empirical study of functional link neural networks (FLNNs) for classification. We present a hybrid Chebyshev functional link neural network (cFLNN) without a hidden layer, trained with evolvable particle swarm optimization (ePSO), for classification. The resulting classifier is then used to assign the proper class label to an unknown sample. The hybrid cFLNN is a type of feed-forward neural network that can transform the nonlinear input space into a higher dimensional space where linear separability is possible. In particular, the proposed hybrid cFLNN combines the best attributes of evolvable particle swarm optimization (ePSO), back-propagation learning (BP-Learning), and Chebyshev functional link neural networks (CFLNN). We show its effectiveness in classifying unknown patterns using datasets obtained from the UCI repository. The computational results are then compared with those of other higher order neural networks (HONNs), such as the functional link neural network with generic basis functions, the Pi-Sigma neural network (PSNN), the radial basis function neural network (RBFNN), and the ridge polynomial neural network (RPNN).
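The input-expansion idea can be sketched as follows: Chebyshev polynomial terms lift the raw inputs so that a single linear layer separates a class boundary that is nonlinear in the original space. A plain logistic delta rule stands in for the chapter's ePSO + BP hybrid, and the circle-in-square toy data set is an assumption.

```python
import math

# Chebyshev functional link expansion with a single trainable linear layer.
# The delta rule and the toy data set are stand-in assumptions.

def cheb_expand(x1, x2):
    # T0 = 1, T1(x) = x, T2(x) = 2x^2 - 1, applied to each input
    return [1.0, x1, 2 * x1 * x1 - 1, x2, 2 * x2 * x2 - 1]

def predict(w, x1, x2):
    net = sum(wi * pi for wi, pi in zip(w, cheb_expand(x1, x2)))
    net = max(-60.0, min(60.0, net))   # guard against exp overflow
    return 1.0 / (1.0 + math.exp(-net))

def train(data, eta=0.5, epochs=1000):
    w = [0.0] * 5
    for _ in range(epochs):
        for (x1, x2), label in data:
            y = predict(w, x1, x2)
            phi = cheb_expand(x1, x2)
            w = [wi + eta * (label - y) * pi for wi, pi in zip(w, phi)]
    return w

# class 1 when the point lies outside the circle x1^2 + x2^2 = 0.5
pts = [(a / 3.0, b / 3.0) for a in range(-3, 4) for b in range(-3, 4)]
data = [((x1, x2), 1.0 if x1 * x1 + x2 * x2 > 0.5 else 0.0) for x1, x2 in pts]
w = train(data)
accuracy = sum((predict(w, x1, x2) > 0.5) == (x1 * x1 + x2 * x2 > 0.5)
               for x1, x2 in pts) / len(pts)
```

The circular boundary is not linearly separable in (x1, x2), but it is linear in the T2 terms of the expansion, which is exactly the higher dimensional separability the abstract describes.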


Author(s): Madan M. Gupta, Noriyasu Homma, Zeng-Guang Hou, Ashu M. G. Solo, Ivo Bukovsky

In this chapter, we provide fundamental principles of higher order neural units (HONUs) and higher order neural networks (HONNs). An essential core of HONNs can be found in higher order weighted combinations or correlations between the input variables. By using some typical examples, this chapter describes how and why higher order combinations or correlations can be effective.
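A minimal example of such a higher order combination is a second order neural unit whose output aggregates weighted correlations x_i * x_j of the bias-augmented inputs; the weight values are illustrative.

```python
# Second order neural unit (HONU): the output is a weighted sum of the
# correlations x_i * x_j. With an augmented input x0 = 1, the bias and
# linear terms appear as special cases. Weight values are illustrative.

def honu2(x, W):
    xa = [1.0] + list(x)            # augmented input
    return sum(W[i][j] * xa[i] * xa[j]
               for i in range(len(xa)) for j in range(i, len(xa)))

# upper triangular weight matrix over (1, x1, x2)
W = [[0.5, 0.0, 0.0],   # bias, linear terms in x1, x2
     [0.0, 0.0, 2.0],   # x1^2 term, x1*x2 correlation
     [0.0, 0.0, 0.0]]   # x2^2 term
y = honu2([3.0, 4.0], W)   # 0.5 + 2 * (3 * 4) = 24.5
```

A first order unit cannot produce the x1*x2 term at all, which is why the correlation weights are the essential core of the higher order architecture.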


Author(s): Edgar N. Sanchez, Diana V. Urrego, Alma Y. Alanis, Salvador Carlos-Hernandez

In this chapter, we propose the design of a discrete-time neural observer that requires no prior knowledge of the model of an anaerobic process. The observer estimates biomass, substrate, and inorganic carbon, variables that are difficult to measure and very important for the control of anaerobic processes in a completely stirred tank reactor (CSTR) with a biomass filter. This observer is based on a recurrent higher order neural network trained with an extended Kalman filter based algorithm.
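A toy version of the training scheme (not the chapter's anaerobic CSTR model) can be sketched by treating the observer's weights as the states of an extended Kalman filter; the scalar plant, regressor, and noise covariances below are assumed placeholders.

```python
import math

# EKF based weight training for a scalar recurrent high order neural
# observer; plant, regressor and covariances are illustrative assumptions.

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

def ekf_train(steps=800):
    w = [0.0, 0.0]                   # RHONN weights (EKF state)
    P = [[10.0, 0.0], [0.0, 10.0]]   # weight covariance
    R, Q = 0.1, 1e-4                 # measurement / process noise (assumed)
    x, xh = 0.5, 0.0                 # plant state and neural estimate
    e = 0.0
    for _ in range(steps):
        z = [sig(x), sig(x) ** 2]                  # high order regressor
        xh_next = 0.3 * xh + w[0] * z[0] + w[1] * z[1]
        x = 0.3 * x + 0.4 * math.sin(x)            # "unknown" plant (assumed)
        e = x - xh_next                            # estimation error
        Pz = [P[0][0] * z[0] + P[0][1] * z[1],     # P H^T with H = z
              P[1][0] * z[0] + P[1][1] * z[1]]
        s_ = R + z[0] * Pz[0] + z[1] * Pz[1]       # innovation variance
        K = [Pz[0] / s_, Pz[1] / s_]               # Kalman gain
        w = [w[0] + K[0] * e, w[1] + K[1] * e]     # weight update
        P = [[P[i][j] - K[i] * Pz[j] + (Q if i == j else 0.0)
              for j in range(2)] for i in range(2)]
        xh = xh_next
    return abs(e)

final_error = ekf_train()
```

The appeal of the EKF over a plain gradient rule is the per-weight, covariance-weighted gain, which typically speeds up convergence on ill-conditioned regressors.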


Author(s): Janti Shawash, David R. Selviah

Previous research suggested that Artificial Neural Network (ANN) operation in a limited precision environment is particularly sensitive to the precision and cannot take place below a certain threshold level of precision. This study uses simulation to investigate the on-line training of networks with the Back Propagation (BP) and Levenberg-Marquardt algorithms in limited precision, with the aim of achieving high overall calculation accuracy. A new type of Higher Order Neural Network (HONN) known as the Correlation HONN (CHONN) is trained on a discrete XOR dataset and a continuous optical waveguide sidewall roughness dataset to find the precision at which training and operation are feasible. The BP algorithm converged up to a precision beyond which the performance did not improve. The results support previous findings in the literature for ANN operation that discrete datasets require lower precision than continuous datasets. The importance of these findings is that they demonstrate the feasibility of on-line, real-time, low-latency training on limited precision electronic hardware, such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs), to achieve high overall operational accuracy.
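The precision threshold effect can be sketched by rounding each weight to a fixed-point grid after every on-line update: with too few fractional bits the updates fall inside the rounding dead zone and training stalls. The toy linear neuron, learning rate, and bit widths are assumptions, not the chapter's measured thresholds.

```python
# Limited precision on-line training: after every update the weight is
# rounded to a fixed point grid with `bits` fractional bits, mimicking
# DSP/FPGA arithmetic. All constants are illustrative assumptions.

def quantize(v, bits):
    step = 2.0 ** (-bits)
    return round(v / step) * step

def train_quantized(bits, eta=0.05, epochs=40):
    w = 0.0
    data = [(x / 10.0, 0.5 * (x / 10.0)) for x in range(-10, 11)]
    for _ in range(epochs):
        for x, y in data:
            w = quantize(w + eta * (y - w * x) * x, bits)  # quantized LMS
    return abs(w - 0.5)   # distance from the true weight 0.5

err_hi = train_quantized(bits=12)   # ample precision: training converges
err_lo = train_quantized(bits=2)    # below threshold: updates round to zero
```

With 2 fractional bits the largest possible update (about 0.025 here) is smaller than half the quantization step (0.125), so the weight never leaves zero, which is the qualitative threshold behaviour the study investigates.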


Author(s): David R. Selviah, Janti Shawash

This chapter celebrates 50 years of first and higher order neural network (HONN) implementations in terms of the physical layout and structure of electronic hardware, which offers high speed, low latency, compact, low cost, low power, mass-produced systems. Low latency is essential for practical applications in real-time control, for which software implementations running on CPUs are too slow. This literature review traces the chronological development of electronic neural networks (ENNs), discussing selected papers in detail, from analog electronic hardware through probabilistic RAM, generalizing RAM, custom silicon Very Large Scale Integration (VLSI) circuits, neuromorphic chips, and pulse stream interconnected neurons to Application Specific Integrated Circuits (ASICs) and Zero Instruction Set Chips (ZISCs). Reconfigurable Field Programmable Gate Arrays (FPGAs) are given particular attention, as the most recent generation incorporates Digital Signal Processing (DSP) units to provide full System on Chip (SoC) capability, offering the possibility of real-time, on-line, and on-chip learning.


Author(s): Luis J. Ricalde, Edgar N. Sanchez, Alma Y. Alanis

This chapter presents the design of an adaptive recurrent neural observer-controller scheme for nonlinear systems whose model is assumed unknown and whose inputs are constrained. The control scheme is composed of a neural observer based on recurrent high order neural networks, which reconstructs the state vector of the unknown plant dynamics, and learning adaptation laws for the neural network weights of both the observer and the identifier. These laws are obtained via control Lyapunov functions. Then, a control law that stabilizes the tracking error dynamics is developed using the Lyapunov and inverse optimal control methodologies. Tracking error boundedness is established as a function of the design parameters.


Author(s): Junichi Murata

A Pi-Sigma higher order neural network (Pi-Sigma HONN) is a type of higher order neural network in which, as its name implies, weighted sums of the inputs are calculated first, and these sums are then multiplied by each other to produce the higher order terms that constitute the network outputs. This type of higher order neural network has good function approximation capabilities. In this chapter, the structural features of Pi-Sigma HONNs are discussed in contrast to other types of neural networks. The reason for their good function approximation capabilities is given based on a pseudo-theoretical analysis together with empirical illustrations. Then, based on the analysis, an improved version of Pi-Sigma HONNs is proposed which has still better function approximation capabilities.
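The sums-then-product structure can be written down directly; the weight values below are illustrative.

```python
# Pi-Sigma unit: a sigma layer of K weighted sums is formed first, then a
# pi layer multiplies the sums. With K sigma units over n inputs the output
# contains order K terms while training only K * (n + 1) weights.

def pi_sigma(x, W):
    out = 1.0
    for w in W:                          # one weight row per sigma unit
        out *= w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))  # bias + sum
    return out

# two sigma units over two inputs -> a second order network
W = [[1.0, 2.0, 0.0],    # h1 = 1 + 2*x1
     [0.0, 0.0, 3.0]]    # h2 = 3*x2
y = pi_sigma([0.5, 2.0], W)   # (1 + 2*0.5) * (3*2) = 12
```

Expanding the product shows where the higher order terms come from: (1 + 2*x1) * (3*x2) = 3*x2 + 6*x1*x2, so the cross term x1*x2 appears without a dedicated weight of its own.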


Author(s): Zhao Lu, Gangbing Song, Leang-san Shieh

As a general framework for representing data, the kernel method can be used whenever the interactions between elements of the domain occur only through inner products. As a major stride towards nonlinear feature extraction and dimension reduction, two important kernel-based feature extraction algorithms, kernel principal component analysis and kernel Fisher discriminant analysis, have been proposed. Both create a projection of multivariate data onto a space of lower dimensionality while attempting to preserve as much of the structural nature of the data as possible. However, both methods suffer from a complete loss of sparsity and from redundancy in the nonlinear feature representation. In an attempt to mitigate these drawbacks, this chapter focuses on the application of the newly developed polynomial kernel higher order neural networks to improving sparsity, thereby obtaining a succinct representation for kernel-based nonlinear feature extraction algorithms. In particular, the learning algorithm is based on linear programming support vector regression, which outperforms conventional quadratic programming support vector regression in model sparsity and computational efficiency.
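The link between a polynomial kernel and explicit higher order features, which polynomial kernel HONNs exploit, can be checked numerically; the test vectors are arbitrary examples, and the LP-SVR training itself is not sketched here.

```python
import math

# A degree 2 polynomial kernel implicitly computes the inner product of an
# explicit higher order feature map. The vectors are arbitrary examples.

def poly_kernel(x, z, d=2):
    return (1.0 + sum(a * b for a, b in zip(x, z))) ** d

def feature_map(x):
    # explicit features whose inner product equals (1 + x.z)^2 in 2-D
    x1, x2 = x
    r2 = math.sqrt(2.0)
    return [1.0, r2 * x1, r2 * x2, x1 * x1, x2 * x2, r2 * x1 * x2]

x, z = [1.0, 2.0], [3.0, -1.0]
implicit = poly_kernel(x, z)                                    # kernel trick
explicit = sum(a * b for a, b in zip(feature_map(x), feature_map(z)))
```

The explicit map makes the correspondence to higher order neural units visible: the quadratic features x1^2, x2^2 and x1*x2 are exactly the second order correlation terms of a HONN.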

